llama-stack

What config values are expected when building from distributions/meta-reference-gpu/build.yaml

Open AlexHe99 opened this issue 4 months ago • 4 comments

System Info

NVIDIA A30

Information

  • [ ] The official example scripts
  • [ ] My own modified scripts

🐛 Describe the bug

I tried building with `llama stack build --config distributions/meta-reference-gpu/build.yaml` and do not know what values to enter for these three items when configuring the provider `remote::pgvector`:

Enter value for db (required):
Enter value for user (required): 
Enter value for password (required): 

Is there any documentation on how to use distributions/meta-reference-gpu/build.yaml to build out the distribution?
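For reference, these three prompts appear to map to an ordinary PostgreSQL connection, so one way to satisfy them is to stand up a local Postgres instance with the pgvector extension first. This is only a sketch, not from the llama-stack docs: the container name, database, user, and password below are arbitrary placeholders I chose.

```shell
# Sketch (assumptions, not official docs): start a local Postgres with the
# pgvector extension using the community pgvector/pgvector image.
# All credential values below are placeholders — pick your own.
docker run -d --name llamastack-pgvector \
  -e POSTGRES_DB=llamastack \
  -e POSTGRES_USER=llamastack \
  -e POSTGRES_PASSWORD=change-me \
  -p 5432:5432 \
  pgvector/pgvector:pg16

# Then answer the build prompts with the same values:
#   db (required):       llamastack
#   user (required):     llamastack
#   password (required): change-me
```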

Error logs

Full log below:

$ llama stack build --config distributions/meta-reference-gpu/build.yaml

Llama Stack is composed of several APIs working together. For each API served by the Stack,
we need to configure the providers (implementations) you want to use for these APIs.

Configuring API `inference`...
> Configuring provider `(meta-reference)`
Enter value for model (default: Llama3.1-8B-Instruct) (required):
Enter value for torch_seed (optional):
Enter value for max_seq_len (default: 4096) (required):
Enter value for max_batch_size (default: 1) (required):

Configuring API `memory`...
> Configuring provider `(meta-reference)`

> Configuring provider `(remote::chromadb)`
Enter value for host (default: localhost) (required): localhost
Enter value for port (required): 5001

> Configuring provider `(remote::pgvector)`
Enter value for host (default: localhost) (required): localhost
Enter value for port (default: 5432) (required): 5432
Enter value for db (required):
Enter value for user (required): 
Enter value for password (required): 
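Similarly, the `remote::chromadb` prompts seem to expect a Chroma server already reachable at the host and port you enter. A minimal sketch, assuming the official chromadb/chroma image (which listens on port 8000 inside the container) mapped to the port 5001 entered in the log above:

```shell
# Sketch: run a local Chroma server and expose it on port 5001,
# matching the host/port answered in the log above.
# Assumes the chromadb/chroma image's default in-container port of 8000.
docker run -d --name llamastack-chroma -p 5001:8000 chromadb/chroma
```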

Expected behavior

Clear and complete documentation on how to use 'distributions/meta-reference-gpu/build.yaml' to build out the distribution and how to test it.

AlexHe99 · Oct 25 '24 08:10