GenAIExamples
Generative AI Examples is a collection of GenAI examples, such as ChatQnA and Copilot, which illustrate the pipeline capabilities of the Open Platform for Enterprise AI (OPEA) project.
Many docs in this repo instruct passing HTTP/S proxies on the Docker build command line:
```
$ git grep -e "--build-arg.*https*_proxy=" | wc -l
58
```
IMHO it would be better...
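For reference, the build-arg pattern referred to above looks roughly like this (the image name and build context are illustrative, not taken from a specific README):
```
docker build \
  --build-arg http_proxy=$http_proxy \
  --build-arg https_proxy=$https_proxy \
  --build-arg no_proxy=$no_proxy \
  -t opea/chatqna-example:latest .
```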
# Problem

Throughout the ChatQnA implementation there are numerous docker-compose.yaml files, each specific to a hardware platform (xeon, gaudi, etc.). This results in substantial code bloat and introduces...
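One hedged sketch of how that duplication could shrink: keep a shared, hardware-agnostic base compose file and a thin per-hardware override merged at deploy time. The service and image names below are illustrative, not taken from the repo.
```
# compose.yaml - shared, hardware-agnostic service definitions (illustrative)
services:
  tgi-service:
    image: ghcr.io/huggingface/text-generation-inference:latest
    ports:
      - "8008:80"
```
```
# compose.gaudi.yaml - only the hardware-specific deltas (illustrative)
services:
  tgi-service:
    image: ghcr.io/huggingface/tgi-gaudi:latest
    runtime: habana
    environment:
      HABANA_VISIBLE_DEVICES: all
```
Running `docker compose -f compose.yaml -f compose.gaudi.yaml up -d` merges the two files, so only the deltas live in each per-hardware file.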
**Setup** These errors originally happened with the v0.7 ChatQnA Xeon installation [1], but e.g. updating the TEI services from the `1.2-cpu` version to the latest `1.5-cpu`, and the TGI service from the `1.4` version...
I am trying the ChatQnA GenAIExample on Docker on Xeon. I am uploading the document https://docs.aws.amazon.com/pdfs/whitepapers/latest/optimizing-postgresql-on-ec2-using-ebs/optimizing-postgresql-on-ec2-using-ebs.pdf?did=wp_card&trk=wp_card. This is a public whitepaper published by AWS. The embedding into the vector DB is...
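For anyone reproducing this, the upload step is the dataprep call, roughly as described in the ChatQnA Xeon README; the host, port 6007, and the `/v1/dataprep` path below are assumptions from that README and may differ in other versions:
```
curl -X POST "http://${host_ip}:6007/v1/dataprep" \
  -H "Content-Type: multipart/form-data" \
  -F "files=@./optimizing-postgresql-on-ec2-using-ebs.pdf"
```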
Currently one can get inferencing metrics from TGI and TEI backend services, but there are no E2E metrics for the whole pipeline, e.g. what are the first response, and response...
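In the meantime, a rough client-side approximation of those E2E numbers can be taken with curl's timing variables against the mega-service endpoint; the URL, port, and payload shape below are assumptions based on the ChatQnA README, not a proposed metrics design:
```
curl -s -o /dev/null \
  -w "time_to_first_byte: %{time_starttransfer}s  time_total: %{time_total}s\n" \
  -X POST http://${host_ip}:8888/v1/chatqna \
  -H "Content-Type: application/json" \
  -d '{"messages": "What is OPEA?"}'
```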
I would expect to see pod container `securityContext`s like this:
```
securityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  seccompProfile:
    type: RuntimeDefault
  capabilities:
    drop: [ "ALL" ]
```
And a `runAsUser` setting for something else...
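A hedged sketch of what a non-root `runAsUser` block could look like; the UID/GID values are illustrative, not taken from the charts:
```
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  runAsGroup: 1000
```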