Aryan Kashyap
Yeah, I know about that, but I want to understand more about what happens in the LLM call prompt. For example, how are the embeddings used? I think first it...
From what I can tell, the LLM first generates the question, then the answer, and then identifies the chunks from the given data as the ground truth. The problem I can see with this approach...
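
To be concrete about the flow I'm describing, here's a rough sketch of what I'm picturing (the prompt wording and the `llm.complete` helper are my assumptions, not the actual implementation):

```python
# Sketch of my mental model of the test-case generation flow.
# `llm.complete` is a hypothetical single-prompt completion helper.

def generate_testcase(llm, chunks):
    context = "\n\n".join(chunks)

    # Step 1: the LLM generates a question from the provided chunks
    question = llm.complete(
        f"Given the following context, write one question it can answer:\n{context}"
    )

    # Step 2: the LLM answers its own question against the same context
    answer = llm.complete(
        f"Context:\n{context}\n\nQuestion: {question}\n"
        "Answer using only the context."
    )

    # Step 3: the LLM picks which chunks support the answer (the "ground truth")
    ground_truth = llm.complete(
        f"Context chunks:\n{context}\n\nQuestion: {question}\nAnswer: {answer}\n"
        "List the chunks that directly support this answer."
    )
    return question, answer, ground_truth
```

Is that roughly what the prompts are doing, or is it a single combined call?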
Also, one more doubt @omkar-334: when embedding a node, does it make sure not to lose any tokens? Embedding models have different context lengths.
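
For example, something like this is what I'd expect to be needed (a sketch using `tiktoken` for counting; 8191 is the input limit for OpenAI's text-embedding models, other models differ):

```python
import tiktoken

# Sketch: split a node's text into token windows so nothing is silently
# truncated when the embedding model's context window is exceeded.
# 8191 applies to OpenAI's text-embedding-3-* models; adjust per model.
MAX_TOKENS = 8191

def split_for_embedding(text: str, model: str = "text-embedding-3-small"):
    enc = tiktoken.encoding_for_model(model)
    tokens = enc.encode(text)
    # Decode windows of at most MAX_TOKENS tokens each, covering all tokens
    return [
        enc.decode(tokens[i : i + MAX_TOKENS])
        for i in range(0, len(tokens), MAX_TOKENS)
    ]
```

Does the embedding node do something like this, or would an over-long node just get cut off?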