
Issue: C# sample can't generate text while GPT4All UI can

Open trrahul opened this issue 1 year ago • 1 comment

Issue you'd like to raise.

Input: answer the question based on the provided information.

question: who appointed Luo as bishop? information: Born in Fu'an, Fujian, in the 1610s, Luo was baptized in 1633, joined the Dominican Order in 1650, and entered the priesthood in 1654. After the Qing dynasty proscribed Christianity and banished foreign missionaries in 1665, Luo became the only person in charge of the Catholic missions in China. The Holy See first appointed Luo to be a bishop in 1674, but he declined. The Holy See appointed Luo to be a bishop again in 1679. Due to Dominican opposition, Luo was only consecrated as the apostolic vicar of Nanjing in 1685. He died in Nanjing on 27 February 1691.


using Gpt4All;

// Assumes the standard factory from the C# bindings; adjust the model path to your setup.
var modelFactory = new Gpt4AllModelFactory();

using var model =
    modelFactory.LoadModel(@"AppData\Local\nomic.ai\GPT4All\ggml-gpt4all-j-v1.3-groovy.bin");

// Use a prompt from the command line if given, otherwise fall back to the test prompt.
var input = args.Length > 1
    ? args[1]
    : "answer the question based on the provided information.\r\ninformation: Born in Fu'an, Fujian, in the 1610s, Luo was baptized in 1633, " +
      "joined the Dominican Order in 1650, and entered the priesthood in 1654. " +
      "After the Qing dynasty proscribed Christianity and banished foreign missionaries in 1665, " +
      "Luo became the only person in charge of the Catholic missions in China. " +
      "The Holy See first appointed Luo to be a bishop in 1674, but he declined." +
      " The Holy See appointed Luo to be a bishop again in 1679. Due to Dominican opposition, " +
      "Luo was only consecrated as the apostolic vicar of Nanjing in 1685. He died in Nanjing on 27 February 1691.\r\n\r\nquestion: who appointed Luo as bishop?";

// Request a streaming prediction with the default parameters.
var result = await model.GetStreamingPredictionAsync(
    input,
    PredictRequestOptions.Defaults);

// Print tokens as they arrive.
await foreach (var token in result.GetPredictionStreamingAsync())
{
    Console.Write(token);
}

Console.WriteLine();
Console.WriteLine("DONE.");


trrahul · May 24 '23 12:05

I'll check as soon as I can and let you know; the chat app seems to have some more logic going on under the hood. I think the problem may lie in the fact that the C# bindings currently apply no templating to the prompt.

UPDATE

I've investigated the issue and found two root causes:

  1. the models seem to be very sensitive to how the prompt is formatted
    • currently, no template/formatting is applied in the C# bindings
  2. the default prediction parameters seem to differ slightly from the ones used in the Chat App (a hedged sketch of overriding them follows below)
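
For the second point, a minimal sketch of what overriding the defaults could look like. This assumes PredictRequestOptions is a record and that it exposes sampling properties with these names (Temperature, TopP, TopK); the values are illustrative, so check the actual record definition in the bindings before relying on it:

// Sketch only: property names and values are assumptions; verify them against
// the PredictRequestOptions record shipped with the C# bindings.
var options = PredictRequestOptions.Defaults with
{
    Temperature = 0.7f, // illustrative value
    TopP = 0.4f,        // illustrative value
    TopK = 40,          // illustrative value
};

var tunedResult = await model.GetStreamingPredictionAsync(input, options);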

Embedding the prompt in a template similar to the one used in the Python bindings, I got the model to answer (it is still messing up the dates, but that's a separate problem):

Input:

### Instruction: 
The prompt below is a question to answer, a task to complete, or a conversation
to respond to; decide which and write an appropriate response.
### Prompt:
answer the question based on the provided information.
information: Born in Fu'an, Fujian, in the 1610s, Luo was baptized in 1633, joined the Dominican Order in 1650, and entered the priesthood in 1654. After the Qing dynasty proscribed Christianity and banished foreign missionaries in 1665, Luo became the only person in charge of the Catholic missions in China. The Holy See first appointed Luo to be a bishop in 1674, but he declined. The Holy See appointed Luo to be a bishop again in 1679. Due to Dominican opposition, Luo was only consecrated as the apostolic vicar of Nanjing in 1685. He died in Nanjing on 27 February 1691.

question: who appointed Luo as bishop?
### Response:

Response:

The Holy See appointed Luo as bishop in 16799.

I'm opening a PR to enable prompt formatting in the C# bindings soon. In the meantime, you can try the "templated" version of your input or try the dev branch 707-prompt-templating.
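
For reference, this is roughly what applying that template by hand looks like with the current bindings. A minimal sketch: the ApplyInstructTemplate helper is hypothetical (it just reproduces the template text shown above); only GetStreamingPredictionAsync and PredictRequestOptions are the real API:

// Hypothetical helper: wraps a raw prompt in the instruct-style template above.
static string ApplyInstructTemplate(string prompt) =>
    "### Instruction: \n" +
    "The prompt below is a question to answer, a task to complete, or a conversation\n" +
    "to respond to; decide which and write an appropriate response.\n" +
    "### Prompt:\n" +
    prompt + "\n" +
    "### Response:\n";

var templatedResult = await model.GetStreamingPredictionAsync(
    ApplyInstructTemplate(input),
    PredictRequestOptions.Defaults);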

NOTE

Just a side note: ITextPrediction.GetStreamingPredictionAsync exposes plain text completion (like generate in the Python bindings). It does not behave like a chat (e.g., it does not automatically take the whole conversation into account). I'm working on an API for proper chat completion.
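
Until that chat API lands, a caller can approximate a conversation by resending the accumulated transcript on every turn. A rough sketch under that assumption; the User:/Assistant: transcript format is illustrative, not something the bindings prescribe:

// Illustrative workaround: keep the whole transcript and feed it back as the
// prompt on every turn, since the bindings only do plain text completion.
var history = new System.Text.StringBuilder();

async Task<string> ChatTurnAsync(string userMessage)
{
    history.AppendLine($"User: {userMessage}");
    history.Append("Assistant: ");

    var turn = await model.GetStreamingPredictionAsync(
        history.ToString(),
        PredictRequestOptions.Defaults);

    var reply = new System.Text.StringBuilder();
    await foreach (var token in turn.GetPredictionStreamingAsync())
    {
        reply.Append(token);
    }

    history.AppendLine(reply.ToString());
    return reply.ToString();
}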

mvenditto · May 24 '23 13:05

By now, this should have been resolved by the PR mentioned above. Please open a new issue if the problem still occurs.

Please always feel free to open more issues as needed.

niansa · Aug 14 '23 11:08