instructor-js
                        Custom system prompt?
Where is this system prompt coming from and can it be customized?
Given a user prompt, you will return fully valid JSON based on the following description and schema.
You will return no other prose. You will take into account any descriptions or required parameters within the schema
and return a valid and fully escaped JSON object that matches the schema and those instructions.
description: 
json schema:
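The default prompt above is a fixed template with the schema's description and JSON Schema filled in at the end. As a rough illustration only, the assembly could look like the sketch below; `buildSystemPrompt` is a hypothetical name, not the actual zod-stream implementation.

```typescript
// Illustrative sketch: how a default system prompt like the one quoted
// above could be assembled from a JSON Schema. `buildSystemPrompt` is a
// hypothetical helper, not instructor-js / zod-stream API.
function buildSystemPrompt(jsonSchema: object, description = ""): string {
  return [
    "Given a user prompt, you will return fully valid JSON based on the following description and schema.",
    "You will return no other prose. You will take into account any descriptions or required parameters within the schema",
    "and return a valid and fully escaped JSON object that matches the schema and those instructions.",
    `description: ${description}`,
    `json schema: ${JSON.stringify(jsonSchema)}`,
  ].join("\n")
}

const prompt = buildSystemPrompt(
  { type: "object", properties: { age: { type: "number" } } },
  "A user record"
)
```

Because the template is fixed, the only caller-controlled parts are the description and the schema itself, which is why the question of overriding the rest comes up below.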
The system prompt is the first message in the chat, sent with role 'system':
import { z } from 'zod'

// `client` is assumed to be an Instructor-wrapped OpenAI-compatible client
const UserSchema = z.object({
  // Description will be used in the prompt
  age: z.number().min(0).max(120).int().describe('The age of the user'),
  firstName: z
    .string()
    .describe(
      'The first name of the user, lowercase with capital first letter'
    ),
  surname: z
    .string()
    .describe('The surname of the user, lowercase with capital first letter'),
  sex: z
    .enum(['M', 'F'])
    .describe('The sex of the user, guess if not provided'),
})
// User will be of type z.infer<typeof UserSchema>
const user = await client.chat.completions.create({
  messages: [
    {
      role: 'system',
      content:
        'You are a world class extractor. You always respond in JSON. Current date is ' +
        new Date().toISOString(),
    },
    {
      role: 'user',
      content: 'John Doe born in 1988',
    },
  ],
  model: 'llama3-70b-8192',
  temperature: 0.0,
  max_retries: 3,
  response_model: { schema: UserSchema, name: 'UserSchema' },
})
The system prompt I mentioned is added to the messages array before your "You are a world class extractor..." message, so with your example there are two system prompts stacked.
This system prompt is only used when using the "MD_JSON" mode - it gets added in the params resolver that we use here: https://github.com/hack-dance/island-ai/blob/main/public-packages/zod-stream/src/oai/params.ts#L77
I have not had any requests to customize it, but it would be relatively straightforward to add an option. I've mostly tried to avoid adding too many options, but maybe a straight pass-through to zod-stream would work?
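To make the pass-through idea concrete, here is one possible shape such an option could take. This is purely a sketch; `systemPromptOverride` and `resolveMessages` are hypothetical names and not part of the instructor-js or zod-stream API.

```typescript
// Hypothetical sketch of a pass-through option that lets callers replace
// the injected system prompt. None of these names exist in instructor-js.
type Message = { role: string; content: string }

type PromptOverride = (defaults: { schema: object }) => string

interface CreateParams {
  messages: Message[]
  // Hypothetical option: if set, used instead of the default template.
  systemPromptOverride?: PromptOverride
}

function resolveMessages(params: CreateParams, schema: object): Message[] {
  const defaultPrompt =
    `Return valid JSON matching this schema: ${JSON.stringify(schema)}`
  const content = params.systemPromptOverride
    ? params.systemPromptOverride({ schema })
    : defaultPrompt
  // Prepend the (default or overridden) system prompt, as the params
  // resolver linked above does today with the default one.
  return [{ role: 'system', content }, ...params.messages]
}
```

The override receives the schema so a custom prompt can still embed it, which keeps the extraction behavior intact while letting callers change the surrounding instructions.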
This issue is now urgent: the o1 models error when a system prompt is given.
BadRequestError: 400 litellm.BadRequestError: OpenAIException - Error code: 400 - {'error': {'message': "Unsupported value: 'messages[0].role' does not support 'system' with this model.", 'type': 'invalid_request_error', 'param': 'messages[0].role', 'code': 'unsupported_value'}}
Also, you seem to inject it in EVERY mode, not just MD_JSON.
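Until the injection is configurable, one common client-side workaround for models that reject the system role is to remap system messages to the user role before sending. A minimal sketch, not instructor-js API:

```typescript
// Sketch of a client-side workaround: o1-style models reject
// role "system", so remap any system messages to role "user"
// before the request is sent. Not part of instructor-js.
type ChatMessage = {
  role: 'system' | 'user' | 'assistant'
  content: string
}

function remapSystemMessages(messages: ChatMessage[]): ChatMessage[] {
  return messages.map((m) =>
    m.role === 'system' ? { ...m, role: 'user' as const } : m
  )
}
```

This preserves the prompt content while avoiding the 400 `unsupported_value` error shown above, at the cost of the instructions no longer carrying system-level weight.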