OpenAI ChatGPT (GPT-3.5) API error: "InvalidRequestError: Unrecognized request argument supplied: messages"

Question:

I am currently trying to use OpenAI’s most recent model: gpt-3.5-turbo. I am following a very basic tutorial.

I am working from a Google Colab notebook. I have to make a request for each prompt in a list of prompts, which, for the sake of simplicity, looks like this:

prompts = ['What are your functionalities?', 'what is the best name for an ice-cream shop?', 'who won the premier league last year?']

I defined a function to do so:

import openai

# Load your API key from an environment variable or secret management service
openai.api_key = 'my_API'

def get_response(prompts: list, model="gpt-3.5-turbo"):
    responses = []

    restart_sequence = "\n"

    for item in prompts:
        response = openai.Completion.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
            max_tokens=20,
            top_p=1,
            frequency_penalty=0,
            presence_penalty=0
        )

        responses.append(response['choices'][0]['message']['content'])

    return responses

However, when I call responses = get_response(prompts=prompts[0:3]), I get the following error:

InvalidRequestError: Unrecognized request argument supplied: messages

Any suggestions?

Replacing the messages argument with prompt leads to the following error:

InvalidRequestError: [{'role': 'user', 'content': 'What are your functionalities?'}] is valid under each of {'type': 'array', 'minItems': 1, 'items': {'oneOf': [{'type': 'integer'}, {'type': 'object', 'properties': {'buffer': {'type': 'string', 'description': 'A serialized numpy buffer'}, 'shape': {'type': 'array', 'items': {'type': 'integer'}, 'description': 'Array shape'}, 'dtype': {'type': 'string', 'description': 'Stringified dtype'}, 'token': {'type': 'string'}}}]}, 'example': '[1, 1313, 451, {"buffer": "abcdefgh", "shape": [1024], "dtype": "float16"}]'}, {'type': 'array', 'minItems': 1, 'maxItems': 2048, 'items': {'oneOf': [{'type': 'string'}, {'type': 'object', 'properties': {'buffer': {'type': 'string', 'description': 'A serialized numpy buffer'}, 'shape': {'type': 'array', 'items': {'type': 'integer'}, 'description': 'Array shape'}, 'dtype': {'type': 'string', 'description': 'Stringified dtype'}, 'token': {'type': 'string'}}}], 'default': '', 'example': 'This is a test.', 'nullable': False}} - 'prompt'
Asked By: corvusMidnight


Answers:

Problem

You used the wrong function to get a completion. gpt-3.5-turbo is a chat model served by the /v1/chat/completions endpoint, whereas openai.Completion.create calls /v1/completions, which does not accept a messages argument, hence the error. When using the OpenAI library (Python or NodeJS), you need to use the function that matches the model you want to use.
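
Concretely, the fix for the call in the question is to switch functions. A minimal sketch, assuming the openai Python package 0.x (the version the question's code targets) and the question's placeholder API key:

import openai

openai.api_key = 'my_API'  # placeholder, as in the question

# Rejected: openai.Completion.create targets /v1/completions, which has no `messages` parameter
# response = openai.Completion.create(model='gpt-3.5-turbo', messages=[...])

# Accepted: gpt-3.5-turbo is served by /v1/chat/completions
response = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[{'role': 'user', 'content': 'What are your functionalities?'}],
)

print(response['choices'][0]['message']['content'])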

Solution

The tables below will help you figure out which function is the right one for a given OpenAI model.

First, find in the table below which API endpoint is compatible with the model you want to use.

API endpoint             | Model group       | Model name
/v1/chat/completions     | GPT-3.5 and GPT-4 | gpt-4, gpt-4-0613, gpt-4-32k, gpt-4-32k-0613, gpt-3.5-turbo, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k, gpt-3.5-turbo-16k-0613
/v1/completions          | GPT-3             | text-davinci-003, text-davinci-002, text-curie-001, text-babbage-001, text-ada-001
/v1/edits                | Edits             | text-davinci-edit-001, code-davinci-edit-001
/v1/audio/transcriptions | Whisper           | whisper-1
/v1/audio/translations   | Whisper           | whisper-1
/v1/fine-tunes           | GPT-3             | davinci, curie, babbage, ada
/v1/embeddings           | Embeddings        | text-embedding-ada-002, text-search-ada-doc-001
/v1/moderations          | Moderation        | text-moderation-stable, text-moderation-latest
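
If you want this lookup in code, it can be written down as a plain Python dict. This is only an illustration built from a few representative entries in the table above, not a structure provided by the openai library:

# Model name -> API endpoint, copied from the table above (not exhaustive)
MODEL_TO_ENDPOINT = {
    'gpt-4': '/v1/chat/completions',
    'gpt-3.5-turbo': '/v1/chat/completions',
    'text-davinci-003': '/v1/completions',
    'text-davinci-edit-001': '/v1/edits',
    'whisper-1': '/v1/audio/transcriptions',
    'text-embedding-ada-002': '/v1/embeddings',
    'text-moderation-latest': '/v1/moderations',
}

print(MODEL_TO_ENDPOINT['gpt-3.5-turbo'])  # /v1/chat/completions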

Second, find in the table below which function you need to use.

API endpoint             | Python function              | NodeJS function
/v1/chat/completions     | openai.ChatCompletion.create | openai.createChatCompletion
/v1/completions          | openai.Completion.create     | openai.createCompletion
/v1/edits                | openai.Edit.create           | openai.createEdit
/v1/audio/transcriptions | openai.Audio.transcribe      | openai.createTranscription
/v1/audio/translations   | openai.Audio.translate       | openai.createTranslation
/v1/fine-tunes           | openai.FineTune.create       | openai.createFineTune
/v1/embeddings           | openai.Embedding.create      | openai.createEmbedding
/v1/moderations          | openai.Moderation.create     | openai.createModeration

Python working example for gpt-3.5-turbo (i.e., the Chat Completions API)

If you run test.py, the OpenAI API will return the following completion:

Hello there! How can I assist you today?

test.py

import openai
import os

openai.api_key = os.getenv('OPENAI_API_KEY')

completion = openai.ChatCompletion.create(
  model = 'gpt-3.5-turbo',
  messages = [
    {'role': 'user', 'content': 'Hello!'}
  ],
  temperature = 0  
)

print(completion['choices'][0]['message']['content'])
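
Applied to the question's get_response helper, the same pattern looks like this. A sketch, assuming the openai Python package 0.x, an OPENAI_API_KEY environment variable, and the question's original parameters:

import openai
import os

openai.api_key = os.getenv('OPENAI_API_KEY')

def get_response(prompts: list, model='gpt-3.5-turbo'):
    responses = []
    for item in prompts:
        # Chat models expect a list of messages, so wrap each prompt in a single user message
        response = openai.ChatCompletion.create(
            model=model,
            messages=[{'role': 'user', 'content': item}],
            temperature=0,
            max_tokens=20,
        )
        responses.append(response['choices'][0]['message']['content'])
    return responses

responses = get_response(prompts=['What are your functionalities?'])
print(responses)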

NodeJS working example for gpt-3.5-turbo (i.e., the Chat Completions API)

If you run test.js, the OpenAI API will return the following completion:

Hello there! How can I assist you today?

test.js

const { Configuration, OpenAIApi } = require('openai');

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});

const openai = new OpenAIApi(configuration);

async function getCompletionFromOpenAI() {
  const completion = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'user', content: 'Hello!' }
    ],
    temperature: 0,
  });

  console.log(completion.data.choices[0].message.content);
}

getCompletionFromOpenAI();
Answered By: Rok Benko

import openai

# Assumes openai.api_key has already been set (e.g. from the OPENAI_API_KEY environment variable)
response = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[
        {"role": "user", "content": "What is openAI?"}
    ],
    max_tokens=193,
    temperature=0,
)

print(response)
print(response["choices"][0]["message"]["content"])
Answered By: abdelrahman aboneda

You should define messages = [{"role": "user", "content": prompt}] outside of the API call and pass that variable in, like this:

messages = [{"role": "user", "content": prompt}]

for item in prompts:
    response = openai.Completion.create(
        model=model,
        messages=messages,
        temperature=0,
        max_tokens=20,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )
Answered By: DnyaneshwarN