OpenAI GPT-3 API: Why do I get different, non-related random responses to the same question every time?
Question:
I am using the “text-davinci-003” model and I copied the code from the OpenAI Playground, but the bot keeps giving me random responses to a simple “Hello” every time.
This is the code I am using:
response: dict = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    temperature=0.9,
    max_tokens=150,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0.6,
    stop=[" Human:", " AI:"],
)
choices: dict = response.get('choices')[0]
text = choices.get('text')
print(text)
The response to a simple “hello” chat, 3 different times:
- The first time it gave me a hello world program for Java.
- The second time it answered correctly: ‘Hi there! How can I help you today?’
- The third time it returned Ruby code:
  def my_method
    puts "hello"
  end
  end
  end
  # To invoke this method we would call:
  MyModule::MyClass.my_method
I just don’t get it, as using the same simple ‘hello’ prompt in OpenAI’s Playground gives me an accurate response every time: ‘Hi there! How can I help you today?’
Answers:
As stated in the official OpenAI documentation:
The temperature and top_p settings control how deterministic the model
is in generating a response. If you’re asking it for a response where
there’s only one right answer, then you’d want to set these lower. If
you’re looking for more diverse responses, then you might want to set
them higher. The number one mistake people use with these settings is
assuming that they’re "cleverness" or "creativity" controls.
Change this…
temperature = 0.9
…to this.
temperature = 0
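To see why this works, here is a toy sketch of temperature-scaled sampling. This is not OpenAI's actual implementation, just an illustration of the principle: dividing the token scores (logits) by the temperature before applying softmax flattens or sharpens the distribution, and at temperature 0 sampling collapses to greedy argmax, so the same prompt always yields the same next token.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Pick a token index from logits using temperature-scaled sampling.

    temperature == 0 means greedy decoding (always the top token);
    higher temperatures flatten the distribution and increase randomness.
    """
    if temperature == 0:
        # Deterministic: always choose the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random
    # Softmax over logits / temperature (subtract max for numerical stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting probabilities.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Toy scores for three candidate tokens.
logits = [2.0, 1.0, 0.5]
print(sample_with_temperature(logits, 0))    # always index 0
```

At temperature 0.9 (the question's setting), lower-scoring tokens are sampled fairly often, which is why the same “Hello” prompt produced a Java program one run and a greeting the next.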