How to save pre-trained API on GPT-3?

Question:

I have a question about GPT-3. As we know, we can give some examples to the network and "adjust" the model. What I would like to do is:

  1. Show examples to the model.
  2. Save these examples.
  3. Reuse the APIs.

import openai


class Example:
    """Stores an input, output pair and formats it to prime the model."""

    def __init__(self, inp, out):
        self.input = inp
        self.output = out

    def get_input(self):
        """Returns the input of the example."""
        return self.input

    def get_output(self):
        """Returns the intended output of the example."""
        return self.output

    def format(self):
        """Formats the input, output pair."""
        return f"input: {self.input}\noutput: {self.output}\n"


class GPT:
    """The main class for a user to interface with the OpenAI API.
    A user can add examples and set parameters of the API request."""

    def __init__(self, engine='davinci',
                 temperature=0.5,
                 max_tokens=100):
        self.examples = []
        self.engine = engine
        self.temperature = temperature
        self.max_tokens = max_tokens

    def add_example(self, ex):
        """Adds an example to the object. Example must be an instance
        of the Example class."""
        assert isinstance(ex, Example), "Please create an Example object."
        self.examples.append(ex.format())

Now, to give these examples to the model, I use the following code:

gpt2 = GPT(engine="davinci", temperature=0.5, max_tokens=100)
gpt2.add_example(Example('Two plus two equals four', '2 + 2 = 4'))
gpt2.add_example(Example('The integral from zero to infinity', r'\int_0^{\infty}'))  # raw string keeps the backslash literal

prompt1 = "x squared plus y squared plus equals z squared"
output1 = gpt2.submit_request(prompt1)
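
(submit_request is not shown in the class above; with the legacy openai Python library it would look roughly like the sketch below. The method body, prompt layout and stop sequence are assumptions based on the snippet, not the actual sandbox code.)

    def submit_request(self, prompt):
        """Builds the primed prompt from the stored examples and queries the API
        (sketch only; formatting is assumed)."""
        primed_prompt = "".join(self.examples) + f"input: {prompt}\noutput:"
        return openai.Completion.create(
            engine=self.engine,
            prompt=primed_prompt,
            temperature=self.temperature,
            max_tokens=self.max_tokens,
            stop="\ninput:",
        )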

However, I am not able to save this "pre-trained" API. Every time I have to retrain it. Is there any way to reuse it?

Asked By: dnobl


Answers:

Every time I have to retrain it. Is there any way to reuse it?

No, there isn’t any way to reuse it in that sense. You are mixing up the terms: you don’t need to train GPT-3, you need to pass examples in as part of the prompt. Since there is no container in which previous results could be stored (and thus "train" your model), you have to pass the examples, together with your task, in each and every request.
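
What you can do is store the examples on your side, so you only build them once and reload them in later sessions. A minimal sketch, assuming the GPT/Example classes from the question; the file name and the save/load helpers are made up for illustration, and the examples are still sent with every single request:

import json

def save_examples(gpt, path="examples.json"):
    # Dump the already-formatted example strings to disk.
    with open(path, "w") as f:
        json.dump(gpt.examples, f)

def load_examples(gpt, path="examples.json"):
    # Reload previously saved examples into a fresh GPT object.
    with open(path) as f:
        gpt.examples = json.load(f)

# First session: add the examples once and save them.
gpt2 = GPT(engine="davinci", temperature=0.5, max_tokens=100)
gpt2.add_example(Example('Two plus two equals four', '2 + 2 = 4'))
save_examples(gpt2)

# Later sessions: reload instead of re-adding, then submit requests as usual.
gpt2 = GPT(engine="davinci", temperature=0.5, max_tokens=100)
load_examples(gpt2)

This saves you the re-typing, not the tokens: the reloaded examples are still part of the prompt of every request.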

Perfecting the prompt (and thereby reducing the cost per request) is difficult and takes a lot of trial and error.

Though let’s be honest: even with passing the examples every time, GPT-3 is extremely cost-efficient. Depending on your specific situation, a complex completion with Davinci only uses a few hundred tokens on average.

Answered By: J. M. Arnold

You need to fine-tune the model. Refer to this guide:
https://beta.openai.com/docs/guides/fine-tuning

The steps are (a rough sketch follows the list):

  1. Install the OpenAI CLI.
  2. Prepare the training file.
  3. Select an existing base model to fine-tune with this training dataset.
  4. Call the tuned model.
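
For illustration, with the (now legacy) openai Python package the workflow looked roughly like the sketch below. The file name, prompt/completion texts and model names are assumptions; check the guide linked above for the current syntax.

import openai

# 1. Install the CLI/library: pip install openai
#    (the command-line tool ships with the Python package).

# 2. Prepare the training file: one JSON object per line, e.g.
#    {"prompt": "Two plus two equals four ->", "completion": " 2 + 2 = 4"}
uploaded = openai.File.create(file=open("training_data.jsonl", "rb"),
                              purpose="fine-tune")

# 3. Fine-tune an existing base model on that dataset.
job = openai.FineTune.create(training_file=uploaded.id, model="davinci")

# The job runs asynchronously; once it has finished, fetch it again to
# get the name of the tuned model.
job = openai.FineTune.retrieve(id=job.id)
tuned_model = job.fine_tuned_model  # e.g. "davinci:ft-personal-..."

# 4. Call the tuned model like any other completion model, without
#    passing the examples in the prompt anymore.
response = openai.Completion.create(
    model=tuned_model,
    prompt="x squared plus y squared equals z squared ->",
    max_tokens=100,
)
print(response.choices[0].text)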

Answered By: Moh-Spark