Script using the ChatGPT API to summarize text
# pip install "openai<1.0"   (the script uses the pre-1.0 ChatCompletion API)

import os

import openai

# Export your API key as an environment variable before
# running this script (see https://platform.openai.com/account/api-keys).
# E.g.,
#
#   export OPENAI_API_KEY=copy_paste_key_from_your_account

openai.api_key = os.getenv('OPENAI_API_KEY')


def run_chatgpt(prompt, model="gpt-3.5-turbo"):
    # Wrap the prompt in the chat-message format expected by the API.
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # temperature=0 makes the output (mostly) deterministic
    )
    return response.choices[0].message["content"]

text_to_summarize = """
Following the original transformer architecture,
large language model research started to bifurcate into two directions:
encoder-style transformers for predictive modeling tasks such as
text classification and decoder-style transformers for generative modeling
tasks such as translation, summarization, and other forms of text creation.
The BERT paper above introduces the original concepts of
masked-language modeling and next-sentence prediction, and it
remains an influential encoder-style architecture.
If you are interested in this research branch, I recommend
following up with RoBERTa, which simplified the pretraining
objectives by removing the next-sentence prediction task.
"""

prompt = f"""
Summarize the text below, delimited by backticks,
in at most 3 sentences.
Text: `{text_to_summarize}`
"""

response = run_chatgpt(prompt)
print(response)
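
Note: the script above targets the pre-1.0 openai package; openai.ChatCompletion was removed in openai>=1.0. If you have the newer package installed, a minimal equivalent sketch (assuming the 1.x client interface; this migration is not part of the original gist) would be:

# pip install "openai>=1.0"
import os

from openai import OpenAI

# The 1.x client also reads OPENAI_API_KEY from the environment by default.
client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))


def run_chatgpt(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0,
    )
    # In the 1.x client, message.content is an attribute, not a dict key.
    return response.choices[0].message.content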