Introducing OpenAI's Exciting New Generative Text Features with Lower Prices!
June 16, 2023
OpenAI has introduced new versions of its text-generating models, GPT-3.5-turbo and GPT-4, with a new capability called function calling: developers describe functions to the model, and the model can respond with a JSON object containing the name of a function to call and the arguments to call it with. This makes it easier to build chatbots that use external tools, convert natural language into database queries, and extract structured data from text. The company is also launching a GPT-3.5-turbo version with a larger context window of 16,000 tokens, priced at $0.003 per 1,000 input tokens and $0.004 per 1,000 output tokens. Meanwhile, input-token pricing for the original GPT-3.5-turbo is reduced by 25%, and text-embedding-ada-002, a popular text embedding model, now costs $0.0001 per 1,000 tokens, marking a 75% decrease.
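To make the workflow concrete, here is a minimal sketch of the function-calling loop described above. It assumes the shape of the June 2023 Chat Completions API (a `functions` parameter taking JSON Schema descriptions, and an assistant message carrying a `function_call` with JSON-encoded arguments); the `get_weather` function and the model's reply are hypothetical stand-ins, simulated so the sketch runs offline without an API key.

```python
import json

# A tool the model may ask us to call (hypothetical example function).
def get_weather(location: str, unit: str = "celsius") -> dict:
    # A real app would query a weather service; fixed data stands in here.
    return {"location": location, "temperature": 22, "unit": unit}

# Function description in the format the June 2023 Chat Completions API
# accepts in its `functions` parameter (JSON Schema for the arguments).
functions = [{
    "name": "get_weather",
    "description": "Get the current weather for a location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}]

# When the model decides a function should be called, its message carries
# a `function_call` with the name and JSON-encoded arguments. The API
# response is simulated here so the example is self-contained.
assistant_message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_weather",
        "arguments": '{"location": "Paris", "unit": "celsius"}',
    },
}

def dispatch(message, available={"get_weather": get_weather}):
    """Execute the function the model asked for and return its result."""
    call = message["function_call"]
    args = json.loads(call["arguments"])  # arguments arrive as a JSON string
    return available[call["name"]](**args)

result = dispatch(assistant_message)
print(result)
```

In a real application, `result` would be serialized and sent back to the model in a follow-up message so it can compose a natural-language answer for the user.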
What does it mean?
GPT-3.5-turbo: a fast, low-cost version of OpenAI's text-generating model, offering powerful language processing capabilities for a wide range of applications.

GPT-4: the next iteration of OpenAI's text-generating model, offering improved capabilities for applications such as chatbot creation and natural language to database query conversion.

Function calling: a capability that lets developers describe functions to the model, so the model can respond with the name of a function to call and the JSON arguments to call it with, rather than plain text.

Chatbots: computer programs that engage in conversation with users, providing information or assistance by simulating human responses.

Natural language to database queries conversion: the process of transforming user-provided text in a natural language, like English, into a format that a database system can understand and execute to retrieve or modify information.

Structured data extraction: the process of identifying and collecting structured information, such as tables or lists, from unstructured text sources, such as documents or web pages.

Context window: the number of tokens (words or symbols) that a model can consider at once to understand and generate text. A larger context window allows the model to work with longer pieces of text.

Tokens: the smallest units of text that a language model can process, such as words, symbols, or characters.

Input tokens: the tokens in the text provided to a language model for processing.

Output tokens: the tokens generated by a language model as a response to the input provided.

Text embedding model: a machine learning model that represents text as numerical vectors, enabling easier comparison and manipulation of text data for various natural language processing tasks.

Text-embedding-ada-002: a specific text embedding model offered by OpenAI, used to convert text into numerical representations for processing.
Does reading the news feel like drinking from the firehose?
Do you want more curation and in-depth content?
Then, perhaps, you'd like to subscribe to the Synthetic Work newsletter.
Many business leaders read Synthetic Work, including:
CEOs
CIOs
Chief Investment Officers
Chief People Officers
Chief Revenue Officers
CTOs
EVPs of Product
Managing Directors
VPs of Marketing
VPs of R&D
Board Members
and many other smart people.
They are turning the most transformative technology of our times into their biggest business opportunity ever.
What about you?