Tuesday, November 19, 2024

OpenAI Enhances GPT-4o With New Fine-tuning Feature

The fine-tuning feature is available for all paid versions of the GPT-4o model. Until September 23, OpenAI will offer every organization 1M free training tokens per day.

OpenAI announced that it will now allow third-party software developers to create custom fine-tuned versions of its large multimodal model (LMM), GPT-4o. Earlier, the company introduced fine-tuning in the GPT-4o mini model, which is cheaper and less powerful than the full GPT-4o.

To know more about fine-tuning in GPT-4o mini, read here

Fine-tuning is a machine learning technique for adapting a pre-trained AI model to specific use cases or tasks. Developers can now train GPT-4o on custom datasets so that the model performs specific tasks to their requirements.
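As a sketch of what such a custom dataset looks like: training data for fine-tuning OpenAI chat models is typically supplied as a JSONL file, one example per line, each containing a `messages` array (the file name and example content below are illustrative, not from the article):

```python
import json

# Illustrative training examples in the chat format used for fine-tuning
# OpenAI chat models: each line of the JSONL file is one conversation.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support bot for Acme Corp."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Go to Settings > Security and click 'Reset password'."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a support bot for Acme Corp."},
            {"role": "user", "content": "Where can I download my invoices?"},
            {"role": "assistant", "content": "Invoices are under Billing > History."},
        ]
    },
]

# Serialize to JSONL: one JSON object per line.
jsonl = "\n".join(json.dumps(ex) for ex in examples)

with open("training_data.jsonl", "w") as f:
    f.write(jsonl)
```

In practice, a dataset would contain many such conversations demonstrating the desired behavior.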

OpenAI said that this is just a start and that it will continue introducing model customization options for its users. Fine-tuning can significantly improve the model's performance across domains such as business, coding, and creative writing.

Read More: OpenAI Enhances ChatGPT with Advanced Voice Mode: Talk and Explore 

GPT-4o fine-tuning is available to developers on all paid usage tiers. To use it, developers can go to the fine-tuning dashboard, click Create, and select gpt-4o-2024-08-06 from the base model drop-down list. Fine-tuning training costs $25 per million tokens. Inference on a fine-tuned model costs $3.75 per million input tokens and $15 per million output tokens.
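The same steps can be done programmatically. Below is a minimal sketch using the official `openai` Python SDK (assuming the package is installed and `OPENAI_API_KEY` is set; the dataset file name is illustrative), along with a helper reflecting the per-token training price quoted above:

```python
# Pricing quoted in the article, in dollars per million tokens.
TRAINING_PRICE_PER_M = 25.00
INPUT_PRICE_PER_M = 3.75
OUTPUT_PRICE_PER_M = 15.00


def estimate_training_cost(training_tokens: int) -> float:
    """Estimated fine-tuning training cost at $25 per 1M tokens."""
    return training_tokens / 1_000_000 * TRAINING_PRICE_PER_M


def create_finetune_job(path: str = "training_data.jsonl"):
    """Upload a JSONL dataset and start a GPT-4o fine-tuning job.

    Requires the `openai` package and the OPENAI_API_KEY environment
    variable; not executed in this sketch.
    """
    from openai import OpenAI

    client = OpenAI()
    upload = client.files.create(file=open(path, "rb"), purpose="fine-tune")
    job = client.fine_tuning.jobs.create(
        training_file=upload.id,
        model="gpt-4o-2024-08-06",  # base model named in the article
    )
    return job


# Training on 2M tokens would cost 2 * $25 = $50.
print(estimate_training_cost(2_000_000))
```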

To encourage the use of fine-tuning in GPT-4o, OpenAI is offering 1M training tokens per day for free to every organization until September 23. For the GPT-4o mini model, it is offering 2M training tokens per day for free until September 23.
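A quick sketch of how the free daily quota offsets cost during the promotion, using the GPT-4o quota and price stated in the article:

```python
FREE_DAILY_TOKENS = 1_000_000  # free GPT-4o training tokens per org per day
PRICE_PER_M = 25.00            # training price per 1M tokens beyond the quota


def daily_training_bill(tokens_trained: int) -> float:
    """Cost for one day's GPT-4o training after the free 1M-token quota."""
    billable = max(0, tokens_trained - FREE_DAILY_TOKENS)
    return billable / 1_000_000 * PRICE_PER_M


# Training 3M tokens in one day: the first 1M is free, the remaining 2M
# is billed at $25 per million, for $50.
print(daily_training_bill(3_000_000))
```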

Tokens are numerical representations of words, subwords, characters, and punctuation that an LLM or LMM processes. Tokenization is the first step in the AI model training process.
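As a toy illustration of that idea (real LLM tokenizers learn subword vocabularies, e.g. via byte-pair encoding; this simple word lookup only shows the text-to-numbers step):

```python
import re

# Toy vocabulary mapping each known word or punctuation piece to an ID.
# Real tokenizers learn these vocabularies from data; this is illustrative.
vocab = {"<unk>": 0, "fine": 1, "tuning": 2, "adapts": 3, "models": 4,
         "-": 5, ".": 6}


def tokenize(text: str) -> list[int]:
    """Split text into word/punctuation pieces and map each to an ID."""
    pieces = re.findall(r"\w+|[^\w\s]", text.lower())
    return [vocab.get(p, vocab["<unk>"]) for p in pieces]


print(tokenize("Fine-tuning adapts models."))  # [1, 5, 2, 3, 4, 6]
```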

OpenAI worked with some industry partners for a couple of months to test the efficiency of its fine-tuning services. Cosine, an AI software engineering company, used fine-tuned GPT-4o for its AI agent Genie, which achieved a SOTA score of 43.8% on the new SWE-bench Verified benchmark.

Another firm, Distyl, an AI service partner to Fortune 500 companies, was ranked first on the BIRD-SQL benchmark, the leading text-to-SQL benchmark. Distyl’s fine-tuned GPT-4o model achieved an execution accuracy of 71.83%. It excelled in query reformulation, intent classification, chain-of-thought, self-correction, and SQL generation. 

OpenAI has stated that it will ensure data privacy for businesses, which retain complete control over their datasets. These datasets will not be shared or used to train other models, and the fine-tuned models will be safeguarded through automated evaluations and usage monitoring mechanisms.

The introduction of fine-tuning in the GPT-4o model is a significant step by OpenAI to enhance the capabilities of its AI model. The feature will allow users to combine the high performance of GPT-4o with customization to develop specialized applications securely. It will also help OpenAI gain an edge in the highly competitive AI landscape.

Analytics Drift
Editorial team of Analytics Drift