OpenAI: Fine-tuning feature now live on GPT-3.5 Turbo

Monitoring Desk

NEW YORK: OpenAI has announced fine-tuning for GPT-3.5 Turbo, allowing developers to customise the model so that it better suits their own applications.

Fine-tuning for GPT-4 is expected to follow this fall.

Meanwhile, a fine-tuned version of GPT-3.5 Turbo “can match, or even outperform, base GPT-4-level capabilities on certain narrow tasks,” according to the company.

Fine-tuning use cases

“Since the release of GPT-3.5 Turbo, developers and businesses have asked for the ability to customise the model to create unique and differentiated experiences for their users.

Through this feature, developers can now run supervised fine-tuning to make the model perform better for their use cases,” OpenAI said in a blog post.
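OpenAI's post describes the workflow rather than showing code, but in practice it amounts to uploading a chat-formatted JSONL training file and starting a fine-tuning job against GPT-3.5 Turbo. The following is a minimal sketch using the OpenAI Python SDK; the file name and the status check are illustrative assumptions, not details from the announcement.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a chat-formatted JSONL training file (file name is illustrative).
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a supervised fine-tuning job on GPT-3.5 Turbo.
job = client.fine_tuning.jobs.create(
    model="gpt-3.5-turbo",
    training_file=training_file.id,
)
print(f"Created job {job.id} with status {job.status}")

# The job runs asynchronously; poll it later to see when it finishes
# and which fine-tuned model id it produced.
job = client.fine_tuning.jobs.retrieve(job.id)
print(job.status, job.fine_tuned_model)
```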

With fine-tuning, businesses can make the model follow instructions more reliably, for example by keeping outputs terse or by always responding in a given language.

For example, developers can use fine-tuning to ensure the model always responds in German when prompted to use that language.
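To illustrate that German-language use case: fine-tuning data for chat models is supplied as JSONL, one conversation per line in the messages format. The sketch below builds a tiny training file; the example conversations and the file name are invented for this illustration.

```python
import json

# Hypothetical training examples for the "always respond in German" use case:
# every assistant reply is written in German, so the fine-tuned model learns
# that behaviour without it being restated in each prompt.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a helpful assistant that always replies in German."},
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "Die Hauptstadt von Frankreich ist Paris."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a helpful assistant that always replies in German."},
        {"role": "user", "content": "How many days are there in a week?"},
        {"role": "assistant", "content": "Eine Woche hat sieben Tage."},
    ]},
]

# Fine-tuning data is supplied as JSONL: one JSON object per line.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```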

Furthermore, fine-tuning improves the model’s ability to consistently format responses—a crucial aspect for applications demanding a specific response format, such as code completion or composing API calls.

Fine-tuning also enables businesses to shorten their prompts. “Early testers have reduced prompt size by up to 90 per cent by fine-tuning instructions into the model itself, speeding up each API call and cutting costs,” the company added in its blog post.
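A rough sketch of what that looks like in practice: once a fine-tuning job succeeds, the custom model is called by its own identifier, and the standing instructions no longer need to be repeated in every prompt. The model id below is a placeholder for illustration only.

```python
from openai import OpenAI

client = OpenAI()

# Placeholder id: a real fine-tuned model name is returned by the job
# once it succeeds (it typically starts with "ft:gpt-3.5-turbo").
FINE_TUNED_MODEL = "ft:gpt-3.5-turbo-0613:example-org::abc123"

# No lengthy system prompt restating the standing instructions; the
# behaviour has been trained into the model, so the prompt stays short.
response = client.chat.completions.create(
    model=FINE_TUNED_MODEL,
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```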