Fine-Tuned Newsletter: April-May 2024

May 21, 2024 · less than a minute read
Will Van Eaton

Make sure to sign up for our monthly newsletter to receive all the latest AI and LLM news, tutorials, and product updates.

Welcome to this month's Fine-Tuned! Today we're introducing the Fine-tuning Index, a comprehensive assessment of 13 popular open-source LLMs and several leading commercial LLMs evaluated across 31 diverse tasks. Read on to learn more!


Spotlight

After running hundreds of fine-tuning experiments we’ve found Upstage’s Solar LLM to be the best small language model for fine-tuning, and we’re excited to announce it’s available for fine-tuning and serving exclusively on Predibase.

What does it mean to be the best model for fine-tuning? When compared against 15 leading models, including GPT-4, Solar LLM was the best-performing model on over 50% of the measured tasks. When evaluated head-to-head with GPT-4 across 31 tasks, fine-tuned Solar LLM came out on top nearly 85% of the time.

Read more about Solar LLM or try it today in Predibase!

Upcoming Events

Fine-tuning an open-source LLM with speculative decoding typically increases inference throughput by more than 2x without sacrificing output quality. We’re excited to bring Medusa into the Predibase platform in our next release and invite you to join our upcoming webinar for an early preview and technical Q&A with our CTO Travis Addair and ML Engineer Arnav Garg.
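The draft-and-verify idea behind speculative decoding can be sketched with toy stand-ins: a cheap draft model proposes several tokens, and the expensive target model checks them in a single pass, keeping the agreeing prefix. Everything below (the integer-token rules, `target_next`, `draft_next`, the step size `k`) is an illustrative assumption, not Medusa's or Predibase's implementation.

```python
# Toy sketch of draft-and-verify speculative decoding. Real systems use
# a small draft model (or extra decoding heads, as in Medusa) plus one
# parallel verification pass of the large target model; here both
# "models" are simple deterministic functions over integer tokens.

def target_next(prefix):
    """'Expensive' model: next token is the sum of the prefix, mod 10."""
    return sum(prefix) % 10

def draft_next(prefix):
    """'Cheap' approximation: only looks at the last two tokens."""
    return sum(prefix[-2:]) % 10

def speculative_step(prefix, k=4):
    """Draft k tokens, then verify; always emits at least one token."""
    # 1) Draft proposes k tokens autoregressively (cheap).
    ctx, proposal = list(prefix), []
    for _ in range(k):
        t = draft_next(ctx)
        proposal.append(t)
        ctx.append(t)
    # 2) Target verifies all proposals in one conceptual pass, keeping
    #    the agreeing prefix and fixing the first mismatch.
    ctx, accepted = list(prefix), []
    for t in proposal:
        correct = target_next(ctx)
        accepted.append(correct)
        ctx.append(correct)
        if correct != t:
            break  # first disagreement: stop and keep the fix
    return accepted

def generate(prefix, n, k=4):
    """Generate n tokens; returns (tokens, number of target passes)."""
    out, passes = list(prefix), 0
    while len(out) < len(prefix) + n:
        out.extend(speculative_step(out, k))
        passes += 1
    return out[:len(prefix) + n], passes
```

With these toy rules, `generate` reproduces exactly what greedy decoding with `target_next` alone would produce, but in fewer target passes whenever the draft guesses correctly; that output-preserving property is what lets speculative decoding trade extra cheap compute for fewer expensive forward passes.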

Register today!

May 23rd @ 10:00 am PT


Recent Events

Speed-up LLM Development with Gretel and Predibase
Virtual Workshop

In this workshop, you'll learn how to leverage Gretel and Predibase together to quickly and cost-effectively train LLMs that outperform commercial options. You’ll see how to generate synthetic training data with Gretel Navigator and leverage Predibase to fine-tune an open source LLM using state-of-the-art techniques.

Read full story
How we accelerated LLM fine-tuning by 15x in 15 days
Webinar

We’ve taken our experience fine-tuning thousands of LLMs to build a state-of-the-art fine-tuning and serving stack, and now we’re sharing those best practices with you. You’ll learn about optimization techniques like FlashAttention-2, CUDA kernels, batch size tuning, and more.

Watch
Fine-tune Your Own LLMs that Rival GPT-4
Virtual Workshop

In this hands-on virtual workshop, we provide an intro to fine-tuning, including use cases, best practices, and techniques for efficient fine-tuning with LoRA. You'll learn how to use Predibase with $25 in free credits to efficiently fine-tune task-specific models that rival GPT-4 for a series of customer service use cases. Then we show you how to dynamically serve many fine-tuned adapters in real time, all on a single GPU.
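The "many adapters, one GPU" idea can be sketched in miniature: the base weights are shared and frozen, while each fine-tune is stored only as a low-rank pair (A, B) whose product is added to the base weight for whichever adapter a request names. The tenant names, shapes, and `forward` routine below are illustrative assumptions, not the Predibase serving API.

```python
# Toy sketch of multi-adapter LoRA serving: one set of base weights is
# shared across all requests, and each request selects a small low-rank
# adapter (B, A) whose product is added to the base weight on the fly.
# Plain-Python matrices (lists of lists); shapes are tiny for clarity.

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def matadd(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

# Shared base weight W (2x2), frozen for every tenant.
W = [[1.0, 0.0],
     [0.0, 1.0]]

# Each tenant's fine-tune is stored only as a rank-1 delta: B @ A.
adapters = {
    "tenant_a": {"B": [[1.0], [0.0]], "A": [[0.0, 2.0]]},
    "tenant_b": {"B": [[0.0], [1.0]], "A": [[3.0, 0.0]]},
}

def forward(x, adapter_name):
    """Compute x @ (W + B @ A) for the requested adapter."""
    ad = adapters[adapter_name]
    delta = matmul(ad["B"], ad["A"])   # low-rank update, expands to 2x2
    W_eff = matadd(W, delta)           # per-request effective weight
    return matmul([x], W_eff)[0]
```

The design point is memory: a rank-r adapter for a d-by-d layer costs 2·r·d parameters instead of d·d, so many fine-tunes can share a single resident copy of the base model and be swapped per request.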

Watch

From the Predibase Blog

How-to Fine-tune & Serve Llama 3 for Automated Customer Support
Tutorial

In this tutorial, we provide a detailed walkthrough of fine-tuning and serving Llama 3 for a customer support use case using Predibase’s new fine-tuning stack. You’ll learn how to easily and efficiently fine-tune and serve open-source LLMs that perform on par with much larger commercial models for task-specific use cases.

Read full story
Try Fine-tuning with up to 10x Faster Training Times on Predibase
Blog Post

Previously, Predibase used Ludwig as the training engine for all fine-tuning tasks. With this update, we’ve moved to a new fine-tuning stack running on a dedicated A100 cluster, making our training engine even faster and giving users some of the quickest fine-tuning speeds available.

Read full story

Join Our Community!