Fine-Tuned Newsletter: March 2024

March 31, 2024 · less than a minute read

Make sure to sign up for our monthly newsletter to receive all the latest AI and LLM news, tutorials, and product updates.

Welcome to this month's Fine-Tuned! We’re very excited to have recently launched a new short course on DeepLearning.AI developed by our CTO Travis Addair called “Efficiently Serving LLMs”. This edition of Fine-Tuned also covers a host of product updates, including platform optimizations that make fine-tuning up to 5x faster! Read more on our blog or below.


Solar LLM: Now Available for Fine-Tuning on Predibase

After running hundreds of fine-tuning experiments, we’ve found Upstage’s Solar LLM to be the best small language model for fine-tuning, and we’re excited to announce it’s available for fine-tuning and serving exclusively on Predibase.

What does it mean to be the best model for fine-tuning? When compared against 15 leading models, including GPT-4, Solar LLM was the best-performing model on over 50% of the measured tasks. When evaluated head-to-head with GPT-4 across 31 tasks, fine-tuned Solar LLM came out on top nearly 85% of the time.

Read more about Solar LLM or try it today in Predibase!

Recent Events + Podcasts

How Fine-tuning Open Source LLMs Solves GenAI Productionization - Piero Molino | Stanford MLSys #94
SEMINAR

The availability of generative AI APIs changed the game in how quickly AI-enhanced demos can be built and how fast AI capabilities can be added to products, but it also introduced issues with cost, latency, and ownership. This conversation looks at how fine-tuning open source LLMs solves all of these problems, and how open source technologies like Ludwig and LoRAX make it easy and accessible to every developer.

Watch
LoRA Bake-off: Comparing Fine-Tuned Open-source LLMs that Rival GPT-4
WEBINAR

In February we launched LoRA Land, a collection of 25+ fine-tuned Mistral-7b models that outperform or rival GPT-4 on specific tasks. Since then, we’ve fine-tuned other popular open-source LLMs (Gemma, Phi, Llama, and Zephyr) on the same 25+ datasets to provide detailed benchmarks on which models perform best across tasks. This webinar takes a close look at the results and our methodology, and shows how you can efficiently fine-tune and serve your own LLMs that are on par with GPT-4.

Watch

From the Predibase Blog

How to Fine-tune & Serve Llama 3 for Automated Customer Support
Tutorial

In this tutorial, we provide a detailed walkthrough of fine-tuning and serving Llama 3 for a customer support use case using Predibase’s new fine-tuning stack. You’ll learn how to easily and efficiently fine-tune and serve open-source LLMs that perform on par with much larger commercial models for task-specific use cases.
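Much of that efficiency comes from LoRA, which trains two small low-rank matrices per target weight instead of the full weight matrix. As a rough, illustrative calculation (dimensions based on Llama-3-8B's hidden size; the rank and target modules are common defaults, not necessarily what the tutorial uses):

```python
# Back-of-the-envelope: trainable parameters for a LoRA adapter vs. full
# fine-tuning. Dimensions are illustrative (Llama-3-8B uses a hidden size
# of 4096); rank and targets are typical choices, not Predibase's exact config.

hidden_size = 4096   # model hidden dimension (d)
num_layers = 32      # transformer blocks in Llama-3-8B
rank = 16            # LoRA rank (r), a common default

# LoRA learns a low-rank update A (d x r) and B (r x d) for each frozen
# d x d matrix, so each adapted matrix adds 2 * d * r trainable parameters.
params_per_matrix_lora = 2 * hidden_size * rank
params_per_matrix_full = hidden_size * hidden_size

# Suppose we target 4 projection matrices per layer (q, k, v, o).
targets_per_layer = 4
lora_params = num_layers * targets_per_layer * params_per_matrix_lora
full_params = num_layers * targets_per_layer * params_per_matrix_full

print(f"LoRA trainable params: {lora_params:,}")
print(f"Full fine-tune params (same matrices): {full_params:,}")
print(f"Reduction: {full_params // lora_params}x")
```

With these illustrative numbers, the adapter trains roughly 17M parameters instead of over 2B for the same matrices, which is why fine-tuned adapters stay small enough to serve many of them alongside one base model.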

Read full story
Try Fine-tuning with up to 10x Faster Training Times on Predibase
Blog Post

Previously, Predibase used Ludwig as the training engine for all fine-tuning tasks. With this update, we’ve moved to a new fine-tuning stack running on a dedicated A100 cluster, making our training engine even faster and giving users some of the quickest fine-tuning speeds available.

Read full story

Recent Events

Speed up LLM Development with Gretel and Predibase
Virtual Workshop

In this workshop, you'll learn how to leverage Gretel and Predibase together to quickly and cost-effectively train LLMs that outperform commercial options. You’ll see how to generate synthetic training data with Gretel Navigator and leverage Predibase to fine-tune an open source LLM using state-of-the-art techniques.

Read full story
How we accelerated LLM fine-tuning by 15x in 15 days
Webinar

We’ve taken our experience fine-tuning thousands of LLMs to build a state-of-the-art fine-tuning and serving stack, and now we’re sharing those best practices with you. You’ll learn about optimization techniques like Flash Attention 2, CUDA kernels, batch size tuning, and more.
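Batch size tuning, one of the levers mentioned above, often means pairing a memory-friendly micro-batch with gradient accumulation. A minimal sketch of the arithmetic, with hypothetical numbers rather than our actual settings:

```python
# Illustration of batch size tuning with gradient accumulation.
# All numbers below are hypothetical, not Predibase's actual settings.

def effective_batch_size(micro_batch: int, accum_steps: int, num_gpus: int = 1) -> int:
    """Batch size per optimizer step: micro-batches accumulated across steps and GPUs."""
    return micro_batch * accum_steps * num_gpus

# A 7B model on a single A100 might only fit a small micro-batch in memory;
# accumulating gradients recovers a larger effective batch without extra memory.
micro_batch = 4
accum_steps = 8
print(effective_batch_size(micro_batch, accum_steps))              # single GPU
print(effective_batch_size(micro_batch, accum_steps, num_gpus=8))  # 8-GPU cluster
```

The trade-off is throughput versus memory: a larger micro-batch keeps the GPU busier per step, while more accumulation steps reach the same effective batch at lower peak memory.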

Watch
Fine-tune Your Own LLMs that Rival GPT-4
Virtual Workshop

In this hands-on virtual workshop, we provide an intro to fine-tuning, including use cases, best practices, and techniques for efficient fine-tuning with LoRA. You'll learn how to use Predibase, with $25 in free credits, to efficiently fine-tune task-specific models that rival GPT-4 for a series of customer service use cases. Then we show you how to dynamically serve many fine-tuned adapters in real time, all on a single GPU.
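To make the "many adapters on a single GPU" idea concrete, here is a toy sketch of per-request adapter routing with LRU eviction: one shared base model, many small LoRA adapters loaded on demand and selected per request. Names and structure are hypothetical; LoRAX's actual implementation additionally batches concurrent requests across adapters on the GPU:

```python
# Toy sketch of dynamic adapter routing: a single base model stays resident
# while small LoRA adapters are loaded on demand and evicted least-recently-used.
# Class and method names are illustrative, not LoRAX's real API.

from collections import OrderedDict

class AdapterRouter:
    def __init__(self, capacity: int):
        self.capacity = capacity      # max adapters resident at once
        self.loaded = OrderedDict()   # adapter_id -> weights, in LRU order

    def _load_weights(self, adapter_id: str):
        # Placeholder for fetching LoRA weights from storage.
        return f"weights:{adapter_id}"

    def get(self, adapter_id: str):
        if adapter_id in self.loaded:
            self.loaded.move_to_end(adapter_id)    # cache hit: mark recently used
        else:
            if len(self.loaded) >= self.capacity:
                self.loaded.popitem(last=False)    # evict least recently used
            self.loaded[adapter_id] = self._load_weights(adapter_id)
        return self.loaded[adapter_id]

router = AdapterRouter(capacity=2)
router.get("customer-support")
router.get("sql-codegen")
router.get("customer-support")   # hit: moves to most recently used
router.get("summarization")      # evicts "sql-codegen"
print(list(router.loaded))
```

Because each adapter is a small fraction of the base model's size, a cache like this can keep dozens of task-specific models hot on one GPU instead of dedicating a GPU per fine-tune.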

Watch

Join Our Community!