Fine-Tuned: February-March 2024

March 4, 2024 · less than a minute read

Make sure to sign up for our monthly newsletter to receive all the latest AI and LLM news, tutorials, and product updates.

Welcome back to Predibase Fine-Tuned! On February 20th we launched LoRA Land to demonstrate how fine-tuned open-source LLMs can rival or outperform GPT-4 on task-specific use cases for a fraction of the cost! Learn more about LoRA Land and catch up on recent webinars, blog posts, and exciting stories from the community.


Recent Events + Podcasts

LoRA Land: How We Trained 25 Fine-Tuned Mistral-7b Models that Outperform GPT-4
WEBINAR

LoRA Land is a collection of 25+ fine-tuned Mistral-7b models that outperform GPT-4 on task-specific applications, and it provides a blueprint for teams looking to quickly and cost-effectively deploy AI systems. Learn how our team built LoRA Land in this in-depth overview.

Watch
Fine-Tuning Zephyr-7B to Analyze Customer Support Call Logs
WEBINAR

In this demo, we show how engineering teams can leverage open-source large language models (LLMs) to automate one of the most time-consuming tasks in customer support: classifying customer issues. You’ll learn how to efficiently and cost-effectively fine-tune the open-source Zephyr model to accurately predict the Task Type of customer support requests, with just a few lines of code and at a fraction of the cost of a commercial LLM.
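The core of a classification fine-tuning job like this is turning raw support tickets into prompt/completion pairs. Here is a minimal, illustrative sketch of that data-prep step; the field names and prompt template are assumptions for illustration, not Predibase’s actual schema:

```python
# Illustrative data prep: convert raw support tickets into
# instruction-style prompt/completion pairs for fine-tuning.
# Field names ("text", "task_type") and the template are assumed.

def to_example(ticket: dict) -> dict:
    prompt = (
        "Classify the task type of the following customer support request.\n\n"
        f"Request: {ticket['text']}\n\n"
        "Task Type:"
    )
    return {"prompt": prompt, "completion": ticket["task_type"]}

tickets = [
    {"text": "I was charged twice this month.", "task_type": "Billing"},
    {"text": "The app crashes on login.", "task_type": "Bug Report"},
]
examples = [to_example(t) for t in tickets]
```

Each example pairs a fixed instruction with the ticket body, so the fine-tuned model learns to emit only the label after "Task Type:".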

Watch
5 Reasons Why Adapters are the Future of Fine-tuning LLMs
WEBINAR

Watch this on-demand session and demo with Daliana Liu, host of ML Real Talk, and Geoffrey Angus, engineering leader at Predibase and co-maintainer of the popular open-source LLM projects Ludwig and LoRAX, for a deep dive into all things efficient fine-tuning and adapter-based training.

Watch
Data Driven: Powering Real-World AI with Declarative AI and Open Source
PODCAST

Predibase CEO Devvret Rishi sits down with Frank La Vigne, co-host of the Data Driven Podcast, to talk about the importance of open-source LLMs and declarative ML.

Read full story

Featured Blog Post

LoRA Land: Fine-Tuned Open-Source LLMs that Outperform GPT-4

LoRA Land is a collection of 25 fine-tuned Mistral-7b models that consistently outperform base models by 70% and GPT-4 by 4-15%, depending on the task. LoRA Land’s 25 task-specialized large language models (LLMs) were all fine-tuned with Predibase for less than $8.00 each on average and are all served from a single A100 GPU using LoRAX. Learn more!
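Some back-of-the-envelope arithmetic shows why dozens of LoRA adapters can share one A100. The dimensions below are the published Mistral-7B shapes (hidden size 4096, 32 layers, a 1024-wide grouped-query KV projection); the rank and target modules are illustrative assumptions, not the exact LoRA Land recipe:

```python
# Why 25 LoRA adapters fit on a single GPU: each adapter replaces a
# full weight update with two low-rank factors, A (d_in x r) and
# B (r x d_out), so its size is r * (d_in + d_out) per adapted matrix.

HIDDEN = 4096   # Mistral-7B hidden size
KV_DIM = 1024   # grouped-query attention KV projection width
LAYERS = 32
RANK = 16       # assumed LoRA rank (illustrative)

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    return rank * (d_in + d_out)

# Assume adapters on the query and value projections of every layer.
per_layer = lora_params(HIDDEN, HIDDEN, RANK) + lora_params(HIDDEN, KV_DIM, RANK)
total = per_layer * LAYERS          # ~6.8M parameters per adapter

fp16_mb = total * 2 / 1e6           # 2 bytes per fp16 parameter
print(f"one adapter: {total:,} params (~{fp16_mb:.0f} MB in fp16)")
print(f"25 adapters: ~{25 * fp16_mb / 1e3:.2f} GB on top of one base model")
```

At roughly 0.1% of the 7B base model’s size per adapter, LoRAX can keep the base weights loaded once and hot-swap adapters per request, which is the serving trick the blog post describes.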

From the Predibase Blog

The Definitive Guide to Fine-Tuning LLMs - Insights for tackling the 4 biggest challenges of fine-tuning
PREDIBASE eBOOK

Fine-tuning has emerged as a reliable method for improving the accuracy of pre-trained open-source models like Llama-2, cutting down on the time and computational resources needed compared to training a language model from scratch or investing in a costly commercial LLM. Our definitive guide provides practical advice for overcoming the four primary challenges teams face when fine-tuning LLMs.

Read full story
Introducing the first purely serverless solution for fine-tuned LLMs
PREDIBASE BLOG

Fine-tuning open-source language models has become the de facto way to customize and build task-specific LLMs today. However, teams have still needed to deploy entire GPUs just to serve these fine-tuned models. Predibase’s Serverless Fine-Tuned Endpoints let you simply pay per token, so you only pay for the compute you use.

Read full story
7 Things You Need to Know About Fine-tuning LLMs
PREDIBASE BLOG

This post summarizes a recent webinar covering top use cases for fine-tuning, the biggest issues teams face when fine-tuning, how to serve fine-tuned LLMs in production, and more.

Read full story
How to Efficiently Fine-Tune CodeLlama-70B-Instruct with Predibase
PREDIBASE BLOG

This hands-on tutorial shows you how to fine-tune CodeLlama-70B-Instruct with Predibase for your specific use case. Follow along with the provided Google Colab notebook and get started with $25 of credits in the Predibase free trial.

Read full story

From the Community

Fine-tuning LLMs for cost effective GenAI inference at scale
COMMUNITY BLOG

This blog post walks through how the team at Tryolabs used GPT-4 to produce titles for unstructured text and then fine-tuned an open-source LLM with Predibase to perform accurate, controlled, and cost-effective inference.

Read full story
Predibase joins the AI Alliance
NEWS

We’re excited to announce that we've joined IBM, Meta and 50+ organizations as members of the AI Alliance: an international community of leading organizations across industry, academia, research and government, coming together to support open innovation in AI.

Read full story
Fine Tune mistral-7b-instruct on Predibase with Your Own Data and LoRAX
COMMUNITY TUTORIAL

Rany ElHousieny shares an in-depth, hands-on tutorial walking through using Predibase’s free trial to fine-tune and serve Mistral-7b-instruct.

Read full story
When to Fine-Tune Large Language Models?
COMMUNITY BLOG

Thierry Teisseire shares insights on when, and how, to fine-tune LLMs for task-specific use cases.

Read full story

Open Source Updates

How to Efficiently Fine-Tune Gemma-7B with Open-Source Ludwig

Google recently released Gemma, a state-of-the-art LLM, licensed free of charge for research and commercial use. In this short tutorial, we show you how to easily, reliably, and efficiently fine-tune Gemma-7B-Instruct on readily available hardware using open-source Ludwig. Try it out and share your results with our Ludwig community on Slack.
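Ludwig’s declarative approach means a fine-tuning job is mostly a config. The sketch below follows the shape of Ludwig’s LLM fine-tuning configuration (as of the v0.10 line); the feature names, hyperparameters, and dataset path are illustrative assumptions, so check the tutorial for the exact recipe:

```python
# A minimal sketch of a Ludwig LLM fine-tuning config as a Python dict.
# Keys follow Ludwig's declarative LLM schema; the specific feature
# names, epochs, and quantization settings are illustrative only.

config = {
    "model_type": "llm",
    "base_model": "google/gemma-7b-it",
    "input_features": [{"name": "instruction", "type": "text"}],
    "output_features": [{"name": "output", "type": "text"}],
    "adapter": {"type": "lora"},          # parameter-efficient LoRA adapter
    "quantization": {"bits": 4},          # fit 7B weights on commodity GPUs
    "trainer": {"type": "finetune", "epochs": 3, "batch_size": 1},
}

# Training would then be (requires ludwig installed and a dataset file):
# from ludwig.api import LudwigModel
# LudwigModel(config=config).train(dataset="train.csv")
```

The combination of a LoRA adapter and 4-bit quantization is what makes a 7B model trainable on the "readily available hardware" the tutorial mentions.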

Ludwig 10k Stars LLM Fine-tuning Hackathon Winners

In celebration of our 10,000th star on GitHub, we invited the Ludwig community to participate in a mini virtual hackathon. The requirements were simple: fine-tune an open-source LLM for a cool use case or project of your choosing. We’re excited to share our winners and highlight their amazing work, including their notebooks, so the whole Ludwig community can benefit from their efforts.

Ludwig v0.10.0 and v0.10.1 were released in February!

Updates include support for Google’s Gemma 2B / 7B models, added Phi-2 to model presets, added support for prompt lookup decoding during generation, and more!


Join Our Community!