Fine-Tuned Newsletter: March 2024

March 31, 2024 · less than a minute read

Make sure to sign up for our monthly newsletter to receive the latest AI and LLM news, tutorials, and product updates.

Welcome to this month's Fine-Tuned! We’re very excited to have recently launched a new short course on DeepLearning.AI developed by our CTO Travis Addair called “Efficiently Serving LLMs”. This edition of Fine-Tuned also covers a host of product updates, including platform optimizations that make fine-tuning up to 5x faster! Read more on our blog or below.


Spotlight

Predibase CTO Travis Addair developed this course in conjunction with the DeepLearning.AI team, drawing on his experience building scalable, cost-efficient, and performant serving infrastructure at Predibase and Uber. Take this FREE course to get his best practices and deep technical insights on what it takes to productionize LLMs at scale.

Sign up here.

Recent Events + Podcasts

How Fine-tuning Open Source LLMs Solves GenAI Productionization - Piero Molino | Stanford MLSys #94
SEMINAR

The availability of generative AI APIs changed the game in how quickly AI-enhanced demos can be built and how fast AI capabilities can be added to products, but it also introduced issues with cost, latency, and ownership. This conversation looks at how fine-tuning open-source LLMs addresses these problems, and how open-source technologies like Ludwig and LoRAX make it easy and accessible to every developer.

Watch
LoRA Bake-off: Comparing Fine-Tuned Open-source LLMs that Rival GPT-4
WEBINAR

In February we launched LoRA Land, a collection of 25+ fine-tuned Mistral-7b models that outperform or rival GPT-4 on specific tasks. Since then, we’ve fine-tuned popular open-source LLMs (Gemma, Phi, Llama, and Zephyr) on the same 25+ datasets to provide detailed benchmarks on which models perform best across tasks. This webinar takes a close look at the results and our methodology, and shows how you can efficiently fine-tune and serve your own LLMs that are on par with GPT-4.

Watch

From the Predibase Blog

LoRAX + Outlines: Better JSON Extraction with Structured Generation and LoRA
PREDIBASE BLOG

This blog covers two popular methods for extracting and generating JSON using LLMs: structured generation and fine-tuning. We show how you can specify an Outlines-compatible schema in your request to generate output that adheres to a specific format.
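To make the idea concrete, here is a minimal sketch of what such a structured-generation request could look like. This is an illustrative example only: the field names (`response_format`, `adapter_id`) and the endpoint shown in the comment are assumptions modeled on LoRAX-style serving APIs, so check the blog post and your deployment's docs for the exact request shape.

```python
import json

# Illustrative sketch: build a request body that asks the server to
# constrain generation to a JSON schema. Field names ("response_format",
# "adapter_id") are assumptions and may differ in your deployment.

# JSON schema the model's output must conform to.
invoice_schema = {
    "type": "object",
    "properties": {
        "vendor": {"type": "string"},
        "total": {"type": "number"},
        "currency": {"type": "string"},
    },
    "required": ["vendor", "total"],
}

def build_payload(prompt, schema, adapter_id=None):
    """Assemble a structured-generation request body."""
    parameters = {
        "max_new_tokens": 256,
        # Ask the server to constrain decoding to the schema.
        "response_format": {"type": "json_object", "schema": schema},
    }
    if adapter_id:
        # Optionally route the request to a fine-tuned LoRA adapter.
        parameters["adapter_id"] = adapter_id
    return {"inputs": prompt, "parameters": parameters}

payload = build_payload(
    "Extract the vendor, total, and currency from this invoice: ...",
    invoice_schema,
    adapter_id="my-org/invoice-extractor",  # hypothetical adapter name
)

# To send it against a running server (endpoint path is an assumption):
# import requests
# resp = requests.post("http://localhost:8080/generate", json=payload)
# print(json.loads(resp.json()["generated_text"]))
print(json.dumps(payload, indent=2))
```

Because the output is constrained at decode time rather than validated after the fact, responses that parse as JSON and match the schema come back on the first try instead of requiring retry loops.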

Read full story
Introducing the Command Center for Managing Your LLM Deployments
PREDIBASE BLOG

We recently introduced a new Deployments page in the Predibase UI that allows you to view and manage every aspect of your serverless or dedicated deployments.

Read full story
Product Updates - March 2024
PREDIBASE BLOG

Learn more about recent Predibase updates, including a series of optimizations that make fine-tuning jobs run 2-5x faster. All SaaS customers, including those on the free trial, now have access to fine-tuning on A100 GPUs at no extra cost.

Read full story

From the Community

Tailoring Intelligence: Fine-tuning, alignment and model merging (pt 1)
COMMUNITY BLOG

Daniel Porras Reyes from Flybridge provides a thorough overview of fine-tuning and details how fine-tuning many smaller, task-specific, open-source models can outperform larger general models at a fraction of the cost.

Read full story
Ludwig Hackathon Winner: Streamlining Customer Support with Ludwig: A Low-code framework for building custom AI models
COMMUNITY BLOG

Divyansh Mishra, a member of a winning team from our recent Ludwig hackathon, wrote about the team’s project, in which they used Ludwig to fine-tune LLMs for intent classification of support-call transcripts.

Read full story
Ludwig Hackathon Winner: Building a Tax FAQ Chatbot with LLMs
COMMUNITY USE CASE

We recently held a hackathon for the Ludwig community focused on fine-tuning LLMs. In this short video, hackathon participant Yogesh Kulkarni shares details on his project focused on building a chatbot for India’s Goods and Services Tax FAQ data.

Watch
Ludwig Hackathon Winner: Assessing Health Data with ML
COMMUNITY USE CASE

Luliana Stroia, another Ludwig hackathon winner, shares more about her winning project, which uses data from an Oura smart ring to predict well-being factors, for women in particular.

Watch

Join Our Community!