Fine-Tuned: January 2024

January 28, 2024 · less than a minute read

Make sure to sign up for our monthly newsletter to receive all the latest AI and LLM news, tutorials, and product updates.

We hope your 2024 is off to a great start! 2023 was certainly transformative for AI but we think 2024 will see shifts that make AI even more accessible, reliable, and secure for practitioners. To that end, we’re excited to release our AI and LLM Predictions for 2024, and to announce our first webinar of 2024! Read on for more details.


Featured Event

Fine-Tuning Zephyr-7B to Analyze Customer Support Call Logs

Join us on February 1st at 10:00 am PT to learn how you can leverage open-source LLMs to automate one of the most time-consuming tasks in customer support: classifying customer issues. You will learn how to fine-tune an open-source LLM with just a few lines of code, at a fraction of the cost of a commercial LLM, and how to implement efficient fine-tuning techniques like LoRA and quantization.
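
If you want a head start before the webinar, here is a minimal sketch of the kind of workflow it covers, using open-source Ludwig; the dataset path, column names, and hyperparameters are illustrative placeholders for your own call logs:

```python
# Minimal sketch: LoRA fine-tuning with 4-bit quantization in open-source Ludwig.
# "call_logs.csv" and the "prompt"/"issue_type" columns are illustrative.
import pandas as pd
from ludwig.api import LudwigModel

config = {
    "model_type": "llm",
    "base_model": "HuggingFaceH4/zephyr-7b-beta",
    "input_features": [{"name": "prompt", "type": "text"}],
    "output_features": [{"name": "issue_type", "type": "text"}],
    "adapter": {"type": "lora"},        # parameter-efficient fine-tuning
    "quantization": {"bits": 4},        # fit the 7B model on commodity GPUs
    "trainer": {"type": "finetune", "epochs": 3, "learning_rate": 0.0002},
}

model = LudwigModel(config=config)
train_stats, _, _ = model.train(dataset=pd.read_csv("call_logs.csv"))
```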


Recent Events + Podcasts

SEMINAR

How Fine-tuning Open Source LLMs Solves GenAI Productionization - Piero Molino | Stanford MLSys #94

The availability of generative AI APIs changed the game in how quickly AI-enhanced demos can be built and how fast AI capabilities can be added to products, but it introduced issues with cost, latency, and ownership. This conversation looks at how fine-tuning open-source LLMs addresses these problems, and how open-source technologies like Ludwig and LoRAX make it easy and available to every developer.

Watch
WEBINAR

LoRA Bake-off: Comparing Fine-Tuned Open-source LLMs that Rival GPT-4

In February we launched LoRA Land, a collection of 25+ fine-tuned Mistral-7b models that outperform or rival GPT-4 on specific tasks. Since then, we’ve fine-tuned popular open-source LLMs, including Gemma, Phi, Llama, and Zephyr, on the same 25+ datasets to provide detailed benchmarks on which models perform best across tasks. This webinar takes a close look at the results and our methodology, and shows how you can efficiently fine-tune and serve your own LLMs that are on par with GPT-4.

Watch

Featured Blog Post

LoRA Land: Fine-Tuned Open-Source LLMs that Outperform GPT-4

LoRA Land is a collection of 25 fine-tuned Mistral-7b models that consistently outperform base models by 70% and GPT-4 by 4-15%, depending on the task. LoRA Land’s 25 task-specialized large language models (LLMs) were all fine-tuned with Predibase for less than $8.00 each on average and are all served from a single A100 GPU using LoRAX. Learn more!
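
Serving all 25 adapters from one GPU is possible because LoRAX loads LoRA adapters dynamically, per request, on top of a single shared base model. Here is a minimal sketch using the lorax-client Python package; the endpoint URL and adapter IDs are illustrative:

```python
# Sketch: querying two different fine-tuned adapters served by one LoRAX
# deployment. The URL and adapter IDs below are illustrative placeholders.
from lorax import Client  # pip install lorax-client

client = Client("http://127.0.0.1:8080")

for adapter_id in ["my-org/summarizer-lora", "my-org/sql-gen-lora"]:
    response = client.generate(
        "Summarize: the quarterly report shows...",
        adapter_id=adapter_id,  # LoRAX swaps in this adapter for the request
        max_new_tokens=128,
    )
    print(adapter_id, response.generated_text)
```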


Customer Spotlight

Deep Learning on Snowflake: How Paradigm Built a Personal Trading Experience with Predibase

Paradigm, one of the world’s largest institutional liquidity networks for cryptocurrencies, was able to use Predibase to build a deep learning-based recommendation system for their traders on top of their existing Snowflake Data Cloud with just a few lines of YAML. Building production models on top of Snowflake data, a task that used to take months, now takes minutes.


From the Community

COMMUNITY BLOG

Fine-tuning LLMs for cost effective GenAI inference at scale

This blog post walks through how the team at Tryolabs used GPT-4 to produce titles for unstructured text and then fine-tuned an open-source LLM with Predibase to perform accurate, controlled, and cost-effective inference.

Read full story
NEWS

Predibase joins the AI Alliance

We’re excited to announce that we've joined IBM, Meta, and 50+ organizations as members of the AI Alliance: an international community of leading organizations across industry, academia, research, and government, coming together to support open innovation in AI.

Read full story
COMMUNITY TUTORIAL

Fine Tune mistral-7b-instruct on Predibase with Your Own Data and LoRAX

Rany ElHousieny shares an in-depth, hands-on tutorial walking through using Predibase’s free trial to fine-tune and serve Mistral-7b-instruct.

Read full story
COMMUNITY BLOG

When to Fine-Tune Large Language Models?

Thierry Teisseire shares insights on when, and how, to fine-tune LLMs for task-specific use cases.

Read full story

Open Source Updates

How to Efficiently Fine-Tune Gemma-7B with Open-Source Ludwig

Google recently released Gemma, a state-of-the-art LLM, licensed free of charge for research and commercial use. In this short tutorial, we show you how to easily, reliably, and efficiently fine-tune Gemma-7B-Instruct on readily available hardware using open-source Ludwig. Try it out and share your results with our Ludwig community on Slack.
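
The recipe mirrors the Zephyr sketch earlier in this issue; in Ludwig, roughly only the base model changes. A minimal config sketch follows (the feature names are illustrative, and Gemma requires accepting its license on Hugging Face):

```python
# Sketch: the same LoRA + 4-bit quantization recipe, pointed at Gemma-7B-Instruct.
from ludwig.api import LudwigModel

config = {
    "model_type": "llm",
    "base_model": "google/gemma-7b-it",  # requires license acceptance on HF
    "input_features": [{"name": "prompt", "type": "text"}],
    "output_features": [{"name": "response", "type": "text"}],
    "adapter": {"type": "lora"},
    "quantization": {"bits": 4},
    "trainer": {"type": "finetune"},
}
model = LudwigModel(config=config)
```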


Ludwig v0.9.2 & v0.9.3: These releases introduce a few bug fixes and several new enhancements:

  • Add support for the official microsoft/phi-2 model
  • Ensure the correct padding token is used for Phi and Pythia models
  • Cast LLMEncoder output to torch.float32 and freeze the final layer at init
  • Enable IA3 adapters (see the sketch after this list)
  • Add batch size tuning for LLMs
  • Log per-step token utilization to TensorBoard and the progress tracker
  • Add default LoRA target modules for Mixtral and Mixtral-instruct
  • Support exporting models to Carton (thank you Vivek Panyam!)
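
As a quick illustration of the IA3 item above, here is a minimal Ludwig config sketch; the base model and feature names are illustrative:

```python
# Sketch: opting into the newly added IA3 adapter type in Ludwig.
from ludwig.api import LudwigModel

config = {
    "model_type": "llm",
    "base_model": "microsoft/phi-2",  # official Phi-2 support is also new
    "input_features": [{"name": "prompt", "type": "text"}],
    "output_features": [{"name": "answer", "type": "text"}],
    "adapter": {"type": "ia3"},  # learned rescaling vectors; even fewer
                                 # trainable parameters than LoRA
    "trainer": {"type": "finetune"},
}
model = LudwigModel(config=config)
```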

LoRAX v0.6: The latest release adds support for multi-turn chat conversations with dynamic LoRA adapter loading. Just replace the "model" parameter with any LoRA adapter on Hugging Face and you're set, as sketched below. Chat templates can come from the base model, or from the adapter itself if it uses its own custom template. All of this happens dynamically, per request.
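
A minimal sketch of what that looks like, assuming a running LoRAX deployment with its OpenAI-compatible chat endpoint; the endpoint URL and adapter ID are illustrative:

```python
# Sketch: multi-turn chat against LoRAX via its OpenAI-compatible API.
# The base_url and the adapter ID passed as "model" are illustrative.
from openai import OpenAI

client = OpenAI(api_key="EMPTY", base_url="http://127.0.0.1:8080/v1")

completion = client.chat.completions.create(
    model="my-org/support-classifier-lora",  # any HF LoRA adapter id
    messages=[
        {"role": "system", "content": "You classify customer support issues."},
        {"role": "user", "content": "My card was charged twice this month."},
        {"role": "assistant", "content": "Category: billing. Anything else?"},
        {"role": "user", "content": "Also, the app crashes on login."},
    ],
)
print(completion.choices[0].message.content)
```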

Featured Product Update

We’re excited to launch our new prompting experience in the Predibase UI, which allows you to easily prompt serverless endpoints and your fine-tuned adapters without needing to deploy them first. This lets teams test their fine-tuned models and compare model iterations directly from the UI, enabling much faster test-and-review cycles.


