Fine-Tuned: January 2024

January 28, 2024 · less than a minute read

Make sure to sign up for our monthly newsletter to receive all the latest AI and LLM news, tutorials, and product updates.

We hope your 2024 is off to a great start! 2023 was certainly transformative for AI but we think 2024 will see shifts that make AI even more accessible, reliable, and secure for practitioners. To that end, we’re excited to release our AI and LLM Predictions for 2024, and to announce our first webinar of 2024! Read on for more details.


Featured Event

Fine-Tuning Zephyr-7B to Analyze Customer Support Call Logs

Join us on February 1st at 10:00 am PT to learn how you can leverage open-source LLMs to automate one of the most time-consuming tasks in customer support: classifying customer issues. You'll learn how to efficiently and cost-effectively fine-tune an open-source LLM with just a few lines of code, at a fraction of the cost of using a commercial LLM, and how to easily implement efficient fine-tuning techniques like LoRA and quantization.
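If you're curious what the LoRA technique mentioned above actually does, here's a minimal numpy sketch (illustrative only, not webinar code): instead of updating a full weight matrix, LoRA trains two small low-rank matrices, which is why fine-tuning stays cheap.

```python
import numpy as np

# Illustrative LoRA sketch: the base weight W stays frozen; only the
# low-rank matrices A (down-projection) and B (up-projection) are trained.
d_in, d_out, r, alpha = 64, 64, 8, 16

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))        # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01     # trainable, small random init
B = np.zeros((d_out, r))                  # trainable, zero init

x = rng.normal(size=(d_in,))

# Forward pass with the adapter: base output plus a scaled low-rank correction.
y = W @ x + (alpha / r) * (B @ (A @ x))

full_params = d_in * d_out                # parameters in a full fine-tune
lora_params = r * (d_in + d_out)          # parameters LoRA actually trains
print(f"LoRA trains {lora_params} params vs {full_params} for a full update")
```

With r = 8 on a 64x64 layer, LoRA trains a quarter of the parameters; on real 7B-parameter models the ratio is far more dramatic, which is what makes single-GPU fine-tuning practical.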


Recent Events + Podcasts

LoRA Land: How We Trained 25 Fine-Tuned Mistral-7b Models that Outperform GPT-4
WEBINAR


LoRA Land is a collection of 25+ fine-tuned Mistral-7b models that outperform GPT-4 in task-specific applications, and it provides a blueprint for teams looking to quickly and cost-effectively deploy AI systems. Learn how our team built LoRA Land in this in-depth overview.

Watch
Fine-Tuning Zephyr-7B to Analyze Customer Support Call Logs
WEBINAR


In this demo we show how engineering teams can leverage open-source Large Language Models (LLMs) to automate one of the most time-consuming tasks in customer support: classifying customer issues. You'll learn how to efficiently and cost-effectively fine-tune the open-source Zephyr model to accurately predict the Task Type for customer support requests, all with just a few lines of code and at a fraction of the cost of using a commercial LLM.

Watch
5 Reasons Why Adapters are the Future of Fine-tuning LLMs
WEBINAR


Watch this on-demand session and demo with Daliana Liu, host of ML Real Talk, and Geoffrey Angus, engineering leader at Predibase and co-maintainer of the popular open-source LLM projects Ludwig and LoRAX, for a deep dive into all things efficient fine-tuning and adapter-based training.

Watch
Data Driven: Powering Real-World AI with Declarative AI and Open Source
PODCAST


Predibase CEO Devvret Rishi sits down with Frank La Vigne, co-host of the Data Driven Podcast, to talk about the importance of open-source LLMs and declarative ML.

Read full story

Featured Blog Post

LoRA Land: Fine-Tuned Open-Source LLMs that Outperform GPT-4

LoRA Land is a collection of 25 fine-tuned Mistral-7b models that consistently outperform base models by 70% and GPT-4 by 4-15%, depending on the task. LoRA Land’s 25 task-specialized large language models (LLMs) were all fine-tuned with Predibase for less than $8.00 each on average and are all served from a single A100 GPU using LoRAX. Learn more!


Customer Spotlight

Deep Learning on Snowflake: How Paradigm Built a Personal Trading Experience with Predibase

Paradigm, one of the world’s largest institutional liquidity networks for cryptocurrencies, was able to use Predibase to build a deep learning-based recommendation system for their traders on top of their existing Snowflake Data Cloud with just a few lines of YAML. Building production models on top of Snowflake data, a task that used to take months, now takes minutes.


From the Community

Fine-tuning LLMs for cost-effective GenAI inference at scale
COMMUNITY BLOG


This blog post walks through how the team at Tryolabs used GPT-4 to produce titles for unstructured text and then fine-tuned an open-source LLM with Predibase to perform accurate, controlled, and cost-effective inference.

Read full story
Predibase joins the AI Alliance
NEWS


We’re excited to announce that we've joined IBM, Meta and 50+ organizations as members of the AI Alliance: an international community of leading organizations across industry, academia, research and government, coming together to support open innovation in AI.

Read full story
Fine Tune mistral-7b-instruct on Predibase with Your Own Data and LoRAX
COMMUNITY TUTORIAL


Rany ElHousieny shares an in-depth, hands-on tutorial walking through using Predibase’s free trial to fine-tune and serve Mistral-7b-instruct.

Read full story
When to Fine-Tune Large Language Models?
COMMUNITY BLOG


Thierry Teisseire shares insights on when, and how, to fine-tune LLMs for task-specific use cases.

Read full story

Open Source Updates

How to Efficiently Fine-Tune Gemma-7B with Open-Source Ludwig

Google recently released Gemma, a state-of-the-art LLM, licensed free of charge for research and commercial use. In this short tutorial, we show you how to easily, reliably, and efficiently fine-tune Gemma-7B-Instruct on readily available hardware using open-source Ludwig. Try it out and share your results with our Ludwig community on Slack.
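To give a feel for the declarative approach, here's a condensed Ludwig-style config expressed as a Python dict. This is a sketch based on the general shape of Ludwig's LLM fine-tuning configs; the exact field names, feature names, and hyperparameters here are illustrative, so check the tutorial and the Ludwig docs for what your Ludwig version supports.

```python
# Sketch of a declarative Ludwig config for LoRA + 4-bit fine-tuning of Gemma.
# Field values (epochs, learning rate, feature names) are illustrative.
config = {
    "model_type": "llm",
    "base_model": "google/gemma-7b-it",
    "adapter": {"type": "lora"},       # parameter-efficient fine-tuning
    "quantization": {"bits": 4},       # fit the model on commodity GPUs
    "input_features": [{"name": "prompt", "type": "text"}],
    "output_features": [{"name": "response", "type": "text"}],
    "trainer": {"type": "finetune", "epochs": 3, "learning_rate": 2e-4},
}

# With Ludwig installed, training is then roughly:
#   from ludwig.api import LudwigModel
#   model = LudwigModel(config=config)
#   model.train(dataset="train.csv")
```

The point of the declarative style is that swapping LoRA ranks, quantization bits, or even the base model is a one-line config change rather than a code rewrite.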


Ludwig v0.9.2 & v0.9.3: These releases introduce a few bug fixes and several new enhancements:

  • Add support for the official microsoft/phi-2 model
  • Ensure correct padding token for Phi and Pythia models
  • Cast LLMEncoder output to torch.float32, freeze final layer at init
  • Enable IA3 adapters
  • Add batch size tuning for LLMs
  • Log per-step token utilization to TensorBoard and the progress tracker
  • Default LoRA target modules for Mixtral and Mixtral-instruct
  • Support for exporting models to Carton (thank you Vivek Panyam!)

LoRAX v0.6: The latest release adds support for multi-turn chat conversations with dynamic LoRA adapter loading. Just replace the "model" parameter with any HF LoRA and you're set. Chat templates can come from the base model, or even LoRA itself if it uses its own custom template. All of this happens dynamically per request.
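To make the dynamic-adapter flow concrete, here's a sketch of what such a multi-turn chat request body could look like, using the familiar OpenAI-style message schema. The endpoint URL and adapter id below are placeholders we made up for illustration; substitute any Hugging Face LoRA adapter that matches your base model.

```python
import json

# Sketch of a LoRAX multi-turn chat request. Per the v0.6 release notes,
# setting "model" to a HF LoRA adapter id makes LoRAX load that adapter
# dynamically for this request. Adapter id and URL below are placeholders.
payload = {
    "model": "your-org/your-lora-adapter",   # any HF LoRA for your base model
    "messages": [
        {"role": "user", "content": "Classify: 'App crashes on login.'"},
        {"role": "assistant", "content": "Category: Bug / Authentication"},
        {"role": "user", "content": "Now classify: 'How do I export my data?'"},
    ],
    "max_tokens": 64,
}

body = json.dumps(payload)
# POST this body to your LoRAX server, e.g. with the requests library:
#   requests.post("http://localhost:8080/v1/chat/completions",
#                 data=body, headers={"Content-Type": "application/json"})
```

Because the adapter is resolved per request, a single deployment can serve many fine-tuned variants; the chat template is taken from the base model, or from the LoRA itself if it ships its own.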

Featured Product Update

We’re excited to launch our new prompting experience in the Predibase UI, which allows you to easily prompt serverless endpoints and your fine-tuned adapters without needing to deploy them first. This lets teams test their fine-tuned models and compare model iterations all from the UI, enabling much faster test and review cycles.



Join Our Community!