Fine-Tuned Newsletter: June 2024

July 2, 2024 · less than a minute read
Will Van Eaton

Make sure to sign up for our monthly newsletter to receive all the latest AI and LLM news, tutorials, and product updates.

Welcome to this month's Fine-Tuned! Today we're naming the best small language model for fine-tuning, breaking down Apple's reference architecture for GenAI, and sharing five reasons why LoRA adapters are the future of fine-tuning. Read on to learn more!


Spotlight

After running hundreds of fine-tuning experiments, we’ve found Upstage’s Solar LLM to be the best small language model for fine-tuning, and we’re excited to announce that it’s available for fine-tuning and serving exclusively on Predibase.

What does it mean to be the best model for fine-tuning? Compared against 15 leading models, including GPT-4, Solar LLM was the best-performing model in over 50% of the measured tasks. Evaluated head to head with GPT-4 across 31 tasks, fine-tuned Solar LLM came out on top nearly 85% of the time.

Read more about Solar LLM or try it today in Predibase!

Upcoming Events

Solar LLM: The Best LLM for Fine-tuning that Beats GPT-4
Webinar

July 11, 2024, 10:00 am
We are also excited to announce that Solar LLM is available for fine-tuning exclusively on Predibase, and you can try it yourself today as part of our free trial.

Read full story
Arize:Observe
Event

July 11, 2024, 9:00 am - 7:00 pm, San Francisco, CA
If you’re planning to attend Observe, be sure to catch ML Engineer Arnav Garg’s talk on fine-tuning open-source LLMs that outperform popular closed-source models.

Read full story

Recent Events

Snowflake + Predibase: Smaller, faster & cheaper LLMs that beat GPT-4
Webinar

In this webinar, we addressed the challenge of high costs associated with commercial LLMs in production environments and demonstrated how Snowflake and Predibase enabled teams to train small, task-specific LLMs that outperformed commercial models at a lower cost.

Watch
How to 2x LLM Inference Speeds with Speculative Decoding Fine-tuning
Webinar

CTO Travis Addair and ML Engineer Arnav Garg discussed how they built the fastest, most cost-effective inference platform for fine-tuned LLMs through optimizations like speculative decoding. Learn how faster inference leads to more efficient compute utilization and industry-leading total cost of ownership.

Watch
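The core idea behind speculative decoding is easy to sketch: a cheap draft model proposes several tokens at once, and the expensive target model verifies them all in a single pass, so the output stays identical to what the target model alone would generate. Here is a minimal toy illustration; both "models" are hypothetical deterministic stand-ins, not the actual systems discussed in the webinar:

```python
def target_next(prefix):
    # Hypothetical large target model: a deterministic next-token rule.
    return (sum(prefix) + 1) % 50

def draft_next(prefix):
    # Hypothetical small draft model: agrees with the target except at
    # every fourth position, where it guesses wrong.
    t = (sum(prefix) + 1) % 50
    return t if len(prefix) % 4 else (t + 1) % 50

def speculative_decode(prefix, steps, k=4):
    """Generate `steps` tokens, counting how often the target is invoked."""
    out = list(prefix)
    target_calls = 0
    while steps > 0:
        # 1. The draft model cheaply proposes up to k tokens.
        proposal, cur = [], list(out)
        for _ in range(min(k, steps)):
            tok = draft_next(cur)
            proposal.append(tok)
            cur.append(tok)
        # 2. The target model verifies the whole proposal in one pass.
        target_calls += 1
        cur, accepted = list(out), 0
        for tok in proposal:
            correct = target_next(cur)
            cur.append(correct)   # always keep the target's token
            accepted += 1
            if tok != correct:    # the first mismatch ends this round
                break
        out = cur
        steps -= accepted
    return out, target_calls

tokens, calls = speculative_decode([1], steps=12, k=4)
print(calls)  # far fewer target passes than the 12 a naive loop needs
```

Because the verification step always keeps the target's token, the generated sequence is exactly what greedy decoding with the target model alone would produce; the speedup comes purely from batching several positions into each target pass.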
Speed Up LLM Development with Synthetic Data and Fine-tuning
Webinar

In this workshop, participants learned to combine Gretel and Predibase for efficient, cost-effective LLM training that surpassed commercial options. They saw how to create synthetic data with Gretel Navigator and use Predibase to fine-tune an open-source LLM using advanced methods.

Watch

From the Community

CB Insights Enterprise AI Roadmap
Report

We were recently featured on the cover of the CB Insights Enterprise AI Roadmap report! The report notes the rising prominence of Small Language Models (SLMs) and their potential to outperform larger commercial LLMs. It also features feedback from one of our customers, whose high satisfaction with our platform underscores our commitment to customer success.

Read full story
Building Predibase: AI, Open-Source, and Commercialization with Dev Rishi and Travis Addair
Podcast

In this episode of the Founded & Funded podcast, hosted by Vivek Ramaswami of Madrona Ventures, our CEO Dev Rishi and CTO Travis Addair discussed why fine-tuned smaller models are poised to dominate the future, challenging the notion that bigger models are always better. The conversation also explored the key ingredients for building a thriving high-growth startup. Tune in for insights on cutting-edge AI trends and entrepreneurial wisdom from industry experts.

Read full story

From the Predibase Blog

Breaking Down Apple’s Reference Architecture for GenAI: Small, Fine-tuned and Built on LoRA
Blog

Apple's recent AI announcement showcased their approach to leveraging LLMs: using a single small language model with multiple fine-tuned LoRA adapters for various tasks. This innovative architecture, mirroring Predibase's LoRAX framework, promises GPT-4-like performance at a fraction of the size and cost, potentially revolutionizing how companies deploy AI systems in the future.

Read full story
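The architecture described above, one frozen base model plus a small low-rank (A, B) weight pair per task, can be sketched in a few lines of NumPy. This is a conceptual toy, not Apple's or Predibase's actual implementation; the dimensions and task names are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 4   # hidden size, LoRA rank (r << d), scaling factor

# One frozen base weight matrix, shared across every task.
W = rng.normal(size=(d, d))

def new_adapter():
    # An adapter is just two low-rank matrices. B starts at zero, so a
    # freshly initialized adapter leaves the base model unchanged.
    return {"A": rng.normal(size=(r, d)), "B": np.zeros((d, r))}

adapters = {"summarize": new_adapter(), "classify": new_adapter()}
# Pretend the "classify" adapter has been fine-tuned (its B is nonzero).
adapters["classify"]["B"] = rng.normal(size=(d, r))

def forward(x, task):
    # Switching tasks swaps a tiny (A, B) pair, never the base weights.
    ad = adapters[task]
    delta = (alpha / r) * (ad["B"] @ ad["A"])
    return (W + delta) @ x

x = rng.normal(size=d)
base_out = W @ x
print(np.allclose(forward(x, "summarize"), base_out))  # True: untrained adapter
print(np.allclose(forward(x, "classify"), base_out))   # False: trained adapter
```

Each adapter here stores 2·r·d = 32 numbers versus d² = 64 for the full matrix; at real model scale that gap is orders of magnitude, which is what lets a single deployment serve many task-specific adapters at once.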
5 Reasons Why LoRA Adapters are the Future of Fine-tuning
Blog

Curious about the future of fine-tuning LLMs? This blog post explores five compelling reasons why LoRA adapters are revolutionizing the field, offering ML engineers insights into achieving GPT-4 level performance with significantly reduced memory footprint, faster training times, and the ability to deploy multiple models efficiently.

Read full story
Introducing the Fine-Tuning Index for LLMs
Blog

Our new Fine-Tuning Index reveals that fine-tuned open-source LLMs can outperform GPT-4 on 85% of tested tasks, while being significantly more cost-effective. ML engineers will find valuable insights from over 700 fine-tuning experiments across 13 popular open-source LLMs and 31 diverse tasks, helping them select the optimal model for their specific needs and accelerate their journey to production.

Read full story

Join Our Community!