Make sure to sign up for our monthly newsletter to receive all the latest AI and LLM news, tutorials, and product updates.
We hope your 2024 is off to a great start! 2023 was certainly transformative for AI, but we think 2024 will see shifts that make AI even more accessible, reliable, and secure for practitioners. To that end, we’re excited to release our AI and LLM Predictions for 2024, and to announce our first webinar of 2024! Read on for more details.
Join us on February 1st at 10:00 am PT to learn how you can leverage open-source LLMs to automate one of the most time-consuming tasks in customer support: classifying customer issues. You will learn how to fine-tune an open-source LLM with just a few lines of code, at a fraction of the cost of a commercial LLM, and how to implement efficient fine-tuning techniques like LoRA and quantization.
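To give a feel for what "a few lines of code" looks like in practice, here is a minimal sketch using Ludwig's Python API. The base model, dataset file, and column names are placeholder assumptions, not the webinar's exact setup:

```python
from ludwig.api import LudwigModel

# Minimal LoRA + 4-bit quantization fine-tuning config in Ludwig's declarative format.
config = {
    "model_type": "llm",
    "base_model": "mistralai/Mistral-7B-v0.1",  # placeholder open-source base model
    "quantization": {"bits": 4},                # load the base model in 4-bit
    "adapter": {"type": "lora"},                # train only a small LoRA adapter
    "input_features": [{"name": "issue_text", "type": "text"}],
    "output_features": [{"name": "issue_label", "type": "text"}],
    "trainer": {"type": "finetune", "epochs": 3, "batch_size": 1},
}

model = LudwigModel(config)
model.train(dataset="customer_issues.csv")  # placeholder dataset file
```

The adapter and quantization sections are what keep this cheap: only the small LoRA weights are updated, on top of a 4-bit-quantized base model.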
Recent Events + Podcasts
LLMOps for Your Data: Best Practices to Ensure Safety, Quality, and Cost
TFiR “Let’s Talk”: Predibase Makes It Easy For Anyone To Train LLMs
Caveminds Podcast: Deploy Faster, Cheaper, Smaller AI Fine-Tuned Models
Crafted | The Artium Podcast: Taking GenAI From Prototype to Production
Featured Blog Post
From the coming wave of small language models to the future of fine-tuning and LLM architectures, these predictions represent the collective thoughts of our team of AI experts with experience building ML and LLM applications at Uber, AWS, Google, and more.
Paradigm, one of the world’s largest institutional liquidity networks for cryptocurrencies, used Predibase to build a deep learning-based recommendation system for their traders on top of their existing Snowflake Data Cloud with just a few lines of YAML. Building production models on top of Snowflake data, a task that used to take months, now takes minutes.
From the Community
Top AI startups poised for success in 2024: a Roundup of Innovation
Fine-Tune Mistral with Ludwig - Demo
O’Reilly Course: Responsible Generative AI and Local LLMs
Coding for non-coders
30 AI Libraries For The Modern AI Stack
Open Source Updates
Mixtral 8x7B is one of the first successful open-source implementations of the Mixture of Experts (MoE) architecture, and we’ve made it easy to fine-tune for free on commodity hardware using Ludwig, a powerful open-source framework for highly optimized model training through a declarative, YAML-based interface. Follow along with this hands-on tutorial, and see the sketch below for the general shape of the config!
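As a rough sketch of the pattern (the tutorial's actual config may differ; the dataset and column names here are placeholders):

```python
from ludwig.api import LudwigModel

# Hedged sketch: LoRA fine-tuning of Mixtral with 4-bit quantization in Ludwig.
config = {
    "model_type": "llm",
    "base_model": "mistralai/Mixtral-8x7B-Instruct-v0.1",
    "quantization": {"bits": 4},  # 4-bit loading helps fit the 8x7B MoE on commodity hardware
    "adapter": {"type": "lora"},  # picks up Ludwig's default LoRA target modules for Mixtral
    "input_features": [{"name": "instruction", "type": "text"}],
    "output_features": [{"name": "output", "type": "text"}],
    "trainer": {"type": "finetune", "epochs": 1, "batch_size": 1},
}

LudwigModel(config).train(dataset="train.csv")  # placeholder dataset file
```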
Ludwig v0.9.2 & v0.9.3: These releases introduces a few bug fixes and several new enhancements:
- Add support for the official microsoft/phi-2 model
- Ensure correct padding token for Phi and Pythia models
- Cast the final layer to torch.float32 and freeze it at init
- Enable IA3 adapters (see the sketch after this list)
- Add batch size tuning for LLMs
- Log per-step token utilization to TensorBoard and the progress tracker
- Default LoRA target modules for Mixtral and Mixtral-instruct
- Support for exporting models to Carton (thank you Vivek Panyam!)
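For example, the new IA3 adapter support should slot into the same declarative config as LoRA. Here is a minimal sketch combining it with the new automatic batch size tuning; the adapter key and surrounding fields are assumptions based on Ludwig's standard LLM config, not taken from the release notes:

```python
from ludwig.api import LudwigModel

# Hedged sketch: an IA3 adapter on the newly supported microsoft/phi-2,
# with the new automatic batch size tuning for LLMs.
config = {
    "model_type": "llm",
    "base_model": "microsoft/phi-2",  # official Phi-2 support landed in these releases
    "adapter": {"type": "ia3"},       # assumed key for the new IA3 adapter support
    "input_features": [{"name": "prompt", "type": "text"}],
    "output_features": [{"name": "completion", "type": "text"}],
    "trainer": {"type": "finetune", "batch_size": "auto"},  # new batch size tuning
}

model = LudwigModel(config)
```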
LoRAX v0.6: The latest release adds support for multi-turn chat conversations with dynamic LoRA adapter loading. Just replace the "model" parameter with any HF LoRA and you're set. Chat templates can come from the base model, or even from the LoRA adapter itself if it ships its own custom template. All of this happens dynamically, per request.
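A minimal sketch of what this looks like from the client side, assuming a LoRAX server running locally and its OpenAI-compatible chat endpoint; the URL and adapter ID are placeholders:

```python
from openai import OpenAI

# Point the standard OpenAI client at a locally running LoRAX server (placeholder URL).
client = OpenAI(api_key="EMPTY", base_url="http://localhost:8080/v1")

# "model" is any HF LoRA adapter ID; LoRAX loads it dynamically for this request.
resp = client.chat.completions.create(
    model="alignment-handbook/zephyr-7b-sft-lora",  # placeholder HF LoRA adapter
    messages=[
        {"role": "user", "content": "Summarize LoRAX in one sentence."},
        # ...append prior turns here for multi-turn conversations
    ],
)
print(resp.choices[0].message.content)
```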
Featured Product Update
We’re excited to launch our new prompting experience in the Predibase UI, which allows you to easily prompt serverless endpoints and your fine-tuned adapters without needing to deploy them first. This lets teams test their fine-tuned models and compare model iterations directly from the UI, enabling much faster test-and-review cycles.