The future is fine-tuned

Create small models tailored for your use case with GPT-4 level performance—at a fraction of the cost.

Efficient and Effortless Fine-Tuning

Our default hyperparameters deliver strong results out of the box and help prevent out-of-memory errors. We bake in best practices like LoRA, RSLoRA, DoRA, and Punica for faster, more efficient, and more reliable training jobs, so you don't need to keep up with the latest arXiv papers. Advanced users can further customize training jobs to optimize model performance.

Start fine-tuning in the UI with best-practice defaults; no code required.

Fine-Tune Any Leading Model

Choose the base model that's best for your use case from a wide selection of LLMs including Upstage's Solar LLM and open-source LLMs like Llama-3, Phi-3, and Zephyr. You can also bring your own model and serve it as a dedicated deployment.



See the full list of supported models.




First-Class Fine-Tuning Experience

Powerful training engines

Fine-tuning uses A100s by default, but you can choose other hardware to further optimize for cost or speed.


Serverless fine-tuning infra

Pay per token to ensure you’re only charged for what you use. See our pricing.

Fine-Tuning Pricing (per 1M tokens)

  Model size                   Price
  Up to 16B                    $0.50
  16.1B to 80B                 $3.00
  Up to 16B (Turbo LoRA)       $1.00
  16.1B to 80B (Turbo LoRA)    $6.00
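As a quick sanity check on the rates above, a short script can estimate a job's cost from its token count. The tier boundaries and prices come from the table; the helper function itself is illustrative, not a Predibase API.

```python
# Serverless fine-tuning rates (USD per 1M training tokens), as listed
# in the pricing table above. Keys: (size tier, Turbo LoRA enabled).
RATES = {
    ("<=16B", False): 0.50,
    ("16.1-80B", False): 3.00,
    ("<=16B", True): 1.00,   # Turbo LoRA
    ("16.1-80B", True): 6.00,  # Turbo LoRA
}

def estimate_cost(model_params_b: float, tokens: int, turbo: bool = False) -> float:
    """Estimate fine-tuning cost in USD for a given training-token count."""
    tier = "<=16B" if model_params_b <= 16 else "16.1-80B"
    return RATES[(tier, turbo)] * tokens / 1_000_000

# Example: 50M tokens on an 8B model with standard LoRA
print(estimate_cost(8, 50_000_000))  # 25.0
```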

View essential metrics as you train

Watch learning curves in real time as your adapter trains to confirm everything is on track.


Resume training from checkpoints

No need to restart an entire training job from the beginning if it encounters an error or you’re not happy with the training performance.


Your Data Stays Your Data

Whether you use our serverless fine-tuning infra or are running Predibase in your VPC, Predibase ensures your privacy by never retaining your data.


Fine-Tune One Base Model For Every Task Type & Serve From A Single Deployment

{"text": .....}
{"text": .....}
{"text": .....}

Completions

Leverage continued pre-training to teach your LLM the nuances of domain-specific language with unlabeled datasets.

Code Generation · Corpus Synthesis · Synthetic Data Generation
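A completions dataset is just newline-delimited JSON with a single `text` field per record, as shown above. A minimal sketch of preparing one in Python (the file name and example text are illustrative, not a Predibase API):

```python
import json

# Illustrative unlabeled records for continued pre-training;
# each line of the JSONL file is an object with one "text" field.
records = [
    {"text": "The patient presented with acute dyspnea and chest pain."},
    {"text": "Section 4.2(b) of the agreement provides that the parties shall..."},
]

with open("completions.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```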
{"prompt": ...., "completion": .....}
{"prompt": ...., "completion": .....}
{"prompt": ...., "completion": .....}

Instruction Tuning

Train your LLMs on specific tasks with structured datasets consisting of (Input, Output) pairs.

Classification · Sentiment Analysis · Information Extraction · Named Entity Recognition
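An instruction-tuning dataset pairs each `prompt` with a `completion`, one JSON object per line. A small sketch with a validation step (field names from the format shown above; the validator and example values are illustrative):

```python
import json

def validate_record(rec: dict) -> bool:
    """Check a record has exactly the two required string fields."""
    return set(rec) == {"prompt", "completion"} and all(
        isinstance(v, str) for v in rec.values()
    )

# Illustrative (Input, Output) pairs for classification and extraction tasks.
pairs = [
    {"prompt": "Classify the sentiment: 'The flight was delayed again.'",
     "completion": "negative"},
    {"prompt": "Extract the company name: 'Acme Corp reported record revenue.'",
     "completion": "Acme Corp"},
]

assert all(validate_record(p) for p in pairs)

with open("instructions.jsonl", "w") as f:
    for p in pairs:
        f.write(json.dumps(p) + "\n")
```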
{"prompt": ...., "chosen": ....., "rejected": ......}
{"prompt": ...., "chosen": ....., "rejected": ......}
{"prompt": ...., "chosen": ....., "rejected": ......}
Coming Soon

Direct Preference Optimization (DPO)

Align your model's outputs with human preferences using a preference dataset of prompts paired with preferred and dispreferred responses.
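Each preference record pairs a `prompt` with a preferred (`chosen`) and a dispreferred (`rejected`) response, matching the format shown above. A minimal sketch (file name and example values are illustrative):

```python
import json

# Illustrative preference records for DPO: the "chosen" response is the
# one human raters preferred over the "rejected" one.
prefs = [
    {"prompt": "Summarize: 'The meeting moved to 3pm Friday.'",
     "chosen": "The meeting is now at 3pm on Friday.",
     "rejected": "A meeting happened."},
]

with open("preferences.jsonl", "w") as f:
    for rec in prefs:
        f.write(json.dumps(rec) + "\n")
```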

{"messages": [{"role": ..., "content": ...}, {"role": ..., "content": ...}, ...]}
{"messages": [{"role": ..., "content": ...}, {"role": ..., "content": ...}, ...]}
{"messages": [{"role": ..., "content": ...}, {"role": ..., "content": ...}, ...]}
Coming Soon

Chat

Create chat-specific models for conversational AI by fine-tuning with chat transcripts.

Chatbots
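A chat dataset stores each transcript as a `messages` list of role/content turns, per the format shown above. A minimal sketch with a check that roles alternate (the helper and example turns are illustrative):

```python
import json

# Illustrative transcript in the messages format: a list of turns,
# each with a "role" and "content".
example = {
    "messages": [
        {"role": "user", "content": "Do you ship to Canada?"},
        {"role": "assistant", "content": "Yes, to all provinces."},
        {"role": "user", "content": "How long does shipping take?"},
        {"role": "assistant", "content": "Typically 5-7 business days."},
    ]
}

def roles_alternate(messages: list) -> bool:
    """Verify no two consecutive turns share the same role."""
    roles = [m["role"] for m in messages]
    return all(a != b for a, b in zip(roles, roles[1:]))

assert roles_alternate(example["messages"])

with open("chat.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")
```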

By switching from OpenAI to Predibase we've been able to fine-tune and serve many specialized open-source models in real time, saving us over $1 million annually, while creating engaging experiences for our audiences. Best of all, we own the models.

Andres Restrepo, Founder and CEO, Enric.ai

Learn More

Ready to efficiently fine-tune and serve your own LLM?