LLM Fine-tuning Use Case

Customer Service Automation

A customer service call can cost an organization upwards of $40. Automating customer support processes can reduce overhead by millions of dollars. Learn how to fine-tune open-source LLMs to automatically classify support issues and generate customer responses.

The Predibase Solution

Streamline customer support operations with LLMs

  • Efficiently fine-tune open-source LLMs like Llama2 on customer support transcripts
  • Instantly deploy and prompt your fine-tuned LLM on serverless endpoints
  • Automate issue identification and generate content for agents to respond

Unstructured Text

Call transcripts
Customer Emails
Chat Logs
Product Knowledge Base
Customer Service Automation with LLMs

Use Cases

Automatically prioritize tickets based on issue
Classify and route issues to the appropriate teams
Generate customer responses to help agents be more efficient
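As a sketch of the routing step above, the issue type predicted by a fine-tuned model can be mapped to the team that owns it; the labels and team names here are hypothetical, not part of any Predibase API:

```python
# Hypothetical mapping from predicted issue type to the team that handles it
TEAM_FOR_ISSUE = {
    "billing_dispute": "billing",
    "login_failure": "identity",
    "shipping_delay": "logistics",
}

def route_ticket(predicted_issue: str) -> str:
    """Route a ticket to a team based on the model's predicted issue type."""
    # Fall back to a human triage queue for labels the model was not trained on
    return TEAM_FOR_ISSUE.get(predicted_issue, "triage")
```

The fallback queue keeps unexpected model outputs from silently dropping tickets.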

Fine-tune and serve your own LLM for Customer Support

Efficiently fine-tune any open-source LLM with built-in optimizations like quantization, LoRA, and memory-efficient distributed training combined with right-sized GPU engines. Instantly serve and prompt your fine-tuned LLMs with cost-efficient serverless endpoints built on top of open-source LoRAX. Read the full tutorial.
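The `prompt_template` passed to the fine-tuning job is not shown on this page; a minimal instruction-style template might look like the following (the `{transcript}` field name is illustrative and would match a column in your dataset):

```python
# Illustrative instruction-style template; {transcript} is filled in from
# each row of the training dataset at fine-tuning time
prompt_template = (
    "Below is a customer support transcript. "
    "Classify the customer's issue type.\n\n"
    "### Transcript:\n{transcript}\n\n"
    "### Issue type:"
)

# How the template would render for a single record
example = prompt_template.format(transcript="My invoice was charged twice.")
```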

# Kick off the fine-tuning job

llm = pc.LLM("hf://zephyr-7b-beta")  # pc: an authenticated Predibase client
job = llm.finetune(
    prompt_template=prompt_template,
    target="customer_support_issue_type",
    dataset=dataset,
    epochs=5,
)

# Dynamically load fine-tuned adapter for serverless inference

fine_tuned_result = adapter_deployment.prompt(
    data=test_prompt,
    temperature=0.1,
    max_new_tokens=256,
    bypass_system_prompt=False,
)

Example Predibase code for illustrative purposes only

Resources to Get Started

Ready to efficiently fine-tune and serve your own LLM?