LLM Fine-tuning Use Case

Structured Information Extraction

Extracting structured information from unstructured text has applications across industries, from pulling patient information out of medical reports to extracting financial data from quarterly call transcripts. However, data extraction is often a manual, time-consuming endeavor. Learn how to fine-tune open-source LLMs to automatically generate structured outputs for downstream use.

The Predibase Solution

Generate structured outputs from unstructured text

  • Efficiently fine-tune open-source LLMs like Llama 2 with built-in best-practice optimizations such as LoRA and quantization
  • Instantly deploy and prompt your fine-tuned LLM on a serverless endpoint
  • Automate the generation of structured outputs like JSON
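Once a fine-tuned model emits JSON, a downstream consumer still needs to check that each completion is well-formed before using it. Below is a minimal sketch of such a validation step; the `validate_ner_output` helper and the category names are illustrative, taken from the NER prompt shown later on this page, not part of any Predibase API.

```python
import json

def validate_ner_output(
    completion: str,
    categories=("person", "organization", "location", "miscellaneous"),
) -> dict:
    """Parse a model completion and verify it matches the expected NER schema."""
    payload = json.loads(completion)  # raises ValueError on malformed JSON
    missing = [c for c in categories if c not in payload]
    if missing:
        raise ValueError(f"completion missing categories: {missing}")
    return payload

# A completion string as a fine-tuned model might return it
completion = '{"person": [], "organization": ["EU"], "location": [], "miscellaneous": ["German", "British"]}'
entities = validate_ner_output(completion)
```

Failing fast on malformed or incomplete payloads keeps bad extractions out of downstream tables and pipelines.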

Unstructured Text

Reports
Call Transcripts
Web scrapes
Documents
Emails
Structured Data Extraction with LLMs

Example Use Cases

Patient information from medical reports
Financial data from quarterly call transcripts
Product metadata from product descriptions

Fine-tune and serve your own LLM for Information Extraction

Easily and efficiently fine-tune Llama-2 with built-in optimizations such as quantization and LoRA to generate structured JSON outputs. Instantly serve and prompt your fine-tuned LLM with cost-efficient serverless endpoints built on top of open-source LoRAX. Read the full tutorial.

# Kick off the fine-tuning job

# Connect to Predibase and load the base model to fine-tune
from predibase import PredibaseClient

pc = PredibaseClient()
llm = pc.LLM("hf://llama2")

fine_tuning_prompt = """
	Your task is a Named Entity Recognition (NER) task. Predict the category of
	each entity, then place the entity into the list associated with the 
	category in an output JSON payload. Below is an example:

	Input: EU rejects German call to boycott British lamb .
	Output: {{"person": [], "organization": ["EU"], "location": [], "miscellaneous": ["German", "British"]}}
	Now, complete the task.

	Input: {input}
	Output: 
"""

job = llm.finetune(
    prompt_template=fine_tuning_prompt,
    target="json_output",
    dataset=dataset,
)

# Block until training finishes and retrieve the fine-tuned adapter
model = job.get()

# Dynamically load fine-tuned adapter for serverless inference
# (attach the adapter to a shared serverless base deployment via LoRAX;
# API names follow the Predibase Python SDK and may differ by version)
base_deployment = pc.LLM("pb://deployments/llama-2-7b")
adapter_deployment = base_deployment.with_adapter(model)

test_prompt = {"input": "EU rejects German call to boycott British lamb ."}
fine_tuned_result = adapter_deployment.prompt(
    data=test_prompt,
    temperature=0.1,
    max_new_tokens=512,
)

Example code in Predibase for illustrative purposes only
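In practice a completion may wrap the JSON payload in extra text (an "Output:" prefix, trailing tokens), so it helps to isolate the first balanced JSON object before parsing. The helper below is a simple sketch, not part of the Predibase SDK, and its brace matching deliberately ignores the edge case of braces inside string values.

```python
import json

def extract_json(completion: str) -> dict:
    """Pull the first balanced {...} object out of a completion string.

    Note: simplified brace matching; braces embedded inside JSON string
    values would confuse it. Sufficient for typical NER-style payloads.
    """
    start = completion.find("{")
    if start == -1:
        raise ValueError("no JSON object found in completion")
    depth = 0
    for i, ch in enumerate(completion[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return json.loads(completion[start : i + 1])
    raise ValueError("unbalanced JSON object in completion")

raw = 'Output: {"person": ["Alice"], "organization": []} \nDone.'
parsed = extract_json(raw)
```

This kind of defensive parsing is a cheap safeguard even when the fine-tuned model emits clean JSON most of the time.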

Resources to Get Started

Ready to efficiently fine-tune and serve your own LLM?