Top 5 Use Cases for Large Language Models Including Video Tutorials
The Rapid Growth of LLM Chatbots
The last year has seen an explosion in LLMs used for chat applications: ChatGPT alone has over 100 million monthly active users. Enterprises looking to capitalize on this trend are experimenting with chatbots that talk to their customers, make their employees more productive, and even interact with other applications to extend automation.
While LLM-powered chat applications are powerful and fun to use, they represent only a small fraction of the potential value LLMs can deliver to the enterprise. In fact, many use cases are ready for production right now and can bring your team value in weeks, not years. To help you uncover new opportunities, we've outlined the top five LLM use cases, each with a short demo video.
Use Case #1: Text Classification
Text classification is a popular ML technique used to organize and categorize any type of unstructured text. For example, you can use this technique to determine the sentiment of a customer review (e.g. positive or negative), categorize customer support tickets (e.g. billing, product features, etc.), identify the language of a piece of text (e.g. Spanish, German, or French), or identify a doctor's specialty (e.g. internal medicine, pediatrics, or surgery). Any task that takes in text and outputs a label is considered text classification.
There are many ML algorithms used for text classification, but all of them require labeled data for training. Many organizations lack quality labeled data, and manual labeling is time-consuming and cost-prohibitive. LLMs let you quickly create classifiers with little or no labeled data, cutting the time it takes to build a classifier by weeks.
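To make the zero-shot pattern concrete, here is a minimal sketch of how a classifier can be framed as a prompt plus a label parser. The `call_llm` endpoint it assumes is hypothetical; everything else is plain Python you could wrap around any model:

```python
LABELS = ["positive", "negative", "neutral"]

def build_prompt(review: str) -> str:
    """Frame classification as an instruction, so no labeled training data is needed."""
    return (
        "Classify the sentiment of the customer review below as one of: "
        + ", ".join(LABELS) + ".\n"
        "Respond with the label only.\n\n"
        f"Review: {review}\nSentiment:"
    )

def parse_label(response: str, labels=LABELS) -> str:
    """Normalize the model's free-text response to one of the allowed labels."""
    cleaned = response.strip().lower().rstrip(".")
    for label in labels:
        if label in cleaned:
            return label
    return "unknown"

# In practice: label = parse_label(call_llm(build_prompt(review)))
# where call_llm is whatever hosted or open-source model you query.
```

The parser step matters because LLMs return free text; constraining the output to a fixed label set is what turns a chat model into a classifier.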
Example: Analyzing Customer Sentiment without Labeled Data
Watch this 2-minute tutorial to see how you can use Predibase and the open-source LLM Vicuna-13B to determine customer sentiment on a series of reviews without any labeled data.
Use Case #2: Information Extraction
It’s estimated that around 80% of all data is unstructured, and much of that data is text contained within documents. Information extraction allows us to pull salient bits of information out of documents and put them into a structured format for further analysis. This is useful when you want to extract similar information from a large set of documents, such as financial data from 10-K filings (e.g. revenues, losses, or risk factors) or patient information from clinical trials (e.g. medical condition, medication, or dosage).
By using an LLM to query documents in batch, you can extract relevant data into structured tables that are much easier to analyze than prompting a chatbot one question at a time. Not to mention, chatbots struggle with analytical questions (e.g. “What was the average revenue for these companies over the last 10 years?”). Alternatively, structured outputs can be combined with database systems to perform complex searches that go beyond traditional keyword-based queries. You can even build ML models on top of the structured data for predictive analytics (e.g. predict the likelihood that a patient will be readmitted to the hospital based on these five factors).
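A sketch of the batch pattern: ask the LLM to emit one "Field: value" line per attribute for each document, then parse the responses into rows and write a table. The field names and response format here are illustrative assumptions, not a fixed API:

```python
import csv
import io

# Illustrative schema for a 10-K extraction job.
FIELDS = ["company", "revenue", "net_loss"]

def parse_extraction(response: str) -> dict:
    """Parse 'Field: value' lines emitted by the LLM into one structured row."""
    row = {f: "" for f in FIELDS}
    for line in response.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key = key.strip().lower().replace(" ", "_")
        if key in row:
            row[key] = value.strip()
    return row

def rows_to_csv(rows: list[dict]) -> str:
    """Write the structured rows out as CSV for downstream analysis."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Running `parse_extraction` over every document's response and feeding the rows to `rows_to_csv` gives you a table you can load into a warehouse or notebook, which is what makes batch extraction more useful than one-off chat questions.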
Example: Creating a Resume Skills Table for Hiring Managers
Watch this 1-minute tutorial to see how you can extract an applicant’s skills from their resume with an LLM. This is an HR-related use case that is relevant for any hiring manager assessing the skills of a set of candidates. In the demo, we change the token length and run the query across a batch of resumes to get real-time results at scale. You can also try this use case out with our free notebook.
Use Case #3: Content Creation
Content creation has traditionally been a resource-intensive task requiring hours of sustained creative thinking, which does not scale well. An LLM can auto-generate content for an entire dataset based on the given context while maintaining brand identity. Rather than starting from scratch, content creators can focus on adding creative touches and corrections.
The content created by an LLM combines the content and style instructions given by the user. A standard pattern is to provide structured content and instruct the LLM to rewrite it in a certain style, for example writing a product description for a beauty retailer:
“You are a marketing expert for Sephora. Below is metadata about a product:
Product Name: Rainbow Bar Eau de Parfum Travel Spray
Brand Name: 19-69
Category: Rollerballs & Travel Size
Variation: 0.25 oz/ 7.5 mL
Highlights: ['Unisex/ Genderless Scent', 'Layerable Scent', 'Woody & Earthy Scent']
Write a sleek and concise description of the product for its item page.”
"Embark on a journey of discovery with Rainbow Bar Eau de Parfum Travel Spray. This unisex scent blends warm woods and earthy notes to create a layered, multifaceted fragrance. Layer it with other products for a custom blend or wear it solo for an effortless, uplifting scent.”
Example: Generating Descriptions for Products in a Catalog
Watch this 1-minute tutorial to see how you can generate a description for a series of products for a hypothetical retailer in a batch job using an open-source LLM. The generated content could be used to populate website descriptions.
Use Case #4: Structured Generation
Structured generation tasks are similar to content creation, but for these generative tasks you need the LLM to create highly structured text that can be consumed by another system for downstream tasks. This could be code that gets sent to a compiler or JSON that gets sent to a database. Not only does the output need to be accurate, it also needs to be formatted correctly.
Generative code tasks are exciting because they can simplify some of the most tedious work: namely data structuring and data entry. Converting natural language into a form readable by machines and algorithms has historically been a manual, labor-intensive task that can take hours. With the right prompt and fine-tuning, LLMs can provide quality structured outputs that can be leveraged by various teams.
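Because the output must be formatted correctly, structured generation usually pairs the LLM with a validation step. A minimal sketch, assuming a hypothetical patient-record schema: pull the JSON object out of the model's response (which may be wrapped in prose) and check the keys before handing it downstream:

```python
import json
import re

# Hypothetical keys a downstream system might require.
REQUIRED_KEYS = {"patient_id", "condition", "medication", "dosage"}

def extract_json(response: str) -> dict:
    """Pull the first JSON object out of a model response and validate it.

    LLMs often surround JSON with explanatory prose, so we locate the
    braces first, then parse and check the schema.
    """
    match = re.search(r"\{.*\}", response, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in response")
    record = json.loads(match.group(0))
    missing = REQUIRED_KEYS - record.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return record
```

Rejecting malformed or incomplete records at this boundary is what keeps bad generations from reaching the database or compiler.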
Example: Generating JSON Files from Unstructured Medical Data
Watch this 3-minute tutorial to see how you can use an open-source LLM to generate a structured JSON output from unstructured patient data to support downstream use cases. With a fine-tuned model, this technique can be used to generate any type of code or structured output.
Use Case #5: Question-Answering (Q&A) / Search
Q&A and search tasks are some of the most common use cases for LLMs. They are useful when you have a large set of internal documents that you would like to use as a source of information when answering questions through a chat interface or performing generative tasks. There are two types of questions in Q&A tasks: “aggregate questions” and “retrieval questions”.
Aggregate questions require the LLM to search across multiple documents to find an answer. For example, imagine that someone has uploaded 100 documents, each describing different companies. A question like “which company makes the most money?” would require aggregating knowledge of every single company to understand how each relates to the others financially.
Retrieval questions, in contrast, require the algorithm to search across all documents, find the single most relevant piece of information, and form an answer based on it.
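A toy sketch of the retrieval flow: score each document against the question, pick the best match, and build a prompt that grounds the LLM in that context. Real systems use embeddings and a vector index rather than the word-overlap scoring shown here, which is purely illustrative:

```python
def top_document(question: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the question.

    Stand-in for embedding similarity search over a vector index.
    """
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def grounded_prompt(question: str, docs: list[str]) -> str:
    """Build a prompt that restricts the LLM to the retrieved context."""
    context = top_document(question, docs)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context: {context}\n\nQuestion: {question}\nAnswer:"
    )
```

The "using only the context below" instruction is the key: it steers the model toward the retrieved document instead of its general training data, which is what lets you answer questions over proprietary content.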
Example: Q&A with a Privately Hosted LLM
Watch this 1-minute tutorial to see how you can host your own LLM and index proprietary data to answer a series of questions using Predibase.
Start Customizing and Hosting Your Own LLMs
Want to try out one of these use cases or tackle a new one? Predibase makes it easy to build your own custom LLM in just a few minutes with the first low-code AI platform designed for developers.
Get started instantly by signing up for a 14-day free trial.