In celebration of our 10,000th star on GitHub, we invited the Ludwig community to participate in a mini virtual hackathon. The requirements were simple: fine-tune an open-source LLM for a cool use case or project of your choosing.
We had great participation from the community, and after careful consideration, our panel of judges selected the five most compelling and complete projects as winners. In the coming weeks, each winner will receive a special edition Ludwig swag pack including a Ludwig hoodie, t-shirt, limited edition sticker set, and Yeti mug.
We’re excited to share our winners and highlight their amazing work, including their notebooks, so the whole Ludwig community can benefit from their efforts.
Overall Winner
Dickens: an LLM that writes Great Expectations
Team: César García Sáez
Project Overview
The Dickens project leverages a large language model (LLM) fine-tuned from Zephyr-7B-Beta to produce data quality checks in Python code. It aims to transform natural language descriptions of data into concrete expectations, or assertions, about data features. These expectations align with those defined in the Great Expectations open-source Python library, which includes over 50 core expectations and more than 250 contributed by the community. Dickens narrows its focus to the 50 core expectations to streamline and enhance data quality validation. The fine-tuned model enables data teams to generate data quality checks that are simple to produce and highly reliable. Dickens helps accelerate the adoption of data quality best practices, reducing the effort needed to transform business rules into CI/CD data quality checks.
Use of Ludwig
Ludwig was selected for its versatility in fine-tuning LLMs and ease of use with its declarative approach to model training. Fine-tuning Dickens ensures that the model delivers consistent and executable Python code.
We customized Dickens for our task through two complementary approaches:
- Prompt Engineering: The initial step involved using prompt engineering techniques to provide the LLM with the necessary context for generating accurate outputs.
- Fine-Tuning with Ludwig: To ensure that the output would be actual working code, we had to create a new synthetic dataset. This dataset contains over 750 prompt-expectation pairs, covering all the core expectations from the library and incorporating common arguments and keywords. This rich dataset served as the foundational training material used by Ludwig to fine-tune the base Zephyr-7B-Beta model.
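To make the dataset concrete, here is a minimal sketch of what such prompt-expectation pairs might look like. The column names and prompt wording are invented for illustration; the expectation names are real core Great Expectations expectations.

```python
# Hypothetical examples of the kind of prompt-expectation pairs a synthetic
# fine-tuning dataset for Dickens might contain.
training_pairs = [
    {
        "prompt": "The user_id column must never contain missing values.",
        "expectation": 'expect_column_values_to_not_be_null(column="user_id")',
    },
    {
        "prompt": "Values in the age column should fall between 0 and 120.",
        "expectation": 'expect_column_values_to_be_between(column="age", '
                       'min_value=0, max_value=120)',
    },
]

def to_rows(pairs):
    """Flatten the pairs into (prompt, completion) rows for fine-tuning."""
    return [(p["prompt"], p["expectation"]) for p in pairs]

rows = to_rows(training_pairs)
```

Each natural-language prompt is paired with the exact expectation call the model should learn to emit, so fine-tuning directly rewards producing executable code.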
In summary, the use of Ludwig in the Dickens project was key to achieving consistent configuration management for all the different experiments, validations, and evaluations in LLM fine-tuning.
Project Notebook: Dickens: an LLM that writes Great Expectations
Additional Contest Winners
Question Answering on FAQs of GST (Goods and Services Tax) in India
Team: Yogesh Kulkarni
Project Overview
GST is a single tax structure that replaces a multitude of previous taxes in India, such as the service tax, central excise duty, VAT, and more. It is an all-in-one tax solution that streamlines the entire tax process in India. The transition from a multi-tax system to a single-tax system raises lots of questions among taxpayers. These questions, along with their answers, are available as FAQs. Building a chatbot on top of the FAQs using an ML model or a fine-tuned LLM can significantly help taxpayers with questions. This project aims to leverage Ludwig to build such a model on GST FAQs data.
Use of Ludwig
Ludwig's declarative approach provides a clear and concise methodology for building machine learning models, making it an invaluable tool for unraveling the mysteries of complex domains. It becomes extremely easy to change between various approaches, base LLMs, and more.
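To illustrate how a declarative config makes it easy to switch base LLMs, here is a minimal sketch: swapping models amounts to overriding a single key. The key names follow Ludwig's LLM config layout, and the model names and feature names are placeholders, not necessarily what this project used.

```python
# A declarative Ludwig-style config expressed as a Python dict.
base_config = {
    "model_type": "llm",
    "base_model": "HuggingFaceH4/zephyr-7b-beta",
    "input_features": [{"name": "question", "type": "text"}],
    "output_features": [{"name": "answer", "type": "text"}],
}

# Trying a different base LLM only requires changing one key;
# everything else about the experiment stays identical.
mistral_config = {**base_config, "base_model": "mistralai/Mistral-7B-v0.1"}
```

Because the rest of the experiment definition is untouched, results across base models stay directly comparable.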
Here is Yogesh’s Medium post about his experiences using Ludwig: How to Fine-tune LLMs without Coding.
Project Notebook: Question Answering on Goods and Services Tax in India
Intent Classification with LLMs: Fine-Tuning on Support Call Transcripts using Ludwig
Team: Ankit Patil, Divyansh Mishra, Abhishek Kaushik
Project Overview
The project is focused on improving customer support with Ludwig and the Hugging Face Mistral model. The objective is to fine-tune a large language model with Ludwig for precise intent classification using support call transcripts. By automating the categorization of customer queries, the fine-tuned model enables quick and targeted responses. Configured for optimal performance, the model aims to streamline customer support workflows, enhancing overall customer service efficiency.
Use of Ludwig
We chose Ludwig for its versatility and user-friendly design, creating an optimal environment for developing an intent classification model. Ludwig's coding simplicity saved time and minimized the need for extensive coding, aligning with our commitment to a hassle-free development experience.
In this project, Ludwig played a central role in intent classification. We configured the model using a YAML file, initialized LudwigModel, and trained it on support call transcripts. Ludwig's simplicity facilitated an efficient and streamlined development process, making it a key tool in achieving our intent classification goals.
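The configure-then-train workflow described above can be sketched as follows. The key names follow Ludwig's LLM fine-tuning config; the base model, feature names, and dataset path are placeholders, not the project's actual values.

```python
# Declare the whole experiment in a config, then hand it to Ludwig.
config = {
    "model_type": "llm",
    "base_model": "mistralai/Mistral-7B-v0.1",
    "input_features": [{"name": "transcript", "type": "text"}],
    "output_features": [{"name": "intent", "type": "text"}],
    "adapter": {"type": "lora"},           # parameter-efficient fine-tuning
    "trainer": {"type": "finetune", "epochs": 3},
}

# The training step itself requires `pip install ludwig` and a GPU, roughly:
#   from ludwig.api import LudwigModel
#   model = LudwigModel(config=config)
#   results = model.train(dataset="support_call_transcripts.csv")
```

The same config (typically written as YAML rather than a Python dict) fully captures the experiment, which is what makes the process reproducible with so little code.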
Project Notebook: Intent Classification for Customer Support
Democratize and Automate the Feature Engineering of Tabular Data using fine-tuned LLMs
Team: Sanjay Dasgupta
Project Overview
As foundation models are trained on huge quantities of information drawn from virtually every subject, they have a wisdom “well beyond their years”, and can bring new insights to the feature engineering of tabular data. This can help automate and democratize the practice of feature engineering, which is otherwise a tedious task.
The method we used applies in supervised learning scenarios: the solution is packaged as a fine-tuned LLM and can be used for any tabular dataset in which many columns have names with clear, unambiguous, and well-known meanings. It is based on the hypothesis that an LLM will automatically intuit and use suitable additional (but hidden) features if each tabular data row is first converted into a paragraph of representative text before being presented to the ML algorithm (in this case a fine-tuned LLM).
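The row-to-text conversion can be sketched minimally as follows. The column names and sentence template are invented for illustration, echoing a typical census-income-style dataset.

```python
# Each tabular row becomes a short paragraph so the LLM can bring its
# general knowledge about people, workplaces, and incomes to bear.
def row_to_paragraph(row: dict) -> str:
    return (
        f"This person is {row['age']} years old, has a {row['education']} "
        f"degree, and works as a {row['occupation']}."
    )

paragraph = row_to_paragraph(
    {"age": 39, "education": "Bachelors", "occupation": "clerk"}
)
# The paragraph, rather than the raw columns, is what the fine-tuned
# LLM classifier receives as its input text.
```

The hypothesis is that phrasing the row as natural language lets the LLM surface hidden features (for example, what a given occupation typically implies) that the raw column values alone do not encode.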
If we were trying to classify the rows of a dataset containing such data, an LLM fine-tuned on the text would very likely outperform a classical ML algorithm trained on the original columns, because the LLM can also apply general knowledge about people, workplaces, and incomes (the subject matter of the example dataset) gained during its pre-training. A classical ML algorithm, on the other hand, has no knowledge beyond the relationships in the problem-specific data presented for classification.
Use of Ludwig
Ludwig enabled easy and elegant integration of the LLM functionality into the project's code. The project uses Ludwig to fine-tune an LLM as a binary classifier of text data.
Project Notebook: Democratize and Automate the Feature Engineering of Tabular Data using fine-tuned LLMs
Assessing Health Data with ML and Becoming More Aware
Team: Iuliana Stroia
Project Overview
The project aims to predict certain well-being factors, for women in particular, from data coming from smart devices (in this case an Oura ring). The project is still a work in progress (this is only the initial phase). The aim is to use resting heart rate, daily readiness scores, and fertile windows as features, and to predict those same metrics 30 days into the future. In addition, the project uses the basal temperature readings from the ring to make predictions.
Use of Ludwig
I chose Ludwig for its great versatility, and because it makes many aspects of machine learning easier. I used Ludwig to train a model on the data and to generate predictions. While more work is needed to improve the accuracy of the predictions, Ludwig offers great tools to make that easier.
Project Notebook: Assessing Health Data with LLMs
Getting Started with Ludwig
A big thank you to all of our contest winners! We hope you enjoyed learning more about these projects as much as we did.
To get started fine-tuning LLMs with Ludwig:
- Check out our latest fine-tuning tutorial for Mixtral
- Join the Ludwig community on Slack
- Download Ludwig and check out the docs