An Open Course on LLMs, Led by Practitioners
Today, we are releasing Mastering LLMs, a set of workshops and talks from practitioners on topics like evals, retrieval-augmented generation (RAG), fine-tuning, and more. This course is unique because it is:
- Taught by 25+ industry veterans who are experts in information retrieval, machine learning, recommendation systems, MLOps and data science. We discuss how this prior art can be applied to LLMs to give you a meaningful advantage.
- Focused on applied topics that are relevant to people building AI products.
- Free and open to everyone.
We have organized and annotated the talks from our popular paid course.1 This is a survey course for technical ICs (including engineers and data scientists) who have some experience with LLMs and need guidance on how to improve AI products.
Getting The Most Value From The Course
Prerequisites
The course assumes basic familiarity with LLMs. If you do not have any experience, we recommend watching A Hacker’s Guide to LLMs. We also recommend the tutorial Instruction Tuning llama2 if you are interested in fine-tuning.2
What Students Are Saying
Here are some testimonials from students who have taken the course3:
Sanyam Bhutani, Partner Engineer @ Meta
There was a magical time in 2017 when fastai changed the deep learning world. This course does the same by extending very applied knowledge to LLMs. Best-in-class teachers teach you their knowledge with no fluff.
Laurian, Full Stack Computational Linguist
This course was legendary, still is, and the community on Discord is amazing. I’ve been through these lessons twice and I have to do it again as there are so many nuances you will get once you actually have those problems on your own deployment!
Andre, CTO
Amazing! An opinionated view of LLMs, from tools to fine-tuning. Excellent speakers, giving some of the best lectures and advice out there! A lot of real-life experiences and tips you can’t find anywhere on the web packed into this amazing course/workshop/conference! Thanks Dan and Hamel for making this happen!
Marcus, Software Engineer
The Mastering LLMs conference answered several key questions I had about when to fine-tune base models, building evaluation suites, and when to use RAG. The sessions provided a valuable overview of the technical challenges and considerations involved in building and deploying custom LLMs.
Ali, Principal & Founder, SCTY
The course that became a conference, filled with a lineup of renowned practitioners whose expertise (and contributions to the field) was only exceeded by their generosity of spirit.
Lukas, Software Engineer
The sheer number of diverse speakers who cover the same topics from different approaches, both praising and critiquing certain workflows, makes this extremely valuable. A lot of information online is produced by people building a commercial product, and so it is naturally biased toward a fine-tune, RAG, an open-source LLM, an OpenAI LLM, etc. It is rather extraordinary to have such a variety of opinions packed together like this. Thank you!
Stay Connected
I’m continuously learning about LLMs, and enjoy sharing my findings and thoughts. If you’re interested in this journey, consider subscribing.
What to expect:
- Occasional emails with my latest insights on LLMs
- Early access to new content
- No spam, just honest thoughts and discoveries
Footnotes
https://maven.com/parlance-labs/fine-tuning. We had more than 2,000 students in our first cohort. The students who paid for the original course had early access to the material, office hours, generous compute credits, and a lively Discord community.↩︎
We find instruction tuning a model to be a very useful educational experience even if you never intend to fine-tune, because it familiarizes you with topics such as (1) working with open-weights models, (2) generating synthetic data, (3) managing prompts, (4) fine-tuning, and (5) generating predictions.↩︎
These testimonials are taken from https://maven.com/parlance-labs/fine-tuning.↩︎