An Open Course on LLMs, Led by Practitioners

A free survey course on LLMs, taught by practitioners.
Author

Hamel Husain

Published

July 29, 2024

Today, we are releasing Mastering LLMs, a set of workshops and talks from practitioners on topics like evals, retrieval-augmented generation (RAG), fine-tuning, and more. This course is unique:

We have organized and annotated the talks from our popular paid course.1 This is a survey course for technical ICs (including engineers and data scientists) who have some experience with LLMs and need guidance on how to improve AI products.

Speakers include Jeremy Howard, Sophia Yang, Simon Willison, JJ Allaire, Wing Lian, Mark Saroufim, Jane Xu, Jason Liu, Emmanuel Ameisen, Hailey Schoelkopf, Johno Whitaker, Zach Mueller, John Berryman, Ben Clavié, Abhishek Thakur, Kyle Corbitt, Ankur Goyal, Freddy Boulton, Jo Bergum, Eugene Yan, Shreya Shankar, Charles Frye, Hamel Husain, Dan Becker, and more.


Getting The Most Value From The Course

Prerequisites

The course assumes basic familiarity with LLMs. If you do not have any experience, we recommend watching A Hacker’s Guide to LLMs. We also recommend the tutorial Instruction Tuning llama2 if you are interested in fine-tuning.2
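To give a concrete feel for that exercise, here is a minimal instruction-tuning sketch, assuming Hugging Face’s trl library; the Llama 2 checkpoint, the Alpaca data slice, and the output directory are illustrative choices, not the tutorial’s exact setup.

```python
# A minimal instruction-tuning sketch using trl's SFTTrainer (an assumption,
# not the course's exact recipe). Requires `pip install trl datasets`, access
# to the gated Llama 2 weights, and a GPU with enough memory.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Alpaca-style instruction data; its "text" column holds fully formatted prompts.
dataset = load_dataset("tatsu-lab/alpaca", split="train[:1000]")  # small slice for illustration

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-hf",  # gated checkpoint: accept the Llama 2 license first
    train_dataset=dataset,
    args=SFTConfig(output_dir="llama2-instruct"),
)
trainer.train()
```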

What Students Are Saying

Here are some testimonials from students who have taken the course3:

Sanyam Bhutani, Partner Engineer @ Meta

There was a magical time in 2017 when fastai changed the deep learning world. This course does the same by extending very applied knowledge to LLMs. Best-in-class teachers teach you their knowledge with no fluff.

Laurian, Full Stack Computational Linguist

This course was legendary, still is, and the community on Discord is amazing. I’ve been through these lessons twice, and I have to do it again as there are so many nuances you will get once you actually have those problems on your own deployment!

Andre, CTO

Amazing! An opinionated view of LLMs, from tools to fine-tuning. Excellent speakers, giving some of the best lectures and advice out there! A lot of real-life experiences and tips you can’t find anywhere on the web packed into this amazing course/workshop/conference! Thanks Dan and Hamel for making this happen!

Marcus, Software Engineer

The Mastering LLMs conference answered several key questions I had about when to fine-tune base models, building evaluation suites, and when to use RAG. The sessions provided a valuable overview of the technical challenges and considerations involved in building and deploying custom LLMs.

Ali, Principal & Founder, SCTY

The course that became a conference, filled with a lineup of renowned practitioners whose expertise (and contributions to the field) was only exceeded by their generosity of spirit.

Lukas, Software Engineer

The sheer number of diverse speakers covering the same topics from different angles, both praising and criticizing certain workflows, makes this extremely valuable. A lot of the information online is produced by people building a commercial product behind it, so it is naturally biased towards a fine-tune, RAG, an open-source LLM, an OpenAI LLM, etc. It is rather extraordinary to have such a variety of opinions packed together like this. Thank you!


Course Website

Stay Connected

I’m continuously learning about LLMs and enjoy sharing my findings and thoughts. If you’re interested in this journey, consider subscribing.

What to expect:

  • Occasional emails with my latest insights on LLMs
  • Early access to new content
  • No spam, just honest thoughts and discoveries

Footnotes

  1. https://maven.com/parlance-labs/fine-tuning. We had more than 2,000 students in our first cohort. The students who paid for the original course had early access to the material, office hours, generous compute credits, and a lively Discord community.↩︎

  2. We find instruction tuning a model to be a very useful educational experience even if you never intend to fine-tune, because it familiarizes you with topics such as (1) working with open-weights models, (2) generating synthetic data, (3) managing prompts, (4) fine-tuning, and (5) generating predictions.↩︎

  3. These testimonials are taken from https://maven.com/parlance-labs/fine-tuning.↩︎