04-26, 15:30–17:00 (America/Los_Angeles), Kodiak Theatre
Learn how to use large language models (LLMs) like GPT, together with tools like LangChain, to automate data-related tasks and make your work more efficient. This tutorial covers the basics of prompt engineering and LLMs, provides a step-by-step guide on getting started, and discusses tips & tricks for successful automation.
Large language models have been making headlines for their state-of-the-art capabilities. Tools like ChatGPT are great for standard tasks, but how do you harness the power of these models for niche tasks that demand more control, easy automation, and customizability? In this talk, I'll explore the concept of prompt engineering and how it can be used to automate tasks with large language models and tools like LangChain. This tutorial is designed for people who work with data and often wish they could automate time-consuming tasks that are generally not worth building separate, complex ML pipelines for. I'll cover the basics of these models from a practical perspective, without dwelling on the underlying technical details. I'll provide a step-by-step guide on how to approach automating a task and discuss the dos and don'ts of using large language models so you can avoid common pitfalls. I'll also share tips and best practices to help you get started. By the end of this tutorial, you will have a better understanding of how to leverage large language models to automate your tasks in an efficient and controlled way.
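To give a flavor of the kind of prompt-driven automation the tutorial covers, here is a minimal sketch using LangChain's prompt templates and chains. The task (sentiment labeling of customer reviews), the prompt wording, and the model settings are illustrative assumptions rather than material from the tutorial itself, and the API shown reflects early LangChain releases, which may since have changed.

```python
# Minimal sketch: automating a small data task with a prompt template and an LLM chain.
# Assumes an OPENAI_API_KEY environment variable is set; the task and prompt are illustrative.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# A prompt template with an explicit, constrained output format keeps responses easy to parse.
template = (
    "Classify the sentiment of the following customer review as "
    "Positive, Negative, or Neutral. Reply with a single word.\n\n"
    "Review: {review}\n"
    "Sentiment:"
)
prompt = PromptTemplate(input_variables=["review"], template=template)

# temperature=0 makes the output as repeatable as possible for automation.
llm = OpenAI(temperature=0)
chain = LLMChain(llm=llm, prompt=prompt)

reviews = [
    "The dashboard loads quickly and the export feature saves me hours.",
    "Support never replied and the data import silently dropped rows.",
]
for review in reviews:
    print(chain.run(review=review).strip())
```

The two choices that do most of the work here are the constrained output format (so downstream code can parse the answer) and the zero temperature (so repeated runs over the same data stay consistent), both of which fall under the prompt-engineering practices discussed in the session.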
No previous knowledge expected
I am an Assistant Professor at the University of Washington, Foster School of Business. I specialize in combining computer science with capital markets research. My expertise is in machine learning and natural language processing. For more information, see my website and GitHub page.