With the rise of LLMs, prompt engineering has become a highly impactful skill in the AI industry. However, manual prompt tuning is challenging, time-consuming, and not always generalizable across different models. This raises a natural question: can prompts be automatically learned from data? The answer is yes, and in this talk, we will explore how.
First, we will give a high-level overview of prompt optimization approaches, starting with a simple technique, bootstrapped few-shot, which automatically generates and selects an effective set of demonstrations for each step in an LLM chain. We will then cover more sophisticated approaches, such as MIPRO, which jointly optimizes instructions and demonstrations, and TextGrad, which refines prompts using LLM-generated textual feedback.
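To make the core idea concrete, here is a minimal sketch of bootstrapped few-shot in plain Python, not tied to any framework: run the model over a small training set, keep only the traces that pass a task metric, and reuse a handful of them as demonstrations. The helper names (`call_model`, `exact_match`) are illustrative stand-ins, not part of any real library.

```python
import random

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; replace with a real client."""
    return "stub answer"

def exact_match(prediction: str, label: str) -> bool:
    """Toy task metric: case-insensitive string match."""
    return prediction.strip().lower() == label.strip().lower()

def bootstrap_demos(trainset, k=4):
    """Run the model over the trainset and keep only traces the metric accepts."""
    verified = []
    for question, label in trainset:
        prediction = call_model(f"Q: {question}\nA:")
        if exact_match(prediction, label):
            verified.append((question, prediction))
    random.shuffle(verified)   # cheap stand-in for a smarter selection step
    return verified[:k]        # these become the few-shot demonstrations

def build_prompt(demos, question):
    """Prepend the bootstrapped demonstrations to a new question."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in demos)
    return f"{shots}\nQ: {question}\nA:"
```

The key point is that the demonstrations are verified by the metric rather than hand-written, so the same loop transfers to new tasks and models.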
Afterwards, we will turn to the practical side, showing how these techniques can be applied through popular frameworks such as DSPy and AdalFlow.
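As a taste of that practical part, the sketch below shows what this looks like in DSPy: a program is defined by a signature, and an optimizer compiles it against a metric and a trainset. The model name and the tiny trainset are placeholder assumptions, and DSPy's API may differ slightly across versions.

```python
import dspy

# Placeholder model choice; any LM supported by DSPy works here.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# A one-step program: the signature "question -> answer" defines its I/O fields.
qa = dspy.ChainOfThought("question -> answer")

# Tiny illustrative trainset; real optimization needs more examples.
trainset = [
    dspy.Example(question="What is the capital of France?",
                 answer="Paris").with_inputs("question"),
    dspy.Example(question="Who wrote Hamlet?",
                 answer="William Shakespeare").with_inputs("question"),
]

def metric(example, prediction, trace=None):
    """Accept a trace if the gold answer appears in the model's answer."""
    return example.answer.lower() in prediction.answer.lower()

# BootstrapFewShot generates candidate demonstrations from the trainset,
# keeps those that pass the metric, and compile() returns the tuned program.
optimizer = dspy.BootstrapFewShot(metric=metric, max_bootstrapped_demos=4)
compiled_qa = optimizer.compile(qa, trainset=trainset)

print(compiled_qa(question="What is the capital of Italy?").answer)
```

Swapping `BootstrapFewShot` for `dspy.MIPROv2` switches to an optimizer that also proposes and searches over instruction text, typically at higher cost.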
Finally, we will weigh the benefits and trade-offs of these approaches and frameworks in terms of cost, complexity, and performance, so the audience can judge whether prompt engineering is truly dead.
Outline: