Prompt engineering is the craft of designing input requests for Large Language Models (LLMs) so that they produce the intended outputs. Here are several techniques for crafting single prompts or sequences of prompts:

1. Least-To-Most Prompting

Least-To-Most Prompting decomposes a complex problem into simpler sub-problems and solves each sequentially. The answer to each sub-problem is fed back as context for the next prompt, so the model builds up to the full solution step by step, which helps it handle complex reasoning.
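A minimal sketch of this loop, assuming a hypothetical `call_llm` helper that stands in for a real LLM API call:

```python
# Least-to-Most Prompting sketch. `call_llm` is a hypothetical stub;
# a real implementation would call an actual LLM API.
def call_llm(prompt: str) -> str:
    # Placeholder: echo the last question so the control flow is visible.
    last_line = prompt.strip().splitlines()[-1]
    return f"[answer to: {last_line}]"

def least_to_most(final_question: str, subquestions: list[str]) -> str:
    """Solve simpler sub-problems first; each answer becomes context."""
    context = ""
    for sq in subquestions:
        answer = call_llm(context + f"Q: {sq}")
        context += f"Q: {sq}\nA: {answer}\n"
    # The final prompt carries all solved sub-problems as context.
    return call_llm(context + f"Q: {final_question}")
```

The key design point is the accumulating `context` string: each sub-answer is prepended to the next prompt, so the final question is asked against a fully worked-out chain of sub-solutions.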

2. Self-Ask Prompting

Self-Ask Prompting has the LLM decompose a question into smaller follow-up questions, answer each one, and then combine those intermediate answers into a final answer. This makes the model's reasoning process explicit and breaks hard questions into manageable parts.
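A rough sketch of the follow-up loop, again with a hypothetical `call_llm` stub in place of a real model (a real model would also decide for itself when no more follow-ups are needed):

```python
# Self-Ask Prompting sketch. `call_llm` is a hypothetical stand-in for
# an LLM API; the transcript format loosely mirrors the technique.
def call_llm(prompt: str) -> str:
    # Placeholder: a real model would generate follow-ups and answers.
    return f"[model output for: {prompt[:40]}...]"

def self_ask(question: str, max_followups: int = 2) -> str:
    transcript = (
        f"Question: {question}\n"
        "Are follow up questions needed here: Yes.\n"
        "Follow up:"
    )
    for _ in range(max_followups):
        followup = call_llm(transcript)             # model proposes a follow-up
        answer = call_llm(f"Question: {followup}")  # answer it in isolation
        transcript += f" {followup}\nIntermediate answer: {answer}\nFollow up:"
    transcript += " none.\nSo the final answer is:"
    return call_llm(transcript)
```

Because every follow-up and intermediate answer is appended to the transcript, the finished prompt is itself a readable record of the model's reasoning.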

3. Meta-Prompting

Meta-Prompting asks the agent to reflect on its own performance and adjust its instructions accordingly. An overarching meta-prompt drives the process, so the agent self-improves based on feedback about its own output.
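One way this loop can be sketched, assuming a hypothetical `call_llm` stub and an illustrative meta-prompt template (the template wording is an assumption, not a fixed standard):

```python
# Meta-Prompting sketch: an outer meta-prompt asks the model to revise
# its own instructions. `call_llm` is a hypothetical LLM-API stand-in.
def call_llm(prompt: str) -> str:
    # Placeholder: a real model would return rewritten instructions.
    return f"[revised instructions based on: {prompt[:30]}]"

# Illustrative meta-prompt template (an assumption for this sketch).
META_PROMPT = (
    "You previously followed these instructions:\n{instructions}\n"
    "Feedback on your last attempt:\n{feedback}\n"
    "Rewrite the instructions to fix the weaknesses identified above."
)

def meta_improve(instructions: str, feedback: str, rounds: int = 2) -> str:
    """Repeatedly ask the model to rewrite its own instructions."""
    for _ in range(rounds):
        instructions = call_llm(
            META_PROMPT.format(instructions=instructions, feedback=feedback)
        )
    return instructions
```

In practice the `feedback` would come from an evaluation of the agent's actual output on a task, closing the self-improvement loop.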

4. ReAct

ReAct interleaves reasoning and action: the model induces, tracks, and updates action plans while gathering additional information from external sources such as search tools. It has shown effectiveness on language and decision-making tasks, and the visible reasoning traces improve interpretability and trustworthiness.
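The Thought/Action/Observation loop can be sketched as follows; `call_llm` and `search` are hypothetical stubs standing in for a real model and a real external tool, and the stubbed replies exist only to make the control flow runnable:

```python
import re

# ReAct sketch: alternate model "Thought/Action" steps with tool
# "Observation" results. Both helpers below are hypothetical stubs.
def call_llm(prompt: str) -> str:
    # Placeholder: once an observation is present, the stub finishes.
    if "Observation" in prompt:
        return "Thought: I now know the answer.\nAction: finish[Paris]"
    return "Thought: I should look this up.\nAction: search[capital of France]"

def search(query: str) -> str:
    # Hypothetical external tool, e.g. a search API.
    return "Paris is the capital of France."

def react(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(prompt)
        prompt += step + "\n"
        match = re.search(r"Action: (\w+)\[(.*)\]", step)
        if match is None:
            break
        if match.group(1) == "finish":
            return match.group(2)  # final answer extracted from the action
        observation = search(match.group(2))
        prompt += f"Observation: {observation}\n"
    return "no answer"
```

Each tool result is appended to the prompt as an Observation, so the model's next Thought is grounded in externally gathered information rather than its parametric memory alone.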

5. Iterative Prompting

Iterative Prompting refines prompts over multiple rounds, enriching them with contextual information at each step. Ensuring that prompts carry relevant context reduces irrelevant facts and hallucinations in the output while enhancing the model's context awareness.

6. Sequential Prompting

Sequential Prompting applies LLMs to the ranking stage of recommender systems: the model is prompted with a user's interaction history and asked to rank a set of candidate items. This setting exposes subtle differences in recommendation ability, particularly over large candidate sets.
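A toy sketch of the ranking prompt, with a hypothetical `call_llm` stub (which here simply reverses the candidate list so the re-ranking step is visible):

```python
# Sequential Prompting sketch: an LLM ranks candidate items given a
# user's interaction history. `call_llm` is a hypothetical stub.
def call_llm(prompt: str) -> str:
    # Placeholder: a real model would return a genuinely ranked list;
    # the stub reverses the candidates to make re-ranking observable.
    items = prompt.split("Candidates: ")[1].split(", ")
    return ", ".join(reversed(items))

def rank_candidates(history: list[str], candidates: list[str]) -> list[str]:
    """Prompt an LLM to rank a candidate set against the user's history."""
    prompt = (
        f"User history: {', '.join(history)}\n"
        "Rank these items by how likely the user is to enjoy them.\n"
        f"Candidates: {', '.join(candidates)}"
    )
    return call_llm(prompt).split(", ")
```

In a production recommender, a cheaper retrieval stage would first narrow a large catalog to this candidate set, with the LLM handling only the final ranking.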
