Prompt engineering tips for ChatGPT and other LLMs

Master the art of prompt engineering -- from basic best practices to advanced strategies -- with practical tips to get more precise, relevant output from large language models.

As generative AI hype began to build last year, the tech industry saw a spate of job listings seeking prompt engineers to craft effective queries for large language models such as ChatGPT. But as LLMs become more sophisticated and familiar, prompt engineering is evolving into more of a widespread skill than a standalone job.

Prompt engineering is the practice of crafting inputs for generative AI models that are designed to elicit the most useful and accurate outputs. Learning how to formulate questions that steer the AI toward the most relevant, precise and actionable responses can help any user get the most out of generative AI -- even without a technical background.

4 basic prompt engineering tips for ChatGPT and other LLMs

If you're just starting to explore generative AI tools such as ChatGPT, following a few best practices can quickly improve the quality of responses you get from LLM chatbots.

1. Add detail and context

General, nonspecific prompts can elicit similarly broad, vague answers. Clarifying the scope of your question and providing relevant context is essential to getting the best answers out of LLMs. In a business context, including information such as your company's industry and target market can help the AI make better inferences and generate responses that are more specific and useful.

For example, compare a general question such as "What are some online marketing tips?" with the more detailed prompt "Help me build a digital marketing strategy for my small e-commerce business selling home decor." The latter includes several key pieces of context that help narrow the scope of the problem, reducing the likelihood of receiving irrelevant output such as enterprise-scale tips that exceed the resources of a small business.
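
The same principle carries over to programmatic use. The sketch below -- a minimal example assuming the openai Python package (v1.x), an OPENAI_API_KEY environment variable and an illustrative model name -- shows one way to supply business context in a system message so the user prompt itself can stay short and focused:

```python
# A minimal sketch of supplying context programmatically.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY
# environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

# Context that would otherwise bloat every prompt lives in the
# system message, so each user prompt can stay focused.
business_context = (
    "You are advising a small e-commerce business that sells home decor. "
    "Assume a limited marketing budget and a team of two people."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat-capable model works
    messages=[
        {"role": "system", "content": business_context},
        {"role": "user", "content": "Help me build a digital marketing strategy."},
    ],
)
print(response.choices[0].message.content)
```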

2. Be clear and concise

Although LLMs are trained on extensive amounts of textual data, they can't truly understand language. ChatGPT doesn't use logical reasoning; instead, the model formulates its responses by predicting the most likely sequence of tokens -- words or fragments of words -- based on the input prompt.

Using clear and straightforward language in prompts helps mitigate this limitation. With a more concise and targeted prompt, the model is less likely to get sidetracked by unimportant details or ambiguous wording.

For example, even though the following sentence isn't the most concise, a human being would still likely understand it easily: "I'm looking for help brainstorming ways that might be effective for us to integrate a CRM system within our business's operational framework." An LLM, however, would probably respond better to a more straightforward framing: "What are the steps to implement a customer relationship management system in a midsize B2B company?"

3. Ask logically sequenced follow-up questions

An LLM's context window is the amount of text that the model can take into account at a given point in the conversation. For chatbots such as ChatGPT, the conversation history fills this window as a chat progresses: newer messages are appended, and earlier ones remain available until the window's limit is reached. This means that an ongoing dialogue with ChatGPT can draw on earlier parts of the chat to inform its responses.

You can take advantage of ChatGPT's sizable context window by breaking down complicated queries into a series of smaller questions, a technique known as prompt chaining. Rather than trying to pack every relevant detail into one sprawling initial prompt, start with a broader prompt as a foundation, then follow up with narrower, more targeted queries. This plays to LLMs' tendency to prioritize newer information within the context window when formulating their responses.

For example, you might start by asking a direct question, such as "How is AI used in cybersecurity?" -- an approach likely to result in a straightforward list or description. To dig deeper, you could then try a more open-ended question, such as "Can you explain the debate around the use of AI in cybersecurity?" This phrasing prompts the LLM to take a more nuanced tone and incorporate different viewpoints, including pros and cons.

[Screenshot: Two ChatGPT conversations comparing prompts about AI's role in cybersecurity. Asking the AI to explain the debate around using AI in cybersecurity results in a more nuanced answer that considers possible risks as well as benefits.]
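
If you're working with an LLM through an API rather than the chat interface, prompt chaining corresponds to resending the accumulated message history with each new question. The following is a minimal sketch, assuming the openai Python package (v1.x) and an illustrative model name:

```python
# A minimal sketch of prompt chaining over a chat completions API.
# Each call resends the accumulated history, so later questions can
# build on earlier answers. Assumes the openai package (v1.x).
from openai import OpenAI

client = OpenAI()
messages = []

def ask(prompt: str) -> str:
    """Append a user turn, get the reply, and keep it in the history."""
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=messages,
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

# Start broad, then narrow -- the second prompt relies on the first.
ask("How is AI used in cybersecurity?")
print(ask("Can you explain the debate around those uses, including pros and cons?"))
```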

4. Iterate and experiment with different prompt structures

The way you ask a question affects how the LLM responds. Experimenting with prompt structures can give you a firsthand understanding of how different approaches change the model's responses by drawing on different aspects of its knowledge and reasoning capabilities.

For example, one way to frame prompts is through comparisons, such as "How does Agile compare with traditional Waterfall software development?" This is particularly useful for decision-making because it encourages responses that include explanatory information as well as analysis and contrasts.

Creative prompting approaches such as roleplaying and scenario building can also yield more distinctive, detailed responses. Roleplaying prompts ask the LLM to take on a certain persona and respond from that perspective. For example, the following prompt encourages the AI to take a strategic, analytical and business-oriented approach when generating its response: "Imagine you are a tech business consultant. What steps would you recommend that a new SaaS startup take to gain market share?"

Likewise, asking the AI to speculate about a hypothetical scenario can encourage more detailed and creative responses. For example, you might ask, "A medium-sized company is aiming to shift fully to the cloud within a year. What potential benefits, risks and challenges are its IT leaders likely to face?"
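
When prompting through an API, roleplaying is typically expressed as a system message that establishes the persona before the user's question. A minimal sketch under the same assumptions as the earlier examples:

```python
# A minimal sketch of a roleplaying prompt: the system message sets
# the persona, and the user message carries the actual question.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[
        {"role": "system", "content": "You are a tech business consultant."},
        {
            "role": "user",
            "content": "What steps would you recommend that a new SaaS "
                       "startup take to gain market share?",
        },
    ],
)
print(response.choices[0].message.content)
```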

Advanced LLM prompt engineering strategies

If you're already comfortable with the basics of prompt engineering, testing out some more complex approaches can further enhance the quality of the responses you get from LLMs.

Few-shot learning

Few-shot learning provides the model with relevant examples before asking it to respond to a query or solve a target problem. It's a prompt engineering strategy that borrows some ideas from the more general machine learning technique of supervised learning, where the model is given both the input and the desired output in order to learn how to approach similar tasks.

Few-shot prompts are especially useful for tasks such as style transfer, which involves changing aspects of a piece of text such as tone or formality without altering the actual content. To experiment with few-shot learning, try including two to four examples of the task you want the model to perform in your initial query, as shown in the example below.

[Screenshot: A ChatGPT conversation demonstrating few-shot prompting. Giving the model examples of accurately performing the target task can help ensure that you receive correct answers in your desired format.]
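
Few-shot prompting translates naturally to API use as well. The sketch below, assuming the openai Python package (v1.x) and an illustrative model name, gives the model two examples of a style transfer task -- rewriting informal sentences in a formal tone -- before presenting the real input:

```python
# A minimal sketch of few-shot prompting for style transfer.
# Assumes the openai Python package (v1.x); the model name is
# illustrative.
from openai import OpenAI

client = OpenAI()

# Two worked examples show the model the task and the desired format
# before it sees the real input.
few_shot_prompt = """Rewrite each sentence in a formal, professional tone.

Informal: Hey, the meeting got pushed to Thursday.
Formal: Please note that the meeting has been rescheduled to Thursday.

Informal: Can you send me that report ASAP?
Formal: Could you please send the report at your earliest convenience?

Informal: We totally nailed the demo yesterday.
Formal:"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)
```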

Chain-of-thought prompting

Chain-of-thought prompting is a technique often used for more complicated reasoning tasks, such as solving a complex equation or answering a riddle. LLMs aren't naturally well suited to handle logic and reasoning problems, but chain-of-thought prompting appears to help improve their performance.

To experiment with chain-of-thought prompting, ask the model to think out loud, breaking down its approach to a problem into smaller substeps before generating the final answer. You can also combine few-shot prompting with chain-of-thought prompting to give the model an example of how to work through a problem step by step, as shown in the following example.

[Screenshot: A ChatGPT conversation combining few-shot and chain-of-thought prompting. Giving the model an example of successfully reasoning through a math problem encourages it to exhibit the same behavior in its response.]
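
As a rough illustration, the sketch below (same assumptions as the earlier examples) combines a worked, step-by-step example with a new problem, prompting the model to reason through the second problem the same way:

```python
# A minimal sketch combining few-shot and chain-of-thought prompting:
# the worked example demonstrates step-by-step reasoning, and the
# trailing "Let's think step by step" cues the model to do the same.
from openai import OpenAI

client = OpenAI()

cot_prompt = """Q: A store sells pens in packs of 12. If Maria buys 4 packs
and gives away 15 pens, how many does she have left?
A: Let's think step by step. 4 packs x 12 pens = 48 pens.
48 - 15 = 33. The answer is 33.

Q: A train travels 60 miles per hour for 2.5 hours, then 40 miles per
hour for 1 hour. How far does it travel in total?
A: Let's think step by step."""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{"role": "user", "content": cot_prompt}],
)
print(response.choices[0].message.content)
```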

Meta-prompting

Meta-prompting uses the LLM itself to improve prompts. In a meta-prompt, ask the LLM to suggest the best way to formulate a prompt for a particular task or question -- or, similarly, to refine a draft prompt you've already written. For example, you might ask, "I want to ask a language model to generate creative writing exercises for me. What's the most effective way to phrase my query to get detailed ideas?"

Because meta-prompting uses AI to write prompts, this strategy can elicit creative and novel ideas that differ from those that occur to a human user. It's also helpful if you find yourself writing many prompts in your day-to-day workflows because it can automate the sometimes time-consuming process of crafting effective prompts.
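
Meta-prompting can also be automated in two API calls: one to have the model write a better prompt and a second to run it. A minimal sketch under the same assumptions as the earlier examples:

```python
# A minimal sketch of meta-prompting: the model is first asked to
# write a better prompt, which is then used in a second request.
from openai import OpenAI

client = OpenAI()

def complete(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: ask the model how to phrase the prompt.
improved_prompt = complete(
    "I want to ask a language model to generate creative writing "
    "exercises for me. Write the most effective prompt for that task. "
    "Return only the prompt itself."
)

# Step 2: use the model-written prompt as the actual query.
print(complete(improved_prompt))
```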

Lev Craig covers AI and machine learning as the site editor for TechTarget Editorial's Enterprise AI site. Craig has previously written about enterprise IT, software development and cybersecurity, and graduated from Harvard University.
