Prompt Engineering 101 for Business People

Tips for Crafting the Perfect Prompt for Each Model

Watch the video for a more entertaining version!

Let’s talk about “talking to AI”! Before we introduce overly complex terminologies and approaches, the rule is simple: if you can precisely explain what you want, your chances of getting what you want increase. That’s the main thing to know, and you’ll get 80% of the job done. The remaining 20% varies for every task and every LLM, and that’s where the techniques we’ll discuss now will come in handy. So here are the best tips to remember when prompting an LLM, what works well, and what doesn’t.

Just to set a common baseline, we’ll define a prompt as the input we provide to an LLM to tell it what we want. It’s often phrased as a question, a task description, or a set of examples and instructions. Think of it as the interface between human intent and machine output. There’s an old principle in computing: garbage in, garbage out. If you feed a poorly constructed prompt, you will likely get a poor answer.

Prompt engineering has now become a standalone job title in some cases — but more broadly, it’s a valuable new skill, much like knowing how to Google or use a spreadsheet. You don’t need a PhD in machine learning — just practice and common sense. Whether you’re building products with LLMs or using them in everyday applications, refining your prompting skills will pay off across many roles.

The precise prompting techniques will change, but at its core, this skill is simply developing an intuition for using, instructing, and interacting with LLMs in natural language to get the most productive outputs and benefit from the technology.

While some forms of prompt engineering are relatively brittle “hacks” that people have found to get desired results from the model, the broader discipline is simply effective communication with LLMs, which can be relatively similar to effective communication with humans. In particular, it often requires giving clear and concise instructions to what you can think of as a relatively novice “assistant” or “intern,” where you preemptively think through ways they may make mistakes or get off track.

In practice, you cannot anticipate all of these potential mistakes. There will still be new edge cases where your prompt isn’t followed. So, prompting is an iterative process where you test and improve your prompt over time, either within the same ongoing conversation or by retrying with edits to the original prompt. Getting the best output requires evaluating results across a wide range of potential use case scenarios. This is where patience will be your good friend.

In this article, we will only provide an overview of the most popular and valuable prompting techniques we also share in our course AI for Business, but I highly recommend the resource Learn Prompting, where we were early contributors to the materials, for a more in-depth understanding.

First of all, you may wonder: “Why learn how to prompt since I already know how to communicate my intent?”

Simple — because your intent and the AI’s understanding of your intent are slightly different, and that gap makes a huge difference in how well an AI responds to you. If you know how to craft a solid prompt, you’ll guide the AI to give you far better answers and save a lot of time and cost. The way you phrase things, the details you include, and the structure of your request all play a role in getting the best possible response.

Plus, different AI models have their own quirks. Some are chatty, some are straight to the point, and some follow instructions better than others. Knowing these differences helps you tweak your prompts to get exactly what you need.

At the end of the day, prompting is how you get an AI to do what you actually want.

One of the simplest ways to prompt an LLM is just to ask — this is called zero-shot prompting. You don’t give it any examples or extra instructions, just throw your request at it and see what happens. LLMs have gotten pretty good at handling a lot of tasks right out of the box. If you ask it to summarize a paragraph, you might just say, “Summarize the following text:” and paste in what you want summarized. If you need a quick translation, you could say something like “Translate the sentence ‘I am learning how to code’ into Spanish.” Sometimes, that’s all it takes.

No setup, no context — just a straight request. And sometimes, that works perfectly. Other times, though, the AI spits out nonsense, which is why there are more structured ways to prompt, like few-shot prompting.
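To make this concrete, here is a minimal sketch of a zero-shot call, assuming the OpenAI Python SDK with an API key set in your environment; the model name and the text to summarize are just placeholders, and the same idea applies to whichever chatbot or API you use.

```python
# Minimal zero-shot sketch, assuming the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY set in the environment; the model name is only an example.
from openai import OpenAI

client = OpenAI()

text = "Large language models predict the next word based on the words that came before."
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": f"Summarize the following text:\n\n{text}"}],
)
print(response.choices[0].message.content)
```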

There are times when LLMs need a little extra help to understand what you’re asking for. Providing a few examples of tasks and responses can make a big difference — this is called in-context learning or few-shot prompting. By seeing some sample inputs and outputs, the AI picks up on patterns and applies them to similar tasks. This helps with both accuracy and keeping a consistent style.

For example, if you want the LLM to summarize novels in a single sentence, you can give it a few examples first, like these:

Moby-Dick could be described in one line as Ahab, a vengeful sea captain, obsessively hunts the white whale that maimed him.

Or Pride and Prejudice could be explained as Elizabeth Bennet navigates love, class, and misunderstandings in Regency England.

And then ask the AI to do the same for Frankenstein.

The LLM might respond with something like this:

A scientist creates a sentient creature but recoils in horror, setting off a tragic chain of events.

That’s way better than just saying, “Summarize Frankenstein,” because now the AI understands the format and tone you’re looking for. You’re essentially teaching it how to think by example.
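If you are scripting this rather than typing it into a chat window, one way to lay the examples out as a single prompt (purely illustrative, reusing the call from the zero-shot sketch above) looks like this:

```python
# Few-shot sketch: the two worked examples teach the model the one-sentence format.
few_shot_prompt = """Summarize each novel in a single sentence.

Novel: Moby-Dick
Summary: Ahab, a vengeful sea captain, obsessively hunts the white whale that maimed him.

Novel: Pride and Prejudice
Summary: Elizabeth Bennet navigates love, class, and misunderstandings in Regency England.

Novel: Frankenstein
Summary:"""

# Send few_shot_prompt as the user message, exactly as in the zero-shot sketch above;
# the model should complete the last "Summary:" line in the same style.
```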

This idea isn’t new. A breakthrough paper, Language Models are Few-Shot Learners (GPT-3), showed how modern AI can perform all kinds of language tasks just by seeing a few examples — something that used to require massive amounts of training data for each specific task. For current LLMs, few-shot learning is common, but back then, this was a game-changer. And it still is a game-changer! I always try this first when tackling a new task.

But few-shot learning also has limits, especially for complex tasks. That’s where more advanced techniques like chain-of-thought prompting come in, helping the AI think through problems step by step. This is needed because LLMs have a habit of jumping straight to answers, sometimes skipping important reasoning steps. You wouldn’t answer a complex mathematical equation or a brain-racking riddle in one shot; you would take your time and decompose it into multiple steps. Chain-of-thought (CoT) prompting encourages the model to do exactly that: slow down and explain its thinking process before giving a final answer. It’s like making the model “think out loud.” Sometimes, it’s as simple as saying, “Let’s think step by step,” and the model breaks a problem into smaller, logical steps instead of rushing to a conclusion.

You will see the benefit of this technique primarily for logic and reasoning questions like: “A farmer has 48 apples, sells 20, and divides the rest equally into four baskets. How many apples are in each basket?” If the LLM tries to solve it in one go, it might make a mistake. But if you guide it by saying, “Let’s think this through step by step,” it will likely respond with something like: “48 minus 20 leaves 28, and 28 divided by 4 is 7.” With that breakdown, the final answer — 7 — is much more reliable.
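In a script, the nudge can be as small as appending one sentence to the question. Here is a tiny sketch with the same farmer example:

```python
# Chain-of-thought sketch: the trailing instruction nudges the model to show its work.
question = (
    "A farmer has 48 apples, sells 20, and divides the rest equally into four baskets. "
    "How many apples are in each basket?"
)
cot_prompt = question + "\n\nLet's think step by step."

# Send cot_prompt as the user message (see the zero-shot sketch above); a typical reply
# walks through "48 - 20 = 28, then 28 / 4 = 7" before stating the final answer.
```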

That said, chain-of-thought prompting doesn’t always work perfectly. Its effectiveness depends on the model and the task. It’s particularly useful for arithmetic, logic, and symbolic reasoning but doesn’t always help with other types of problems.

You are probably thinking that there is a limitation for every benefit, but that’s actually a good thing, because now you can directly use the technique that works for your task category, reducing the trial and error. These techniques will likely become less and less necessary as LLMs evolve, but they still help improve results. New techniques will surface to improve results, and we’ll be sure to cover them on the channel and in our course.

Nowadays, reasoning models, like OpenAI’s o1 and DeepSeek’s R1, do this chain-of-thought-like reasoning automatically, with the goal of “spending more tokens” on more complex queries and fewer on easier ones — just like we spend more brainpower when answering a complicated question.

Another technique that helps is role prompting. Sometimes, AI needs a little roleplay to get into the right mindset.

For example, if you ask, “You are a physics teacher. Explain Newton’s laws of motion in simple terms,” the AI will take on the role of a patient instructor and break things down clearly. But if you say, “You are a stand-up comedian. Explain Newton’s laws of motion,” you’ll get a more humorous take. The core information stays the same, but the tone and framing change depending on the role you assign.

This approach works across different fields because it gives the LLM an implicit set of rules to follow. It’s kind of like how you naturally adjust the way you talk depending on whether you’re speaking to your boss or your pet, hopefully.
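As a quick illustrative sketch, here is the same question wrapped in two different roles (the personas and wording are just examples):

```python
# Role-prompting sketch: same question, two personas, very different tone in the answers.
question = "Explain Newton's laws of motion in simple terms."

teacher_prompt = "You are a patient physics teacher. " + question
comedian_prompt = "You are a stand-up comedian. " + question

# Send either string as the user message (see the zero-shot sketch above); the facts
# should stay the same, but the framing and tone follow the assigned role.
```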

Newer models like OpenAI’s o1 are designed differently from traditional chatbots. Instead of relying on tricks like role prompting or chain-of-thought prompting, you don’t have to guide them step by step like an intern — you can treat them more like intermediate or even expert analysts (with some intern-like drawbacks to keep in mind from time to time). The key to getting the best results is providing detailed, structured context upfront. Instead of going back and forth refining a vague prompt, it’s better to start with a well-thought-out brief that includes all relevant information, such as database schemas, past attempts, or specifics about how you want the response formatted. Given all the details it needs upfront, the model can generate a well-reasoned answer without extra guidance — complex code, structured analyses, or technical explanations in one go. While it’s powerful in these areas, it’s not the best choice for creative tasks like writing a haiku, where traditional LLMs are faster and cheaper.
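Here is a rough sketch of what such a brief could look like; every specific in it (the tables, the SQL dialect, the past attempt) is hypothetical and only illustrates the level of context a reasoning model benefits from.

```python
# Sketch of a single, detailed brief for a reasoning model; all specifics are made up.
brief = """Goal: write a SQL query listing our top 10 customers by 2024 revenue.

Context:
- Tables: customers(id, name), orders(id, customer_id, total, created_at)
- Dialect: PostgreSQL
- Past attempt: grouping by customer name double-counted customers who share a name.

Output format: the query first, then a one-paragraph explanation of how it works."""

# Send the brief as one message to a reasoning model (e.g. o1) instead of drip-feeding
# the same details across several follow-up turns.
```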

There’s also something called system prompts: additional, and sometimes hidden, instructions that shape how a model behaves throughout a conversation. Unlike user prompts, which are provided during an interaction, system prompts are set at the beginning and establish general guidelines for the model’s responses. They define the model’s tone, ensure consistency, enforce ethical rules, and guide how information is structured.

For example, in a news summarization tool, a system prompt might instruct the model to always format its responses as a bulleted list. Similarly, OpenAI’s DALL·E uses system prompts to refine image generation — if you ask for an image of an AI robot, the model might automatically expand on that request to ensure the generated image aligns with ethical standards and provides a clear, structured result.

Not all system prompts are visible or editable by users. When chatting with GPT-4o through ChatGPT, the model operates under an unseen system prompt that influences its behavior, and users cannot modify it. However, when interacting with GPT-4o programmatically via an API, developers can provide their own system prompt to customize how the model responds, tailoring its behavior to specific needs or applications.
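As a minimal sketch of that programmatic case, again assuming the OpenAI Python SDK, a news summarization tool might set its system prompt like this (the exact wording is only an example):

```python
# System-prompt sketch for a news summarization tool, assuming the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()

article_text = "Paste the news story to summarize here."
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You are a news summarizer. Always answer as a bulleted list "
                       "of at most five short, factual points.",
        },
        {"role": "user", "content": article_text},
    ],
)
print(response.choices[0].message.content)
```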

All these techniques are a great starting point, but if you think using these techniques means you will get the perfect output on the first go, you will be disappointed. Coming up with the perfect prompt on the first try doesn’t always happen. A lot of the time, it takes a few attempts — this is where an iterative approach comes in. You start with a prompt, see what the model gives you, tweak it, and try again. It’s a back-and-forth process where you gradually fine-tune your request to get exactly what you need. Even experts don’t get it right the first time — they test, experiment, and refine.

Say you ask, “Tell me about the French Revolution.” The LLM might give you a long, detailed response when all you really wanted was a quick summary. You could then refine your request with, “Give me three key facts about the French Revolution.” If the answer is still too formal or lengthy, you might tweak it further: “Now explain those facts in one casual paragraph.” By making these small adjustments, you shape the AI’s response to match your expectations.

A good workflow is to draft a prompt, test it, review the output, tweak the wording, and repeat until you get what you want. Each iteration brings you closer to the ideal response. An imperfect answer isn’t a failure — it’s just feedback. If the LLM didn’t interpret your request the way you intended, rephrase it or add more details and try again.
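If you are doing this programmatically rather than in a chat window, the loop looks roughly like this (a sketch assuming the same OpenAI Python SDK setup as above, with the French Revolution refinements as the example):

```python
# Iteration sketch: keep the whole conversation and append one refinement per turn.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Tell me about the French Revolution."}]

for follow_up in [
    "Give me three key facts about the French Revolution.",
    "Now explain those facts in one casual paragraph.",
]:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})
    messages.append({"role": "user", "content": follow_up})

final = client.chat.completions.create(model="gpt-4o", messages=messages)
print(final.choices[0].message.content)  # the refined, casual-paragraph version
```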

At the end of the day, writing good prompts comes down to being clear and precise. The more specific the instructions, the better the results. If a prompt isn’t working, refining it quickly is key. A good approach is to anticipate how the AI might misinterpret the request, test prompts under different conditions, and avoid making assumptions about what the model understands. Simplicity also helps; prompts should be straightforward without unnecessary complexity. It’s important to balance general cases with edge cases, thinking about how the prompt will perform in real-world situations.

One thing that you always need to keep in mind is that LLMs sometimes make things up — these are called hallucinations, which happen when the model confidently spits out wrong or misleading information. The most common case is factual errors, like claiming New York is the capital of the United States. Another issue comes from outdated training data. If you ask about the 47th president of the U.S., but the model was last trained before the election, it won’t have the latest info and might guess instead.

Sometimes, hallucinations happen because of the way we phrase a prompt. If we ask an LLM to confirm something that isn’t true or feed it incomplete information, it might take that as fact and build on it. Unfortunately, hallucinations aren’t fully solvable because of how LLMs work — they generate responses based on patterns, not direct access to a real-time knowledge base. That’s why it’s always a good idea to double-check important information yourself, especially for anything factual or high-stakes.

There is now a partial fix for this: you can ask the LLM to provide sources by enabling web search where available — ChatGPT, for example, only cites sources when browsing is turned on. If the chatbot you’re using doesn’t have web access, it won’t be able to fetch real-time references, so relying on it for citations without verification isn’t the best idea. When accuracy matters, take LLM responses with a grain of salt and cross-check with trusted sources.

In the end, getting good results from an LLM is an ongoing process of refinement. Clarity, context, and specificity make all the difference. A vague prompt can lead to irrelevant or even incorrect answers, while a well-structured one gives the LLM a clear path to follow. Since such an advanced language model doesn’t come with a user manual tailored to every request, your prompt acts as that manual. Think of prompt engineering as learning a new way to communicate — one that bridges human intent with machine understanding. It doesn’t take technical expertise, just practice and a willingness to experiment.

Also, as a final note, remember that different models respond differently, so prompt optimization isn’t one-size-fits-all. Reasoning models like o1 require a distinct approach, while traditional LLMs benefit from more structured prompting techniques. If you’ve enjoyed this article, check out our full course for business on the Towards AI Academy!

Thank you for reading, and I will see you in the next one!