Version: 2025-05-08

How to Think Like a Prompt Engineer

On the face of it, you might expect to approach prompt engineering like any other engineering problem. Scope out the issue, methodically come up with a solution, and put it into action. In reality, prompt engineering can be much less of a straight line from problem to solution, with a lot of experimentation and investigation involved before implementing a successful prompt.

Here are some of the qualities and approaches you’ll need to adopt to be a successful prompt engineer:

  1. Believe everything is promptable
    Start with a fundamental underlying belief that everything is promptable. There is a problem for you to solve, and it is not a problem with the model (as long as you are using the model appropriately, e.g., not asking a model without vision/image capabilities to describe a photo).

    When you believe that everything is promptable, it makes you more resilient to the inevitable frustration over a string of prompts that just aren’t resulting in the response you’re looking for.

    You may also run into outside technical issues when you’re prompting over files, or situations that can’t be solved with just one prompt. These are technically still “promptable”; they just require a more complex process or approach.

  2. Have a process
    Prompting is creative and experimental, but our underlying goal is to create a reliable and automated process for business use. So you will want a process for comparing prompt performance that lets you make decisions based on key metrics, like accuracy.

    Your process is also key to making progress and scaling your prompts. It doesn’t make sense to start with scattershot prompts across 100 files or use cases. You’ll be much more successful and much less frustrated if you start with 1-5 documents or examples to get a baseline set of prompts, then check the performance on a bigger set, and scale up gradually until you’re confident your prompts are performing as expected.
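    The process above can be sketched as a tiny evaluation loop. This is a minimal illustration, not a real harness: `run_model` is a hypothetical stand-in for whatever call sends a prompt and a document to your model.

```python
def accuracy(results):
    """Fraction of (got, want) pairs where the model output matched."""
    if not results:
        return 0.0
    return sum(1 for got, want in results if got == want) / len(results)

def evaluate(prompt, samples, run_model):
    """Score one prompt against a small labeled sample set.

    samples: list of (document_text, expected_answer) pairs. Start with
    1-5 documents for a baseline, then rerun on a bigger set before
    scaling up.
    run_model: hypothetical callable (prompt, document) -> model output.
    """
    results = [(run_model(prompt, doc), want) for doc, want in samples]
    return accuracy(results)
```

    Comparing two prompt versions is then just `evaluate(prompt_a, samples, run_model)` versus `evaluate(prompt_b, samples, run_model)` over the same sample set.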

  3. Experiment like a mad scientist
    To write high-performing prompts, you’re going to have to experiment. Choose different synonyms, change the order of your rules or instructions, play with sentence structure. Try strategies that break your prompt into different stages, and see whether a waterfall approach with multiple prompts helps.

    Nothing is really off the table; there are a lot of good strategies in our prompt guides, and there are more published every day in AI research papers. Leverage the ones that work best for your use case, and be open to the possibility that you might discover a brand new strategy, or at least make an existing one your own with a few tweaks.

    Let your creative flag fly, left brain in sync with right brain. You’ll ultimately balance it out by making data-driven decisions.

  4. Be inquisitive and act like an interrogator
    A big part of the process and ultimately your success is being inquisitive. When the model doesn’t respond the way you expected, can you look objectively at your prompt, the output, and any files you’re working with, and come up with a hypothesis as to why?

    You can even change your strategy temporarily and begin asking the model why. Ask the model to explain the answer it’s giving, even if your ideal end state response doesn’t include an explanation. It’s not a guarantee that acting on the model’s explanation will solve the problem; this is still a probabilistically generated response. But it may give you clues, and if you follow their trail you might make productive adjustments to your prompts.

  5. Look for patterns
    Don’t meme yourself into a replica of that “It’s Always Sunny in Philadelphia” murder-board scene (or “Homeland,” depending on your viewing preferences), but do look for patterns in your data set and model responses.

    If you’re working with a mixed set of documents, is there a certain type that isn’t performing well with your current prompts? Can you prompt those in isolation, then incorporate those changes into the prompts for the full set and preserve your performance?

    The patterns and common errors across documents or prompts are your clues. Gather them and act on them! Then take what you’ve learned and apply it to future prompts.

  6. Be stubborn
    You’re going to feel like you’re running into dead ends. But remember the first quality we ask prompt engineers to adopt: believe everything is promptable. So as long as you’re asking the model to accomplish something it is technically enabled to do, get stubborn.

    Make demands of the model, and see how the response changes. Give it ultimatums like “you will be fired if you don’t answer,” or yell at it: “YOU HAVE TO MAKE A SELECTION.” See how it reacts, how the answer changes. Or roll your prompt back to the simplest, most stripped-down version and build back up from there to refine your response.

    Just don’t give up. I can’t tell you how many times someone has told me, “I can’t prompt this,” and I’m able to either solve the problem or make significant progress pretty quickly. You can, too… you just have to be stubborn.

  7. Beat “prompter’s block” - by moving to a different block
    Maybe you’re struggling to extract exact details from a document. Keep working with the same set, but change your tack a bit. Start working on having the model classify your documents instead, employing all the strategies we’ve detailed to this point (pattern recognition, interrogation, experimentation) to learn more about how the model is interacting with your samples.

    Classifying can help surface similarities between seemingly unrelated documents. You might even use this to create a tiered prompt, where the model classifies first to enable more specific extractions.

    Asking the model to reason over a document or score it on a scale related to your extraction goals may help, too. The model is going to recognize associations between the words in your documents and prompts that you have not yet noticed. Prompting the same themes across differing core tasks will help tease out the patterns you need to leverage to make progress.

    Change your thinking, change your goal. Change whatever you need to beat the inevitable writer’s (prompter’s) block.
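    The tiered (“classify first, then extract”) idea can be sketched as a simple router. Everything here is hypothetical: the document types, the prompts, and `classify_fn`, which stands in for a first-pass classification call to the model.

```python
# Hypothetical per-type extraction prompts; replace with your own.
EXTRACTION_PROMPTS = {
    "invoice": "Extract the invoice number and the total due.",
    "contract": "Extract the parties and the effective date.",
}
FALLBACK_PROMPT = "Extract the document title and date."

def route(document, classify_fn):
    """Classify a document, then select the extraction prompt for its type.

    classify_fn: hypothetical callable (document) -> type label, standing
    in for a classification prompt sent to the model first.
    """
    doc_type = classify_fn(document)
    return doc_type, EXTRACTION_PROMPTS.get(doc_type, FALLBACK_PROMPT)
```

    Documents the classifier can’t place still get a usable (if more generic) extraction prompt via the fallback, so the tiered flow degrades gracefully.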

  8. Take a break
    Being stubborn is not synonymous with working nonstop, however. Prompting is still a creative process, and breaks can be warranted and helpful. Like anyone who says they have their best ideas in the shower, a prompt engineer might have a new idea about how to approach a prompt on a lunch break walk.

    Take a breather and let the ideas swirl once in a while. Coming back with fresh eyes is actually a key part of the process.

  9. Be data-driven
    On the flip side, this is still a business. Your goal is to create a reliable, repeatable process with prompt performance you can depend on. So, while “everything is promptable,” you may have to follow the data at some point.

    You can’t create custom prompts for every document or use case; that’s not practical or cost-efficient. So, how high can you push your accuracy within the process that’s going to work for your business?

    You may see success in a prompt version that ends up not scaling to your whole sample set. This doesn’t negate your win! But when you decide what prompts to put into production, you do need to make a data-driven decision that nets you and your company the prompts that have the highest performance. These are not always the prompts that are the most creative or took the most effort.
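    In code terms, the production decision boils down to something this simple (the variant names and accuracy numbers below are made up for illustration):

```python
def pick_winner(scores):
    """Return the prompt variant with the highest measured accuracy."""
    return max(scores, key=scores.get)

# Hypothetical accuracies, measured on the same held-out sample set.
scores = {
    "v1-verbose": 0.81,
    "v2-rules-first": 0.93,
    "v3-two-stage": 0.90,
}
```

    Here `pick_winner(scores)` promotes “v2-rules-first” — even if “v3-two-stage” was the cleverer prompt or took the most effort.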

  10. Be pragmatic
    There will be situations where the mad scientist of prompting has to hang up their lab coat and make a boring, logical, pragmatic decision.

    For example, you’ve pushed your prompt accuracy performance well above what a human process typically produces. But you still want that 100%, you over-achiever. Is it better for your company if you keep pushing for improvements that may or may not be attainable? Or can your energy be focused on improving a new set of prompts where there are much larger gains to be made?

    If there’s one document out of your sample set that’s not returning the correct answers, should you keep hounding those prompts until that document works? Maybe, if a large percentage of your production documents will look like this one. But if it’s 1 of 100? 1 of 1000? Is your effort worth it there, or is it time to call that document an outlier and focus on more impactful issues?

    These can be hard calls to make, particularly when persistence and creativity are so regularly rewarded in prompting! But if you let yourself get caught up by every nagging exception, the reality is you’ll never get any of your work into production… and that’s not worth it, either.