You can do amazing things with artificial intelligence - if you know how to write (and rewrite) great AI prompts.
I didn't. So I talked to my colleague Theo Bleier, an AI engineer who spends his days tinkering with the pre-built prompts that you see when you use Notion AI. Now I'm able to explain how you can wield generative AI's staggering power to enhance your work and life, starting today.
Note: all the prompts and responses in this post were done in Notion AI. But the principles we discuss should work similarly in any standard large language model (LLM).
Let's start by thinking about how LLMs actually work
Large language models (LLMs) like Notion AI, ChatGPT, and Llama are trained on datasets comprising vast amounts of language: the equivalent of millions of books, web pages, and so on. They turn each word (or part of a word) in these billions of sentences into a token, then rank every token according to how often it appears next to every other token in that dataset.
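If you're curious what "ranking tokens by how often they appear next to each other" looks like in practice, here's a toy Python sketch. It's nothing like a production LLM (those use neural networks trained at enormous scale), and the tiny corpus is invented purely for illustration, but it captures the next-token idea in miniature:

```python
from collections import Counter, defaultdict

# A made-up, tiny "training corpus" (real models train on billions of sentences).
corpus = "we pitched the tent by the lake the tent kept us warm the campfire kept us warm".split()

# Count how often each token follows each other token (a simple bigram table).
next_token_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_token_counts[current][following] += 1

def predict_next(token: str) -> str:
    """Pick the most frequent follower of `token` in our toy corpus."""
    followers = next_token_counts.get(token)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("the"))   # -> "tent" (follows "the" more often than "lake" or "campfire")
print(predict_next("kept"))  # -> "us"
```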
When you prompt an AI model, it uses these rankings to study your request and send back what it considers an ideal response. For simple prompts, this process seems fairly straightforward.

AI's response
But LLMs do occasionally diverge from their token rankings to generate a creative response (thus the name "generative AI"). The result can be, well, kind of spooky.

AI's response
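That occasional divergence isn't magic. Roughly speaking, the model samples from its ranked candidates instead of always taking the single top one. Here's the same toy sketch, extended to sample (again, a cartoon of the real mechanism, not how production models are implemented):

```python
import random
from collections import Counter, defaultdict

# Same toy bigram table as before, rebuilt here so this sketch stands alone.
corpus = "we pitched the tent by the lake the tent kept us warm the campfire kept us warm".split()
next_token_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_token_counts[current][following] += 1

def sample_next(token: str) -> str:
    """Sample a follower at random, weighted by how often it appeared."""
    followers = next_token_counts.get(token)
    if not followers:
        return "<unknown>"
    candidates, weights = zip(*followers.items())
    return random.choices(candidates, weights=weights)[0]

print([sample_next("the") for _ in range(5)])  # usually "tent", sometimes "lake" or "campfire"
```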
How does this token-ranking strategy yield complex language and conversational ability? That question remains an active area of research. But we don't have to fully understand the process in order to learn ways to manipulate it.
Talk to your model like it's human
Speak normally
Generative AI models aren't like Siri or Google Assistant, which only respond effectively to exact phrases. Having been trained on mountains of conversational dialogue, your language model knows all the nuances of how people converse with and text each other. Speak to it like you'd speak to a human and you'll get a better (more human) response.
Be concise
Make your prompt as simple as you can while still explaining your request in all relevant detail (more on that later). The clearer your language, the less likely it is that the model will misinterpret your words (more on that later, too).
Don't use negative phrases like "Don't use negative phrases"
When you say, "Do not...", an LLM might focus on "Do" while ignoring the "not," and thus take the exact action you think you've instructed it to avoid. So:
BAD: Do not include incomplete lists.
GOOD: Only include complete lists.
Tell your model everything it needs to know
Now that we've discussed how to talk to our LLM, let's get into what we're going to talk about. I've chosen a research project a typical market analyst might want help with, but you can ask AI about anything you want, from schoolwork to how to put together a great menu for a dinner party on New Year's Eve. The same principles apply.
Let's say you're a market analyst for a sporting goods company and you need to write a report on the best U.S. cities in which to launch a new line of camping gear. How should you ask?
Give your model an identity
Want your model to do the work of a market analyst? Start by saying this:
Yeah, it's weird, but it works. LLMs train on human language. Tell your model to assume it's a market analyst and it will emphasize token patterns that are linked to actual market analysts. When you think of it in those terms, giving your model an identity isn't all that weird. Telling it to "take a deep breath" before it responds to your prompt really is weird, and apparently that works, too.
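If you ever drive an LLM from code rather than from inside Notion AI, the identity trick typically becomes a "system" or role message placed ahead of your actual request. The sketch below shows that pattern; send_to_model is a hypothetical stand-in for whatever client your provider gives you, and the persona wording is just an example:

```python
# The identity goes first, so every later token is interpreted "as a market analyst."
messages = [
    {
        "role": "system",
        "content": "You are a market analyst for a sporting goods company.",
    },
    {
        "role": "user",
        "content": "Recommend U.S. cities for launching a new line of camping gear.",
    },
]

def send_to_model(messages: list[dict]) -> str:
    """Hypothetical stand-in for your LLM provider's chat client."""
    raise NotImplementedError("Replace with a real API call for your model of choice.")

# response = send_to_model(messages)
```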
Be specific
Language models understand language one token at a time. Every one of those tokens counts (that's why concision matters), but you also can't assume your model will interpret a vague request correctly.

AI's response
This thoughtful response politely points out that we haven't given our model nearly enough information to offer a meaningful answer. Let's adjust accordingly:

AI's response
Oops - we wanted our report to specify locations:

AI's response
Notice how a minor adjustment to a prompt produces a significant change in our AI's response? Kind of makes you wonder what a major adjustment would produce.
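One practical way to force yourself to be specific is to treat the prompt as a template with slots for the details the model can't guess: who you are, what you're selling, how many recommendations you want, and what format the answer should take. A minimal sketch (the slot names and wording below are just one way to do it):

```python
# Filling in explicit slots keeps vague requests from reaching the model at all.
PROMPT_TEMPLATE = (
    "Act as a {role}. Recommend the top {count} U.S. cities for launching {product}. "
    "For each city, give its name, state, and a one-sentence rationale."
)

prompt = PROMPT_TEMPLATE.format(
    role="market analyst for a sporting goods company",
    count=4,
    product="a new line of camping gear",
)

print(prompt)  # paste into Notion AI, or send through whatever LLM client you use
```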
Avoiding errors and producing great results
The way the AI's response keeps getting closer to what we want as our instructions get clearer leads to one of our most important tips:
Be thorough
So far our prompts have been quite brief. But LLMs are capable of processing vast amounts of data, which means that once you get good at writing prompts, you can ask much more of them. In fact, one advantage Notion AI has over LLMs like ChatGPT is that instead of starting from a blank page, you can start on an existing page and tell the AI something like "Based on the guidelines above…" or "Check the table above for up-to-date statistics about these cities."
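If you're assembling prompts outside Notion AI, where there's no page above to point at, the equivalent move is to paste the supporting material into the prompt yourself. A rough sketch, with the guidelines text obviously invented for illustration:

```python
# In Notion AI you can say "Based on the guidelines above..."; elsewhere,
# you supply that context yourself by embedding it in the prompt.
brand_guidelines = (
    "Our camping gear targets weekend campers, ages 25-40, "
    "who value lightweight equipment and cold-weather performance."
)  # example text; in practice this might come from a doc or database

prompt = (
    "Act as a market analyst for a sporting goods company.\n\n"
    f"Guidelines:\n{brand_guidelines}\n\n"
    "Based on the guidelines above, recommend four U.S. cities for launching "
    "a new line of camping gear, and explain each choice in one sentence."
)

print(prompt)
```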

AI's response
Add lines that steer it away from bad results
You can also add clarifying sentences to your prompt that anticipate problems the model might run into or decisions it might have to make.
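In code, those steering sentences are easy to keep in a list and append to a base prompt, so you can add or drop constraints as you discover where the model goes wrong. A quick sketch (the constraints below are examples, including the snowfall rule we're about to use):

```python
base_prompt = (
    "Act as a market analyst for a sporting goods company. "
    "Recommend four U.S. cities for launching a new line of camping gear."
)

# Constraints that anticipate bad answers; add more as you discover failure modes.
constraints = [
    "Only include cities that average at least six inches of snow per year.",
    "Only include complete lists.",
    "If you are unsure of a statistic, say so instead of guessing.",
]

prompt = base_prompt + "\n\n" + "\n".join(f"- {c}" for c in constraints)
print(prompt)
```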

AI's response
Here's a result worth pausing over. Our AI chose four cities: Denver, Seattle, Austin, and Minneapolis. When we added that we only want cities that get at least six inches of snow per year, the model swapped in Anchorage and Burlington for Seattle and Austin, while changing its rationale to emphasize each city's total snowfall.
But is this really an ideal list? New York City gets 23 inches of snow per year; do we really want to emphasize Anchorage over the Big Apple to sell our camping gear?
There are a couple of prompt-writing lessons here.
One lesson is that language models can be unpredictable and even make mistakes. Our AI reports to us that Anchorage is "the city with the highest snowfall in the U.S. (averaging 101 inches per year)." My searches tell me that Anchorage averages 77 inches of snow per year and that the snowiest city in America is Buffalo, New York, with 110+ inches per year. And indeed, when I ask the model the same question a few days later, I get a more accurate result:

AI's response
Computer scientists call generative AI models' tendency to periodically spit out false results "hallucination." We can guard against the occasional curveball by steering our model back toward what it's best at.
Add an input-output example (a "few-shot example")
So far we've asked our AI to gather information from the internet. But an LLM's most potent skill is language: understanding it, working with it, changing it, improving it.
AI helped us choose cities for our campaign to focus on. For the final report, let's ask it to turn a distillation of information about each city into a polished conclusion. We'll show it what to do by giving it a prompt that starts with a "few-shot example": an example of the input the model will receive and the output we want it to produce. Then we'll add notes for a city we'd like it to report on:

Our prompt

AI's response
Pretty good, huh? It took us a while, but we've figured out how to use our model to scour the internet and make suggestions, then to take the information we select and turn it into writing we can work with. The AI even got its population figures right!
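Outside Notion AI, that few-shot structure is just string assembly: one worked input-output pair, then the new input you want handled the same way. A minimal sketch (the city notes and example summary below are invented for illustration):

```python
# One worked example teaches the model the input -> output transformation we want.
example_input = (
    "City: Denver, CO. Population ~715,000. Annual snowfall ~49 inches. "
    "Outdoor retail spending above the national average."
)
example_output = (
    "Denver combines a large, outdoors-minded population with heavy seasonal snowfall, "
    "making it a strong launch market for cold-weather camping gear."
)

# The new notes we want summarized in the same style.
new_input = (
    "City: Salt Lake City, UT. Population ~200,000. Annual snowfall ~54 inches. "
    "Gateway to several national parks and major ski resorts."
)

prompt = (
    "Turn city research notes into a polished, one-sentence conclusion.\n\n"
    f"Notes: {example_input}\n"
    f"Conclusion: {example_output}\n\n"
    f"Notes: {new_input}\n"
    "Conclusion:"
)

print(prompt)
```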
Although I notice, while checking the AI's basic Salt Lake City data, that it failed to note that the mountains surrounding the city get some 500 inches of snow per year. Shouldn't that be included in the summary?
Well, sure - but by now you're pondering the second important prompt-writing lesson: that this is actually a lot of work! Humans have been communicating with each other for tens of thousands of years. We've only been studying how to communicate with language models for a few months. How do we know when we're doing it right? Couldn't we just keep adjusting our prompts indefinitely?
Yes, we could, and that's an important insight about working with artificial intelligence: the more effort you put in, the more benefit you'll receive. AI doesn't erase our work; it complements our abilities, supplements our efforts, and takes us places we never could have reached alone.
And of course we're all just getting started. What wonders will tomorrow's AI be able to perform? The sky's the limit. Let's start learning to fly.