Eddie Muñoz learned something weird about writing effective questions, or “prompts,” for artificial intelligence engines like ChatGPT. If your prompts are polite, AI’s answers might have less bias. They might have fewer mistakes and omissions, too. They might even have fewer hallucinations.

Yes, AI can hallucinate. We’ll explain. 

The LLM revolution

Eddie is an All Net Connect senior administrator. He’s also on a team using AI to streamline large-scale industrial processes. And he’s looking for ways AI can help small and mid-size businesses in the Black Hills—including our own. At All Net Connect, for example, we use AI for coding, project management, note-taking, and more. Even for writing a blog post. * 


Cartoon created by an LLM.

Eddie’s work includes a type of AI called a “large language model,” or LLM. ChatGPT is the best-known LLM, but there are dozens of others. Maybe hundreds. LLMs use “natural language” (the kind people use every day). No coding is necessary. Type a prompt, and in seconds the LLM delivers an answer anyone can understand. The process looks like the sort of Internet search we’ve all been doing for decades, but it’s different.

Some experts mark the start of the LLM revolution as the publication of a 2017 paper titled “Attention Is All You Need.” Try the link if you’re curious about “dominant sequence transduction models.” For the rest of us, it’s a complicated story for another day. But soon after, LLMs available to the public started improving at a startling, sometimes worrisome rate. (Search “deepfakes.”) LLMs are mostly designed for text, and they’re getting better and better at digesting complex prompts to deliver detailed, complex answers.

Domo arigato, ChatGPT (and others)

That’s why creating good prompts matters. Eddie attended an AI workshop on that subject, where he learned it pays to be polite—a suggestion backed up by research in Japan.  

Computer scientists in Tokyo reported their results in a 2024 paper titled “Should We Respect LLMs?” The answer: it couldn’t hurt. They found that courteous prompts increased the quality of responses by as much as 30 percent. Conversely, rude prompts reduced the quality. Rudeness might even result in a “refusal to answer.” But don’t be too polite, either. Flowery, obsequious prompts can also yield poor responses. There’s a sweet spot.

How did LLMs get so sensitive to manners?  They learn natural language by collecting vast amounts of written text from the Internet. (The process is called “scraping.”) Using these giant databases, LLMs teach themselves language patterns. Along the way, they also absorb culture. The researchers put it this way: 

“Polite language in human communications often garners more compliance and effectiveness, while rudeness can cause aversion, impacting response quality. We consider that LLMs mirror human communication traits, suggesting they align with human cultural norms.”

More research 

LLMs, in fact, are trained to favor professional, courteous language. Writing that way helps generate responses that are:

     

• 15 to 20 percent more relevant. (OpenAI Documentation and Research)

• 10 to 20 percent more detailed. (Human Factors in Computing Systems Proceedings)

• Likely to earn 20 to 30 percent better satisfaction scores. (Association for Computational Linguistics)

Thanks, science!

More tips for better prompts

There are dozens of other ways to craft better prompts. Eddie has developed his own workshop. Contact us for details. Meanwhile, here’s a short list of other tips you might find useful:

       

• Clarity and specificity: These might be even more important than courtesy. And they go hand in hand with it. Polite writing is more likely to be precise and well structured.

• Role and tone: Instruct your LLM to answer as a persona—a teacher, for example, or a friend. Should the tone be friendly? Professional? Both?

• Context: What are you working on? A staff memo? A report for a supervisor? A social media post? A blog post? *

• Examples: Use them to clarify the kind of response you want.

• Format: Specify word limits, subheads, bullets, paragraph styles. Start with an introduction? End with a summary? Tell your LLM what you want. (See the sketch after this list.)
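To make these tips concrete, here is a minimal sketch of how a courteous, well-structured prompt might look when sent to an LLM from code rather than a chat window. It assumes the OpenAI Python SDK and an API key already set in your environment; the model name, variable names, and prompt wording are illustrative only, not a recommendation.

# A minimal sketch of a polite, well-structured prompt.
# Assumes the OpenAI Python SDK (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable. Model name and prompt
# text are illustrative only.
from openai import OpenAI

client = OpenAI()

# Role and tone
system_message = (
    "You are a friendly, professional writing assistant for a small business blog."
)

# Context, an example, and a format request, stated plainly and politely.
user_prompt = (
    "Hello! Could you please help me draft a short blog post?\n\n"
    "Context: We are a Black Hills IT company explaining how AI can help small businesses.\n\n"
    "Tone example: plain talk, no jargon, a little humor.\n\n"
    "Format: under 300 words, two subheads, and a one-sentence summary at the end. Thank you!"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; substitute whichever model you use
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_prompt},
    ],
)

print(response.choices[0].message.content)

The same idea applies if you simply type the prompt into a chat window: courtesy, role, context, an example, and a format request all travel together.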

Caution

Most LLMs include written warnings. Some examples:

         

• “ChatGPT can make mistakes.”

• “Gemini can make mistakes, so double-check it.”

• Our favorite is from Claude: “In an attempt to be a helpful assistant, Claude can occasionally produce responses that are incorrect or misleading. This is known as ‘hallucinating’ information.”

There it is. Hallucinating!

Footnote

* We used four LLMs and dozens of prompts to produce this blog post. Then we cross-checked responses with old-fashioned Internet searches. Finally, we wrote our own copy, adding attributions where appropriate. (And we produced the cartoon with DALL-E, an AI image generator. We admit, it was a bit of an adventure.)