Researchers test how to get better AI answers
From being polite to pretending you’re on Star Trek, the advice people get about how to talk to chatbots can be truly strange — and completely useless. Here’s what actually works.
When a group of researchers decided to test whether “positive thinking” made artificial intelligence (AI) chatbots more accurate, it led to some surprising results. As they asked various questions, they tried calling the AIs “smart,” encouraged them to think carefully, and even ended their prompts with “This will be fun!” None of that made a consistent difference, but one technique stood out: when they made an AI pretend it was in Star Trek, it improved at basic math. Beam me up, I guess, writes CE Report, quoting Kosova Press.
People have all kinds of odd strategies for getting better answers from large language models (LLMs), the AI technology behind tools like ChatGPT. Some swear that AI works better if you threaten it; others think chatbots are more cooperative if you’re polite; and some ask bots to role-play as experts in whatever subject they’re working on. The list goes on. It’s part of the mythology around “prompt engineering” or “context engineering” — different ways of crafting instructions to get better AI results. Here’s the thing: experts say much of the conventional wisdom about prompting AI simply doesn’t work. In some cases, it may even be risky. But how you speak to an AI does matter, and some techniques genuinely make a difference.
“Many people think there’s a magical set of words you can use that will make LLMs solve a problem,” says Jules White, a computer science professor who studies generative AI at Vanderbilt University in the US.
“But it’s not about word choice — it’s about how you fundamentally express what you’re trying to do.”
In 2025, a user on X (formerly Twitter) posted: “I wonder how much money OpenAI has lost in electricity costs from people saying ‘please’ and ‘thank you’ to their models.” Sam Altman, CEO of OpenAI, which produces ChatGPT, replied: “Tens of millions of dollars well spent. You never know.”
Most people read that last line as a cheeky reference to the idea of a possible AI apocalypse, although it’s hard to know how seriously to take the “tens of millions of dollars” figure. But politeness is also a practical matter.
LLMs work by breaking your words into small pieces called “tokens,” then analyzing them statistically to generate an appropriate response. That means everything you say — from word choice to an extra comma — can affect how the AI responds. The problem is that it’s extremely hard to predict. There has been all kinds of research looking for patterns in small changes to AI prompts, but much of the evidence is contradictory and inconclusive.
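The effect of a single extra character can be made concrete with a toy tokenizer. This is only an illustration of the principle, not any real model's tokenizer (production systems use schemes like byte-pair encoding over large learned vocabularies), and the tiny vocabulary below is invented for the example:

```python
def toy_tokenize(text, vocab):
    """Greedy longest-match tokenizer over a tiny hand-made vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        for length in range(len(text) - i, 0, -1):  # try longest match first
            piece = text[i:i + length]
            if piece in vocab:
                tokens.append(piece)
                i += length
                break
        else:
            tokens.append(text[i])  # unknown character becomes its own token
            i += 1
    return tokens

# A made-up vocabulary just for this demonstration
vocab = {"please", "answer", "the", "question", " ", ",", "."}

print(toy_tokenize("please answer the question.", vocab))
# ['please', ' ', 'answer', ' ', 'the', ' ', 'question', '.']

# One extra comma changes the token sequence the model sees:
print(toy_tokenize("please, answer the question.", vocab))
# ['please', ',', ' ', 'answer', ' ', 'the', ' ', 'question', '.']
```

The model never sees your sentence as such, only the resulting sequence of tokens, which is why even punctuation changes can nudge its statistics in hard-to-predict ways.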
For example, a 2024 study found that LLMs gave better, more accurate answers when students asked politely rather than issuing blunt commands. Stranger still, there were cultural differences: in Japanese, unlike in Chinese and English, chatbots actually performed slightly worse when users were more polite, BBC writes.
But don’t rush out to buy a thank-you card for your AI. Another small test found that an earlier version of ChatGPT was actually more accurate when you insulted it. Overall, there simply hasn’t been enough research on this topic to draw firm conclusions. Plus, AI companies constantly update their chatbots, meaning research quickly becomes outdated.
Experts say AI models have improved significantly in just a few years, making techniques like flattery, politeness, insults, or threats largely a waste of time if your goal is to make the AI more accurate.
How to talk to your chatbot
There are very real issues with AI, from ethical concerns to its environmental impact. Some people refuse to engage with it at all. But if you’re going to use an LLM, learning how to get what you want more quickly and efficiently will be better for you — and perhaps for the energy consumed in the process. These tips will help you get started.
Give the AI a sample whenever possible. “For example, I see people ask an LLM to write an email and then get frustrated because they say, ‘This doesn’t sound like me at all,’” says White. The natural impulse is to respond with a list of instructions — “do this” and “don’t do that.” White says it’s far more effective to say, “Here are 10 emails I’ve sent in the past — use my writing style.”
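White's tip can be sketched as a prompt-building step. The snippet below assembles a few-shot style prompt in the common role/content chat-message convention; the sample emails and the `build_style_prompt` helper are hypothetical, invented for the illustration:

```python
# Example-based ("few-shot") prompting: instead of listing style rules,
# include real writing samples and let the model infer the style.

past_emails = [
    "Hi Sam, quick one: can you send the Q3 numbers by Friday? Thanks!",
    "Morning team, short update: the launch slipped a week. Details below.",
]

def build_style_prompt(samples, request):
    """Assemble chat messages that show writing samples before the task."""
    examples = "\n\n".join(f"Example email {i + 1}:\n{s}"
                           for i, s in enumerate(samples))
    return [
        {"role": "system",
         "content": "Match the writing style of the example emails below.\n\n"
                    + examples},
        {"role": "user", "content": request},
    ]

messages = build_style_prompt(
    past_emails, "Write an email asking IT to reset my password.")
for m in messages:
    print(f"--- {m['role']} ---\n{m['content']}\n")
```

The resulting message list would then be passed to whatever chat API you use; the point is simply that concrete samples replace a fragile list of "do this, don't do that" instructions.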