In 2025, a user on X (formerly Twitter) posted a tweet asking, "I wonder how much money OpenAI has lost in electricity costs from people saying 'please' and 'thank you' to their models". Sam Altman, chief executive of OpenAI, which makes ChatGPT, responded. "Tens of millions of dollars well spent," he said. "You never know." Most people read the last line as a cheeky nod to a potential AI apocalypse, although it's hard to know how seriously to take that "tens of millions of dollars" figure.

But politeness is also a practical question. LLMs work by chopping your words into small chunks called "tokens", then analysing those tokens statistically to come up with a response (see the sketch below for what that chopping looks like). That means every single thing you type, from your word choice to an extra comma, can affect how the AI responds. The problem is that the effect is extremely hard to predict.

There has been all kinds of research looking for patterns in minor changes to AI prompts, but much of the evidence is conflicting and inconclusive. For example, one 2024 study found that LLMs gave better, more accurate answers when prompts were phrased politely rather than as bare commands. Even weirder, there were cultural differences: compared with Chinese and English, chatbots prompted in Japanese actually did slightly worse if you got a little too courteous.

But don't rush out and buy your AI a thank-you card just yet. Another small test found that a previous version of ChatGPT was actually more accurate when you insulted it. Overall, there simply hasn't been enough research on this subject to draw any solid conclusions. Plus, AI companies constantly update their chatbots, which means research immediately goes out of date. Experts say that AI models have improved dramatically in just a few years, which has rendered techniques like flattery, politeness, insults or threats a waste of time if your goal is getting the AI to be more accurate.
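The tokenization step is easy to see for yourself. The sketch below is an illustration, not a description of ChatGPT's internals: it uses OpenAI's open-source tiktoken library with the "cl100k_base" encoding (used by several OpenAI models; the exact model-to-encoding mapping is an assumption here) to show how a terse prompt and a polite one split into different token sequences.

```python
# A minimal sketch of how an LLM tokenizer chops a prompt into tokens,
# using OpenAI's open-source tiktoken library (pip install tiktoken).
# "cl100k_base" is one of tiktoken's published encodings; which encoding
# any given chat model actually uses is an assumption for this example.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

prompts = [
    "Summarize this article",
    "Please summarize this article, thank you!",
]

for prompt in prompts:
    token_ids = enc.encode(prompt)
    # Decode each token id individually to show the chunks the model sees.
    chunks = [enc.decode([tid]) for tid in token_ids]
    print(f"{prompt!r} -> {len(token_ids)} tokens: {chunks}")
```

Running it shows the polite version costs several extra tokens, and that "please", "thank", "you" and the punctuation each become chunks of their own. Every extra token is extra computation, which is roughly where Altman's "electricity costs" quip comes from, and it is also why a stray comma can nudge the statistics behind the model's reply.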



