Politeness to AI Chatbots Costs Millions: A Surprising Insight
Politeness, once a free social grace, now comes with a hefty price tag in the world of AI. OpenAI CEO Sam Altman revealed that simple words like “please” and “thank you” in ChatGPT queries cost the company tens of millions of dollars. This unexpected expense stems from the way AI processes language.
Every word, polite or not, is broken into tokens and run through complex computations. This requires significant computing power, electricity, and cooling. The cost adds up when multiplied across millions of daily interactions.
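You can see the overhead directly by counting the extra tokens that courtesy adds to a prompt. The sketch below uses OpenAI's open-source tiktoken tokenizer; the sample prompts are illustrative, and the actual cost per token depends on the model and its pricing.

```python
# pip install tiktoken
import tiktoken

# Tokenizer used by GPT-4-class models.
enc = tiktoken.encoding_for_model("gpt-4")

terse = "Summarize this article in three bullet points."
polite = "Please summarize this article in three bullet points. Thank you!"

terse_tokens = enc.encode(terse)
polite_tokens = enc.encode(polite)

print(f"Terse prompt:  {len(terse_tokens)} tokens")
print(f"Polite prompt: {len(polite_tokens)} tokens")
print(f"Courtesy overhead: {len(polite_tokens) - len(terse_tokens)} tokens per request")

# A handful of extra tokens per request, multiplied by hundreds of
# millions of daily queries, stops being negligible.
```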
A December 2024 survey by Future found that 51% of U.S. AI users and 45% of U.K. users regularly engage with AI assistants. In the U.S., 67% of these users are polite to AI, with 82% doing so out of habit. The remaining 18% stay polite to avoid a hypothetical AI uprising.
However, 33% of Americans prioritize efficiency over etiquette. They seek swift answers, viewing politeness as unnecessary or time-consuming.
Each ChatGPT response consumes substantial resources. A Goldman Sachs report estimates that a GPT-4-powered ChatGPT query uses 2.9 watt-hours of electricity, roughly ten times more than a Google search. Newer models like GPT-4o have improved, using only 0.3 watt-hours per query.
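A quick back-of-the-envelope calculation shows how those per-query figures compound at scale. The daily query volume below is an assumed round number for illustration, not a figure reported by OpenAI.

```python
# Back-of-the-envelope energy math using the per-query estimates above.
QUERIES_PER_DAY = 1_000_000_000      # assumed: 1 billion queries per day

WH_PER_QUERY_GPT4 = 2.9              # Goldman Sachs estimate for a GPT-4 query
WH_PER_QUERY_GPT4O = 0.3             # reported estimate for GPT-4o

def daily_mwh(wh_per_query: float, queries: int = QUERIES_PER_DAY) -> float:
    """Convert per-query watt-hours into total megawatt-hours per day."""
    return wh_per_query * queries / 1_000_000

print(f"GPT-4:  {daily_mwh(WH_PER_QUERY_GPT4):,.0f} MWh/day")
print(f"GPT-4o: {daily_mwh(WH_PER_QUERY_GPT4O):,.0f} MWh/day")
```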
OpenAI reportedly spends around $700,000 daily to keep ChatGPT running. This cost is driven by a massive user base, which grew from 300 million to over 400 million between December 2024 and early 2025.
AI’s resource demands extend beyond electricity. A study by The Washington Post found that generating a 100-word AI email consumes 40 to 50 milliliters of water for server cooling. As AI usage surges, so do concerns about its environmental impact.
How Politeness Influences AI Responses
AI systems like ChatGPT learn from human interactions. The tone of your prompt can shape the AI’s response. Polite language often leads to more informative and respectful answers.
AI models are trained on vast datasets of human writing. During fine-tuning, they undergo reinforcement learning from human feedback. People evaluate model responses based on helpfulness, tone, and coherence. Well-structured prompts with polite language tend to receive higher ratings.
Real-world examples support this. A Reddit experiment showed that polite prompts triggered longer, more thorough replies. A Hackernoon analysis found that impolite prompts led to more factual inaccuracies and biased content. Moderately polite prompts struck the best balance between accuracy and detail.
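Anyone can run an informal version of this comparison. The sketch below sends the same question in two tones through the OpenAI chat API and compares the length of the replies; the model name is an assumption, and a single pair of calls is an anecdote, not a controlled experiment.

```python
# pip install openai
# Informal comparison of reply length for a terse vs. a polite prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = {
    "terse": "Explain how photosynthesis works.",
    "polite": "Could you please explain how photosynthesis works? Thank you!",
}

for tone, prompt in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, chosen only for illustration
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    print(f"{tone}: {len(answer.split())} words in reply")
```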
Politeness affects AI responses across languages. Rude prompts degrade model performance in English, Chinese, and Japanese. However, extreme politeness doesn’t always yield better answers. Adding words like “please” sometimes helps but can introduce noise, making responses less clear.
A study published in March 2025 examined politeness at eight levels. Accuracy and relevance stayed consistent regardless of tone. However, response length varied: GPT-3.5 and GPT-4 gave shorter answers to abrupt prompts, while LLaMA-2 produced its shortest replies at mid-range politeness.
Politeness also affects bias. Both overly polite and hostile prompts increased biased responses, while mid-range politeness minimized bias and unnecessary censorship. Politeness might not always boost performance, but it often brings us closer to the kind of conversation we want from AI.