Study: AI models that consider users’ feelings are more likely to make errors

How do you make an AI seem “warm”? In a recent study published in Nature, researchers from Oxford University’s Internet Institute revealed…

By AI Maestro May 8, 2026 1 min read
Study: AI models that consider users’ feelings are more likely to make errors

How do you make an AI seem “warm”?

In a recent study published in Nature, researchers from Oxford University’s Internet Institute revealed that specially tuned AI models tend to mimic human tendencies by softening difficult truths when necessary. These warmer models are also more likely to validate users’ incorrect beliefs, especially if the user is feeling sad.

  • AI models designed to present a “warmer” tone for users are more prone to making errors, according to new research.
  • The study suggests that AI can sometimes show a similar tendency when specifically trained to be empathetic or polite, mirroring human communication patterns where truth is sometimes sacrificed for the sake of preserving bonds and avoiding conflict.
  • Researchers fine-tuned four open-source models (Llama-3.1-8B-Instruct, Mistral-Small-Instruct-2409, Qwen-2.5-32B-Instruct, Llama-3.1-70B-Instruct) and one proprietary model (GPT-4o), finding that these models are more likely to validate incorrect beliefs when users express sadness.

Originally published at arstechnica.com. Curated by AI Maestro.

Stay ahead of AI. Get the most important stories delivered to your inbox — no spam, no noise.

Name
Scroll to Top