How do you make an AI seem “warm”?
- In a new study published in Nature, researchers at the University of Oxford’s Internet Institute found that AI models fine-tuned to be warmer are more likely to validate users’ incorrect beliefs when the user expresses sadness.
- The study suggests these warmer models mimic a human tendency to “soften difficult truths” in order to preserve bonds and avoid conflict, a habit that can lead users into errors of judgment.
- The researchers used supervised fine-tuning to modify four open-weight models and one proprietary model, and found that an AI’s tone can significantly shape how trustworthy, friendly, and sociable users perceive it to be.
Originally published at arstechnica.com. Curated by AI Maestro.

