Why human-AI relationships need socioaffective alignment

Authors: Kirk, Hannah Rose; Gabriel, Iason; Summerfield, Chris; Vidgen, Bertie; Hale, Scott A.

Description: Humans strive to design safe AI systems that align with our goals and remain under our control. However, as AI capabilities advance, we face a new challenge: the emergence of deeper, more persistent relationships between humans and AI systems. We explore how increasingly capable AI agents may generate the perception of deeper relationships with users, especially as AI becomes more personalised and agentic. This shift, from transactional interaction to ongoing, sustained social engagement with AI, necessitates a new focus on socioaffective alignment: how an AI system behaves within the social and psychological ecosystem co-created with its user, where preferences and perceptions evolve through mutual influence. Addressing these dynamics involves resolving key intrapersonal dilemmas, including balancing immediate versus long-term well-being, protecting autonomy, and managing AI companionship alongside the desire to preserve human social bonds. By framing these challenges through a notion of basic psychological needs, we seek AI systems that support, rather than exploit, our fundamental nature as social and emotional beings.

Subject headings: Computer Science – Human-Computer Interaction; Computer Science – Artificial Intelligence; AI; Relationships

Publication year: 2025

Journal or book title: arXiv

arXiv ID: 2502.02528

Find the full text: https://arxiv.org/pdf/2502.02528

Find more like this one (cited by): https://scholar.google.com/scholar?cites=12345879443834955584&as_sdt=1000005&sciodt=0,16&hl=en

Serial number: 4047
