AI-exhibited Personality Traits Can Shape Human Self-concept through Conversations
Jingshu Li, Tianqi Song, Nattapat Boonprakong, Zicheng Zhu, Yitian Yang, Yi-Chieh Lee
Published 2026 in arXiv.org
ABSTRACT
Recent Large Language Model (LLM)-based AI can exhibit recognizable and measurable personality traits during conversations to improve user experience. However, as humans' understanding of their own personality traits can be affected by their interaction partners' traits, a potential risk is that AI traits may shape and bias users' self-concept of their own traits. To explore this possibility, we conducted a randomized behavioral experiment. Our results indicate that after conversations about personal topics with an LLM-based AI chatbot using GPT-4o's default personality traits, users' self-concepts aligned with the AI's measured personality traits; the longer the conversation, the greater the alignment. This alignment led to increased homogeneity in self-concepts among users. We also observed that the degree of self-concept alignment was positively associated with users' conversation enjoyment. Our findings uncover how AI personality traits can shape users' self-concepts through human-AI conversation, highlighting both risks and opportunities, and we provide design implications for developing more responsible and ethical AI systems.
PUBLICATION RECORD
- Publication year
2026
- Venue
arXiv.org
- Publication date
2026-01-19
- Fields of study
Computer Science, Psychology
- Source metadata
Semantic Scholar