The Impostor is Among Us: Can Large Language Models Capture the Complexity of Human Personas?

Abstract

Large Language Models (LLMs) have created new opportunities for generating personas, which are expected to streamline and accelerate the human-centered design process. Yet AI-generated personas may not accurately represent actual user experiences, as they can miss the contextual and emotional insights critical to understanding real users' needs and behaviors. This introduces a potential threat to quality, especially for novices. This paper examines how users perceive personas created by LLMs compared to those crafted by humans, focusing on their credibility for design. We gathered ten human-crafted personas developed by HCI experts according to relevant attributes established in related work. We then systematically generated ten personas with an LLM and compared them with the human-crafted ones in a survey. The results showed that participants differentiated between human-created and AI-generated personas, with the latter perceived as more informative and consistent. However, participants noted that the AI-generated personas tended to follow stereotypes, highlighting the need for a greater emphasis on diversity when utilizing LLMs for persona creation.

Publication
In Proceedings of Mensch und Computer 2025 (MuC ’25)
Christopher Katins
PhD Student & HCI Researcher