The Good, the Bad, and the Uncanny: Investigating Diversity Aspects of LLM-Generated Personas for Requirements Engineering

Abstract

Personas offer an empathetic approach to capturing user requirements, translating user needs into relatable narratives. However, creating personas manually is time-consuming. Large Language Models (LLMs) can generate personas in convincing natural language, challenging traditional methods. Yet LLM-generated personas may reflect biases from their training data, potentially compromising diversity. Our study explores how diversity is considered in LLM-generated personas through a qualitative user study with 22 participants. In the first task, participants generated personas without specific diversity prompts, revealing how users naturally interact with LLMs. In the second task, participants were explicitly asked to consider diversity aspects when prompting for personas. Analyzing the prompts and outputs showed that users tend to request little diversity unless explicitly instructed. Meanwhile, LLMs can introduce diversity even when not prompted to, potentially broadening representation. However, we also found a critical pitfall: LLM-generated personas may appear diverse because they mention various diversity aspects, yet fail to translate these into meaningful implications for requirements engineering. These findings show the need for a more deliberate approach when using LLMs for persona creation, so that diversity is not merely performative but genuinely informative for design and development.

Publication
In 2025 IEEE 33rd International Requirements Engineering Conference (RE)
Christopher Katins
Doctoral Candidate & HCI Researcher