What we learned designing voice UI for elderly users
Voice is the most natural interface for users who struggle with keyboards and touchscreens. After months of building and testing voice experiences for elderly users in Korea, we at IntuneLabs have distilled our learnings into four core principles.
1. The AI Should Speak First
When an elderly user opens a voice interface and sees a blank screen with a microphone icon, their most common reaction is confusion: "What am I supposed to say?" This moment of hesitation often leads to abandonment.
Our solution: the AI agent initiates the conversation with a warm, contextual greeting. "Good morning! How are you feeling today?" This simple change increased first-session completion rates significantly.
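A minimal sketch of this idea: instead of waiting for input, the agent picks a warm opening line based on the time of day. The function name and the hour thresholds are illustrative assumptions, not part of our production system.

```python
from datetime import datetime

def opening_greeting(now: datetime) -> str:
    """Pick a warm, contextual first line so the user never faces a blank screen.

    Hypothetical sketch: greeting text and hour cutoffs are illustrative.
    """
    hour = now.hour
    if hour < 12:
        return "Good morning! How are you feeling today?"
    if hour < 18:
        return "Good afternoon! How has your day been so far?"
    return "Good evening! How was your day?"
```

In a real deployment the greeting would also draw on prior context (the user's name, yesterday's conversation), but the key point stands: the agent speaks first, so the user only has to answer a simple question.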
2. Generous Turn-Taking Pauses
Standard voice assistants expect rapid responses. But elderly users often need 2–3 seconds to formulate their thoughts. Our system waits patiently, using subtle audio cues to indicate it's still listening.
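One way to sketch this turn-taking policy: track how long the user has been silent, play a soft "still listening" cue partway through, and only end the turn after a generous timeout. The specific thresholds below are assumptions for illustration, not measured values from our system.

```python
# Illustrative end-of-turn policy with a generous silence threshold.
# Typical assistants cut off after well under a second; elderly users
# often need 2-3 seconds to formulate a thought.
ELDER_SILENCE_TIMEOUT = 3.0  # seconds of silence before the turn is treated as done
LISTENING_CUE_AT = 1.5       # play a subtle audio cue here to signal "still listening"

def turn_action(silence_seconds: float) -> str:
    """Decide what the agent should do given the current silence duration."""
    if silence_seconds >= ELDER_SILENCE_TIMEOUT:
        return "end_turn"            # user is likely finished; respond now
    if silence_seconds >= LISTENING_CUE_AT:
        return "play_listening_cue"  # reassure the user without interrupting
    return "keep_listening"
```

The intermediate cue matters as much as the long timeout: it tells the user the pause is welcome rather than a sign the system has stopped working.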
3. Explicit Emotion Recognition
When the system detects positive or negative emotional signals, it reflects them back explicitly: "You sound happy today!" or "It seems like something is weighing on you." This creates a feeling of being heard and understood.
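The reflection step can be sketched as a simple mapping from a detected emotion label to an explicit acknowledgment. The labels and phrasing here are illustrative; a real classifier would emit richer signals.

```python
from typing import Optional

def reflect_emotion(label: str) -> Optional[str]:
    """Map a detected emotion label to an explicit reflective phrase.

    Hypothetical sketch: labels and wording are illustrative assumptions.
    Returns None for neutral or unrecognized labels, in which case the
    agent simply continues the conversation without commenting on mood.
    """
    reflections = {
        "positive": "You sound happy today!",
        "negative": "It seems like something is weighing on you.",
    }
    return reflections.get(label)
```

Keeping the reflection optional is deliberate: naming an emotion the user isn't actually feeling undermines trust, so the agent stays quiet when the signal is weak.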
4. Auto-Generated Summaries
After each conversation, the system generates a brief summary of topics discussed, mood detected, and any action items (like medication reminders). This reduces the cognitive burden of remembering what was talked about.
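The summary described above can be represented as a small structure holding topics, mood, and action items, rendered as a few plain lines. The field names and formatting are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationSummary:
    """Illustrative post-conversation summary; fields are assumed, not from a spec."""
    topics: list
    mood: str
    action_items: list = field(default_factory=list)

    def to_text(self) -> str:
        """Render a short, plain-language recap the user (or a caregiver) can skim."""
        lines = [
            "Topics: " + ", ".join(self.topics),
            "Mood: " + self.mood,
        ]
        if self.action_items:
            lines.append("To do: " + "; ".join(self.action_items))
        return "\n".join(lines)
```

For example, a chat about gardening that surfaced a medication reminder would render as three short lines, short enough to read aloud or send to a family member.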