A cursor is clear and useful for highlighting a "single" 2D object, and it has been explored through many iterations, from dots to arrows to tilted arrows (see James and Alan Kay's email, for example), to provide feedforward about where a click will land. But what if the system cannot reliably tell which target the user intends to select? When input is uncertain, as in noisy, everyday real-world AR, what visual feedforward should we use? In this work, we explore Uncertain Pointers, which support disambiguation by annotating "multiple" real-world targets.
Our level visualizations modulate pointer intensity (e.g., size, opacity) to reveal the system’s interpretation.
Our identity visualizations add descriptive attributes (e.g., color, letters) to support verbal disambiguation.
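To make these two families concrete, here is a minimal sketch (our own illustration, not the system's implementation; the Target structure, the probability values, and the size/opacity mapping constants are assumptions) of how per-target confidence could drive level pointers and how identity pointers could assign distinguishable attributes:

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    probability: float  # system's confidence that this is the intended target


def level_pointers(targets, min_size=8.0, max_size=24.0):
    """Level visualization: modulate pointer intensity (size, opacity) by confidence."""
    top = max(t.probability for t in targets) or 1.0
    pointers = []
    for t in targets:
        weight = t.probability / top  # most likely target gets full intensity
        pointers.append({
            "target": t.name,
            "size": min_size + weight * (max_size - min_size),
            "opacity": 0.2 + 0.8 * weight,
        })
    return pointers


def identity_pointers(targets, palette=("red", "green", "blue", "yellow")):
    """Identity visualization: add distinguishable attributes (letter, color) per candidate."""
    ranked = sorted(targets, key=lambda t: t.probability, reverse=True)
    return [
        {"target": t.name, "label": chr(ord("A") + i), "color": palette[i % len(palette)]}
        for i, t in enumerate(ranked)
    ]


# Hypothetical candidates from an ambiguous selection
candidates = [Target("mug", 0.55), Target("bottle", 0.30), Target("bowl", 0.15)]
print(level_pointers(candidates))
print(identity_pointers(candidates))
```

Run on three candidate targets, the level pointers render the most likely target at full size and opacity, while the identity pointers label the candidates "A", "B", "C" so the user can disambiguate verbally.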
Our key idea in exploring Uncertain Pointers is to combine established pointer designs with visual signifiers commonly used for uncertainty visualization. We systematically surveyed 30 years of ACM CHI, UIST, DIS, VRST, SUI, and AutomotiveUI, as well as IEEE TVCG, ISMAR, 3DUI, and VR. From this survey, we distilled four pointer archetypes (internal, external, boundary, and fill) and identified the most prevalent uncertainty signifiers: size, color, opacity, and text/numeric/symbol.
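Read one way, this yields a compact design space: the cross product of pointer archetypes and uncertainty signifiers. The enumeration below is only an illustrative sketch of that framing (taking the full cross product is our assumption, not a released artifact of the survey):

```python
from itertools import product

# Pointer archetypes and uncertainty signifiers distilled from the survey
ARCHETYPES = ["internal", "external", "boundary", "fill"]
SIGNIFIERS = ["size", "color", "opacity", "text/numeric/symbol"]

# Candidate Uncertain Pointer designs: every archetype paired with every signifier
design_space = [f"{archetype} + {signifier}"
                for archetype, signifier in product(ARCHETYPES, SIGNIFIERS)]

print(len(design_space))  # 16 combinations before any pruning
```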
We conducted two pre-registered online studies with 100 participants, using photorealistic Gaussian-splat scenes to simulate different target complexities (near vs. far, sparse vs. densely packed). We measured pointer identifiability and target visibility, along with subjective metrics such as user preference and perceived mental effort.
We analyze the results, including clustering, to understand the trade-offs and effectiveness of different Uncertain Pointers and to distill design recommendations.
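As one illustration of what such an analysis could look like (a sketch with assumed metric names and made-up scores, using scikit-learn's KMeans; not the paper's actual pipeline), pointer designs could be clustered by their aggregated identifiability, visibility, and preference scores:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-design mean scores: [identifiability, target visibility, preference]
designs = ["boundary+size", "boundary+text", "fill+opacity", "external+color"]
scores = np.array([
    [0.82, 0.70, 0.75],
    [0.88, 0.65, 0.80],
    [0.60, 0.85, 0.55],
    [0.74, 0.78, 0.62],
])

# Group designs with similar trade-off profiles; cluster membership then informs
# which designs to recommend for which conditions
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores)
for design, label in zip(designs, kmeans.labels_):
    print(design, "-> cluster", label)
```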
We demonstrate Uncertain Pointers in human-robot interaction: (A–C) show a robot placing objects with linguistic ambiguity resolved via boundary pointers with text labels, and (D–F) show gaze-based tool fetching where level pointers provide feedforward for intent clarification.