A cursor is a clear, useful way to highlight a "single" 2D object, and it has been refined through many iterations, from dots to arrows to tilted arrows (see, for example, James and Alan Kay's email), to provide feedforward about where a click will land. But what if the system cannot reliably tell which target the user intends to select? When input is uncertain, as in noisy, everyday real-world AR, what visual feedforward should we use? In this work, we explore Uncertain Pointers, which support disambiguation by annotating "multiple" real-world targets.
Identity visualizations add descriptive attributes (e.g., color, letters) to each candidate target to support verbal disambiguation.
Level visualizations modulate pointer intensity (e.g., size, opacity) to reveal the system's interpretation of which target is most likely intended.
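To make the two families concrete, the following is a minimal sketch of how a renderer might derive per-target signifiers from the system's selection probabilities. The function names, color palette, and value ranges are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: names, palette, and ranges are assumptions.
LETTERS = "ABCDEFGH"
PALETTE = ["red", "green", "blue", "orange", "purple"]

def identity_signifiers(n_targets):
    """Identity: give each candidate a distinct letter and color so the
    user can disambiguate verbally ("the red one", "target B")."""
    return [{"letter": LETTERS[i], "color": PALETTE[i % len(PALETTE)]}
            for i in range(n_targets)]

def level_signifiers(probabilities, min_opacity=0.2, max_opacity=1.0,
                     min_radius=4.0, max_radius=12.0):
    """Level: linearly map each target's selection probability to an
    opacity and a radius, normalized so the most likely target is
    rendered at full intensity."""
    top = max(probabilities)
    signifiers = []
    for p in probabilities:
        t = p / top if top > 0 else 0.0
        signifiers.append({
            "opacity": min_opacity + t * (max_opacity - min_opacity),
            "radius": min_radius + t * (max_radius - min_radius),
        })
    return signifiers
```

For example, with three candidates at probabilities 0.6, 0.3, and 0.1, the first target would be drawn fully opaque and largest, while the others fade and shrink in proportion to the system's confidence.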
Our key idea for Uncertain Pointers is to combine established pointer designs with visual signifiers commonly used in uncertainty visualization. We systematically surveyed 30 years of ACM CHI, UIST, DIS, VRST, SUI, and AutomotiveUI, as well as IEEE TVCG, ISMAR, 3DUI, and VR. From this survey, we distilled four pointer archetypes (internal, external, boundary, and fill) and identified the most prevalent uncertainty signifiers: size, color, opacity, and textual/numeric/symbolic annotations.