Distinguished speakers who will share their insights at CHIIR 2026
Ben Shneiderman
Emeritus Distinguished University Professor, Dept. of Computer Science & Human-Computer Interaction Lab
University of Maryland, College Park, MD, USA
Abstract:
A new synthesis is emerging that integrates AI technologies with Human-Computer Interaction to produce Human-Centered AI (HCAI). Advocates of this new synthesis design supertools that amplify, augment, and enhance human abilities, and that empower people. Effective supertools build users' self-efficacy, support creativity, clarify responsibility, and promote social connections.
The foundational design principles suggest offering comprehensible, predictable, and controllable user interfaces that feature compact control panels, clear sequences of action, abundant information displays, and rapid exploration of alternatives.
Improved search tools will come from researchers who embrace human-centered approaches by giving users control over the scope of search (personal collections, as in NotebookLM; time periods, as in Google Scholar; limited tasks, as in digital navigation), offering rich sets of options (as in Amazon shopping's faceted menus and the Bloomberg Terminal), and ranking and clustering search results (as in Google Search and Netflix). These strategies are especially important when using Generative AI, which is startlingly impressive but alarmingly flawed.
About the Speaker:
Ben Shneiderman is an Emeritus Distinguished University Professor in the Department of Computer Science, Founding Director (1983-2000) of the Human-Computer Interaction Laboratory, and a Member of the UM Institute for Advanced Computer Studies (UMIACS) at the University of Maryland. He is a Fellow of the AAAS, ACM, IEEE, NAI, and the Visualization Academy, and a Member of the U.S. National Academy of Engineering. He has received six honorary doctorates in recognition of his pioneering contributions to human-computer interaction and information visualization. His widely used contributions include clickable highlighted web links, high-precision touchscreen keyboards for mobile devices, and tagging for photos. His information visualization innovations include dynamic query sliders for Spotfire, treemaps for viewing hierarchical data, novel network visualizations for NodeXL, and event sequence analysis for electronic health records.
Ben is the lead author of Designing the User Interface: Strategies for Effective Human-Computer Interaction (6th ed., 2016). He co-authored Readings in Information Visualization: Using Vision to Think (1999) and Analyzing Social Media Networks with NodeXL (2nd ed., 2019). His book Leonardo's Laptop (MIT Press) won the IEEE book award for Distinguished Literary Contribution. The New ABCs of Research: Achieving Breakthrough Collaborations (Oxford, 2016) describes how research can produce higher impact. His book Human-Centered AI (Oxford University Press) won the Association of American Publishers award for Computer and Information Systems.
Xin Luna Dong
Principal Scientist, Meta Wearables AI
Abstract:
Imagine a personal assistant that, with user permission, persistently remembers moments from daily life—answering questions like "When and where did I see this lady?" or offering personalized suggestions like "You might enjoy The Little Prince—it relates to the statue you liked in Lyon." Realizing this vision requires overcoming major challenges: capturing visual memories under hardware constraints (e.g., memory, battery, thermal limits, bandwidth), extracting meaningful personalization signals from noisy, task-agnostic visual histories, and supporting real-time question answering and recommendations under tight latency requirements.
In this talk, we present our early work toward this goal. Pensieve, our memory-based QA system, improves accuracy by 11% over state-of-the-art multimodal RAG baselines. VisualLens infers user interests from casual photos, outperforming leading recommendation systems by 5-10%. We also share initial results on efficient, event-triggered memory capture and compression. Our work points to a broad landscape of research opportunities in building richer, more context-aware personal assistants capable of learning from and reasoning over users' visual experiences.
About the Speaker:
Xin Luna Dong is a Principal Scientist at Meta Wearables AI, where she leads the Agentic AI efforts for building trustworthy and personalized assistants on wearable devices. Previously, she spent over a decade advancing knowledge graph technology, including the Amazon Product Graph and the Google Knowledge Graph. She is a co-author of Machine Knowledge: Creation and Curation of Comprehensive Knowledge Bases and Big Data Integration. She was named an ACM Fellow and an IEEE Fellow for "significant contributions to knowledge graph construction and data integration", received the VLDB Women in Database Research Award and the VLDB Early Career Research Contribution Award, and was invited as an ACM Distinguished Speaker. She serves on the PVLDB advisory committee, was a member of the VLDB Endowment, and was a PC co-chair for the KDD 2022 ADS track, WSDM 2022, VLDB 2021, and SIGMOD 2018.