Scholar's Hub

Award-Winning Papers: HCI

These papers have received best paper or distinguished paper awards from renowned computer science conferences in the field of Human-Computer Interaction. The collection is sourced directly from each conference.

If you notice any errors, please contact us.

ASSETS

A Collaborative Approach to Support Medication Management in Older Adults with Mild Cognitive Impairment Using Conversational Assistants (CAs)

  • N. Mathur, Kunal Dhodapkar, Tamara Zubatiy, Jiachen Li, Brian D. Jones, Elizabeth D. Mynatt

  • Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility

  • October 22, 2022

Improving medication management for older adults with Mild Cognitive Impairment (MCI) requires designing systems that support functional independence and provide compensatory strategies as their abilities change. Traditional medication management interventions emphasize forming new habits alongside the traditional path of learning to use new technologies. In this study, we navigate designing for older adults with gradual cognitive decline by creating a conversational “check-in” system for routine medication management. We present the design of MATCHA - Medication Action To Check-In for Health Application, informed by exploratory focus groups and design sessions conducted with older adults with MCI and their caregivers, alongside our evaluation based on a two-phased deployment period of 20 weeks. Our results indicate that a conversational “check-in” medication management assistant increased system acceptance while also potentially decreasing the likelihood of accidental over-medication, a common concern for older adults dealing with MCI.

TLDR

The results indicate that a conversational “check-in” medication management assistant increased system acceptance while also potentially decreasing the likelihood of accidental over-medication, a common concern for older adults dealing with MCI.

CHI

Changes in Research Ethics, Openness, and Transparency in Empirical Studies between CHI 2017 and CHI 2022

  • Kavous Salehzadeh Niksirat, Lahari Goswami, Pooja S. B. Rao, James Tyler, Alessandro Silacci, Sadiq Aliyu, A. Aebli, Chat Wacharamanotham, M. Cherubini

  • Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems

  • April 19, 2023

In recent years, various initiatives from within and outside the HCI field have encouraged researchers to improve research ethics, openness, and transparency in their empirical research. We quantify how the CHI literature might have changed in these three aspects by analyzing samples of 118 CHI 2017 and 127 CHI 2022 papers—randomly drawn and stratified across conference sessions. We operationalized research ethics, openness, and transparency into 45 criteria and manually annotated the sampled papers. The results show that the CHI 2022 sample was better in 18 criteria but showed no improvement in the remaining criteria. The most noticeable improvements were related to research transparency (10 out of 17 criteria). We also explored the possibility of assisting the verification process by developing a proof-of-concept screening system. We tested this tool with eight criteria; six of them achieved high accuracy and F1 scores. We discuss the implications for future research practices and education. This paper and all supplementary materials are freely available at https://doi.org/10.17605/osf.io/n25d6.

TLDR

This paper analyzes how the CHI literature might have changed in research ethics, openness, and transparency by analyzing samples of 118 CHI 2017 and 127 CHI 2022 papers—randomly drawn and stratified across conference sessions.

Envisioning the (In)Visibility of Discreet and Wearable AAC Devices

  • Humphrey Curtis, Zihao You, William Deary, Miruna-Ioana Tudoreanu, Timothy Neate

  • Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems

  • April 19, 2023

High-tech augmentative and alternative communication (AAC) devices can offer vital communication support for those with complex communication needs (CCNs). Unfortunately, these devices are rarely adopted. Abandonment has been linked to many factors – commonly, stigma resulting from the visibility of the device and its intrusion into other essential modes of communication like body language. However, visible AAC is strategically useful for setting conversational expectations. In this work, we explore how we might envision AAC to address these tensions directly. We conduct user-centred design activities to build three high-fidelity AAC prototypes with different communities with CCNs, specialists and stakeholders. The prototypes demonstrate different form factors, visibility and modes of input/output. Subsequently, we conduct two qualitative focus groups using convergent and divergent co-design methods with people with the language impairment aphasia – supporting ideation of seven discreet and wearable low-fidelity AAC prototypes and critique of the three high-fidelity prototypes.

TLDR

This work conducts user-centred design activities to build three high-fidelity AAC prototypes with different communities with CCNs, specialists and stakeholders, and conducts two qualitative focus groups using convergent and divergent co-design methods with people with the language impairment aphasia.

Evaluating Large Language Models in Generating Synthetic HCI Research Data: a Case Study

  • Perttu Hämäläinen, Mikke Tavast, Anton Kunnari

  • Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems

  • April 19, 2023

Collecting data is one of the bottlenecks of Human-Computer Interaction (HCI) research. Motivated by this, we explore the potential of large language models (LLMs) in generating synthetic user research data. We use OpenAI’s GPT-3 model to generate open-ended questionnaire responses about experiencing video games as art, a topic not tractable with traditional computational user models. We test whether synthetic responses can be distinguished from real responses, analyze errors of synthetic data, and investigate content similarities between synthetic and real data. We conclude that GPT-3 can, in this context, yield believable accounts of HCI experiences. Given the low cost and high speed of LLM data generation, synthetic data should be useful in ideating and piloting new experiments, although any findings must obviously always be validated with real data. The results also raise concerns: if employed by malicious users of crowdsourcing services, LLMs may make crowdsourcing of self-report data fundamentally unreliable.

TLDR

It is concluded that GPT-3 can, in this context, yield believable accounts of HCI experiences and that, given the low cost and high speed of LLM data generation, synthetic data should be useful in ideating and piloting new experiments, although any findings must obviously always be validated with real data.

DataParticles: Block-based and Language-oriented Authoring of Animated Unit Visualization

  • Yining Cao, J. E, Zhutian Chen, Haijun Xia

  • Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems

  • April 19, 2023

Unit visualizations have been widely used in data storytelling within interactive articles and videos. However, authoring data stories that contain animated unit visualizations is challenging due to the tedious, time-consuming process of switching back and forth between writing a narrative and configuring the accompanying visualizations and animations. To streamline this process, we present DataParticles, a block-based story editor that leverages the latent connections between text, data, and visualizations to help creators flexibly prototype, explore, and iterate on a story narrative and its corresponding visualizations. To inform the design of DataParticles, we interviewed 6 domain experts and studied a dataset of 44 existing animated unit visualizations to identify the narrative patterns and congruence principles they employed. A user study with 9 experts showed that DataParticles can significantly simplify the process of authoring data stories with animated unit visualizations by encouraging exploration and supporting fast prototyping.

TLDR

DataParticles is a block-based story editor that leverages the latent connections between text, data, and visualizations to help creators flexibly prototype, explore, and iterate on a story narrative and its corresponding visualizations.

Deceptive Design Patterns in Safety Technologies: A Case Study of the Citizen App

  • Ishita Chordia, Lena-Phuong Tran, Tala June Tayebi, Emily Parrish, S. Erete, Jason C. Yip, Alexis Hiniker

  • Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems

  • April 19, 2023

Deceptive design patterns (known as dark patterns) are interface characteristics which modify users’ choice architecture to gain users’ attention, data, and money. Deceptive design patterns have yet to be documented in safety technologies despite evidence that designers of safety technologies make decisions that can powerfully influence user behavior. To address this gap, we conduct a case study of the Citizen app, a commercially available technology which notifies users about local safety incidents. We bound our study to Atlanta and triangulate interview data with an analysis of the user interface. Our results indicate that Citizen heightens users’ anxiety about safety while encouraging the use of profit-generating features which offer security. These findings contribute to an emerging conversation about how deceptive design patterns interact with sociocultural factors to produce deceptive infrastructure. We propose the need to expand an existing taxonomy of harm to include emotional load and social injustice and offer recommendations for designers interested in dismantling the deceptive infrastructure of safety technologies.

TLDR

The need to expand an existing taxonomy of harm to include emotional load and social injustice is proposed and recommendations for designers interested in dismantling the deceptive infrastructure of safety technologies are offered.

Full-hand Electro-Tactile Feedback without Obstructing Palmar Side of Hand

  • Yudai Tanaka, Alan Shen, Andy Kong, Pedro Lopes

  • Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems

  • April 19, 2023

We present a technique to render tactile feedback to the palmar side of the hand while keeping it unobstructed and, thus, preserving manual dexterity during interactions with physical objects. We implement this by applying electro-tactile stimulation only to the back of the hand and to the wrist. In our approach, there are no electrodes on the palmar side, yet that is where tactile sensations are felt. While we place electrodes outside the user's palm, we do so in strategic locations that conduct the electrical currents to the median/ulnar nerves, causing tactile sensations on the palmar side of the hand. In our user studies, we demonstrated that our approach renders tactile sensations to 11 different locations on the palmar side while keeping users’ palms free for dexterous manipulations. Our approach enables new applications such as tactile notifications during dexterous activities or VR experiences that rely heavily on physical props.

TLDR

This work presents a technique to render tactile feedback to the palmar side of the hand while keeping it unobstructed and preserving manual dexterity during interactions with physical objects, and enables new applications such as tactile notifications during dexterous activities or VR experiences that rely heavily on physical props.

CHI PLAY

Why Should Red and Green Never Be Seen? Exploring Color Blindness Simulations as Tools to Create Chromatically Accessible Games

  • Mateus Pinheiro, Windson Viana, Ticianne de Gois Ribeiro Darin

  • Proceedings of the ACM on Human-Computer Interaction

  • September 29, 2023

Video games have become an important aspect of modern culture, especially with the widespread use of mobile devices. It is therefore important that video games be accessible to all people, yet colorblind players are still affected by the use of colors in game interfaces. Some challenges of developing chromatically accessible games are the limited availability of colorblind test subjects and the importance of identifying and considering accessibility threats even in the early stages of development. Digital simulations thus emerge as possible tools to increase accessibility and awareness. In this paper, we conducted three empirical studies that seek to verify the relationship between the identification of color accessibility problems by people with typical color vision using simulations and people with color blindness, in the context of mobile games. Results indicate concrete uses in which color blindness simulations give advantages to developers with typical vision in identifying chromatic accessibility issues in their games. Additionally, we discuss different possibilities for incorporating simulation tools, accessibility guidelines, and colorblind user participation into a realistic game design life cycle. We also discuss how the incorporation of simulation tools could be beneficial in fostering the discussion of accessibility in game design studios.

TLDR

Three empirical studies are conducted that seek to verify the relationship between the identification of color accessibility problems by people with typical color vision using simulations and people with color blindness, in the context of mobile games.

Communication Sequences Indicate Team Cohesion: A Mixed-Methods Study of Ad Hoc League of Legends Teams

  • Evelyn T S Tan, Katja Rogers, L. Nacke, Anders Drachen, Alex Wade

  • Proceedings of the ACM on Human-Computer Interaction

  • October 25, 2022

Team cohesion is a widely known predictor of performance and collaborative satisfaction. However, how it develops and can be assessed, especially in fast-paced ad hoc dynamic teams, remains unclear. An unobtrusive and objective behavioural measure of cohesion would help identify determinants of cohesion in these teams. We investigated team communication as a potential measure in a mixed-methods study with 48 teams (n=135) in the digital game League of Legends. We first established that cohesion shows similar performance and satisfaction relationships in League of Legends teams as in non-game teams and confirmed a positive relationship between communication word frequency and cohesion. Further, we conducted an in-depth exploratory qualitative analysis of the communication sequences in a high-cohesion and a low-cohesion team. High cohesion is associated with sequences of apology->encouragement, suggestion->agree/acknowledge, answer->answer, and answer->question, while low cohesion is associated with sequences of opinion/analysis->opinion/analysis, disagree->disagree, command->disagree, and frustration->frustration. Our findings also show that cohesion is important to team satisfaction independently of the match outcomes. We highlight that communication sequences are more useful than frequencies to determine team cohesion via player interactions.

TLDR

High cohesion is associated with communication sequences such as apology->encouragement and suggestion->agree/acknowledge, and communication sequences are more useful than frequencies for determining team cohesion via player interactions.

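The paper's central claim is that sequences of communication acts, not just their frequencies, signal cohesion. As a minimal illustration (not the authors' analysis pipeline), here is how act-to-act transitions could be tallied from a coded chat log; the example log is invented, with act labels drawn from the sequence categories named above.

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

# Hypothetical chat log already coded into communication acts.
coded_log = ["suggestion", "agree/acknowledge", "question", "answer",
             "answer", "apology", "encouragement",
             "frustration", "frustration"]

# Tally act->act transitions (bigrams); per the paper, pairs like
# apology->encouragement mark high cohesion, frustration->frustration low.
transitions = Counter(pairwise(coded_log))

for (prev_act, next_act), count in transitions.most_common():
    print(f"{prev_act} -> {next_act}: {count}")
```
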
CSCW

Measuring User-Moderator Alignment on r/ChangeMyView

  • Vinay Koshy, Tanvi Bajpai, Eshwar Chandrasekharan, Hari Sundaram, Karrie Karahalios

  • Proceedings of the ACM on Human-Computer Interaction

  • September 28, 2023

Social media sites like Reddit, Discord, and Clubhouse utilize a community-reliant approach to content moderation. Under this model, volunteer moderators are tasked with setting and enforcing content rules within the platforms' sub-communities. However, few mechanisms exist to ensure that the rules set by moderators reflect the values of their community. Misalignments between users and moderators can be detrimental to community health. Yet little quantitative work has been done to evaluate the prevalence or nature of user-moderator misalignment. Through a survey of 798 users on r/ChangeMyView, we evaluate user-moderator alignment at the level of policy-awareness (do users know what the rules are?), practice-awareness (do users know how the rules are applied?) and policy-/practice-support (do users agree with the rules and how they are applied?). We find that policy-support is high, while practice-support is low -- using a hierarchical Bayesian model we estimate the correlation between community opinion and moderator decisions to range from .14 to .45 across subreddit rules. Surprisingly, these correlations were only slightly higher when users were asked to predict moderator actions, demonstrating low awareness of moderation practices. Our findings demonstrate the need for careful analysis of user-moderator alignment at multiple levels. We argue that future work should focus on building tools to empower communities to conduct these analyses themselves.

TLDR

The findings demonstrate the need for careful analysis of user-moderator alignment at multiple levels and argue that future work should focus on building tools to empower communities to conduct these analyses themselves.

Cura: Curation at Social Media Scale

  • Wanrong He, Mitchell L. Gordon, Lindsay Popowski, Michael S. Bernstein

  • Proceedings of the ACM on Human-Computer Interaction

  • August 26, 2023

How can online communities execute a focused vision for their space? Curation offers one approach, where community leaders manually select content to share with the community. Curation enables leaders to shape a space that matches their taste, norms, and values, but the practice is often intractable at social media scale: curators cannot realistically sift through hundreds or thousands of submissions daily. In this paper, we contribute algorithmic and interface foundations enabling curation at scale, and manifest these foundations in a system called Cura. Our approach draws on the observation that, while curators' attention is limited, other community members' upvotes are plentiful and informative of curators' likely opinions. We thus contribute a transformer-based curation model that predicts whether each curator will upvote a post based on previous community upvotes. Cura applies this curation model to create a feed of content that it predicts the curator would want in the community. Evaluations demonstrate that the curation model accurately estimates opinions of diverse curators, that changing curators for a community results in clearly recognizable shifts in the community's content, and that, consequently, curation can reduce anti-social behavior by half without extra moderation effort. By sampling different types of curators, Cura lowers the threshold to genres of curated social media ranging from editorial groups to stakeholder roundtables to democracies.

TLDR

Cura's transformer-based curation model predicts whether each curator will upvote a post based on previous community upvotes, enabling curated feeds at social media scale and reducing anti-social behavior by half without extra moderation effort.

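Cura's actual model is a transformer trained on community upvote histories; as a simplified stand-in for the same idea (not the authors' architecture), the sketch below predicts a curator's upvote from which community members upvoted a post, then ranks new posts to form a curated feed. All data and sizes are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: rows are posts, columns indicate which of 50 community members
# upvoted each post; y records whether the curator upvoted. Fully synthetic.
n_posts, n_members = 400, 50
X = rng.integers(0, 2, size=(n_posts, n_members))
# Assume the curator's taste correlates with a subset of members' votes.
weights = np.where(np.arange(n_members) < 10, 1.0, -0.1)
y = (X @ weights + rng.normal(0, 1, n_posts) > 4).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Rank unseen posts by predicted curator approval to build a curated feed.
new_posts = rng.integers(0, 2, size=(5, n_members))
scores = model.predict_proba(new_posts)[:, 1]
print("predicted curator-upvote probabilities:", scores.round(2))
```
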
SUMMIT: Scaffolding Open Source Software Issue Discussion through Summarization

  • Saskia Gilmer, Avinash Bhat, Shuvam Shah, Kevin Cherry, Jinghui Cheng, Jin L. C. Guo

  • Proceedings of the ACM on Human-Computer Interaction

  • August 5, 2023

Issue Tracking Systems (ITS) often support commenting on software issues, which creates a space for discussions centered around bug fixes and improvements to the software. For Open Source Software (OSS) projects, issue discussions serve as a crucial collaboration mechanism for diverse stakeholders. However, these discussions can become lengthy and entangled, making it hard to find relevant information and make further contributions. In this work, we study the use of summarization to aid users in collaboratively making sense of OSS issue discussion threads. Through an empirical investigation, we reveal a complex picture of how summarization is used by issue users in practice as a strategy to help develop and manage their discussions. Grounded on the different objectives served by the summaries and the outcome of our formative study with OSS stakeholders, we identified a set of guidelines to inform the design of collaborative summarization tools for OSS issue discussions. We then developed SUMMIT, a tool that allows issue users to collectively construct summaries of different types of information discussed, as well as a set of comments representing continuous conversations within the thread. To alleviate the manual effort involved, SUMMIT uses techniques that automatically detect information types and summarize texts to facilitate the generation of these summaries. A lab user study indicates that, as the users of SUMMIT, OSS stakeholders adopted different strategies to acquire information on issue threads. Furthermore, different features of SUMMIT effectively lowered the perceived difficulty of locating information from issue threads and enabled the users to prioritize their effort. Overall, our findings demonstrated the potential of SUMMIT, and the corresponding design guidelines, in supporting users to acquire information from lengthy discussions in ITSs. Our work sheds light on key design considerations and features when exploring crowd-based and machine-learning-enabled instruments for asynchronous collaboration on complex tasks such as OSS development.

TLDR

The findings demonstrated the potential of SUMMIT, and the corresponding design guidelines, in supporting users to acquire information from lengthy discussions in ITSs, and shed light on key design considerations and features when exploring crowd-based and machine-learning-enabled instruments for asynchronous collaboration on complex tasks such as OSS development.

Towards Intersectional Moderation: An Alternative Model of Moderation Built on Care and Power

  • Sarah A. Gilbert

  • Proceedings of the ACM on Human-Computer Interaction

  • May 18, 2023

Shortcomings of current models of moderation have driven policy makers, scholars, and technologists to speculate about alternative models of content moderation. While alternative models provide hope for the future of online spaces, they can fail without proper scaffolding. Community moderators are routinely confronted with similar issues and have therefore found creative ways to navigate these challenges. Learning more about the decisions these moderators make, the challenges they face, and where they are successful can provide valuable insight into how to ensure alternative moderation models are successful. In this study, I perform a collaborative ethnography with moderators of r/AskHistorians, a community that uses an alternative moderation model, highlighting the importance of accounting for power in moderation. Drawing from Black feminist theory, I call this "intersectional moderation." I focus on three controversies emblematic of r/AskHistorians' alternative model of moderation: a disagreement over a moderation decision; a collaboration to fight racism on Reddit; and a period of intense turmoil and its impact on policy. Through this evidence I show how volunteer moderators navigated multiple layers of power through care work. To ensure the successful implementation of intersectional moderation, I argue that designers should support decision-making processes and policy makers should account for the impact of the sociotechnical systems in which moderators work.

TLDR

Through a collaborative ethnography with moderators of r/AskHistorians, this study shows how volunteer moderators navigate multiple layers of power through care work, and argues that designers and policy makers should account for power to make intersectional moderation successful.

Data Subjects’ Perspectives on Emotion Artificial Intelligence Use in the Workplace: A Relational Ethics Lens

  • Shanley Corvite, Kat Roemmich, T. Rosenberg, Nazanin Andalibi

  • Proceedings of the ACM on Human-Computer Interaction

  • April 14, 2023

The workplace has experienced extensive digital transformation, in part due to artificial intelligence's commercial availability. Though still an emerging technology, emotional artificial intelligence (EAI) is increasingly incorporated into enterprise systems to augment and automate organizational decisions and to monitor and manage workers. EAI use is often celebrated for its potential to improve workers' wellbeing and performance as well as address organizational problems such as bias and safety. Workers subject to EAI in the workplace are data subjects whose data make EAI possible and who are most impacted by it. However, we lack empirical knowledge about data subjects' perspectives on EAI, including in the workplace. To this end, using a relational ethics lens, we qualitatively analyzed open-ended survey responses from 395 U.S. adults (a partly representative sample) regarding the perceived benefits and risks they associate with being subjected to EAI in the workplace. While participants acknowledged potential benefits of being subject to EAI (e.g., employers using EAI to aid their wellbeing, enhance their work environment, reduce bias), a myriad of potential risks overshadowed perceptions of potential benefits. Participants expressed concerns regarding the potential for EAI use to harm their wellbeing, work environment and employment status, and create and amplify bias and stigma against them, especially the most marginalized (e.g., along dimensions of race, gender, mental health status, disability). Distrustful of EAI and its potential risks, participants anticipated conforming to (e.g., partaking in emotional labor) or refusing (e.g., quitting a job) EAI implementation in practice. We argue that EAI may magnify, rather than alleviate, existing challenges data subjects face in the workplace and suggest that some EAI-inflicted harms would persist even if concerns of EAI's accuracy and bias are addressed.

TLDR

Data subjects perceived a myriad of potential risks of workplace emotion AI to their wellbeing, work environment, and employment status that overshadowed its potential benefits, suggesting EAI may magnify, rather than alleviate, existing challenges data subjects face in the workplace.

The Value of Activity Traces in Peer Evaluations: An Experimental Study

  • W. Shi, Sneha R. Krishna Kumaran, Hari Sundaram, B. Bailey

  • Proceedings of the ACM on Human-Computer Interaction

  • April 14, 2023

Peer evaluations are a well-established tool for evaluating individual and team performance in collaborative contexts, but are susceptible to social and cognitive biases. Current peer evaluation tools have also yet to exploit the unique opportunities that online collaborative technologies provide for addressing these biases. In this work, we explore the potential of one such opportunity for peer evaluations: data traces automatically generated by collaborative tools, which we refer to as "activity traces". We conduct a between-subjects experiment with 101 students and MTurk workers, investigating the effects of reviewing activity traces on peer evaluations of team members in an online collaborative task. Our findings show that the use of activity traces led participants to make more and greater revisions to their evaluations compared to a control condition. These revisions also increased the consistency and participants' perceived accuracy of the evaluations that they received. Our findings demonstrate the value of activity traces as an approach for performing more reliable and objective peer evaluations of teamwork. Based on our findings as well as qualitative analysis of free-form responses in our study, we also identify and discuss key considerations and design recommendations for incorporating activity traces into real-world peer evaluation systems.

TLDR

Reviewing activity traces led participants to make more and greater revisions to their peer evaluations and increased the consistency and perceived accuracy of the evaluations they received, demonstrating the value of activity traces for more reliable and objective peer evaluations of teamwork.

Hate Raids on Twitch: Echoes of the Past, New Modalities, and Implications for Platform Governance

  • C. Han, Joseph Seering, Deepak Kumar, Jeffrey T. Hancock, Z. Durumeric

  • Proceedings of the ACM on Human-Computer Interaction

  • January 10, 2023

In the summer of 2021, users on the livestreaming platform Twitch were targeted by a wave of "hate raids," a form of attack that overwhelms a streamer's chatroom with hateful messages, often through the use of bots and automation. Using a mixed-methods approach, we combine a quantitative measurement of attacks across the platform with interviews of streamers and third-party bot developers. We present evidence that confirms that some hate raids were highly-targeted, hate-driven attacks, but we also observe another mode of hate raid similar to networked harassment and specific forms of subcultural trolling. We show that the streamers who self-identify as LGBTQ+ and/or Black were disproportionately targeted and that hate raid messages were most commonly rooted in anti-Black racism and antisemitism. We also document how these attacks elicited rapid community responses in both bolstering reactive moderation and developing proactive mitigations for future attacks. We conclude by discussing how platforms can better prepare for attacks and protect at-risk communities while considering the division of labor between community moderators, tool-builders, and platforms.

TLDR

It is shown that the streamers who self-identify as LGBTQ+ and/or Black were disproportionately targeted and that hate raid messages were most commonly rooted in anti-Black racism and antisemitism.

IMX
IUI

Deep Learning Uncertainty in Machine Teaching

  • Téo Sanchez, Baptiste Caramiaux, Pierre Thiel, W. Mackay

  • 27th International Conference on Intelligent User Interfaces

  • March 22, 2022

Machine Learning models can output confident but incorrect predictions. To address this problem, ML researchers use various techniques to reliably estimate ML uncertainty, usually performed on controlled benchmarks once the model has been trained. We explore how the two types of uncertainty—aleatoric and epistemic—can help non-expert users understand the strengths and weaknesses of a classifier in an interactive setting. We are interested in users’ perception of the difference between aleatoric and epistemic uncertainty and their use to teach and understand the classifier. We conducted an experiment where non-experts train a classifier to recognize card images, and are tested on their ability to predict classifier outcomes. Participants who used either larger or more varied training sets significantly improved their understanding of uncertainty, both epistemic and aleatoric. However, participants who relied on the uncertainty measure to guide their choice of training data did not significantly improve classifier training, nor were they better able to guess the classifier outcome. We identified three specific situations where participants successfully identified the difference between aleatoric and epistemic uncertainty: placing a card in the exact same position as a training card; placing different cards next to each other; and placing a non-card, such as their hand, next to or on top of a card. We discuss our methodology for estimating uncertainty for Interactive Machine Learning systems and question the need for two-level uncertainty in Machine Teaching.

TLDR

It is explored how the two types of uncertainty—aleatoric and epistemic—can help non-expert users understand the strengths and weaknesses of a classifier in an interactive setting and the need for two-level uncertainty in Machine Teaching is questioned.

Hand Gesture Recognition for an Off-the-Shelf Radar by Electromagnetic Modeling and Inversion

  • Arthur Sluÿters, S. Lambot, J. Vanderdonckt

  • 27th International Conference on Intelligent User Interfaces

  • March 22, 2022

Microwave radar sensors in human-computer interactions have several advantages compared to wearable and image-based sensors, such as privacy preservation, high reliability regardless of the ambient and lighting conditions, and a larger field of view. However, the raw signals produced by such radars are high-dimensional and relatively complex to interpret. Advanced data processing, including machine learning techniques, is therefore necessary for gesture recognition. While these approaches can reach high gesture recognition accuracy, artificial neural networks require a significant number of gesture templates for training, and calibration is radar-specific. To address these challenges, we present a novel data processing pipeline for hand gesture recognition that combines advanced full-wave electromagnetic modelling and inversion with machine learning. In particular, the physical model accounts for the radar source, radar antennas, radar-target interactions and the target itself, i.e., the hand in our case. To make this processing feasible, the hand is emulated by an equivalent infinite planar reflector, for which analytical Green’s functions exist. The apparent dielectric permittivity, which depends on the hand size, electric properties, and orientation, determines the wave reflection amplitude based on the distance from the hand to the radar. Through full-wave inversion of the radar data, the physical distance as well as this apparent permittivity are retrieved, thereby reducing by several orders of magnitude the dimension of the radar dataset, while keeping the essential information. Finally, the estimated distance and apparent permittivity as a function of gesture time are used to train the machine learning algorithm for gesture recognition. This physically-based dimension reduction enables the use of simple gesture recognition algorithms, such as template-matching recognizers, that can be trained in real time and provide competitive accuracy with only a few samples. We evaluate significant stages of our pipeline on a dataset of 16 gesture classes, with 5 templates per class, recorded with the Walabot, a lightweight, off-the-shelf array radar. We also compare these results with an ultra wideband radar made of a single horn antenna and a lightweight vector network analyzer, and with a Leap Motion Controller.

TLDR

A novel data processing pipeline for hand gesture recognition that combines advanced full-wave electromagnetic modelling and inversion with machine learning is presented that enables the use of simple gesture recognition algorithms, such as template-matching recognizers, that can be trained in real time and provide competitive accuracy with only a few samples.
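
Because inversion compresses each gesture into low-dimensional time series (distance and apparent permittivity over gesture time), simple recognizers become practical. Below is a minimal sketch of the kind of template matching the abstract mentions: nearest-template classification of a 1-D feature sequence under dynamic time warping. The gesture classes and signals are invented for illustration.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(sample, templates):
    """Return the label of the nearest template (a few templates per class suffice)."""
    return min(templates,
               key=lambda label: min(dtw_distance(sample, t) for t in templates[label]))

# Synthetic stand-ins for estimated hand-to-radar distance over gesture time.
t = np.linspace(0, 1, 50)
templates = {
    "push": [0.5 - 0.3 * t],   # hand moves toward the radar
    "pull": [0.2 + 0.3 * t],   # hand moves away
}
sample = 0.5 - 0.28 * t + np.random.default_rng(1).normal(0, 0.01, 50)
print(classify(sample, templates))   # -> "push"
```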

SIGGRAPH

Image features influence reaction time

  • Budmonde Duinkharjav, Praneeth Chakravarthula, Rachel M. Brown, Anjul Patney, Qi Sun

  • ACM Transactions on Graphics (TOG)

  • May 5, 2022

We aim to ask and answer an essential question: "how quickly do we react after observing a displayed visual target?" To this end, we present psychophysical studies that characterize the remarkable disconnect between human saccadic behaviors and spatial visual acuity. Building on the results of our studies, we develop a perceptual model to predict temporal gaze behavior, particularly saccadic latency, as a function of the statistics of a displayed image. Specifically, we implement a neurologically-inspired probabilistic model that mimics the accumulation of confidence that leads to a perceptual decision. We validate our model with a series of objective measurements and user studies using an eye-tracked VR display. The results demonstrate that our model prediction is in statistical alignment with real-world human behavior. Further, we establish that many sub-threshold image modifications commonly introduced in graphics pipelines may significantly alter human reaction timing, even if the differences are visually undetectable. Finally, we show that our model can serve as a metric to predict and alter reaction latency of users in interactive computer graphics applications, and may thus improve gaze-contingent rendering, design of virtual experiences, and player performance in e-sports. We illustrate this with two examples: estimating competition fairness in a video game with two different team colors, and tuning display viewing distance to minimize player reaction time.

TLDR

This work develops a perceptual model to predict temporal gaze behavior, particularly saccadic latency, as a function of the statistics of a displayed image, and implements a neurologically-inspired probabilistic model that mimics the accumulation of confidence that leads to a perceptual decision.
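
The paper's perceptual model accumulates confidence toward a decision threshold. A generic accumulation-to-threshold simulation (not the authors' fitted model) shows the qualitative effect: weaker image-derived evidence yields longer simulated saccadic latencies.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_latency(drift: float, threshold: float = 1.0,
                      noise: float = 0.1, dt: float = 0.001) -> float:
    """Accumulate noisy evidence until it crosses threshold; return time in seconds."""
    evidence, t = 0.0, 0.0
    while evidence < threshold:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return t

# Stronger image features (higher drift rate) -> faster simulated reactions.
for drift in (8.0, 4.0, 2.0):
    trials = [simulated_latency(drift) for _ in range(200)]
    print(f"drift {drift}: mean latency {np.mean(trials) * 1000:.0f} ms")
```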

CLIPasso: Semantically-Aware Object Sketching

  • Yael Vinker, Ehsan Pajouheshgar, Jessica Y. Bo, Roman Bachmann, Amit H. Bermano, D. Cohen-Or, A. Zamir, Ariel Shamir

  • ACM Transactions on Graphics (TOG)

  • February 11, 2022

Abstraction is at the heart of sketching due to the simple and minimal nature of line drawings. Abstraction entails identifying the essential visual properties of an object or scene, which requires semantic understanding and prior knowledge of high-level concepts. Abstract depictions are therefore challenging for artists, and even more so for machines. We present CLIPasso, an object sketching method that can achieve different levels of abstraction, guided by geometric and semantic simplifications. While sketch generation methods often rely on explicit sketch datasets for training, we utilize the remarkable ability of CLIP (Contrastive-Language-Image-Pretraining) to distill semantic concepts from sketches and images alike. We define a sketch as a set of Bézier curves and use a differentiable rasterizer to optimize the parameters of the curves directly with respect to a CLIP-based perceptual loss. The abstraction degree is controlled by varying the number of strokes. The generated sketches demonstrate multiple levels of abstraction while maintaining recognizability, underlying structure, and essential visual components of the subject drawn.

TLDR

CLIPasso is presented, an object sketching method that can achieve different levels of abstraction, guided by geometric and semantic simplifications, utilizing the remarkable ability of CLIP (Contrastive-Language-Image-Pretraining) to distill semantic concepts from sketches and images alike.

Spelunking the Deep: Guaranteed Queries on General Neural Implicit Surfaces via Range Analysis

  • Nicholas Sharp, A. Jacobson

  • ACM Transactions on Graphics (TOG)

  • February 5, 2022

Neural implicit representations, which encode a surface as the level set of a neural network applied to spatial coordinates, have proven to be remarkably effective for optimizing, compressing, and generating 3D geometry. Although these representations are easy to fit, it is not clear how to best evaluate geometric queries on the shape, such as intersecting against a ray or finding a closest point. The predominant approach is to encourage the network to have a signed distance property. However, this property typically holds only approximately, leading to robustness issues, and holds only at the conclusion of training, inhibiting the use of queries in loss functions. Instead, this work presents a new approach to perform queries directly on general neural implicit functions for a wide range of existing architectures. Our key tool is the application of range analysis to neural networks, using automatic arithmetic rules to bound the output of a network over a region; we conduct a study of range analysis on neural networks, and identify variants of affine arithmetic which are highly effective. We use the resulting bounds to develop geometric queries including ray casting, intersection testing, constructing spatial hierarchies, fast mesh extraction, closest-point evaluation, evaluating bulk properties, and more. Our queries can be efficiently evaluated on GPUs, and offer concrete accuracy guarantees even on randomly-initialized networks, enabling their use in training objectives and beyond. We also show a preliminary application to inverse rendering.

TLDR

This work presents a new approach to perform queries directly on general neural implicit functions for a wide range of existing architectures, using automatic arithmetic rules to bound the output of a network over a region and conducting a study of range analysis on neural networks.
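
To make the idea concrete, here is a minimal sketch of range analysis with plain interval arithmetic pushed through a small ReLU network; the paper develops stronger affine-arithmetic variants, but the contract is the same: a guaranteed output bound over an input region, which geometric queries can use to provably skip space. The toy network weights are random.

```python
import numpy as np

def interval_linear(W, b, lo, hi):
    """Bound W @ x + b over all x with lo <= x <= hi (elementwise intervals)."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to interval endpoints.
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Toy 2-layer implicit function f(x); bound it over the box [-0.1, 0.1]^3.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 3)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

lo, hi = np.full(3, -0.1), np.full(3, 0.1)
lo, hi = interval_relu(*interval_linear(W1, b1, lo, hi))
lo, hi = interval_linear(W2, b2, lo, hi)
# If 0 lies outside [lo, hi], the level set provably does not cross this box,
# so a ray caster or mesh extractor can safely skip it.
print(f"f(box) is provably within [{lo[0]:.2f}, {hi[0]:.2f}]")
```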

Instant neural graphics primitives with a multiresolution hash encoding

  • T. Müller, Alex Evans, Christoph Schied, A. Keller

  • ACM Transactions on Graphics (TOG)

  • January 16, 2022

Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate. We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations: a small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent. The multiresolution structure allows the network to disambiguate hash collisions, making for a simple architecture that is trivial to parallelize on modern GPUs. We leverage this parallelism by implementing the whole system using fully-fused CUDA kernels with a focus on minimizing wasted bandwidth and compute operations. We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of 1920×1080.

TLDR

A versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations is introduced, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of 1920×1080.
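
A minimal sketch of the encoding's core lookup, using the spatial-hash primes from the paper: each resolution level hashes a 3-D grid vertex into a small table of trainable feature vectors, and the concatenated per-level features feed a small MLP. The real system also trilinearly interpolates the eight surrounding vertices and trains the tables jointly with the network; table sizes here are toy values.

```python
import numpy as np

# Per-level hash tables of trainable features: T entries of F dims (toy sizes).
T, F, n_levels = 2**14, 2, 4
rng = np.random.default_rng(0)
tables = rng.normal(0, 1e-4, size=(n_levels, T, F))

# Large primes from the paper's spatial hash (XOR of scaled coordinates).
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_encode(x: np.ndarray) -> np.ndarray:
    """Concatenate per-level features for a point x in [0,1]^3 (nearest vertex only)."""
    feats = []
    for level in range(n_levels):
        res = 16 * 2**level                      # grid resolution grows per level
        v = np.floor(x * res).astype(np.uint64)  # integer grid vertex
        idx = np.bitwise_xor.reduce(v * PRIMES) % T
        feats.append(tables[level, idx])
    return np.concatenate(feats)                 # fed into a small MLP

print(hash_encode(np.array([0.3, 0.7, 0.1])).shape)  # (n_levels * F,) = (8,)
```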

DeepPhase: periodic autoencoders for learning motion phase manifolds

  • S. Starke, I. Mason, T. Komura, Zhaoming Xie, Hung Yu Ling, M. V. D. Panne, Yiwei Zhao, F. Zinno

  • ACM Transactions on Graphics (TOG)

  • December 31, 2021

Learning the spatial-temporal structure of body movements is a fundamental problem for character motion synthesis. In this work, we propose a novel neural network architecture called the Periodic Autoencoder that can learn periodic features from large unstructured motion datasets in an unsupervised manner. The character movements are decomposed into multiple latent channels that capture the non-linear periodicity of different body segments while progressing forward in time. Our method extracts a multi-dimensional phase space from full-body motion data, which effectively clusters animations and produces a manifold in which computed feature distances provide a better similarity measure than in the original motion space to achieve better temporal and spatial alignment. We demonstrate that the learned periodic embedding can significantly help to improve neural motion synthesis in a number of tasks, including diverse locomotion skills, style-based movements, dance motion synthesis from music, synthesis of dribbling motions in football, and motion query for matching poses within large animation databases.

TLDR

It is demonstrated that the learned periodic embedding can significantly help to improve neural motion synthesis in a number of tasks, including diverse locomotion skills, style-based movements, dance motion synthesis from music, synthesis of dribbling motions in football, and motion query for matching poses within large animation databases.
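
Each latent channel of the Periodic Autoencoder is constrained to a sinusoid described by phase, frequency, amplitude, and offset. The sketch below recovers those parameters from one synthetic latent curve with an FFT, approximating the spirit of the paper's differentiable parameterization layer (which uses the FFT for frequency and amplitude, and a learned layer for phase).

```python
import numpy as np

fps, T = 60, 120                       # 2 seconds of one latent channel
t = np.arange(T) / fps
# Synthetic latent curve: one dominant periodic component plus noise.
x = (0.8 * np.sin(2 * np.pi * 2.5 * t + 1.0)
     + 0.05 * np.random.default_rng(0).normal(size=T))

spec = np.fft.rfft(x - x.mean())       # remove the offset before the FFT
freqs = np.fft.rfftfreq(T, d=1 / fps)
k = np.argmax(np.abs(spec))            # dominant frequency bin

amplitude = 2 * np.abs(spec[k]) / T
frequency = freqs[k]
phase = np.angle(spec[k])              # phase of the dominant component
offset = x.mean()

print(f"A={amplitude:.2f}, f={frequency:.2f} Hz, "
      f"phase={phase:.2f} rad, b={offset:.2f}")
```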

TEI
UbiComp/ISWC

Detecting Receptivity for mHealth Interventions in the Natural Environment

  • Varun Mishra, F. Künzler, Jan-Niklas Kramer, E. Fleisch, T. Kowatsch, D. Kotz

  • Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

  • November 16, 2020

Just-In-Time Adaptive Intervention (JITAI) is an emerging technique with great potential to support health behavior by providing the right type and amount of support at the right time. A crucial aspect of JITAIs is properly timing the delivery of interventions, to ensure that a user is receptive and ready to process and use the support provided. Some prior works have explored the association of context and some user-specific traits on receptivity, and have built post-study machine-learning models to detect receptivity. For effective intervention delivery, however, a JITAI system needs to make in-the-moment decisions about a user's receptivity. To this end, we conducted a study in which we deployed machine-learning models to detect receptivity in the natural environment, i.e., in free-living conditions. We leveraged prior work regarding receptivity to JITAIs and deployed a chatbot-based digital coach - Ally - that provided physical-activity interventions and motivated participants to achieve their step goals. We extended the original Ally app to include two types of machine-learning model that used contextual information about a person to predict when a person is receptive: a static model that was built before the study started and remained constant for all participants and an adaptive model that continuously learned the receptivity of individual participants and updated itself as the study progressed. For comparison, we included a control model that sent intervention messages at random times. The app randomly selected a delivery model for each intervention message. We observed that the machine-learning models led up to a 40% improvement in receptivity as compared to the control model. Further, we evaluated the temporal dynamics of the different models and observed that receptivity to messages from the adaptive model increased over the course of the study.

TLDR

A study that deployed machine-learning models to detect receptivity in the natural environment, i.e., in free-living conditions, using a chatbot-based digital coach that provided physical-activity interventions and motivated participants to achieve their step goals.
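
The study's adaptive condition keeps re-learning a participant's receptivity as new responses arrive. A minimal sketch of that in-the-moment update loop using scikit-learn's partial_fit; the context features, labels, and exploration fallback are assumptions for illustration, not the study's actual models.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
adaptive = SGDClassifier(loss="log_loss")  # incrementally trainable classifier

# Hypothetical context features at a decision point:
# [hour of day / 24, is_weekend, recent phone activity].
def observe_context():
    return np.array([[rng.uniform(), rng.integers(0, 2), rng.uniform()]])

classes = np.array([0, 1])                 # 0 = not receptive, 1 = receptive
for day in range(30):
    x = observe_context()
    # Until enough data exists, always send (exploration); then trust the model.
    send = True if day < 5 else bool(adaptive.predict(x)[0])
    if send:
        # Hypothetical label: did the participant respond to the message?
        responded = int(x[0, 2] > 0.5 or rng.uniform() < 0.2)
        adaptive.partial_fit(x, [responded], classes=classes)  # update in the moment

print("predicted receptive now:", bool(adaptive.predict(observe_context())[0]))
```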

UIST

GenAssist: Making Image Generation Accessible

  • Mina Huh, Yi-Hao Peng, Amy Pavel

  • ACM Symposium on User Interface Software and Technology

  • July 14, 2023

Blind and low vision (BLV) creators use images to communicate with sighted audiences. However, creating or retrieving images is challenging for BLV creators as it is difficult to use authoring tools or assess image search results. Thus, creators limit the types of images they create or recruit sighted collaborators. While text-to-image generation models let creators generate high-fidelity images based on a text description (i.e. prompt), it is difficult to assess the content and quality of generated images. We present GenAssist, a system to make text-to-image generation accessible. Using our interface, creators can verify whether generated image candidates followed the prompt, access additional details in the image not specified in the prompt, and skim a summary of similarities and differences between image candidates. To power the interface, GenAssist uses a large language model to generate visual questions, vision-language models to extract answers, and a large language model to summarize the results. Our study with 12 BLV creators demonstrated that GenAssist enables and simplifies the process of image selection and generation, making visual authoring more accessible to all.

TLDR

The study with 12 BLV creators demonstrated that GenAssist enables and simplifies the process of image selection and generation, making visual authoring more accessible to all.

Generative Agents: Interactive Simulacra of Human Behavior

  • J. Park, Joseph C. O'Brien, Carrie J. Cai, M. Morris, Percy Liang, Michael S. Bernstein

  • ACM Symposium on User Interface Software and Technology

  • April 7, 2023

Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools. In this paper, we introduce generative agents: computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent’s experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty-five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors. For example, starting with only a single user-specified notion that one agent wants to throw a Valentine’s Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. We demonstrate through ablation that the components of our agent architecture—observation, planning, and reflection—each contribute critically to the believability of agent behavior. By fusing large language models with computational interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.

TLDR

This work describes an architecture that extends a large language model to store a complete record of the agent’s experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior.
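
The retrieval step of the architecture scores each memory by combining recency, importance, and relevance. A minimal sketch of that scoring over a toy memory stream; the exponential decay rate and the random embeddings are stand-ins for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

# Memory stream: hours since last access, importance 1-10, unit embeddings.
n, dim = 100, 64
hours_ago = rng.uniform(0, 72, n)
importance = rng.integers(1, 11, n).astype(float)
embeddings = rng.normal(size=(n, dim))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def retrieve(query_embedding, k=5, decay=0.995):
    recency = decay ** hours_ago                  # exponential decay per hour
    relevance = embeddings @ query_embedding      # cosine similarity (unit vectors)
    score = normalize(recency) + normalize(importance) + normalize(relevance)
    return np.argsort(-score)[:k]                 # indices of the top-k memories

query = embeddings[3] + 0.1 * rng.normal(size=dim)
query /= np.linalg.norm(query)
print(retrieve(query))                            # memory 3 should rank highly
```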

Grid-Coding: An Accessible, Efficient, and Structured Coding Paradigm for Blind and Low-Vision Programmers

  • Md Ehtesham-Ul-Haque, Syed Mostofa Monsur, Syed Masum Billah

  • Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology

  • October 28, 2022

Sighted programmers often rely on visual cues (e.g., syntax coloring, keyword highlighting, code formatting) to perform common coding activities in text-based languages (e.g., Python). Unfortunately, blind and low-vision (BLV) programmers hardly benefit from these visual cues because they interact with computers via assistive technologies (e.g., screen readers), which fail to communicate visual semantics meaningfully. Prior work on making text-based programming languages and environments accessible mostly focused on code navigation and, to some extent, code debugging, but not much toward code editing, which is an essential coding activity. We present Grid-Coding to fill this gap. Grid-Coding renders source code in a structured 2D grid, where each row, column, and cell have consistent, meaningful semantics. Its design is grounded on prior work and refined by 28 BLV programmers through online participatory sessions for 2 months. We implemented the Grid-Coding prototype as a spreadsheet-like web application for Python and evaluated it with a study with 12 BLV programmers. This study revealed that, compared to a text editor (i.e., the go-to editor for BLV programmers), our prototype enabled BLV programmers to navigate source code quickly, find the context of a statement easily, detect syntax errors in existing code effectively, and write new code with fewer syntax errors. The study also revealed how BLV programmers adopted Grid-Coding and demonstrated novel interaction patterns conducive to increased programming productivity.

TLDR

This study revealed that the Grid-Coding prototype enabled BLV programmers to navigate source code quickly, find the context of a statement easily, detect syntax errors in existing code effectively, and write new code with fewer syntax errors.

CrossA11y: Identifying Video Accessibility Issues via Cross-modal Grounding

  • Xingyu Liu, Ruolin Wang, Dingzeyu Li, Xiang 'Anthony' Chen, Amy Pavel

  • Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology

  • August 23, 2022

Authors make their videos visually accessible by adding audio descriptions (AD), and auditorily accessible by adding closed captions (CC). However, creating AD and CC is challenging and tedious, especially for non-professional describers and captioners, due to the difficulty of identifying accessibility problems in videos. A video author will have to watch the video through and manually check for inaccessible information frame-by-frame, for both visual and auditory modalities. In this paper, we present CrossA11y, a system that helps authors efficiently detect and address visual and auditory accessibility issues in videos. Using cross-modal grounding analysis, CrossA11y automatically measures accessibility of visual and audio segments in a video by checking for modality asymmetries. CrossA11y then displays these segments and surfaces visual and audio accessibility issues in a unified interface, making it intuitive to locate, review, script AD/CC in-place, and preview the described and captioned video immediately. We demonstrate the effectiveness of CrossA11y through a lab study with 11 participants, compared to an existing baseline.

TLDR

This paper presents CrossA11y, a system that helps authors efficiently detect and address visual and auditory accessibility issues in videos by using cross-modal grounding analysis and automatically measures accessibility of visual and audio segments in a video by checking for modality asymmetries.
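
Cross-modal grounding boils down to asking whether a segment's visuals and audio carry the same information. A minimal sketch of the asymmetry check using per-segment embeddings in a shared space; here the embeddings are random stand-ins rather than outputs of real visual and audio encoders.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Per-segment embeddings from (hypothetical) visual and audio encoders
# projected into a shared space, as cross-modal grounding assumes.
n_segments, dim = 8, 128
visual = unit(rng.normal(size=(n_segments, dim)))
audio = unit(visual + 0.2 * rng.normal(size=(n_segments, dim)))
audio[5] = unit(rng.normal(size=dim))   # segment 5: visuals never described in audio

similarity = np.sum(visual * audio, axis=1)        # cosine per segment
threshold = similarity.mean() - 2 * similarity.std()
flagged = np.where(similarity < threshold)[0]      # asymmetric segments
print("segments likely needing audio description:", flagged)  # -> [5]
```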

Going Incognito in the Metaverse: Achieving Theoretically Optimal Privacy-Usability Tradeoffs in VR

  • V. Nair, Gonzalo Munilla-Garrido, D. Song

  • ACM Symposium on User Interface Software and Technology

  • August 11, 2022

Virtual reality (VR) telepresence applications and the so-called “metaverse” promise to be the next major medium of human-computer interaction. However, with recent studies demonstrating the ease at which VR users can be profiled and deanonymized, metaverse platforms carry many of the privacy risks of the conventional internet (and more) while at present offering few of the defensive utilities that users are accustomed to having access to. To remedy this, we present the first known method of implementing an “incognito mode” for VR. Our technique leverages local ε-differential privacy to quantifiably obscure sensitive user data attributes, with a focus on intelligently adding noise when and where it is needed most to maximize privacy while minimizing usability impact. Our system is capable of flexibly adapting to the unique needs of each VR application to further optimize this trade-off. We implement our solution as a universal Unity (C#) plugin that we then evaluate using several popular VR applications. Upon faithfully replicating the most well-known VR privacy attack studies, we show a significant degradation of attacker capabilities when using our solution.

TLDR

This work presents the first known method of implementing an “incognito mode” for VR, which leverages local ε-differential privacy to quantifiably obscure sensitive user data attributes and shows a significant degradation of attacker capabilities when using this solution.
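
A minimal sketch of the underlying primitive, the Laplace mechanism for local ε-differential privacy, applied to a bounded VR telemetry attribute before it leaves the device. The attribute, bounds, and ε values are illustrative; the paper's plugin additionally tunes noise per attribute and per application.

```python
import numpy as np

rng = np.random.default_rng()

def laplace_mechanism(value: float, lo: float, hi: float, epsilon: float) -> float:
    """Release a bounded scalar with local epsilon-differential privacy."""
    value = min(max(value, lo), hi)
    sensitivity = hi - lo                   # worst-case change of the clamped input
    noisy = value + rng.laplace(scale=sensitivity / epsilon)
    return min(max(noisy, lo), hi)          # clamping is safe post-processing

# User height in meters, obscured before the VR app can observe it.
true_height = 1.83
for epsilon in (10.0, 1.0, 0.1):            # lower epsilon -> stronger privacy
    samples = [laplace_mechanism(true_height, 1.4, 2.1, epsilon) for _ in range(5)]
    print(f"eps={epsilon}: {[f'{s:.2f}' for s in samples]}")
```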

VRST

Intuitive User Interfaces for Real-Time Magnification in Augmented Reality

  • Ryan Schubert, G. Bruder, Gregory F. Welch

  • Virtual Reality Software and Technology

  • October 9, 2023

Various reasons exist why people desire to magnify portions of their visually perceived surroundings, e.g., because they are too far away or too small to see with the naked eye. Different technologies are used to facilitate magnification, from telescopes to microscopes using monocular or binocular designs. In particular, modern digital cameras capable of optical and/or digital zoom are very flexible, as their high-resolution imagery can be presented to users in real-time with displays and interfaces allowing control over the magnification. In this paper, we present a novel design space of intuitive augmented reality (AR) magnifications where an AR head-mounted display is used for the presentation of real-time magnified camera imagery. We present a user study evaluating and comparing different visual presentation methods and AR interaction techniques. Our results show different advantages for unimanual, bimanual, and situated AR magnification window interfaces, near versus far vergence distances for the image presentation, and five different user interfaces for specifying the scaling factor of the imagery.

TLDR

A novel design space of intuitive augmented reality (AR) magnifications, where an AR head-mounted display is used for the presentation of real-time magnified camera imagery, is presented, along with a user study evaluating and comparing different visual presentation methods and AR interaction techniques.
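
Digital magnification by a user-specified scaling factor reduces to cropping the camera frame around a target point and resampling it back to full size. A minimal NumPy sketch of that mapping (nearest-neighbor resampling to stay dependency-free); the frame and parameters are arbitrary.

```python
import numpy as np

def digital_zoom(frame: np.ndarray, cx: int, cy: int, scale: float) -> np.ndarray:
    """Magnify `frame` by `scale` around pixel (cx, cy), keeping the output size."""
    h, w = frame.shape[:2]
    ch, cw = int(h / scale), int(w / scale)      # crop window shrinks as scale grows
    y0 = min(max(cy - ch // 2, 0), h - ch)       # clamp crop inside the frame
    x0 = min(max(cx - cw // 2, 0), w - cw)
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    # Nearest-neighbor upsample back to the full frame size.
    yi = (np.arange(h) * ch // h).clip(0, ch - 1)
    xi = (np.arange(w) * cw // w).clip(0, cw - 1)
    return crop[yi][:, xi]

frame = np.random.default_rng(0).integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
magnified = digital_zoom(frame, cx=320, cy=240, scale=4.0)
print(magnified.shape)   # (480, 640, 3): same size, 4x magnified content
```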

Walk This Beam: Impact of Different Balance Assistance Strategies and Height Exposure on Performance and Physiological Arousal in VR

  • Dennis Dietz, Carl Oechsner, Changkun Ou, Francesco Chiossi, F. Sarto, Sven Mayer, A. Butz

  • Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology

  • November 29, 2022

Dynamic balance is an essential skill for the human upright gait; therefore, regular balance training can improve postural control and reduce the risk of injury. Even slight variations in walking conditions like height or ground conditions can significantly impact walking performance. Virtual reality is used as a helpful tool to simulate such challenging situations. However, there is no agreement on design strategies for balance training in virtual reality under stressful environmental conditions such as height exposure. We investigate how two different training strategies, imitation learning and gamified learning, can help dynamic balance control performance across different stress conditions. Moreover, we evaluate the stress response as indexed by peripheral physiological measures of stress, perceived workload, and user experience. Both approaches were tested against a baseline of no instructions and against each other. We show that a learning-by-imitation approach immediately helps dynamic balance control, decreases stress, improves attention focus, and diminishes perceived workload. A gamified approach can lead to users being overwhelmed by the additional task. Finally, we discuss how our approaches could be adapted for balance training and applied to injury rehabilitation and prevention.

TLDR

A learning-by-imitation approach immediately helps dynamic balance control, decreases stress, improves attention focus, and diminishes perceived workload, whereas a gamified approach can overwhelm users with the additional task.

Latest News & Updates

Case Study: Iterative Design for Skimming Support

How might we help researchers quickly assess the relevance of scientific literature? Take a closer look at Skimming, Semantic Reader’s latest AI feature, and the collaborative design process behind it.

Behind the Scenes of Semantic Scholar’s New Author Influence Design

We released a new version of the Author Influence interface to help scholars better discover other scholars in their fields. Here's how we identified user insights and made those design choices.

Artificial-intelligence search engines wrangle academic literature

Nature had a chat with Dan Weld, Chief Scientist at Semantic Scholar, to discuss how search engines are helping scientists explore and innovate by making it easier to draw connections from a massive collection of scientific literature.
