We are the PEACH (Privacy-Enabling AI and Computer-Human Interaction) Lab at Northeastern University.

We are a group of researchers passionate about exploring the intersection of HCI, AI, and privacy.
We believe that addressing the privacy issues raised by AI requires not only model-centered approaches that improve the models themselves, but also human-centered approaches that empower people.

News

Two papers accepted at CHI 2025 🌸⛩️

Apr 25, 2025

The PEACH Lab has a full paper and a workshop position paper accepted at this year's CHI conference!

Welcome new members! 🎉👋

Jun 1, 2025

We're excited to welcome two new PhD students, Aaron and Jianing!

Research

Privacy Leakage Overshadowed by Views of AI: A Study on Human Oversight of Privacy in Language Model Agent

Zhiping Zhang, Bingcan Guo, Tianshi Li

Preprint
Language model (LM) agents can boost productivity in personal tasks like replying to emails but pose privacy risks. We present the first study (N=300) on people's ability to oversee LM agents' privacy implications in asynchronous communication. Participants sometimes preferred agent-generated responses with greater privacy leakage, increasing harmful disclosures from 15.7% to 55.0%. We identified six privacy profiles reflecting different concerns, trust, and preferences. Our findings inform the design of agentic systems that support privacy-preserving interactions and better align with users' privacy expectations.

arXiv


Rescriber: Smaller-LLM-Powered User-Led Data Minimization for LLM-Based Chatbots

Jijie Zhou, Eryue Xu, Yaoyao Wu, Tianshi Li

CHI 2025
The rise of LLM-based conversational agents has led to increased disclosure of sensitive information, yet current systems lack user control over privacy-utility tradeoffs. We present Rescriber, a browser extension that enables user-led data minimization by detecting and sanitizing personal information in prompts. In a study (N=12), Rescriber reduced unnecessary disclosures and addressed user privacy concerns. Users rated the Llama3-8B-powered system comparably to GPT-4o. Trust was shaped by the tool's consistency and comprehensiveness. Our findings highlight the promise of lightweight, on-device privacy controls for enhancing trust and protection in AI systems.

arXiv · Video


The Obvious Invisible Threat: LLM-Powered GUI Agents' Vulnerability to Fine-Print Injections

Chaoran Chen, Zhiping Zhang, Bingcan Guo, Shang Ma, Ibrahim Khalilov, Simret A Gebreegziabher, Yanfang Ye, Ziang Xiao, Yaxing Yao, Tianshi Li, Toby Jia-Jun Li

Preprint
A Large Language Model (LLM)-powered GUI agent is a specialized autonomous system that performs tasks on the user's behalf according to high-level instructions. It does so by perceiving and interpreting the graphical user interfaces (GUIs) of relevant apps, often visually, inferring the necessary sequences of actions, and then interacting with GUIs by executing actions such as clicking, typing, and tapping. To complete real-world tasks, such as filling forms or booking services, GUI agents often need to process and act on sensitive user data. However, this autonomy introduces new privacy and security risks. Adversaries can inject malicious content into GUIs that alters agent behaviors or induces unintended disclosures of private information. These attacks often exploit the discrepancy between visual saliency for agents and for human users, or the agent's limited ability to detect violations of contextual integrity in task automation. In this paper, we characterize six types of such attacks and conduct an experimental study testing them with six state-of-the-art GUI agents, 234 adversarial webpages, and 39 human participants. Our findings suggest that GUI agents are highly vulnerable, particularly to contextually embedded threats. Moreover, human users are also susceptible to many of these attacks, indicating that simple human oversight may not reliably prevent failures. This misalignment highlights the need for privacy-aware agent design. We propose practical defense strategies to inform the development of safer and more reliable GUI agents.

arXiv


Toward a Human-centered Evaluation Framework for Trustworthy LLM-powered GUI Agents

Chaoran Chen*, Zhiping Zhang*, Ibrahim Khalilov, Bingcan Guo, Simret A Gebreegziabher, Yanfang Ye, Ziang Xiao, Yaxing Yao, Tianshi Li, Toby Jia-Jun Li

CHI 2025 HEAL Workshop
The rise of LLM-powered GUI agents has advanced automation but introduced significant privacy and security risks due to limited human oversight. This position paper identifies three key risks unique to GUI agents, highlights gaps in current evaluation practices, and outlines five challenges in integrating human evaluators. We advocate for a human-centered evaluation framework that embeds risk assessments, in-context consent, and privacy and security considerations into GUI agent design.

arXiv


People

Tianshi Li

Principal Investigator

Zhiping (Arya) Zhang

Ph.D. Student

Jianing Wen

Ph.D. Student

Ziwen (Aaron) Li

Ph.D. Student

Zeya Chen

Intern

Bingcan (Gloria) Guo

Intern

Eryue (Yolanda) Xu

Intern

Jijie (Gigi) Zhou

Intern

Alumni

William Namgyal

Former Intern
Next Position: Undergrad @ UC Berkeley

Yaoyao Wu

Former Intern
Next Position: Apple

Our work has been supported by the National Science Foundation, Google, and CMU CyLab.
