Semantically-Aware LLM Agent to Enhance Privacy in Conversational AI Services

📄 Abstract

With the increasing use of conversational AI systems, there is growing concern over privacy leaks, especially when users share sensitive personal data in interactions with Large Language Models (LLMs). Conversations shared with these models may contain Personally Identifiable Information (PII), which, if exposed, could lead to security breaches or identity theft. To address this challenge, we present the Local Optimizations for Pseudonymization with Semantic Integrity Directed Entity Detection (LOPSIDED) framework, a semantically-aware privacy agent designed to safeguard sensitive PII when using remote LLMs. Unlike prior work, which often degrades response quality, our approach dynamically replaces sensitive PII entities in user prompts with semantically consistent pseudonyms, preserving the contextual integrity of conversations. Once the model generates its response, the pseudonyms are automatically depseudonymized, ensuring the user receives an accurate, privacy-preserving output. We evaluate our approach using real-world conversations sourced from ShareGPT, which we further augment and annotate to assess whether named entities are contextually relevant to the model's response. Our results show that LOPSIDED reduces semantic utility errors by a factor of 5 compared to baseline techniques, all while enhancing privacy.
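
At a high level, the abstract describes a round trip: PII entities in the user prompt are swapped for semantically consistent pseudonyms before the prompt leaves the device, and the mapping is inverted on the model's reply. The sketch below illustrates that flow under simple assumptions; the function names (`pseudonymize_prompt`, `call_remote_llm`), the plain string-substitution strategy, and the example entity map are hypothetical stand-ins, not the paper's implementation.

```python
def pseudonymize_prompt(prompt: str, entity_map: dict[str, str]) -> str:
    """Replace each detected PII entity with its semantically consistent pseudonym."""
    for real, pseudo in entity_map.items():
        prompt = prompt.replace(real, pseudo)
    return prompt


def depseudonymize_response(response: str, entity_map: dict[str, str]) -> str:
    """Restore the original entities in the remote model's response."""
    for real, pseudo in entity_map.items():
        response = response.replace(pseudo, real)
    return response


def call_remote_llm(prompt: str) -> str:
    """Stand-in for the remote LLM call; only the pseudonymized prompt is sent out."""
    return "Here is an email introducing Alex Rivera, who recently moved to Springfield: ..."


if __name__ == "__main__":
    # Pseudonyms keep the entity type (person -> person, city -> city) so the
    # model's reasoning about the prompt is left undisturbed.
    entity_map = {"Jane Doe": "Alex Rivera", "Pittsburgh": "Springfield"}
    user_prompt = "Write an email introducing Jane Doe, who just moved to Pittsburgh."

    safe_prompt = pseudonymize_prompt(user_prompt, entity_map)
    raw_reply = call_remote_llm(safe_prompt)
    print(depseudonymize_response(raw_reply, entity_map))
```

In practice the entity map would come from a local entity detector rather than being supplied by hand; a sketch of that step follows the Key Contributions section below.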
Authors: Jayden Serenari, Stephen Lee
Submitted: October 30, 2025
arXiv Category: cs.CL

Key Contributions

Introduces the LOPSIDED framework, a semantically-aware privacy agent that safeguards PII in conversations with remote LLMs by replacing sensitive entities with semantically consistent pseudonyms. Unlike prior approaches that degrade response quality, it preserves the contextual integrity of the prompt and automatically depseudonymizes the model's response.
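
The "semantically consistent" part hinges on picking surrogates of the same entity type as the original (a person name for a person, a city for a city). Below is a hedged sketch of how such type-matched pseudonym selection might look using an off-the-shelf spaCy NER model; the surrogate pools and the choice of spaCy are assumptions for illustration, not the entity detector LOPSIDED actually uses.

```python
import spacy

# Surrogate pools keyed by entity type; purely illustrative values.
SURROGATES = {
    "PERSON": ["Alex Rivera", "Morgan Lee"],
    "GPE": ["Springfield", "Riverton"],
    "ORG": ["Acme Corp", "Initech"],
}


def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Swap detected entities for type-matched surrogates; return new text and mapping."""
    nlp = spacy.load("en_core_web_sm")  # assumed off-the-shelf NER model
    doc = nlp(text)

    mapping: dict[str, str] = {}
    counters: dict[str, int] = {}
    pieces, last = [], 0
    for ent in doc.ents:
        if ent.label_ not in SURROGATES:
            continue  # leave entity types we do not treat as sensitive untouched
        if ent.text not in mapping:
            i = counters.get(ent.label_, 0)
            pool = SURROGATES[ent.label_]
            mapping[ent.text] = pool[i % len(pool)]
            counters[ent.label_] = i + 1
        pieces.append(text[last:ent.start_char] + mapping[ent.text])
        last = ent.end_char
    pieces.append(text[last:])
    # The mapping stays local and is later inverted to depseudonymize the reply.
    return "".join(pieces), mapping
```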

Business Value

Enables the secure deployment of conversational AI in sensitive domains (e.g., healthcare, finance) by protecting user privacy, fostering trust and supporting compliance with regulations such as GDPR.