Convenience vs. caution: How AI at work is colliding with privacy fears
AI tools at work: Rapid adoption with unclear consequences for privacy

Nym’s survey of over 1,000 professionals across industries reveals just how embedded AI has become in daily work life. Two in five respondents (40%) say they use AI chatbots like ChatGPT or Copilot regularly at work, and another 13% admit they’ve experimented with these tools even if they’re not yet daily users.
Perhaps most concerning is how these AI interactions happen. A staggering 62% of AI users report accessing chatbots through personal accounts rather than secure, company-provided systems. Only 15% use a company-paid, individual account, while some even admit to borrowing logins from colleagues or simply using whatever account is available.
The result? Sensitive information is flowing into AI platforms with little oversight and very few protections.
Sensitive data shared with AI: A growing risk
Despite widespread concerns about digital security, the reality of AI use in the workplace tells a different story.
When asked what kind of information they've entered into AI chatbots at work:
- 25% have shared customer data like names and email addresses.
- 17% have entered internal company documents.
- 15% have shared company financial information.
- 18% have disclosed their own salary details.
- 15% have entered personal information like their home address or date of birth.
It doesn’t stop at work-related data. One in three AI users has shared personal health information with chatbots, while 28% have disclosed financial details like debt, salary, or mortgage information. Even highly sensitive data like credit card numbers and bank account details have made their way into AI tools.
This isn’t a hypothetical risk. It’s happening every day, driven by convenience, and often without the privacy risks being weighed at all.
The privacy paradox
Interestingly, the same professionals relying on AI to speed up their work also express deep discomfort about the risks. When asked about their concerns regarding AI tools at work:
- 44% fear sensitive company data could be stored or reused by AI platforms.
- 37% are worried about breaches of confidentiality.
- 34% fear AI tools could unintentionally expose private customer data.
- 25% even worry AI might contribute to job insecurity.
But despite these concerns, AI use continues, often with little thought given to privacy or security safeguards. It’s a clear example of what Nym’s CEO Harry Halpin calls the “convenience over caution” problem:
"AI tools have rapidly become part of how people work, but we’re seeing a worrying trend where convenience is being prioritized over security," Halpin warns.
Metadata awareness: The missing piece of the puzzle
Compounding the problem is a widespread lack of understanding about metadata: the invisible but highly revealing trail of information AI interactions leave behind.
Only 28% of survey respondents said they truly understand what metadata is. When asked what metadata includes, many mistakenly believed it covers only basic technical details like device type or timestamps, overlooking the detailed behavioral profiles that AI tools and other services can construct from it.
This lack of awareness is dangerous. Metadata, the record of who you communicate with, when, where, and how often, is often more revealing than the content of the messages themselves. It fuels targeted advertising, algorithmic manipulation, and now AI-powered surveillance of internet activity.
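To make that concrete, here is a hedged illustration in Python. The field names are invented for illustration and do not reflect any particular platform's actual logging schema; they simply show how much a single chatbot request can reveal before its content is ever read:

```python
# Hypothetical example: the metadata that can surround one chatbot request.
chat_request = {
    "content": "Summarize this customer complaint...",  # what you typed
    "metadata": {                                       # what gets logged around it
        "account_email": "jane.doe@gmail.com",   # personal account tied to your identity
        "source_ip": "203.0.113.42",             # reveals location and employer network
        "device": "MacBook Pro, Chrome 126",     # fingerprintable client details
        "timestamp": "2025-03-14T09:02:11Z",     # when you work, and how often
        "session_id": "a91f-...",                # links requests into a behavioral profile
    },
}

# Even if the content were never read, the metadata alone shows who talked
# to which service, from where, on what device, and how often.
```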
And yet, most people have taken minimal steps to protect this layer of their digital footprint. Just 23% report using a VPN to mask their location and IP address. Fewer still use privacy-focused browsers, encrypted messaging apps, or tracker blockers.
A growing gap in trust: People don’t feel safe using AI
Given these patterns, it’s no surprise that skepticism towards AI runs deep. Half of survey respondents describe themselves as "cautious" — recognizing AI's usefulness but unwilling to trust it with sensitive information. Another 25% say they don’t trust AI tools with personal or company data at all.
Perhaps more troubling, 24% worry that most people don't even realize how much data they're giving away when they use AI tools. It's a sentiment echoed across Nym's broader research on digital privacy, which consistently shows that people care about protecting their data, but feel powerless to do so effectively.
What would make AI feel safer?
There is a clear appetite for change. When asked what would make them feel more secure using AI tools at work, respondents pointed to:
- The ability to permanently delete chat history (42%).
- End-to-end encryption and secure data transmission (41%).
- Using VPNs or data-masking tools when accessing AI platforms (28%).
- Clear company guidelines on safe AI use (24%).
In short, people want control, transparency, and meaningful protections, not vague promises from AI vendors.
For Nym, decentralization is the solution
At Nym, we believe that privacy shouldn’t be a luxury or an afterthought: it should be the default for our lives online. But today's AI tools, built on centralized infrastructure, store and process user data in ways that make meaningful privacy nearly impossible.
That's why NymVPN was created: not just as another VPN, but as a radically different approach to privacy. Built on a decentralized, mixnet-powered network, NymVPN makes it impossible for anyone, including Nym itself, to maintain full traffic logs. It obscures metadata and disrupts the traffic patterns that surveillance and profiling depend on.
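For readers curious how mixing breaks those patterns, here is a minimal conceptual sketch in Python. It is not Nym's actual protocol; it only illustrates the core idea that batching, shuffling, and delaying traffic destroys the ordering and timing signals that surveillance relies on:

```python
import random
import time

def forward(message: str) -> None:
    """Stand-in for sending a message on to the next hop."""
    print("forwarding:", message)

def mix_node(batch: list[str]) -> None:
    """One conceptual mixnet hop: collect a batch of messages, shuffle
    their order, and release each after a random delay, so an observer
    cannot match incoming messages to outgoing ones by order or timing."""
    random.shuffle(batch)                     # break arrival order
    for message in batch:
        time.sleep(random.uniform(0.0, 0.2))  # break timing correlation
        forward(message)

# Cover traffic ("noise") mixed into the batch hides even whether you sent anything.
mix_node(["msg-from-alice", "msg-from-bob", "msg-from-carol", "cover-noise"])
```

Chaining several independently operated nodes like this means that no single node, and no passive observer, can link a sender to a receiver.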
For employees and businesses alike, combining privacy-protecting tools like NymVPN with clear AI policies and privacy training is more essential than ever.
Concrete steps to take when using AI
- Use a decentralized VPN like NymVPN to unlink the data you share from the metadata, such as your IP address and location, that identifies you.
- Use a company AI subscription rather than a personal account, since company accounts often come with stronger data-sharing protections.
- Avoid sharing sensitive information with AI chatbots in the first place, and redact identifiers from anything you must paste (see the sketch below).
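As a practical starting point for that last step, here is a minimal redaction sketch in Python. The patterns are illustrative only and will miss many kinds of sensitive data; a real deployment would use a proper PII-detection tool:

```python
import re

# Hypothetical patterns, not a complete PII scrubber.
REDACTIONS = {
    r"[\w.+-]+@[\w-]+\.[\w.-]+": "[EMAIL]",       # email addresses
    r"\b(?:\d[ -]?){13,16}\b": "[CARD_NUMBER]",   # card-like digit runs
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",            # US Social Security numbers
}

def redact(text: str) -> str:
    """Replace obvious identifiers before the text is pasted into a chatbot."""
    for pattern, placeholder in REDACTIONS.items():
        text = re.sub(pattern, placeholder, text)
    return text

prompt = "Customer jane.doe@example.com paid with card 4111 1111 1111 1111."
print(redact(prompt))
# -> Customer [EMAIL] paid with card [CARD_NUMBER].
```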
Don’t trade your privacy for productivity
It’s clear that AI is reshaping how we work. But as the lines blur between convenience and caution, it’s up to individuals and organizations to insist on privacy by design. We shouldn’t wait until a data breach makes headlines.
Because once your health details, financial information, or company secrets are uploaded to an AI system, you may have lost control of where they go and who sees them.
The choice is clear: productivity doesn’t have to come at the cost of privacy. But it requires action, awareness, and the right tools to make the internet, and AI at work, safer and more private for everyone.
Methodology
An independent survey of 1,000 workers in the United States and Britain was commissioned by NymVPN and conducted by Cint. Results were weighted across demographic groups to create a nationally representative sample. Full details of the questions asked are available on request.