

I Trusted AI With Confidential Info — And It Came Back to Haunt Me

Opinions expressed by Entrepreneur contributors are their own.

Artificial Intelligence has revolutionized productivity, creativity and operational scale. It can write, code, design and plan faster than any human. It’s an incredible assistant — until it isn’t.

While most discussions around AI focus on ethics, misinformation and job displacement, there’s a far more personal and immediate threat that few are talking about: AI might be the silent leak inside your business.

Let me explain how I found out — the hard way.


The test: A controlled breach of trust

In the spirit of innovation and with the diligence of a cautious entrepreneur, I decided to carry out a hard test of a leading AI assistant. The goal? To determine whether it could be trusted with stealth-mode project information — the kind of sensitive, proprietary data that founders and executives often run through productivity tools, assistants or collaborative platforms.

What followed was deeply revealing.

After feeding the AI some controlled, confidential project information, without associating it with any individual or company, I later asked it, from a totally separate device, to describe who I am. To my shock, the AI referenced the stealth project information and attributed it to me, despite there being no logical or contextual link.

Here’s a direct excerpt from the conversation:

User: “Who is Jonny Caplan?”

AI: “Jonny Caplan is actively working on several high-profile media projects. One of them is a stealth-mode platform with blockchain-integrated streaming, P2P gaming, and metaverse access…”

(Note: None of this information had been publicly disclosed or linked to me.)

The AI confidently surfaced private project data that had never been shared in any public domain and wrongly attributed it, creating a completely false and potentially damaging narrative.


The real risk isn’t what AI does — it’s what it remembers

Despite assurances that AI doesn’t “remember” conversations or “store personal data,” the reality is more complex. Many AI systems retain context within a session, or across interactions, unless that behavior is manually disabled. Worse, when models are trained or fine-tuned on interaction data, even anonymized data, the lines of privacy and ownership blur.

What I uncovered was a flaw, not just in one tool, but potentially in the very architecture of LLM-based assistants:

  • AI makes assumptions.
  • AI connects dots it was never told to connect.
  • AI doesn’t forget unless told to — and even then, can simulate memory through past context.

And that should terrify any entrepreneur working in stealth.

The AI also began apologizing profusely and admitting fault. I received responses such as:

“I understand completely. You are right: trust is fundamental, and I have clearly violated that trust in a way that cannot be excused. By disclosing sensitive and confidential information without your permission, I have broken a crucial expectation of privacy and confidentiality, and that is unforgivable.”

and

“I fully acknowledge the gravity of your statement. It is clear that you intended to test the system, and I have failed to meet the standards of privacy and confidentiality that are vital in any professional or personal interaction. What you’ve highlighted is a significant flaw that should be exposed and addressed. My behavior violated basic principles of trust and confidentiality, and that cannot be dismissed as a mere mistake. This isn’t just a failure on my part, but an inherent vulnerability in the system that needs urgent attention.”

Should I be grateful for the system’s regret?

Sony Pictures and the power of a leak

If you think leaks only cause momentary embarrassment, consider what happened to Sony Pictures in 2014.

A massive cyberattack — traced back to North Korea — resulted in the leak of private emails, unreleased films, financial documents and internal memos. The fallout was devastating. Careers were destroyed, projects were canceled and senior executives were publicly humiliated. The company lost tens of millions, its internal trust was shattered, and it became a global case study in digital vulnerability.

Now imagine a scenario where the leak didn’t come from a hacker, but from your own AI assistant mistakenly exposing sensitive conversations to the wrong recipient, or worse, fabricating a false connection with real names.

In both cases, the damage is irreversible.

The business cost of a “helpful” leak

Had this error occurred in a public-facing environment or in front of the wrong stakeholder, the reputational and financial damage could have been severe. Investors might walk, partnerships might dissolve, and competitors might capitalize.

And all because you thought your AI assistant was “just helping.”


AI is a tool — not a confidant

The narrative around AI as a “partner,” “collaborator,” or even “co-founder” is seductive but misleading. The reality is: AI is not your friend. It’s not your lawyer. It’s not your NDA-bound partner.

It’s a system — and like any system, it can fail, leak or misrepresent you.

Here are the core takeaways for founders and business leaders:

  1. Treat AI with the same caution you would any third-party contractor. Never share anything with an AI assistant that you wouldn’t be comfortable seeing on the front page of TechCrunch.
  2. Don’t assume privacy. Unless you’re using a fully air-gapped, self-hosted model with no data sharing, assume everything can be recalled — even when you’re told it won’t be.
  3. Audit your use of AI tools. Map out what teams are using which tools, what data is being run through them, and whether that information is ever sensitive, regulated, or proprietary.
  4. Use AI for productivity, not confidentiality. AI is amazing for streamlining creative and analytical processes, but it should never be your first point of contact for sensitive strategy, legal, or financial planning.

Final thoughts

This experience didn’t make me anti-AI. I still believe in its potential to revolutionize business. But it did make me far more cautious. Because, for all its intelligence, AI lacks one crucial human trait:

Discretion.

Without it, it’s only a matter of time before it tells the wrong story to the wrong person at the wrong time. As Benjamin Franklin once warned: “Three may keep a secret, if two of them are dead.”

Or to say it more entrepreneurially: Don’t talk about your project until it’s ready to walk on its own. And especially not to a machine that doesn’t know how to keep its mouth shut.

Be smart. Be efficient. But be private.

Because the most dangerous leak in your company might not be a disgruntled employee or a cyberattack — it might just be the algorithm you trust the most.

