Privacy-Preserving Prompt Writing
In this issue
Strategies for Building Transparent AI
Transparency vs IP, Security & Privacy
How to Test for Transparency
The Moment Everything Changed
I still remember the sinking feeling. Four months ago, I was working with a design team when it happened. They'd been analysing weeks of user research—properly de-identified, of course. No names, no personal details, just behavioural patterns and anonymised quotes. Professional work by professional researchers.
Someone asked ChatGPT to help identify themes across the sessions. They pasted in the sanitised transcripts, asking for pattern analysis. Nothing that would violate user privacy. Nothing that seemed risky.
But those behavioural insights? That's the company's competitive intelligence. Those anonymised quotes revealing pain points with existing solutions? That's proprietary market research. The patterns showing how users actually behave versus what they say? That's the kind of insight companies spend millions to uncover.
The researcher hadn't done anything careless with user data. They'd followed every privacy protocol. But they'd just shared their organisation's hard-won knowledge with OpenAI's systems. Corporate IP, handed over in pursuit of efficiency.
What used to require deliberate corporate espionage now only requires naïve enthusiasm and a helpful AI assistant.
Two days later, after I'd explained the implications to their team, the design director pulled me aside. "I haven't slept for two nights," they said. "I keep thinking about all that research, all those insights we've built our strategy on. It's out there now. We can't get it back."
Here's what struck me: that fear the director expressed—lying awake, feeling physically sick about lost control—is exactly what users feel when we ask them to trust our AI systems. Same vulnerability, different sides of the screen. We can't build trustworthy AI experiences if we don't understand this fear from the inside.
We're all one careless prompt away from a Samsung moment.
When Good Intentions Go Wrong
April 2023: Samsung had just given engineers permission to use ChatGPT to boost productivity. Within 20 days, three separate incidents unfolded that would change how the company—and the industry—thinks about AI privacy.
An engineer debugging semiconductor code pasted proprietary source code into ChatGPT. Another shared optimisation algorithms for detecting chip defects. A third fed meeting transcripts into the system to generate professional minutes.
The engineers worked normally, unaware they'd done anything wrong. ChatGPT was helpful. The code suggestions were useful. The meeting minutes looked professional.
But here's the devastating part: by default, ChatGPT retains user input for training unless users opt out, and data already submitted cannot be easily deleted or retrieved. Samsung's trade secrets weren't just temporarily exposed—they were now potentially part of ChatGPT's training data, impossible to claw back from OpenAI's servers.
When Samsung discovered the incidents through internal investigation, their response was swift: an immediate company-wide ban on all generative AI tools.
This wasn't corporate espionage. It wasn't a traditional security breach. It was three professionals using AI to do their jobs better, completely unaware they were publishing their company's intellectual property to the world—permanently.
(Samsung has since evolved their approach, implementing approved AI tools with proper safeguards—showing that transformation is possible when we learn from these moments.)
The Designer's Dilemma
Here's the uncomfortable truth: AI tools are genuinely transformative for design work, but our casual approach to sharing information creates risks we're only beginning to understand.
The problem isn't that AI is dangerous—it's that we're treating these tools like trusted colleagues rather than what they actually are: third-party services with their own data storage, usage policies, and business models.
Every time we interact with AI, we face the same choice: share enough context to get valuable help, but not so much that we compromise privacy. Get it wrong in either direction, and we either waste time on generic advice or accidentally publish confidential information.
This is where the Goldilocks principle becomes our north star—not perfection, but thoughtful progress.
The Three Zones of Information Sharing
Like the fairy tale, we're looking for the approach that's "not too hot, not too cold, but just right." But here's the crucial insight: sometimes there simply isn't a viable middle ground.
Zone 1: Too Cold—Sharing Too Little
"I'm working on improving an onboarding flow that has high drop-off rates. Can you suggest some general best practices for reducing friction in multi-step processes?"
When you're overly cautious, you share so little context that AI can't provide meaningful value. You get generic UX principles you could find in any article—nothing tailored to your specific users, business model, or drop-off patterns. This isn't protecting privacy—it's just getting advice too broad to implement effectively.
Zone 2: Too Hot—Sharing Too Much
"We're designing a new onboarding for our business banking customers as our current onboarding flow has a high drop-off rate. Here is the interactive prototype tested by customers (attached) and here are the complete interview transcripts (attached).
Can you analyse the prototype and review the transcripts?
Please identify all feedback themes, list how many times themes appear across interviews, detail challenges and moments of delight, and provide specific quotes for each theme."
This gives AI everything it needs and far more than necessary. You're exposing customer interview data, proprietary design patterns, business metrics, and potentially identifiable user quotes—all at maximum risk when a more targeted approach could provide similar value.
Zone 3: Just Right—The Two-Step Abstraction Method
The Goldilocks zone requires a systematic approach to sharing just enough information for valuable assistance whilst protecting what matters most.
Level 1: Generic Abstraction. Remove specifics but maintain the essence of your challenge:
- Replace names (companies, products, people) with general terms
- Generalise locations, dates, or proprietary details
- Use relative metrics rather than exact numbers
- Substitute industry-specific terms with understandable alternatives
Level 2: Pattern Abstraction. Focus on methods and transferable principles:
- Clearly identify the type of challenge
- Emphasise "how" and "why" rather than "what" happened
- Frame questions to address approach and methodology
- Ask for best practices rather than specific solutions
Just Right Example:
"I'm analysing user feedback from usability testing of a multi-step business account opening process. Testing revealed significant drop-off at verification stages, with users reporting confusion about document requirements and frustration with the length of the process. Several participants mentioned abandoning the flow when asked for information they didn't have readily available.
What are proven methods for streamlining complex financial onboarding whilst maintaining necessary compliance requirements?
How can I identify which verification steps are truly essential versus those that create unnecessary friction?"
This works because it includes meaningful context about the user problem and business constraints without revealing specific research data, company details, or proprietary design patterns.
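If your team wants to make Level 1 abstraction repeatable, a small pre-flight script can catch the most obvious identifiers before a draft prompt ever leaves your machine. Here is a minimal sketch in Python. The patterns, replacement terms, and no-go keywords are hypothetical examples you would tailor to your own organisation, and the script supplements human review rather than replacing it.

```python
import re

# Hypothetical Level 1 substitutions: swap organisation-specific names and
# exact figures for the generic terms described above. Tailor to your team.
SUBSTITUTIONS = {
    r"\bAcme Bank\b": "a financial services company",      # company name
    r"\bProject Falcon\b": "a new onboarding initiative",  # internal codename
    r"\b\d{1,3}(?:\.\d+)?%": "a high rate of",             # exact percentages
    r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]* \d{4}\b": "a recent month",  # dates
}

# Terms that should stop the prompt entirely (the "no go" scenarios).
# Illustrative only: keyword matching will never catch everything.
BLOCKLIST = ["account number", "date of birth", "transcript", "roadmap"]


def abstract_prompt(text: str) -> str:
    """Apply Level 1 generic abstraction, or refuse if a no-go term appears."""
    lowered = text.lower()
    for term in BLOCKLIST:
        if term in lowered:
            raise ValueError(f"No-go term found ('{term}'): keep this analysis internal.")
    for pattern, replacement in SUBSTITUTIONS.items():
        text = re.sub(pattern, replacement, text)
    return text


if __name__ == "__main__":
    draft = "Acme Bank saw 42% drop-off in Project Falcon during March 2024."
    print(abstract_prompt(draft))
    # -> a financial services company saw a high rate of drop-off in
    #    a new onboarding initiative during a recent month.
```

Read the abstracted version aloud before you paste it anywhere: if it still makes sense to you but would tell an outsider very little, you're probably in the Goldilocks zone.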
When There Is No "Just Right"
Let's be honest about something crucial: for some design activities, there may not be a viable middle ground.
User Research Synthesis presents a perfect example of this dilemma. Even "anonymised" research data carries risks—behavioural patterns, verbatim quotes, and detailed observations can still identify participants or reveal proprietary insights. What seems safely abstracted to us might contain enough detail for AI systems to connect dots we never intended to share.
Other "No Go" Scenarios:
- Analysing customer support transcripts (even without names)
- Reviewing design systems with proprietary visual elements
- Processing any financial or health-related user data
- Evaluating accessibility issues tied to specific user needs
- Synthesising competitive research or market analysis
- Sharing internal company financials, metrics, or strategies
- Sharing product roadmaps or pre-release feature details
In these situations, the responsible conclusion is clear: this is not an appropriate use case for public AI tools. Full stop.
Part of ethical AI use is being honest about its limitations and recognising when no viable middle ground exists.
Sometimes the right answer is to keep sensitive analysis within approved systems or rely on traditional collaborative methods that have served design teams effectively for decades.
Your Privacy-First Framework
When you're tempted to share information with AI, run through this decision process:
The 5-Step Privacy Check
Step 1: Define Your Goal. What specific outcome do you need? Be precise about your objective before considering what to share.
Step 2: Check Organisational Boundaries. Have you checked your organisation's AI policies? Do you understand how your data will be used and stored?
Step 3: Identify Sensitive Elements. Flag personal information, organisational data, and user information in what you're planning to share.
Step 4: Can You Transform? Can sensitive information be safely abstracted whilst preserving the context you need?
- If no: This is not appropriate for external AI tools. Stop.
- If yes: Apply the Two-Step Abstraction Method.
Step 5: Final Risk Assessment. Is the remaining risk justified by the value gained? Would you be comfortable explaining this decision to stakeholders?
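If it helps to make this check harder to skip, the five questions also fit in a tiny script you can run before pasting anything into a public AI tool. This is an illustrative sketch, not a compliance tool: it only records your answers to the steps above and cannot inspect your data.

```python
# A lightweight, interactive walkthrough of the 5-Step Privacy Check.
# Illustrative only: it records your answers, it cannot see your data.

QUESTIONS = [
    ("goal", "Step 1: Can you state the specific outcome you need? (y/n) "),
    ("policy", "Step 2: Have you checked your organisation's AI policy and the tool's data terms? (y/n) "),
    ("sensitive", "Step 3: Does the content include personal, organisational, or user data? (y/n) "),
    ("transform", "Step 4: Can every sensitive element be safely abstracted? (y/n) "),
    ("justified", "Step 5: Is the remaining risk justified, and could you explain it to stakeholders? (y/n) "),
]


def privacy_check() -> bool:
    answers = {key: input(prompt).strip().lower() == "y" for key, prompt in QUESTIONS}

    if not (answers["goal"] and answers["policy"]):
        print("Stop: define your goal and confirm policies before sharing anything.")
        return False
    if answers["sensitive"] and not answers["transform"]:
        print("Stop: this is not an appropriate use case for external AI tools.")
        return False
    if not answers["justified"]:
        print("Stop: the value doesn't justify the remaining risk.")
        return False

    print("Proceed, applying the Two-Step Abstraction Method to anything sensitive.")
    return True


if __name__ == "__main__":
    privacy_check()
```

The point isn't automation; it's friction. Thirty seconds of honest answers is usually enough to catch the prompt you'd regret.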
If You're Reading This at 3am
If you're here because you think you've just compromised everything, forgive yourself for something you didn't know about. I have done it too, and that is why I am sharing what I know.
On a practical note, here's what to do:
- Document what happened - Write down exactly what you shared and when
- Check the platform's data policies - Some AI tools allow deletion requests within certain timeframes
- Inform your team lead or privacy officer - Transparency now prevents bigger problems later
- Review what was actually exposed - Often our 3am fears are worse than reality
Tomorrow isn't ruined. You're not the first, and you won't be the last. But you can be smarter going forward.
In the months since that privacy scare, I've helped teams develop practices that let them use AI thoughtfully without the 3am fears. You can too.
The Reality Check
Here's what nobody talks about: this careful abstraction process might not be sustainable long-term. As AI becomes more integrated into our design tools—built into Figma, embedded in research platforms, woven throughout our workflows—the luxury of careful prompt crafting may disappear.
We're in a transitional moment. The practices I'm advocating buy us time whilst the industry develops better privacy-preserving AI solutions: local models, enterprise systems with proper data governance, and tools designed specifically for sensitive design work.
But right now, with the tools most of us use daily, this Goldilocks approach is our best defence against naïve enthusiasm creating lasting privacy problems.
The designers who will thrive in the AI era won't be those who avoid these tools or those who use them recklessly. They'll be the ones who learn to harness AI's power whilst maintaining the trust that makes great design possible.
Your Goldilocks approach to AI privacy isn't just about avoiding Samsung-style incidents—though it will do that. It's about building the confidence to use AI boldly and appropriately, knowing you're protecting what matters whilst leveraging what helps.
This week, take action:
- Try the 5-step framework with one AI interaction
- Share this article with your team—start the conversation about AI privacy in your next design review
- Document one near-miss or lesson learned
- Tag me in your insights—I'm collecting patterns to help us all navigate better
Because here's what I've learned: no one navigates this transformation alone. Every time we share our near-misses and lessons learned, we make the entire design community stronger.
The future belongs to designers who understand that when we interact with AI, we're not just getting assistance—we're making decisions about trust, privacy, and professional responsibility that define both our work and our industry.
Find your "just right." Then help someone else find theirs.