Understanding privacy and AI in health tech

Artificial intelligence is changing how people interact with their health benefits, making it easier, faster, and more personalized. But as AI becomes more embedded in health navigation tools, benefits leaders are asking important questions, especially when it comes to privacy.
We sat down with our Information Security team to answer some of the most common questions Castlight Health hears from employers about AI in health tech. It’s not a technical deep dive; it’s a plainspoken, practical overview designed to help you understand how AI is shaping health tech and digital health navigation and what Castlight is doing to protect privacy every step of the way.
What does AI even mean?
It’s important to define some common terms swirling around AI, especially since “AI” is increasingly being used as a catchall term.
Artificial intelligence (AI): AI is a computer’s ability to do tasks that usually require human thinking, like learning, problem-solving, or understanding language. The term encompasses several different technologies, like machine learning and natural language processing, which help machines to improve over time.
Machine learning: Machine learning is a subfield of artificial intelligence that focuses on developing algorithms that allow computers to learn from and make predictions based on data. It also enables machines to improve their performance over time as they are exposed to more data.
Generative AI: Generative AI creates new, original content (like text, images, or videos) in response to user prompts by learning from existing data patterns. ChatGPT, for example, uses generative AI.
Agentic AI: Agentic AI refers to AI systems that act on their own to achieve goals, making decisions and adapting with little or no human input. Unlike reactive tools, these systems are proactive and goal-driven, capable of complex, multi-step problem-solving. Self-driving cars that make real-time driving decisions without human intervention are an example.
How does Castlight use AI in health tech?
While people may casually call it “AI,” at Castlight we use machine learning to estimate care costs, identify care gaps, and improve navigation based on general patterns, not individual member profiles. As one of the first digital health companies to invest in machine learning more than 10 years ago, we’ve spent the past decade training our systems to aggregate data and notice patterns from provider, health plan, claims, and procedural information to better identify member needs and predict care costs.
At Castlight, we’re committed to transparency. We’ve been using machine learning responsibly for years, and we understand the stakes when it comes to healthcare data. Our goal is to explain how these tools work in practice and, most importantly, what we do to protect privacy.
What privacy questions or concerns do you hear most from clients about AI?
At a high level, clients want to know: Is our data safe? At Castlight, the answer is yes.
Health data is deeply personal and heavily regulated. That makes data privacy in health tech more complex and more important than in other sectors. Both our clients (companies, including large, highly regulated Fortune 500 companies) and our members (company employees) trust us with sensitive information, and we take that responsibility seriously.
This is where our long history of using machine learning and our “Privacy by Design” principles come into play. We build safeguards into our systems from the ground up, not as an afterthought. That includes:
- Clear data separation: Employers see only aggregated, anonymized data, never individual member details unless a member opts in (like for a wellness challenge).
- Opt-in transparency: Members control what personal information is stored in the Castlight app and whether and what they share with others, such as ecosystem partners for specialized care or imported data from fitness trackers.
- Adhering to industry regulations: We follow all major healthcare regulations, including the Health Insurance Portability and Accountability Act (HIPAA) as well as the General Data Protection Regulation (GDPR), designing our systems with privacy and security built in from the start.
- Platform configuration: We can structure our solutions to meet your organization’s unique, internal data and privacy requirements.
- Data minimization and consent: We collect minimal personal data and always require explicit, informed consent, ensuring compliance with data protection laws while optimizing data-driven solutions.
- Anti-fraud protections: Our system scans for unusual logins and bad actors to protect members’ access and information.
- No data selling: Ever. Period.
This structure respects the line between the employer-employee relationship and the Castlight-member relationship.
Do Castlight’s machine learning models learn and store member information?
No.
Castlight’s machine learning models do not “learn” from individual member data in the way many people imagine, like a chatbot storing information or a social media platform tracking behavior over time to resurface later. We do not train our AI models on member-specific data. Instead, our machine learning models are built using general healthcare patterns, not personal health histories.
We also ensure that our machine learning models are transparent and auditable to support trust and compliance, especially with sensitive health data. We work closely with regulatory bodies to make sure that our systems meet the highest ethical and operational standards.
How do you give employers the data they need without exposing member-level information?
We provide powerful analytics, including usage trends, program performance, and engagement data, and it’s always aggregated and anonymized unless the member agrees otherwise.
This gives benefits leaders insight into what’s working while keeping individual health information private.
Can you give an example of how Castlight protects privacy while still delivering personalization?
Sure. Let’s say our platform sees that a member is overdue for a preventive screening. The system can surface that reminder inside the app and suggest relevant resources the member can then choose whether or not to pursue, all without sharing that data with the employer or storing it in an external profile.
Another example: Our cost prediction models estimate provider charges using public and plan data, not individually identifiable claims. The member gets helpful information, but their data stays protected.
What should benefits leaders ask vendors who are using AI?
Here are a few smart questions to ask:
- Are you using AI to generate or personalize content?
- Are your models trained on member-level data?
- What controls are in place to prevent unintended use of data?
- Can members opt out of certain experiences?
- Do you sell or share user data with third parties?
If a vendor can’t clearly and simply explain how they use AI or how they protect data, that’s a red flag.
What’s next for AI and privacy in health navigation?
The AI landscape is evolving quickly. We see a future where opt-in AI tools can help answer questions after hours (like questions about claims or relevant provider options), support care navigation, and summarize doctor visits. But with every innovation, we’ll continue to lead with privacy and clarity.
New regulations and rising client expectations will also shape the conversation, and that’s a good thing. Castlight is ready.
What’s the bottom line for benefits leaders?
AI is a powerful tool, but it must be used responsibly and usefully, especially in healthcare. At Castlight, we’re not jumping on the AI bandwagon simply because every company suddenly feels the need to say it uses AI. We’ve been building smart, secure, privacy-first tools that provide real value for both our clients and members for more than a decade.
We’re here to help you navigate the future safely and with confidence.
Learn more about privacy, security, and compliance at Castlight in our Trust Center. Have questions? Request a demo here to learn more.