We scan new podcasts and send you the top 5 insights daily.
While storing audio could be valuable for training models, Granola only stores transcripts. This preempts user fears of their voice data being misused or held against them, signaling a commitment to privacy over data hoarding.
Despite processing 15 million clinical charts, Datycs doesn't use this data for model training. Their agreements explicitly respect that data belongs to the patient and the client—an ethical choice that prevents them from building large, aggregated language models from customer data.
Microsoft's case management AI avoids training directly on private customer data. Instead, it operates on a "bring your own knowledge" model, using only the knowledge articles and resources explicitly provided by the customer. This approach sidesteps major privacy and data governance concerns common in enterprise AI adoption.
Unlike bots that join calls, Granola records audio at the OS level. This makes it universally compatible and positions it as a private tool, like Voice Memos, placing the onus of disclosure on the user, not the software.
As AI personalization grows, user consent will evolve beyond cookies. A key future control will be the "do not train" option, letting users opt out of their data being used to train AI models, presenting a new technical and ethical challenge for brands.
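A "do not train" control like the one described could be enforced at the data-pipeline level, so opted-out interactions never reach a training set. The sketch below is illustrative only; the `ConsentRecord` type and field names are assumptions, and the conservative default (no consent record means no training) is a design choice, not an established standard.

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    """Hypothetical per-user consent settings."""
    user_id: str
    allow_personalization: bool = True
    allow_training: bool = True  # the "do not train" opt-out

def collect_training_examples(events, consents):
    """Keep only events from users who have not opted out of training.

    Users with no consent record default to excluded (opt-in posture).
    """
    return [
        e for e in events
        if consents.get(e["user_id"],
                        ConsentRecord(e["user_id"], allow_training=False)).allow_training
    ]
```

Filtering at ingestion, rather than at training time, means an opt-out user's data never accumulates in a corpus that later has to be scrubbed.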
To lower the activation energy for user adoption, OpenAI has deliberately committed not to use data connected to ChatGPT Health to train its foundation models. This strategic choice removes the tension between privacy and utility, assuring users that their sensitive information is not repurposed and building the trust necessary for scaled impact in the healthcare domain.
By running AI models directly on the user's device, the app can generate replies and analyze messages without sending sensitive personal data to the cloud, addressing major privacy concerns.
Tools like Granola.ai offer a key advantage by recording locally without joining calls. This privacy, combined with the ability to search across all meeting transcripts for specific topics, turns meeting notes into a queryable knowledge base for the user, rather than just a simple record.
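The "queryable knowledge base" idea reduces, at its simplest, to an inverted index over transcripts: each word maps to the set of meetings that contain it. This is a minimal sketch, not how Granola actually implements search; the class and method names are invented for illustration.

```python
from collections import defaultdict

class TranscriptIndex:
    """Naive inverted index over locally stored meeting transcripts."""

    def __init__(self):
        self._index = defaultdict(set)   # token -> set of meeting ids
        self._meetings = {}              # meeting id -> full transcript

    def add(self, meeting_id, transcript):
        self._meetings[meeting_id] = transcript
        for token in transcript.lower().split():
            self._index[token.strip(".,!?")].add(meeting_id)

    def search(self, query):
        """Return ids of meetings containing every query term."""
        tokens = [t.lower() for t in query.split()]
        if not tokens:
            return []
        hits = set.intersection(*(self._index.get(t, set()) for t in tokens))
        return sorted(hits)
```

Because the index lives on the user's machine alongside the transcripts, search works without any transcript leaving the device, which is the privacy property the insight highlights.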
The founder suggests that AI systems should mimic human forgetfulness. Having an agent's memory fidelity drop off over time could be a key feature, naturally "diffusing" sensitive information from old transcripts or emails, making the system safer and more aligned with social norms.
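One way to model this kind of human-like forgetting is an exponential half-life on memory fidelity, dropping items once retention falls below a floor. The functions below are a speculative sketch of the idea, not the founder's actual design; the half-life and floor values are arbitrary assumptions.

```python
def retention(age_days, half_life_days=30.0):
    """Fraction of detail retained after age_days, with exponential decay."""
    return 0.5 ** (age_days / half_life_days)

def recall(memories, today, floor=0.1):
    """Drop memories whose retention has decayed below the floor.

    A fuller version might instead summarize fading memories more
    coarsely as retention falls, rather than deleting them outright.
    """
    return [m for m in memories if retention(today - m["day"]) >= floor]
```

With a 30-day half-life, a transcript detail from three months ago retains under 10% fidelity and simply stops surfacing, which is the "diffusing" behavior the insight describes.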
For AI to function as a "second brain"—synthesizing personal notes, thoughts, and conversations—it needs access to highly sensitive data. This is antithetical to public cloud AI. The solution lies in leveraging private, self-hosted LLMs that protect user sovereignty.
Instead of tackling complex knowledge work, Granola focuses on perfecting menial tasks. This avoids the common failure mode of AI assistants that are "almost" right but ultimately useless, building user trust through consistent, reliable performance on lower-stakes jobs.