Microsoft has uncovered a privacy vulnerability in AI chatbots, dubbed Whisper Leak, that can expose the topic of a conversation even when the traffic is encrypted. The attack is a side channel: it exploits the pattern of data flowing between users and AI services, much like inferring a silhouette through a frosted window. The streaming feature that makes chatbot replies feel conversational, delivering each response word by word rather than all at once, is precisely what leaks the information.

The research, led by Jonathan Bar Or and Geoff McDonald together with the Microsoft Defender Security Research Team, shows that an attacker who can observe the encrypted traffic needs only the size and timing of the data packets to make educated guesses about the conversation topic, with reported accuracy above 98%. The attack also improves with exposure: the longer an adversary monitors conversations, the more training examples its detection software accumulates and the more reliable its guesses become.

Major AI providers, including OpenAI, Microsoft, and Mistral, have since deployed a mitigation: appending random sequences of filler text to responses, which disrupts the size-and-timing pattern the attack depends on.

To reduce exposure, Microsoft recommends avoiding sensitive topics on public Wi-Fi, using a VPN, checking whether a provider has deployed Whisper Leak protections, and weighing the security of the network before discussing confidential matters. The findings underline a broader lesson for AI security: encryption protects the content of a conversation, but not its patterns, and both must be addressed for genuine privacy.
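To make the side channel concrete, the sketch below shows the kind of features such an attack could extract from an encrypted token stream: per-packet sizes and inter-arrival gaps. This is an illustrative reconstruction, not Microsoft's code; the feature set and the example packet trace are assumptions.

```python
# Hypothetical sketch of the Whisper Leak side channel: an observer cannot
# read encrypted packets, but can still measure their sizes and timing.
# A classifier trained on such feature vectors is what achieves the
# reported topic-guessing accuracy; here we only show feature extraction.

def extract_features(packets):
    """packets: list of (timestamp_seconds, size_bytes) for one streamed
    response. Returns a small feature vector an attacker's classifier
    could consume."""
    sizes = [size for _, size in packets]
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(packets, packets[1:])]
    return {
        "packet_count": len(sizes),
        "total_bytes": sum(sizes),
        "mean_size": sum(sizes) / len(sizes),
        "mean_gap": sum(gaps) / len(gaps) if gaps else 0.0,
    }

# Example: five encrypted chunks of one response (times in s, sizes in bytes).
stream = [(0.00, 152), (0.12, 161), (0.25, 149), (0.39, 170), (0.50, 158)]
features = extract_features(stream)
```

Note that nothing here decrypts anything: every input is metadata visible to any on-path observer, which is why TLS alone does not close the channel.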
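The mitigation the providers deployed can be sketched in the same spirit: pad each streamed chunk with a random-length run of filler characters before encryption, so wire sizes no longer track token lengths. The wrapper format and field name below are illustrative assumptions; the exact obfuscation scheme is provider-specific.

```python
# Hypothetical sketch of the random-padding mitigation: identical tokens
# produce differently sized ciphertexts, breaking the size pattern the
# Whisper Leak classifier relies on.
import json
import secrets
import string

def pad_chunk(token: str, max_pad: int = 32) -> bytes:
    """Wrap a streamed token with a random-length filler field ("p" is an
    invented name) and serialize it for transmission."""
    pad_len = secrets.randbelow(max_pad + 1)
    filler = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    return json.dumps({"token": token, "p": filler}).encode()

# Two identical tokens now almost always differ in wire size:
a = pad_chunk("hello")
b = pad_chunk("hello")
```

The trade-off is modest bandwidth overhead in exchange for decoupling observable packet sizes from the underlying text, which is exactly the pattern the attack needs.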