Enron email dataset exposes privacy risks with AI language models

The New York Times

A visualization of the Enron email dataset highlighted privacy risks in AI language models such as ChatGPT. Researchers extracted personal email addresses from GPT-3.5 Turbo, circumventing OpenAI's safeguards by using the fine-tuning API to unlock access to sensitive information. OpenAI's secrecy about its training data and its limited privacy protections raise concerns. Because large language models continue to learn from new data, they pose ongoing privacy risks. The Enron email dataset was used to train AI models, underscoring these vulnerabilities.


With a significance score of 6.1, this news ranks in the top 0.2% of today's 31,322 analyzed articles.
