Key Points:
- LinkedIn, owned by Microsoft, will automatically opt U.S. users into a data collection plan to train its generative AI models starting November 20, 2024.
- Users in the EU, EEA, and Switzerland are exempt due to stricter privacy regulations.
- The move has sparked criticism over the automatic opt-in process and lack of user transparency.
LinkedIn is under fire after announcing a significant update to its terms of service that will automatically opt U.S. users into a data collection program to train its generative AI models. Beginning November 20, 2024, the platform will use member data to power AI content generation and deliver more personalized services.
User Backlash and Privacy Concerns
Reports from 404 Media indicate that users in the European Union, European Economic Area (EEA), and Switzerland are exempt from this automatic opt-in, likely due to those regions' stringent data privacy regulations. This has left U.S. users questioning the fairness and transparency of LinkedIn's approach.
The updated privacy policy has stirred significant discontent, with many users expressing frustration over the lack of clear communication. Critics argue that LinkedIn should have sought explicit, informed consent rather than enrolling users automatically.
LinkedIn asserts that it employs privacy-enhancing technologies to anonymize or remove personal data from its AI training datasets. However, this explanation has not fully alleviated users’ concerns about the lack of control over their data.
In response to the backlash, LinkedIn has given users the option to manually opt out of the data collection. By navigating to the "Data Privacy" section in settings, users can disable the "Use my data for training content creation AI models" option. However, the company clarified that opting out does not affect data already used for AI training.
Europe’s Tougher Stance on AI Data Usage
LinkedIn’s decision to exclude European users from this data collection reflects the strict enforcement of data privacy regulations in the region. Several tech giants have faced similar challenges when trying to use EU citizens’ data for AI purposes.
For instance, in June 2024, Meta paused its plans to use public data from Facebook and Instagram to train AI models after engaging with the Irish Data Protection Commission (DPC). Likewise, X was recently forced to stop processing EU and EEA users' personal data for its AI chatbot, Grok, following legal pressure from the DPC.
This ongoing debate underscores the growing importance of transparency and consent in the era of AI-driven data usage.