OpenAI has claimed that the creation of ChatGPT would have been impossible without the use of copyrighted material to train its algorithms. It is no secret that AI has relied heavily on social media content. Indeed, AI has become the name of the game for social media platforms.
Now, LinkedIn is using its users’ resumes to tweak its AI model, and Snapchat says that users who try a certain AI feature may see their likeness appear in the platform’s ads. Users’ social media posts and pictures are now being used to train AI systems, and most users are unaware of this.
Social media platforms are the perfect training ground for AI systems
Companies running AI models want their systems to feel as “natural” as possible, and conversational ability in particular is a priority. This is especially true of AI chatbots.
Social media platforms offer the perfect training ground for these models, given that most of the content on them is human-made. This means AI systems can access everyday speech and be constantly updated on world events. This is crucial for producing reliable and valid systems.
Nonetheless, one should keep in mind that AI corporations are training their systems for free using our content. Our vacation photos, birthday selfies, and cute posts are in fact being used for profit.
Users, however, can opt out of these services. How to do so varies from platform to platform, but there is no guarantee that one’s content will always be safe, as third parties do have access to content posted online.
This past week, the United States Federal Trade Commission (FTC) revealed that social media platforms do a poor job of regulating their use of user data. It has come to light that some of the major platforms are using the data they collect to train AI systems.
In the case of LinkedIn, the company has said that user content on the platform may be used by LinkedIn or its affiliates to train AI, though it aims to redact or remove personal data from its training data sets. LinkedIn users can opt out by going to Settings & Privacy and opening the Data Privacy tab, but the platform notes that opting out will not affect the data it has already collected from you.
Social media platform X is an interesting case as well. Unlike LinkedIn, X is using user data and user posts to train its chatbot, Grok, which has also been covered in previous stories for how off the rails it has been.
Elon Musk’s social media platform has explained that its startup xAI uses people’s posts on X and their conversations with Grok to improve the chatbot’s ability to provide “accurate, relevant, and engaging responses.” Allegedly, this is done to help the bot develop a “sense of humor” and wit.
If you want to opt out of this service, go to the Data Sharing and Personalization tab under Privacy and Safety in settings. Then, in the “Grok” tab, simply uncheck the box that allows the platform to use your data for AI purposes.
Whatever the case might be, users should stay mindful of how their online content might be used by AI companies for training.