In 2025, social networks are expected to "shed their skin" and return to their true purpose: serving as "platforms for the common good of society," or prosocial media. That, at least, is the trend technology news sites are predicting, and it also seems to be the wish of many users who are fed up with the chaos of today's internet.
In terms of content, today's mainstream social media platforms are little more than mirrors of one another: they offer the same features and often even the same content. That lack of innovation has drained social media of its novelty.
In terms of impact, anyone who uses social media by now is familiar with its harmful effects on mental health. A growing body of psychological and sociological research has shown that time spent on platforms like Instagram and TikTok increases the risk of anxiety, depression, negative self-image, and low self-esteem. These findings now circulate so widely online that they read like common knowledge.

The good news is that many optimists believe social media users will learn to save themselves. Many young people are already asking what their lives, personal health, emotions, and mental well-being would look like without social media. In 2023, the technology research firm Gartner predicted that 50% of users would abandon or significantly reduce their social media use by 2025. Judging by current online trends, writer Jessica Byrne of thred.com believes this prediction is likely to come true. Of course, millions of users will not delete their accounts overnight; the change will begin with users no longer interacting continuously on the platforms. Byrne believes Generation Z (born between 1996 and 2012) will lead this shift.
Longing for a reality they never experienced, a "pre-internet" world, Gen Z is reviving hobbies that faded once life moved online. Young people are joining running clubs and book groups organized through social media, finding ways to connect with their peers beyond liking and sharing posts, and searching for meaning in life. Their curiosity gives them an intrinsic drive to seek out new experiences.
Writing in Wired, Audrey Tang calls this shift prosocial media: media that not only captures users' attention but also promotes mutual understanding among them, empowers all voices, and cultivates the ability to listen across differences. One of the first steps social networks themselves have taken in recent years is a feature that lets people collectively add context (Community Notes) to potentially misleading posts.
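The core idea behind collective-context features like Community Notes is "bridging": a note is surfaced only when raters who usually disagree both find it helpful, rather than when a simple majority does. The toy sketch below illustrates that idea only; the function name, the two-cluster model, and the threshold are all illustrative, not the platforms' actual ranking algorithm.

```python
# Toy sketch of "bridging-based" ranking, the idea behind collective
# context features: a note is shown only if it is rated helpful by
# users from BOTH of two clusters that normally rate differently.
# All names and thresholds here are illustrative assumptions.

def note_is_shown(ratings, threshold=0.5):
    """ratings: list of (viewpoint, helpful) pairs; viewpoint is 'A' or 'B'."""
    def helpful_share(viewpoint):
        votes = [helpful for vp, helpful in ratings if vp == viewpoint]
        return sum(votes) / len(votes) if votes else 0.0
    # Require support from both clusters, not just an overall majority.
    return helpful_share("A") >= threshold and helpful_share("B") >= threshold

# Loved by one side, rejected by the other: not shown,
# even though 3 of 5 raters overall called it helpful.
one_sided = [("A", True), ("A", True), ("A", True), ("B", False), ("B", False)]
# Found helpful across both clusters: shown.
bridging = [("A", True), ("A", True), ("B", True), ("B", False), ("B", True)]

print(note_is_shown(one_sided))  # False
print(note_is_shown(bridging))   # True
```

The point of the design is that it rewards context both sides can accept, which is exactly the "promoting mutual understanding" Tang describes.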
In Taiwan, Cofacts, a crowdsourced fact-checking platform, takes this concept further by empowering users to add context to information shared in private groups. Launched in 2017 by the g0v civic-tech community, the platform was successfully rolled out in Thailand in 2019. Research from Cornell University found that Cofacts processed misinformation queries faster and more accurately than professional fact-checking sites.

Prosocial media also tackles the concentration of control in the hands of a few tech giants through decentralized social media protocols that let content flow between different platforms. Last year, for example, Meta's Threads joined the Fediverse, a network of interoperable platforms that includes Mastodon and WordPress; Threads users can now follow accounts and have their posts seen on other social networks. In February, another decentralized platform, Bluesky, initially funded by Twitter founder Jack Dorsey, opened to the public.

Decentralization promises a more democratic online space in which people have more control over their data and their experience, something users increasingly value. A study at the University of Cincinnati found that this is a major reason users decide to join a decentralized network like Mastodon. Of course, all of this is still speculation; everyone has a million reasons to stay on social media. But it is entirely possible that these changes will take hold in 2025 and last at least until the next big thing comes along.
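The "content flowing between platforms" that the Fediverse enables starts with a standard discovery step: any server can resolve a handle like @user@mastodon.social via WebFinger (RFC 7033) and then fetch that user's ActivityPub actor document. The sketch below builds only the WebFinger lookup URL; a real client would then make the HTTP request and follow the "self" link in the response.

```python
# Minimal sketch of the first step of Fediverse interoperability:
# turning a handle such as @user@example.social into the standard
# WebFinger (RFC 7033) lookup URL that any federated server exposes.

def webfinger_url(handle):
    """Build the WebFinger lookup URL for a Fediverse handle."""
    user, domain = handle.lstrip("@").split("@")
    return (f"https://{domain}/.well-known/webfinger"
            f"?resource=acct:{user}@{domain}")

print(webfinger_url("@Gargron@mastodon.social"))
# https://mastodon.social/.well-known/webfinger?resource=acct:Gargron@mastodon.social
```

Because every compliant server answers at the same well-known path, a Threads or Mastodon user can be found and followed from any other platform in the network, which is what makes the federation feel like one space.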
Meta's "AI users" will also have profile pictures, introduce themselves, and post and share AI-generated content on two social networks with a combined 5 billion users worldwide. A future in which humans interact with algorithms wearing human faces has arrived. The move is reportedly meant to boost engagement and retain young users: Connor Hayes, vice president of generative AI products at Meta, said the company's top priority over the next two years is to make its applications "more entertaining and engaging," including by making interactions with AI more social. Meta's bet on AI is understandable, but in an era when AI-generated content is already so pervasive that real and fake are hard to tell apart, Mark Zuckerberg's vision of humans socializing with AI only deepens the worry.
"Without strong safeguards, platforms risk amplifying false narratives through AI-driven accounts," Becky Owen, global head of marketing and innovation at the creative agency Billion Dollar Boy, told the Financial Times. Owen, formerly head of creator innovation at Meta, stressed that while AI characters could become a "new creative entertainment format," they also risk flooding platforms with low-quality content, undermining the creative value of human producers and eroding user trust. "Unlike human creators, AI characters do not have human life experiences, emotions, or the capacity for empathy," she added.

In fact, over the past few years the internet has been flooded with low-quality AI-generated content posted everywhere to attract engagement. Analysts have a word for it: slop, a cousin of spam. Slop is low-quality, AI-generated content, both text and images, whose primary purpose is to attract advertising revenue and improve search engine rankings. AI may help build a better future, but first we will have to filter out the junk it produces. The "social media for society" movement will also struggle to get far if AI bots without humanity or emotion are everywhere. What the world needs now, as CNET tech reporter Katelyn Chedraoui puts it, is a better AI labeling system. Some flagging and warning measures exist, such as "AI content" tags or watermarks on photos, but they are not enough.
In the age of exploding AI content, everyone must learn to protect themselves and sharpen their skills at spotting AI-generated material. But as AI keeps improving, even experts will struggle to assess images accurately. What is worrying, according to Chedraoui, is that making labels more visible sits at the bottom of many AI companies' priority lists. "2025 should be the year we develop a better system for recognizing and labeling AI images," she urges.
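One direction for the labeling system Chedraoui calls for is machine-readable provenance attached to the file itself (loosely the approach of "content credentials" standards such as C2PA), which a platform can translate into a user-facing label. The sketch below assumes a hypothetical metadata layout; the field names are illustrative and not an actual C2PA schema.

```python
# Hedged sketch of provenance-based labeling: a platform reads a
# provenance record attached to an image and maps it to a user-facing
# label. The "provenance" dict and its field names are hypothetical
# illustrations, not a real metadata standard.

def label_for(metadata):
    """Return a user-facing label for a piece of content, or None."""
    provenance = metadata.get("provenance", {})
    if provenance.get("generator_type") == "ai":
        tool = provenance.get("tool", "an AI tool")
        return f"AI-generated (made with {tool})"
    if provenance.get("edited_with_ai"):
        return "Edited with AI"
    return None  # no provenance data: the hard, common case today

photo = {"provenance": {"generator_type": "ai", "tool": "ImageGen"}}
print(label_for(photo))  # AI-generated (made with ImageGen)
```

The last branch is the real problem: provenance only works if generators attach it and platforms preserve it, which is why stripped or absent metadata leaves users guessing.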