‘AI Slop’ Warning
Asst. Prof. Dr. Sarphan Uzunoğlu, a faculty member in the Department of New Media and Communication at Izmir University of Economics (IUE), has warned that it is becoming increasingly difficult to tell whether content spreading on social media is real. Stating that ‘AI slop’ (AI-generated pollution), in which low-quality videos, images, and audio are rapidly produced and shared, could negatively affect children and young people, Asst. Prof. Dr. Uzunoğlu said: “Dependence on high-engagement content is increasing. The perception of reality and attention spans are eroding. What is true, important, or reliable is becoming increasingly blurred for users.”
According to the Digital 2025 Turkey Report, prepared by We Are Social and Meltwater, the number of active internet users in Turkey has reached 77.3 million. While Turkey ranks among the countries where internet use is most widespread, interest in social media is growing by the day. As of 2025, the number of social media users in Turkey has approached 60 million.
SCREEN TIME IS RISING
With time spent in front of screens and on social media rising rapidly, Asst. Prof. Dr. Sarphan Uzunoğlu warned that users must be wary of misleading content produced by artificial intelligence. Noting that such content can be spread for ‘manipulative’ purposes, especially on social media platforms, Asst. Prof. Dr. Uzunoğlu offered advice to young users aged 10–20, as well as to families and to companies wishing to maintain their credibility.
“PRESSURE FROM TWO DIRECTIONS”
Stating that ‘AI slop’ content is not a problem created by users or content creators alone, Asst. Prof. Dr. Uzunoğlu said: “Today, social media platforms reward a piece of content’s potential for circulation rather than its accuracy. Low- and medium-quality, quickly consumed, emotion-triggering content becomes visible for exactly this reason. In the coming period, platforms will face pressure from two directions: on one hand, regulation and public pressure will increase; on the other, dependence on high-engagement content will continue to sustain their business models. This dual structure makes the platforms’ ‘neutral intermediary’ narrative unconvincing. The issue, therefore, is not just that AI is producing more content; it is the editorial decisions about what platforms choose to highlight.”
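To make the ranking claim concrete, here is a deliberately simplified sketch in Python. Every field name, weight, and the scoring formula itself are hypothetical illustrations, not any platform’s actual system; the point is only to show how a feed that scores posts purely by predicted engagement will surface emotion-triggering content regardless of its accuracy.

```python
# Illustrative sketch only: the fields and weights below are invented
# to show the structural point, not taken from any real platform.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_shares: float     # hypothetical engagement forecast
    emotional_intensity: float  # 0.0 (neutral) to 1.0 (outrage/awe)
    accuracy: float             # 0.0 (false) to 1.0 (verified)

def engagement_score(post: Post) -> float:
    # Note what is *absent*: accuracy never enters the score.
    # Circulation potential is the only thing being rewarded.
    return post.predicted_shares * (1.0 + post.emotional_intensity)

feed = [
    Post("Careful fact-check of a viral claim", 120, 0.1, 0.95),
    Post("AI-generated shock video", 900, 0.9, 0.05),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):8.1f}  {post.title}")

# The low-accuracy, high-emotion post ranks first: the editorial
# choice described above is encoded in the scoring function itself.
```

In this toy feed the shock video outranks the fact-check by more than an order of magnitude, which is the “rewarding circulation rather than accuracy” dynamic the quote describes.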
“BUILT ON RAPID RESPONSE”
Asst. Prof. Dr. Uzunoğlu continued: “One of the fundamental problems for users is the loss of the time, attention, and mental distance required to distinguish real content from fake. Digital platforms call on the user to react quickly rather than to think. The constant flow of content, notifications, suggestions, and trend lists turns the user’s relationship with content into reflexive consumption rather than a conscious process of evaluation. In this environment, the ability to distinguish is not weakening; the need to distinguish is being suppressed.”
“REALITY IS CONFUSED WITH VISIBILITY”
Stating that this situation could cause many negative effects on children and youth, Asst. Prof. Dr. Uzunoğlu said: “For young users encountering fast, emotional, and often manipulative content, what is ‘true’, ‘important’, or ‘reliable’ becomes increasingly uncertain. In an environment where algorithms prioritize content that generates the most engagement, reality is often confused with visibility. An implicit learning process operates where what is most viewed is seen as true, and what is most shared is seen as valuable. In the long run, this can lead to the weakening of critical thinking and the normalization of a superficial perception of the world. Additionally, because children and youth grow up within a constant attention economy, modes of thinking that require patience, depth, and context are pushed to the background. The expectation that everything should be short, fast, and ‘fun’ can negatively impact learning processes and emotional resilience.”
“REGULATIONS CAN BE MADE”
Stating that families have important duties in this process, Asst. Prof. Dr. Uzunoğlu said: “We should not approach the issue solely with a reflex of banning and controlling. Isolating children from the digital world is neither possible nor a healthy solution. The real need is to talk with children about content: not just what they watch, but why it appeared in front of them, how it makes them feel, and what its purpose is. Such conversations strengthen children’s ability to distance themselves from and question the content they encounter online. Certain regulations can help here. Making platform recommendation systems transparent for children, implementing age-sensitive algorithms, and requiring companies to have accountable representatives at the national level are not censorship; they are a requirement of public responsibility.”
‘TRUST’ WARNING TO COMPANIES
Pointing out that companies have also begun taking measures against misleading content, Asst. Prof. Dr. Uzunoğlu stated: “At first glance, it seems positive that companies are adopting measures such as labeling, warnings, verification, or watermarking for AI-generated content. However, depending on how, for what purpose, and how transparently these measures are applied, they can undermine trust instead of building it. Reflexive, heavy-handed interventions against AI risk dragging the user into an environment of constant alarm rather than solving the problem structurally. The critical point is that the measures companies take must not turn into an alarm system that offloads all responsibility onto the user. A ‘we warned you, the rest is up to you’ approach does not solve the problem; it merely leaves the burden on the individual. To protect user trust, platforms must not only label content but also explain why and how that content is put into circulation and the logic by which it is highlighted. Trust cannot be built without transparency.”