The problem with social media isn't free speech itself, but algorithms that elevate misinformation for engagement. A targeted solution is to remove Section 230 liability protection *only* for content that platforms algorithmically boost, holding them accountable for their editorial choices without engaging in broad censorship.
Social media algorithms amplify negativity by optimizing for "revealed preference" (what you actually click on, e.g., car crashes). AI models, by contrast, operate on aspirational preference (what you explicitly ask for). This fundamental difference means AI can reflect a more complex and wholesome version of humanity.
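To make the distinction concrete, here is a minimal sketch; the posts, topics, and click scores are all invented for illustration. An engagement-optimizing ranker surfaces whatever you are predicted to click, while a request-driven ranker serves what you explicitly asked for:

```python
# Toy sketch (all data and scoring hypothetical): contrasts an engagement
# ranker, which sorts a feed by predicted clicks regardless of what the
# user says they want, with an aspirational ranker that serves the
# explicitly requested topic.

posts = [
    {"topic": "car crash",  "predicted_clicks": 0.92},
    {"topic": "gardening",  "predicted_clicks": 0.31},
    {"topic": "local news", "predicted_clicks": 0.45},
]

def engagement_rank(feed):
    """Revealed preference: maximize clicks, whatever the content."""
    return sorted(feed, key=lambda p: p["predicted_clicks"], reverse=True)

def aspirational_rank(feed, requested_topic):
    """Aspirational preference: serve what the user explicitly asked for."""
    return [p for p in feed if p["topic"] == requested_topic]

print([p["topic"] for p in engagement_rank(posts)])
# -> ['car crash', 'local news', 'gardening']  (outrage wins on clicks)
print([p["topic"] for p in aspirational_rank(posts, "gardening")])
# -> ['gardening']  (the user gets what they asked for)
```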
Many foreign-based social media accounts promoting extremist views aren't state-sponsored propaganda. Instead, they are run by individuals in developing nations who have discovered that inflammatory content is the easiest way to gain followers and monetize their accounts. This reframes the issue from purely geopolitical influence to include economic opportunism.
There is emerging evidence of a "pay-to-play" dynamic in AI search. Platforms like ChatGPT seem to disproportionately cite content from sources with which they have commercial deals, such as the Financial Times and Reddit. This suggests paid partnerships can heavily influence visibility in AI-generated results.
Beyond data privacy, a key ethical responsibility for marketers using AI is ensuring content integrity: using platforms that provide a verifiable trail for every asset, check for originality, and offer AI-assisted verification of factual accuracy. This protects the brand and builds customer trust.
The value of a large, pre-existing audience is decreasing. Powerful platform algorithms are becoming so effective at identifying and distributing high-quality content that a new creator with great material can get significant reach without an established following. This levels the playing field and reduces the incumbent advantage.
Extremist figures are not organic phenomena but are actively amplified by social media algorithms that prioritize incendiary content for engagement. This process elevates noxious ideas far beyond their natural reach, effectively manufacturing influence for profit and normalizing extremism.
A/B testing on platforms like YouTube reveals a clear trend: the more incendiary and negative the language in titles and headlines, the more clicks they generate. This profit incentive drives the proliferation of outrage-based content; the use of inflammatory headlines is reportedly up 140%.
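A minimal sketch of what such an A/B test looks like in practice, with invented numbers rather than real platform data: two headline variants are shown to equal audiences, and a hand-rolled two-proportion z-test checks whether the incendiary variant's higher click-through rate is statistically real.

```python
import math

# Hypothetical A/B test numbers (invented for illustration, not real data):
# variant A is a neutral headline, variant B an incendiary one.
clicks_a, impressions_a = 480, 10_000   # neutral headline
clicks_b, impressions_b = 720, 10_000   # incendiary headline

p_a = clicks_a / impressions_a
p_b = clicks_b / impressions_b

# Two-proportion z-test: is the difference in click-through rate real,
# or plausibly just noise?
p_pool = (clicks_a + clicks_b) / (impressions_a + impressions_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / impressions_a + 1 / impressions_b))
z = (p_b - p_a) / se

print(f"CTR neutral:    {p_a:.1%}")
print(f"CTR incendiary: {p_b:.1%}")
print(f"lift: {(p_b - p_a) / p_a:+.0%}, z = {z:.1f}")
# A |z| well above 1.96 means the lift is significant at the 5% level:
# the incentive to go negative is directly measurable.
```

With these toy numbers the incendiary headline shows a +50% lift at z ≈ 7, which is exactly the kind of unambiguous signal that makes outrage the rational choice for a click-maximizing publisher.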
The online world, particularly platforms like Twitter (now X), is not a true reflection of the real world. A small percentage of users, many of them bots, generate the vast majority of content. This creates a distorted and often overly negative perception of public sentiment that does not represent the majority view.
When social media reach and engagement decline, it's easy to blame the platform's algorithm. The more productive mindset, however, is to treat the decline as a reflection of your content's falling quality or relevance. The algorithm isn't hurting everyone; it's hurting those whose content isn't good. The solution is to improve your craft, innovate, and adapt to cultural trends.
Before ChatGPT, humanity's "first contact" with rogue AI was social media. These simple, narrow AIs optimizing solely for engagement were powerful enough to degrade mental health and democracy. This "baby AI" serves as a stark warning for the societal impact of more advanced, general AI systems.