Why Substack? Why Now?
Nurturing a healthier media ecology
Hello! Welcome to my Substack newsletter. If you’ve been an email subscriber to my WordPress blog, you’re already signed up. You may unsubscribe at any point.
I wanted to kick things off with a detailed explanation of why I’m starting this newsletter. There are four main reasons:
1) I’m abandoning social media. Practically speaking, this means leaving my Twitter and Instagram accounts in place for now (I deleted Facebook back in October of 2018) but withdrawing my active presence from them—that is, not using them at all. I’m doing this because I’ve concluded that our current media ecology, largely shaped by social platform design decisions that Silicon Valley companies have made over the past decade, is directly responsible for the accelerated way our society is unraveling. If we continue on our current trajectory, in a few short years the damage may be irreversible. I’m convinced that the only way to interrupt this pathology is to take drastic measures now. The hour is already late. If you think I’m exaggerating or being overly dramatic, keep reading.
Back in the early days of social media (circa 2009), Facebook and Twitter, the two major ad-driven social platforms at the time, were competing over who could create the most attention-grabbing chronological newsfeed. But the competition over attention quickly devolved into a competition over who could most effectively exploit human psychological weaknesses in order to maximize engagement. All for profit. For Twitter, Facebook, and other platforms that would follow, this competition led to the development of countless attention-grabbing, dopamine-generating design features as well as highly personalized, algorithmically curated newsfeeds. Combined, all these things have unwittingly resulted in what can be described as the largest psychology experiment ever run on an entire civilization—without informed consent and without an Institutional Review Board regulating it.

I’ll highlight the most significant consequences and aspects of this experiment:
Brain hacking. The psychological hacking that users are continuously subjected to on these platforms is multi-dimensional. It’s mimetic/imitative, perceptual, social-validation-based, and identity-based in nature. Although the social platforms are machines, they function a lot like cults. They’re programmed to continuously identify and exploit human vulnerabilities in order to steer human behavior. Furthermore, being aware that the manipulation exists offers virtually no protection. No amount of vigilance or mindfulness on our part can overcome the neuropsychological hacking power inherent in the platform designs; nor can we outsmart deep-learning machines that constantly play out trillions of simulations.
Manipulation for profit. These platforms don’t merely hack the brains of their users; they collect sensitive personal information about us and sell it to the highest bidder. Think of them saying, “I have information on all these people that will reveal exactly when and how to persuade them to buy your product or to embrace your ideology. Who wants to pay me the most to put a message in front of them?” This is on top of regularly putting content in front of us that artificial intelligence (AI) simulations have shown will generate more engagement. In this way, we’re constantly being subjected to influences that are at odds with our own interests and free will.
Destruction of shared reality and sense-making. Hyper-personalization of newsfeeds has led to individualized, manipulated constructions of reality for literally billions of people, and this result has crippled and deranged our collective ability to see and make sense of reality in shared ways. In terms of societal cohesion, this outcome is probably the most consequential. We saw a particularly dramatic manifestation of it at the nation’s Capitol on January 6th, 2021 and in the various interpretations of the event afterward. Tech philosopher L.M. Sacasas put it this way: “Consider the debate about whether this was a serious coup attempt or whether it was a farce and participants were there mostly for social media points. The answer is simply, ‘yes’… it’s not just that there is widespread disagreement about how to interpret the meaning of an event. It is also that there is widespread disagreement about the basic facts of the event in question.”
Destruction of trust. With the basis for shared reality so heavily undermined, a great many of us have lost our ability to trust anyone who doesn’t share our own very specific version/understanding of reality. When algorithms repeatedly show us evidence of “the other side” lying or being hypocritical or acting evil, it collapses whatever space there is for shared growth. The only things left are grievances, resentments, fears (paranoia), and lines in the sand. Moreover, while these sentiments are accelerated and hardened online, they spill out into shared physical spaces to the extent that even people who aren’t online are now downstream of the effects. My husband and I passed these yard signs a week ago when we walked to a local coffee shop. The people who put up these respective signs in their yards live across the street from one another.
It’s as if internet memes have made the leap from the digitized world into the physical realm. It makes me think of the original Jumanji film (1995) in which people get involuntarily sucked into a game world while creatures within the game unexpectedly burst into the non-game world and wreak havoc.
Becoming indispensable. People join and remain on these social platforms because of both perceived and actual benefits. It’s important to understand that this is by design. These Silicon Valley companies would love nothing more than to become “the obligate infrastructures that manage civilization’s global ‘social organs’”—in other words, to become indispensable for networking, maintaining connections to friends and family, advocacy and activism, overcoming isolation, finding blood/organ/bone marrow matches, reuniting with loved ones, fundraising, disaster relief, promoting books and music and other work, public discourse, etc. In fact, the more good/essential things they offer us, the more likely we are to become dependent on them, to remain in “The Matrix” to be milked for their profit. I believe that to whatever extent we’re able (I acknowledge that not everyone is able), we must resist becoming dependent on these specific digital oligarchs and actively seek to create more ethical alternatives. The harm to civilization and to the generations that will follow us is too great.
2) I want to give readers the best of what I have to offer as a writer. This is very much related to the first reason, but over the past few years, especially this past year during the pandemic, I fell into the habit of venting my ideas on social media almost as quickly as they formed. No matter how many times I reminded myself that my brain was being hacked, no matter how many commitments I made to myself to rein in this behavior, I couldn’t stop. I struggled a lot emotionally last year, and it proved too difficult to resist the easy social validation and instant dopamine fixes available at my fingertips. (Scroll back up and reread my paragraph on brain hacking.) Anyway, my habitual, compulsive posting on social media dissipated my creative energy and prevented me from doing deeper, more enduring work. Replacing that sort of ephemeral content with a biweekly newsletter will enable me to slow down and provide readers with higher quality content in an easy-to-access archive.
3) I’m exploring what it means to tend the digital commons. This reason is also tied to the first one. Since much of my writing touches on points of tension and division in society for the sake of peacemaking, I feel it’s imperative to cultivate discourse on these issues in spaces, whether digital or embodied, that are not attempting to maximize engagement through psychological manipulation. Whatever the various perceptions may be about the efficacy of individual efforts, I’m persuaded that attempting this kind of work on algorithm-driven platforms is ultimately self-defeating, since they’re literally designed to create echo chambers and polarization (again, for the sake of profit). Here’s a visual representation of the liberal and conservative echo chambers on Twitter:
When I first started thinking about walking away from social media, the first thing I thought about was all that I would lose, and the first thing I felt was grief. I’ve experienced many benefits from my engagement on Twitter in particular, not the least of which has been connecting with incredibly thoughtful and curious people. I’ve both given and received a lot on that site. Plus, I doubt I could replicate the scale of my (perceived) reach on Twitter or Instagram through a Substack newsletter. However, there’s something that writer and humanities professor Alan Jacobs wrote that has stayed with me:
Those of us who live much of our lives online are not faced here simply with matters of intellectual property; we need to confront significant choices about the world we will hand down to those who come after us. The complexities of social media ought to prompt deep reflection on what we all owe to the future, and how we might discharge this debt.
He acknowledged that each person is free to choose dependency on platforms like Facebook, Twitter, and Instagram—living off the bounty of the respective CEOs— especially if they feel adequately compensated by the benefits they receive. But he pointed out that such dependency results in a forgetfulness about the reality that there’s another way to live. And only people who learn to carve out another way will be able to pass along to others a different way.
Your present-day social-media ecology eclipses the future social-media ecology of others. What if they don’t want their social lives to be bought and sold? What if they don’t want to live on the bounty of the factory owners of Silicon Valley? It would be good if we bequeathed to them another option, the possibility of living outside the walls the factory owners have built—whether for our safety or to imprison us, who can say? The open Web happens outside those walls.
In the interest of learning another way to live, another way to engage in public discourse, and a healthier mode of web citizenship, I’m leaving the confines of those industrial walls. I’m willing to trade the scale, exposure, and convenience they presently offer for a mere seed, perhaps—one that seems tiny and possibly archaic at the moment—but one that I hope will result in something sustainable, humane, and corrective. I’m inviting others to join me in this endeavor.
4) Finally, I like Substack’s patronage model and the greater flexibility it provides for controlling who sees what content and how comments are moderated.
If all this sounds interesting to you and you’d like to come along for the ride, go ahead and subscribe by clicking the button below.
Hao, K. (2021, March 11). How Facebook got addicted to spreading misinformation. MIT Technology Review. https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/
Weinstein, B. (Host). (2021, February 26). Dark Horse Podcast with Tristan Harris & Bret Weinstein [Video podcast episode]. In DarkHorse Podcast. YouTube.
Cooper, A. (2017, April 9). What is “Brain Hacking?” Tech Insiders on Why You Should Care. 60 Minutes. https://www.cbsnews.com/news/brain-hacking-tech-insiders-60-minutes/
Freed, R. (2018, March 18). The Tech Industry’s War on Kids. Medium. https://medium.com/@richardnfreed/the-tech-industrys-psychological-war-on-kids-c452870464ce
Harris, T. (2017, April 10). The Eyeball Economy: How Advertising Co-Opts Independent Thought. Big Think. https://bigthink.com/videos/tristan-harris-the-attention-economy-a-race-to-the-bottom-of-the-brain-stem
Ng, A. (2017, May 1). Facebook lets advertisers target insecure teens, says report. CNET. https://www.cnet.com/news/facebook-advertise-insecure-teens-leaked-documents/
There are 2.8 billion people on Facebook, 1 billion people on Instagram, and 353 million people on Twitter.
Sacasas, L. M. (2021, January 15). The Insurrection Will Be Live Streamed: Notes Toward a Theory of Digitization. The Convivial Society (Substack). https://theconvivialsociety.substack.com/p/the-insurrection-will-be-live-streamed
Harris, T. (2020, January 8). Unregulated Tech Mediation → Inevitable Online Deception → Societal Harm. Written Statement prepared for a Congressional Hearing. House Committee on Energy & Commerce. https://energycommerce.house.gov/sites/democrats.energycommerce.house.gov/files/documents/010820%20CPC%20Hearing%20Testimony_Harris.pdf
Yiu, Y. (2020, March 18). Visualizing Twitter Echo Chambers. Inside Science. https://www.insidescience.org/news/visualizing-twitter-echo-chambers