A timely call-to-arms from a Silicon Valley pioneer.
You might have trouble imagining life without your social media accounts, but virtual reality pioneer Jaron Lanier insists that we’re better off without them. In Ten Arguments for Deleting Your Social Media Accounts Right Now, Lanier, who participates in no social media, offers powerful and personal reasons for all of us to leave these dangerous online platforms.
Lanier’s reasons for freeing ourselves from social media’s poisonous grip include its tendency to bring out the worst in us, to make politics terrifying, to trick us with illusions of popularity and success, to twist our relationship with the truth, to disconnect us from other people even as we are more “connected” than ever, and to rob us of our free will with relentless targeted ads. How can we remain autonomous in a world where we are under continual surveillance and are constantly being prodded by algorithms run by some of the richest corporations in history that have no way of making money other than being paid to manipulate our behavior? How could the benefits of social media possibly outweigh the catastrophic losses to our personal dignity, happiness, and freedom? Lanier remains a tech optimist, so while demonstrating the evil that rules social media business models today, he also envisions a humanistic setting for social networking that can direct us toward a richer and fuller way of living and connecting with our world.
About the Author
Jaron Lanier is a scientist, musician, and writer best known for his work in virtual reality and his advocacy of humanism and sustainable economics in a digital context. His 1980s start-up VPL Research created the first commercial VR products and introduced avatars, multi-person virtual world experiences, and prototypes of major VR applications such as surgical simulation. His books Who Owns the Future? and You Are Not a Gadget were international bestsellers, and Dawn of the New Everything was named a 2017 best book of the year by The Wall Street Journal, The Economist, and Vox.
Read an Excerpt
YOU ARE LOSING YOUR FREE WILL
WELCOME TO THE CAGE THAT GOES EVERYWHERE WITH YOU
Something entirely new is happening in the world. Just in the last five or ten years, nearly everyone started to carry a little device called a smartphone on their person all the time that's suitable for algorithmic behavior modification. A lot of us are also using related devices called smart speakers on our kitchen counters or in our car dashboards. We're being tracked and measured constantly, and receiving engineered feedback all the time. We're being hypnotized little by little by technicians we can't see, for purposes we don't know. We're all lab animals now.
Algorithms gorge on data about you, every second. What kinds of links do you click on? What videos do you watch all the way through? How quickly are you moving from one thing to the next? Where are you when you do these things? Who are you connecting with in person and online? What facial expressions do you make? How does your skin tone change in different situations? What were you doing just before you decided to buy something or not? Whether to vote or not?
All these measurements and many others have been matched up with similar readings about the lives of multitudes of other people through massive spying. Algorithms correlate what you do with what almost everyone else has done.
The algorithms don't really understand you, but there is power in numbers, especially in large numbers. If a lot of other people who like the foods you like were also more easily put off by pictures of a candidate portrayed in a pink border instead of a blue one, then you probably will be too, and no one needs to know why. Statistics are reliable, but only as idiot demons.
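The power-in-numbers mechanism described here (predicting your reaction from the recorded reactions of people whose behavior resembles yours) can be illustrated with a toy similarity-weighted prediction. Everything below, from the behavior vectors to the reaction scores, is invented for illustration; no real platform's data or methods are shown:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two behavior vectors (clicks, likes, viewing habits)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def predict_reaction(you, others):
    """Predict your reaction as the similarity-weighted average of the
    reactions of people who behaved the way you behave."""
    weighted = [(cosine_similarity(you, o["behavior"]), o["reaction"]) for o in others]
    total = sum(w for w, _ in weighted)
    return sum(w * r for w, r in weighted) / total if total else 0.0

# Invented behavior vectors: [likes cat videos, clicks food links, browses late at night]
others = [
    {"behavior": [1.0, 0.9, 0.1], "reaction": 0.8},  # reacted strongly to the pink border
    {"behavior": [0.9, 1.0, 0.2], "reaction": 0.7},  # also reacted strongly
    {"behavior": [0.1, 0.2, 1.0], "reaction": 0.1},  # dissimilar person, barely reacted
]
you = [1.0, 0.8, 0.0]
print(predict_reaction(you, others))  # weighted toward the two people who resemble you
```

No one needs to understand *why* the food preferences correlate with the border color; the weighted average carries the prediction on its own, which is exactly the "idiot demon" quality Lanier describes.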
Are you sad, lonely, scared? Happy, confident? Getting your period? Experiencing a peak of class anxiety?
So-called advertisers can seize the moment when you are perfectly primed and then influence you with messages that have worked on other people who share traits and situations with you.
I say "so-called" because it's just not right to call direct manipulation of people advertising. Advertisers used to have a limited chance to make a pitch, and that pitch might have been sneaky or annoying, but it was fleeting. Furthermore, lots of people saw the same TV or print ad; it wasn't adapted to individuals. The biggest difference was that you weren't monitored and assessed all the time so that you could be fed dynamically optimized stimuli — whether "content" or ad — to engage and alter you.
Now everyone who is on social media is getting individualized, continuously adjusted stimuli, without a break, so long as they use their smartphones. What might once have been called advertising must now be understood as continuous behavior modification on a titanic scale.
Please don't be insulted. Yes, I am suggesting that you might be turning, just a little, into a well-trained dog, or something less pleasant, like a lab rat or a robot. That you're being remote-controlled, just a little, by clients of big corporations. But if I'm right, then becoming aware of it might just free you, so give this a chance, okay?
A scientific movement called behaviorism arose before computers were invented. Behaviorists studied new, more methodical, sterile, and nerdy ways to train animals and humans.
One famous behaviorist was B. F. Skinner. He set up a methodical system, known as a Skinner box, in which caged animals got treats when they did something specific. There wasn't anyone petting or whispering to the animal, just a purely isolated mechanical action — a new kind of training for modern times. Various behaviorists, who often gave off rather ominous vibes, applied this method to people. Behaviorist strategies often worked, which freaked everyone out, eventually leading to a bunch of creepy "mind control" sci-fi and horror movie scripts.
An unfortunate fact is that you can train someone using behaviorist techniques, and the person doesn't even know it. Until very recently, this rarely happened unless you signed up to be a test subject in an experiment in the basement of a university's psychology building. Then you'd go into a room and be tested while someone watched you through a one-way mirror. Even though you knew an experiment was going on, you didn't realize how you were being manipulated. At least you gave consent to be manipulated in some way. (Well, not always. There were all kinds of cruel experiments performed on prisoners, on poor people, and especially on racial targets.)
This book argues in ten ways that what has become suddenly normal — pervasive surveillance and constant, subtle manipulation — is unethical, cruel, dangerous, and inhumane. Dangerous? Oh, yes, because who knows who's going to use that power, and for what?
THE MAD SCIENTIST TURNS OUT TO CARE ABOUT THE DOG IN THE CAGE
You may have heard the mournful confessions from the founders of social media empires, which I prefer to call "behavior modification empires."
Here's Sean Parker, the first president of Facebook:
We need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever. ... It's a social-validation feedback loop ... exactly the kind of thing that a hacker like myself would come up with, because you're exploiting a vulnerability in human psychology. ... The inventors, creators — it's me, it's Mark [Zuckerberg], it's Kevin Systrom on Instagram, it's all of these people — understood this consciously. And we did it anyway ... it literally changes your relationship with society, with each other. ... It probably interferes with productivity in weird ways. God only knows what it's doing to our children's brains.
Here's Chamath Palihapitiya, former vice president of user growth at Facebook:
The short-term, dopamine-driven feedback loops we've created are destroying how society works. ... No civil discourse, no cooperation; misinformation, mistruth. And it's not an American problem — this is not about Russian ads. This is a global problem. ... I feel tremendous guilt. I think we all knew in the back of our minds — even though we feigned this whole line of, like, there probably aren't any bad unintended consequences. I think in the back, deep, deep recesses of our minds, we kind of knew something bad could happen. ... So we are in a really bad state of affairs right now, in my opinion. It is eroding the core foundation of how people behave by and between each other. And I don't have a good solution. My solution is I just don't use these tools anymore. I haven't for years.
Better late than never. Plenty of critics like me have been warning that bad stuff was happening for a while now, but to hear this from the people who did the stuff is progress, a step forward.
For years, I had to endure quite painful criticism from friends in Silicon Valley because I was perceived as a traitor for criticizing what we were doing. Lately I have the opposite problem. I argue that Silicon Valley people are for the most part decent, and I ask that we not be villainized; I take a lot of fresh heat for that. Whether I've been too hard or too soft on my community is hard to know.
The more important question now is whether anyone's criticism will matter. It's undeniably out in the open that a bad technology is doing us harm, but will we — will you, meaning you — be able to resist and help steer the world to a better place?
Companies like Facebook, Google, and Twitter are finally trying to fix some of the massive problems they created, albeit in a piecemeal way. Is it because they are being pressured or because they feel that it's the right thing to do? Probably a little of both.
The companies are changing policies, hiring humans to monitor what's going on, and hiring data scientists to come up with algorithms to avoid the worst failings. Facebook's old mantra was "Move fast and break things," and now they're coming up with better mantras and picking up a few pieces from a shattered world and gluing them together.
This book will argue that the companies on their own can't do enough to glue the world back together.
Because people in Silicon Valley are expressing regrets, you might think that now you just need to wait for us to fix the problem. That's not how things work. If you aren't part of the solution, there will be no solution.
This first argument will introduce a few key concepts behind the design of addictive and manipulative network services. Awareness is the first step to freedom.
CARROT AND SHTICK
Parker says Facebook intentionally got people addicted, while Palihapitiya is saying something about the negative effects on relationships and society. What is the connection between these two mea culpas?
The core process that allows social media to make money and that also does the damage to society is behavior modification. Behavior modification entails methodical techniques that change behavioral patterns in animals and people. It can be used to treat addictions, but it can also be used to create them.
The damage to society comes because addiction makes people crazy. The addict gradually loses touch with the real world and real people. When many people are addicted to manipulative schemes, the world gets dark and crazy.
Addiction is a neurological process that we don't understand completely. The neurotransmitter dopamine plays a role in pleasure and is thought to be central to the mechanism of behavior change in response to getting rewards. That is why Parker brings it up.
Behavior modification, especially the modern kind implemented with gadgets like smartphones, is a statistical effect, meaning it's real but not comprehensively reliable; over a population, the effect is more or less predictable, but for each individual it's impossible to say. To a degree, you're an animal in a behaviorist's experimental cage. But the fact that something is fuzzy or approximate does not make it unreal.
Originally, food treats were the most common reward used in behaviorist experiments, though the practice goes back to ancient times. Every animal trainer uses them, slipping a little treat to a dog after it has performed a trick. Many parents of young children do it, too.
One of the first behaviorists, Ivan Pavlov, famously demonstrated that he didn't need to use real food. He would ring a bell when a dog was fed, and eventually the dog would salivate upon hearing the bell alone.
Using symbols instead of real rewards has become an essential trick in the behavior modification toolbox. For instance, a smartphone game like Candy Crush uses shiny images of candy instead of real candy to become addictive. Other addictive video games might use shiny images of coins or other treasure.
Addictive pleasure and reward patterns in the brain — the "little dopamine hit" cited by Sean Parker — are part of the basis of social media addiction, but not the whole story, because social media also uses punishment and negative reinforcement.
Various kinds of punishment have been used in behaviorist labs; electric shocks were popular for a while. But just as with rewards, it's not necessary for punishments to be real and physical. Sometimes experiments deny a subject points or tokens.
You are getting the equivalent of both treats and electric shocks when you use social media.
Most users of social media have experienced catfishing (which cats hate), senseless rejection, being belittled or ignored, outright sadism, or all of the above, and worse. Just as the carrot and stick work together, unpleasant feedback can play as much of a role in addiction and sneaky behavior modification as the pleasant kind.
THE ALLURE OF MYSTERY
When Parker uses the phrase "every once in a while," he's probably referring to one of the curious phenomena that behaviorists discovered while studying both animals and people. If someone gets a reward — whether it's positive social regard or a piece of candy — whenever they do a particular thing, then they'll tend to do more of that thing. When people get a flattering response in exchange for posting something on social media, they get in the habit of posting more.
That sounds innocent enough, but it can be the first stage of an addiction that becomes a problem both for individuals and society. Even though Silicon Valley types have a sanitized name for this phase, "engagement," we fear it enough to keep our own children away from it. Many of the Silicon Valley kids I know attend Waldorf schools, which generally forbid electronics.
Back to the surprising phenomenon: it's not that positive and negative feedback work, but that somewhat random or unpredictable feedback can be more engaging than perfect feedback.
If you get a piece of candy immediately every time you say please as a child, you'll probably start saying please more often. But suppose once in a while the candy doesn't come. You might guess that you'd start saying please less often. After all, it's not generating the reward as reliably as it used to.
But sometimes the opposite thing happens. It's as if your brain, a born pattern finder, can't resist the challenge. "There must be some additional trick to it," murmurs your obsessive brain. You keep on pleasing, hoping that a deeper pattern will reveal itself, even though there's nothing but bottomless randomness.
It's healthy for a scientist to be fascinated by a pattern that doesn't quite make sense. Maybe that means there's something deeper to be discovered. And it's a great tool to exploit if you're writing a script. A little incongruity makes a plot or a character more fascinating.
But in many situations it's a terrible basis for fascination. The allure of glitchy feedback is probably what draws a lot of people into crummy "codependent" relationships in which they aren't treated well.
A touch of randomness is more than easy to generate in social media: because the algorithms aren't perfect, randomness is intrinsic. But beyond that, feeds are usually calculated to include an additional degree of intentional randomness. The motivation originally came from basic math, not human psychology.
Social media algorithms are usually "adaptive," which means they constantly make small changes to themselves in order to try to get better results; "better" in this case meaning more engaging and therefore more profitable. A little randomness is always present in this type of algorithm.
Let's suppose an algorithm is showing you an opportunity to buy socks or stocks about five seconds after you see a cat video that makes you happy. An adaptive algorithm will occasionally perform an automatic test to find out what happens if the interval is changed to, say, four and a half seconds. Did that make you more likely to buy? If so, that timing adjustment might be applied not only to your future feed, but to the feeds of thousands of other people who seem correlated with you because of anything from color preferences to driving patterns.
Adaptive algorithms can sometimes get stuck: if no small tweak to an algorithm's settings yields any further benefit, the tweaks won't stick. If changing to four and a half seconds makes you less likely to buy socks, but five and a half seconds also makes sales less likely, then the timing will remain at five seconds. On the basis of the available evidence, five seconds would be the best possible time to wait. If no small random change helps, then the algorithm stops adapting. But adaptive algorithms aren't supposed to stop adapting.
But suppose a bigger change would improve the result? Maybe two and a half seconds would be better, for instance. Incremental tweaks wouldn't reveal that, because the algorithm got stuck at the five-second setting. That's why adaptive algorithms also often include a sparser dose of greater randomness. Every once in a while an algorithm finds better settings by being jarred out of merely okay settings.
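This escape-the-rut trick, routine small tweaks plus a rare large random jump, is easy to see in a toy hill-climber. The reward curve and every number below are invented to reproduce the five-second story as a sketch of the local-optimum problem; this is not any platform's actual code:

```python
import random

def reward(interval):
    """Invented 'purchase likelihood' curve: a modest local peak at a
    5.0-second delay and a better global peak at 2.5 seconds."""
    return max(0.0, 0.5 - abs(interval - 5.0)) + max(0.0, 0.9 - abs(interval - 2.5))

def adapt(interval, steps=5000, jump_chance=0.05, seed=0):
    """Greedy adaptation: keep a new setting only if it improves the reward.
    Small tweaks alone get stuck at 5.0; rare big jumps can escape."""
    rng = random.Random(seed)
    best = interval
    for _ in range(steps):
        if rng.random() < jump_chance:
            candidate = rng.uniform(0.0, 10.0)          # sparse dose of greater randomness
        else:
            candidate = best + rng.uniform(-0.5, 0.5)   # routine small tweak
        if reward(candidate) > reward(best):
            best = candidate
    return best

print(round(adapt(5.0), 1))  # settles near the global peak at 2.5, not the local one at 5.0
```

With `jump_chance` set to zero, the same routine never leaves 5.0, because every half-second tweak in either direction makes the invented reward worse; only the occasional jarring jump reveals the better setting at two and a half seconds.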
Adaptive systems often include such a leaping mechanism. An example is the occurrence of useful mutations in natural evolution, which is usually animated by more incremental selection-based events in which the genes from an individual are either passed along or not. A mutation is a wild card that adds new possibilities, a jarring jump. Every once in a while a mutation adds a weird, new, and enhancing feature to a species.
Neuroscientists naturally wonder whether a similar process is happening within the human brain. Our brains surely include adaptive processes; brains might be adapted to seek out surprises, because nature abhors a rut.
When an algorithm is feeding experiences to a person, it turns out that the randomness that lubricates algorithmic adaptation can also feed human addiction. The algorithm is trying to capture the perfect parameters for manipulating a brain, while the brain, in order to seek out deeper meaning, is changing in response to the algorithm's experiments; it's a cat-and-mouse game based on pure math. Because the stimuli from the algorithm don't mean anything, because they genuinely are random, the brain isn't adapting to anything real, but to a fiction. That process — of becoming hooked on an elusive mirage — is addiction. As the algorithm tries to escape a rut, the human mind becomes stuck in one.
Excerpted from "Ten Arguments for Deleting Your Social Media Accounts Right Now"
Copyright © 2018 Jaron Lanier.
Excerpted by permission of Henry Holt and Company.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Table of Contents
Introduction, with cats
YOU ARE LOSING YOUR FREE WILL
Welcome to the cage that goes everywhere with you
The mad scientist turns out to care about the dog in the cage
Carrot and shtick
The allure of mystery
Heaven and hell are made of other people
Bit as bait
Addiction, meet network effect
Addiction and free will are opposites
QUITTING SOCIAL MEDIA IS THE MOST FINELY TARGETED WAY TO RESIST THE INSANITY OF OUR TIMES
The BUMMER machine
The parts that make up the BUMMER machine
The problem is limited, so we can contain it
SOCIAL MEDIA IS MAKING YOU INTO AN ASSHOLE
Sooty snow
Meeting my inner troll
The mysterious nature of asshole amplification technology
The most masterful master switch
Go to where you are kindest
SOCIAL MEDIA IS UNDERMINING TRUTH
Everybody knows
When people are fake, everything becomes fake
BUMMER kills
SOCIAL MEDIA IS MAKING WHAT YOU SAY MEANINGLESS
Meaning ajar
Pod people
SOCIAL MEDIA IS DESTROYING YOUR CAPACITY FOR EMPATHY
Digitally imposed social numbness
The lost theory in your brain
SOCIAL MEDIA IS MAKING YOU UNHAPPY
Why do so many famous tweets end with the word “sad”?
The wrong end of the BUMMER
High castle
SOCIAL MEDIA DOESN’T WANT YOU TO HAVE ECONOMIC DIGNITY
Double BUMMER
Baby BUMMER
Conflicted BUMMER
BUMMER blinders
Better than BUMMER
The corp perspective
The user perspective
SOCIAL MEDIA IS MAKING POLITICS IMPOSSIBLE
Arc burn
Arab Spring
Neither left nor right, but down
Black Lives Matter
If only this game were already over
SOCIAL MEDIA HATES YOUR SOUL
I met a metaphysical metaphor
The first four principles of BUMMER spirituality
BUMMER faith
BUMMER heaven
Existence without BUMMER
BUMMER anti-magic
CONCLUSION: CATS HAVE NINE LIVES
About the Author