I have a dumb phone. Text messages, phone calls, that’s it. No apps, no touchscreen, no nothing. If people try to send emojis with their messages, a black square comes up, like part of the text is redacted (wouldn’t the world be a better place, if some benign celestial power decided to redact all emojis?). Physically my phone looks like one from The Matrix, with the pull-down bottom thingy. Maybe it symbolises me being stuck in 1999.
I’m Generation X, which means I’m part of a forgotten generation (literally, see picture) sandwiched between a large mob of entitled narcissists on either side: Boomers and Millennials. Being between these two also means I’ve lived half a life on either side of the digital divide. I’m old enough to remember a time when there were three channels on the TV and one hard line in the house, when phone numbers had to be memorised, and when pornography was a dusty old copy of Playboy you found in your father’s shed.
But I’m young enough to have taken on the transition from the analogue to the digital age with gusto, to have witnessed the dizzyingly rapid changes brought about by the digital revolution. So maybe this gives me the perspective to compare our hyperconnected world with one where the days were far less frenetic.
I have four justifications when quizzed on why I insist on a dumb phone (which is often). The first is simple: I’m connected enough as it is. Too distracted, too addicted to distraction. I want to be able to walk out into the world and take it in, let my mind breathe a little. I sit in front of a computer and write for most of the day. I know what’s happening in the world. I know what friends are doing. I know of the various personal crises acquaintances are going through. I know what total strangers had for breakfast.
Most of these ephemeral motes of glitterdust floating in and out of my short-term memory are utterly trivial. Trivial but addictive. That’s kinda the point. These companies – Google, Facebook, Twitter – the axis of evil – and all the rest – work relentlessly towards increasing the addictiveness of their product. The dopamine hit we receive from a retweet, or a like, or even an email landing in our inbox is analogous to the rush of a poker machine (slot machine). Indeed, social media platforms use the same techniques as gambling firms to create psychological dependencies in users. Infamously, the aristocrats of Silicon Valley refuse to let their own children have smart devices, knowing as they do the deleterious impact on young, vulnerable minds.
Second, I don’t like being spied on, every waking moment. I don’t much like my personal data being harvested. I don’t like giant companies having so much information on me that they can predict my behaviour. Worse, so much information they can change my behaviour. But we’ll talk more about the loss of free will below.
Third, smart phones can make you dumb. These powerful computing devices we carry around in our pockets encourage cognitive offloading.
Here’s the thing: the neural networks of our brains constantly rewire themselves as we commit new experiences to long-term memory. If you, reader, remember anything from this article one week from now, it means the structure of your brain has changed.
Eric Kandel, who won the Nobel Prize for his work on memory, says very simply, ‘attention must be paid!’ In his memoir, In Search of Memory, he leans on William James’s classic definition of attention:
Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalisation, concentration, of consciousness are its essence. It implies withdrawal from some things in order to deal effectively with others.
Now, a smart phone is a form of exo-memory. Exo-memories have existed for a very long time, so in a way they aren’t new at all. A library is an exo-memory, insofar as it contains the information and the stories collected by others that you may reference. A dictionary is an exo-memory. You may know a word, it may be at the tip of your brain, but a quick reference to the dictionary will remind you, oh right, yeah, that’s what it is.
A smart phone is on a whole other scale: it contains much of the accumulated memories of human history, including your own. Isn’t it useful to settle an argument through Google? Isn’t it easy to put all the numbers and birthdays and appointments in there? Isn’t it convenient for it to remind you of a song title, give you directions while driving, or suggest a film you may like?
Studies have found that the more we depend on exo-memory, the more our capacity for natural memory declines, because we no longer need to pay attention. We rely on Google to store knowledge long-term instead of our own brains (‘cognitive offloading’). Certainly, it is often useful to offload minor information that may still come in handy in the future (a friend’s birthday) while retaining natural memory for more important information (reading material for an exam).
Yet, more and more, we are accustomed to reaching into our pocket rather than back into our own minds. This over-reliance on smart phone exo-memory can lead to a loss of capacity for deep focus and contemplation, may make our experiences less vivid in memory, may turn us into ‘skim’ readers (rather than the type that lies on a couch and reads for hours) and can stunt our capacity for problem solving and deduction.
Finally, a loss of free will. Huh? I hear you grunting: that might be overstating things. Okay, fine. How about this: one’s free will can be bent, shaped, redirected by these corporate behemoths. Let me tell you how.
Individuals increasingly use the internet not just as exo-memory, but as an exo-brain. In what was until recently considered a science-fictional concept, we have ceded what we pay attention to (perhaps, in part, because of the vastness of the data one must sift through) to the algorithms of the major social media companies, all of whom are giant corporations.
Jaron Lanier, computer scientist and early pioneer of Silicon Valley, has extensively documented the invasive data collection techniques used by the major tech companies, and the way these are used to manipulate users. During an interview with neuroscientist Sam Harris, the familiar proposition was put to Lanier: “When you’re using Facebook, you’re not actually Facebook’s customer; the advertiser is, and you are the product.” Lanier answered: “That formulation… It’s mostly true, but not exactly true… Your demonstrated change in behaviour from what it otherwise would have been is the product. Your loss of free will is the product.”
We don’t choose our life partners anymore; a matchmaking site does it for us. Our viewing habits – streaming, YouTube, all the rest – are moulded by algorithms (and, in turn, algorithms suggest to Netflix the sorts of movies it should commission). We know that the oceans of data gathered on us can be weaponised, used to change the way we vote (or, in countries without compulsory voting, to persuade us not to vote at all). We know that these data troves have been used to subvert democracy.
You may think: well, we’ve always had advertising to try to influence us, we’ve always had lying politicians. Yes, true. But the invasiveness of these technologies, the phenomenal computing power focussed solely on changing our behaviour: this is new.
The now-defunct company Cambridge Analytica, for example, bragged about having 5000 data points on each one of millions of US voters, scraped from Facebook using various unethical means (like those stupid personality quizzes that pop up all the time – don’t fucking do them). It could then micro-target voters – in particular the more persuadable citizens in swing states. They worked for the Trump campaign and on dozens more campaigns throughout Europe and the rest of the world. The thing is, Cambridge Analytica were just the tip of the iceberg and, in a way, a distraction. They were small fry. What they did was supposedly against the rules of Facebook, but Facebook and the other data leviathans are harvesting far more information than Cambridge Analytica ever did.
We know that these companies can manipulate emotions through our smart phones, are able to pinpoint our psychological vulnerabilities. They will claim that this power may be used for good. In one experiment conducted by Google back in 2014, potential Islamic radicals were ‘redirected’ from extremist material to information that would moderate their views. This is a good thing, and no reasonable person would argue against algorithms being used for de-radicalisation. However, as any good science fiction reader knows, the altruistic arguments that accompany new technologies will always and inevitably give way to our baser instincts. Money is one of our basest, and the use of personal data to manipulate the consumer habits of citizens is widespread. More than this, individuals can be turned to hate, political groupings polarised.
In 2014, Facebook ran an experiment to see if it could make some of its users sad. It worked. They haven’t stopped experimenting. Every day they work on predicting us better, and on changing our behaviour. As Franklin Foer has argued: “Facebook would never put it this way, but algorithms are meant to erode free will, to relieve humans of the burden of choosing, to nudge them in the right direction.”
As Yuval Noah Harari argues: “In order to successfully hack humans, you need two things: a good understanding of biology, and a lot of computing power. The Inquisition and the KGB lacked this knowledge and power. But soon, corporations and governments might have both, and once they can hack you, they can not only predict your choices, but also reengineer your feelings.”
He adds: “In recent years some of the smartest people in the world have worked on hacking the human brain in order to make you click on ads and sell you stuff. Now these methods are being used to sell you politicians and ideologies, too.”
If you want to see where we are going to end up, we don’t need to watch Black Mirror (though you could buy my short story collection), we simply need to look at China. China is trialling the social engineering of an entire society, co-opting the technologies of Silicon Valley for its own totalitarian ends. Social credit schemes are being rolled out across the country, using facial recognition technology, big data analysis, and A.I. to modify the behaviour of citizens, rewarding or punishing them based on ‘trustworthiness’, and whether they promote ‘traditional moral values’. The technologies of Orwell’s Nineteen Eighty-Four are laughably clumsy compared to the precision-guided methods of surveillance, analysis and control now available in China and the rest of the world.
Google has my online search history, of course, and no doubt the profile they have of me is voluminous. But they don’t know where I go, day to day, because I have no GPS in my phone and I’m not checking in anywhere. There is one part of my life – the life away from my work desk – which is hidden from these people. I have a dumb phone not just because I value personal autonomy and privacy; I have a dumb phone because in a small way I’m contributing to a public good.
Privacy is an interesting concept. For the longest time, until only quite recently, I thought of it like everyone else does: as a personal choice. Some are comfortable with sharing their inner lives and thoughts; some are more than willing to put it all out there in public view. Most don’t care much one way or the other; they prefer privacy, but they click ‘accept’ on the ‘terms and conditions’ page without reading the fine print, because quite frankly who has the time? And let’s face it: convenience trumps privacy for most of us.
But I realise now that protecting privacy is a public good. If our inner lives are hidden from malignant foreign regimes and avaricious corporations, we are harder to manipulate. Political legitimacy in a democracy and individual free will are linked, whether we like it or not, to our inner lives remaining hidden.
So maybe being stuck in 1999 ain’t so bad.