They promised us Martian colonies; instead, we got Facebook

By Dr. Andrea Galli, published in Modern Diplomacy

The advent of digitization changes the values of society, above all as an apparatus of power, not as a real benefit to humanity.

Everyone talks about digitization. When I browse the science and technology section of the newspapers, I mostly find articles on smartphones, clouds, and social media. And I realize that the entertainment industry has become the engine of technological progress nowadays.

For purposes of illustration: in 2019, around 75 billion USD of venture capital was invested in California, more than half of the total for the entire US, distributed among more than 2,300 startups. That is substantial. But if you take a closer look, the picture changes. More than half of that money goes into software development, with only about 20 percent allocated to the life sciences and almost nothing to serious engineering. The buzzwords are always the same: “cloud” something, “smart” something, “AI” something, “blockchain” something. Meanwhile, the more grandiose the claim, the higher the probability of funding, even if the real innovative benefit to humanity is negligible.

The situation is no better in other technology centers, including those in Europe. So in the end, we get cases like Theranos, which turn out to be large-scale fraud machines. We get the binary-options scam startups in Tel Aviv, which plundered the savings of people across half a continent. Or Wirecard in Germany, suspected of operating one of the largest cloud platforms for money laundering.

If not based on a robust fraudulent scheme, the business models of such “cloud” something, “smart” something, “AI” something, “blockchain” something companies are ailing right from the start. Most users are not willing to pay money to use their platforms. That is why the tech giants have come up with an idea: they pretend to believe in the dream of free use, and users pay with their private data.

Just imagine that we are back in the year 1990, when sending letters and making phone calls were still relatively expensive. A representative of a new telecommunications company stands at your door and says: “We have a super offer for you. You will never have to pay for long-distance calls again, and we will also deliver every letter for free. But we will record everything you say or write. Furthermore, we reserve the right to analyze this information, share it with others, sell it, and, if we don’t like specific content, to delete it.” It is clear what you would have said, or done, to such a representative at the time.

Today, we embrace the digital monitoring of society because we see it as the new normal. The original sin was committed in 2004, when Google went public after the dotcom bubble had burst. Even in the 1990s, search engines and social networks were still underpinned by the best intentions. They were meant to connect people, help share knowledge, create common ground, and make money. It was about indexing websites while preserving the informational self-determination of the individual. Then it became clear that little money could be earned that way. And so began the indexing, the profiling, of users: people of flesh and blood.

The new tech companies collect all the data about our searching, writing, reading, walking, breathing, eating, paying, liking, loving, disliking, laughing, and purchasing behavior. This is called Big Data. They can use that information to track us and sell us things. Or to monitor our thoughts and sell us lies. Or to surveil our opinions and manipulate us. Or they can resell the data and the analyzed profiles to third parties, including governmental organizations and political parties.

Artificial Intelligence plays a dominant role in this user-profiling, monitoring, and surveillance business, since it delivers the techniques for it. Some computer scientists involved in Artificial Intelligence development enthusiastically say: “When computational learning ability meets large amounts of data, quantity should one day turn into quality.” In other words, an intelligence that learns on its own will supposedly emerge. Maybe so, but we are a long way from that.

Neural networks in AI remain classification and correlation machines. They detect patterns in data, for example, faces in billions of pictures. From such patterns, findings can be derived which, in turn, can be interpreted and used by humans. Yet this has nothing to do with intelligence in the genuine sense of the word. It has nothing to do with the ability of an organism to independently create a model of its environment, make decisions on its basis, and thus adapt and survive. If still more computing power meets still more data, we get better correlations and better pattern recognition, but not intelligence.
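The point about correlation can be made concrete with a toy example (entirely hypothetical, not drawn from any real system): a single perceptron, the simplest building block of a neural network, that learns to separate two clusters of points. It finds a statistical dividing line in the data, nothing more; it has no model of what the points mean.

```python
# A minimal sketch: a single perceptron separating two clusters.
# It discovers a correlation (a dividing line) in the data --
# pattern recognition, not understanding.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron update rule on 2-D points."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                  # 0 when correct
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, point):
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else 0

# Two linearly separable "patterns" (made-up data)
samples = [(0.1, 0.2), (0.3, 0.1), (0.2, 0.3),   # cluster 0
           (0.9, 0.8), (0.8, 1.0), (1.0, 0.9)]   # cluster 1
labels = [0, 0, 0, 1, 1, 1]

w, b = train_perceptron(samples, labels)
print(predict(w, b, (0.15, 0.2)))   # near cluster 0 -> 0
print(predict(w, b, (0.95, 0.9)))   # near cluster 1 -> 1
```

The machine ends up with three numbers that happen to split the data. Nothing in it knows what a cluster is, let alone why the split matters — which is the author's point about scale: more data sharpens the split, it does not create a mind.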

In 2012, the world’s fastest supercomputer was running at the Lawrence Livermore National Laboratory. It simulated a neural network with the complexity of a human brain: 530 billion neurons and 137 trillion synapses. The machine required eight megawatts of power but was 1,500 times slower than a human brain. Consequently, it would need 12 gigawatts to simulate an average human brain in real time (let us say, one with the acumen of Homer Simpson). That is the output of about 15 to 20 nuclear reactors or 100 coal-fired power plants. Greta Thunberg will be glad to hear it! We will never, ever create artificial intelligence with the existing computer architectures.
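The arithmetic behind these figures is easy to check. The input values below are simply the ones cited above (8 megawatts, a factor of 1,500), and the 0.6 to 0.8 gigawatts assumed per nuclear reactor is a typical range, not a figure from the text:

```python
# Back-of-the-envelope check of the figures cited above.
supercomputer_power_mw = 8     # megawatts drawn by the 2012 simulation
slowdown_factor = 1500         # times slower than a real brain

# Scaling to real time multiplies the power draw by the slowdown.
realtime_power_gw = supercomputer_power_mw * slowdown_factor / 1000
print(realtime_power_gw)       # 12.0 gigawatts

# Assuming roughly 0.6-0.8 GW per reactor (an assumed typical range):
reactors_low = realtime_power_gw / 0.8
reactors_high = realtime_power_gw / 0.6
print(round(reactors_low), round(reactors_high))   # about 15 to 20
```

So the "15 to 20 nuclear reactors" claim follows directly from the cited numbers, whatever one makes of the underlying simulation.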

The tech giants, from Facebook to Google, and the technology centers chasing the buzz of the “cloud” something, “smart” something, “AI” something, “blockchain” something are making our lives difficult with their practices. Silicon Valley and other comparable innovation centers promised us Martian and Moon colonies. They promised us luxurious interplanetary vessels staffed with androids to do our housework and sexy cyborgs to entertain us with brilliant conversation. Instead, we received smartphones with preinstalled Facebook apps or similar social media platforms. And in certain cases we got industrial robots that are taking away our jobs. Or algorithms running on supercomputers that automatically invest our hard-earned pensions in the technological innovation of the “somethings”. Or computational propaganda bots that trigger chain reactions of posts in social networks by publishing messages of ideology, hate, or investment advice. The list of innovations that we got is long.

The upside? If Facebook were to vanish from the face of the earth tomorrow, what would be the consequences for humanity? None! Except for the tears of loneliness flowing over the empty screens of social media addicts. But one thing is clear: the advent of digitization changes the values of society and the quality of life as much as the advent of plastic did, and its long-term benefits are just as ambiguous. The responsibility these innovations carry is enormous, certainly not as a “technical means”, but as an instrument of power, indeed as power itself. It is through the culture of digitization that the spirit of a new power will manifest itself. There is no doubt (you can already judge by the early signs today) that digitization will be authoritarian and repressive like no culture before it.