Will Artificial Intelligence Make Society Obsolete?

Many are concerned about the application of computer processing power to automation of work and the impact on jobs and joblessness.

Source: Le Monde diplomatique

It was the Greek-French philosopher Cornelius Castoriadis who argued that individuals in most societies do not lay down their own law for themselves — what he called autonomy.


Instead, they assume that law is created by some external force beyond themselves, whether it be gods, nature, history, or reason — what he called heteronomy. As an increasingly influential force regulating social, electoral and economic outcomes, algorithms are among today’s new heteronomous powers. In October 2016, the White House, the European Parliament and the UK House of Commons each independently explored how to prepare society for the widespread use of algorithm-driven artificial intelligence (AI).

Reviewing these governments’ reports, researchers argued that the design of a “good AI society” should be based on “holistic respect” that considers “the whole context of human flourishing” and the “nurturing of human dignity as the grounding foundation of a better world.” However, they concluded that all three reports lacked an understanding of how this technology can engender responsibility, co-operation and similar values to steer the development and inform the design of a “good AI society.”

The word “algorithm” comes from the 9th-century Persian mathematician Muḥammad ibn Musa al-Khwarizmi. Among his many innovations, al-Khwarizmi’s work led to the creation of algebra and advanced the Hindu-Arabic numeral system that we use today. It is the Latin translation of al-Khwarizmi’s name as “Algoritmi” — mashed together etymologically with the Greek word for number (ἀριθμός, pronounced “are-eeth-mos”) — that gives us “algorithm.”

Oxford University’s Dictionary of Computer Science defines an algorithm as a prescribed set of well-defined rules or instructions for the solution of a problem, such as the performance of a calculation, in a finite number of steps. It is common to describe an algorithm as being similar to a recipe, say, for cooking pasta: 1) boil water, 2) add noodles, 3) stir. More precisely, the instructions need to be detailed enough for a computer to process, such as steps to play a game of tic-tac-toe: “If you occupy two spaces in a row, play the third to get three in a row.”
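
To make the recipe analogy concrete, here is a minimal sketch, my own illustration rather than anything from the sources above, of that tic-tac-toe rule expressed as instructions a computer can actually follow:

```python
# A sketch of the rule quoted above: "If you occupy two spaces in a row,
# play the third to get three in a row." The board is a list of 9 cells
# holding "X", "O" or None (empty).

LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def winning_move(board, player):
    """Return the index of a cell that completes three in a row, or None."""
    for a, b, c in LINES:
        cells = [board[a], board[b], board[c]]
        if cells.count(player) == 2 and cells.count(None) == 1:
            return (a, b, c)[cells.index(None)]
    return None

# X already holds cells 0 and 1, so the rule says: play cell 2.
board = ["X", "X", None, "O", "O", None, None, None, None]
print(winning_move(board, "X"))  # -> 2
```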

The work that al-Khwarizmi produced led to solutions for quadratic equations that are today applied, among other uses, to aircraft taking flight and to circuitry for computers and mobile devices. Beyond these established applications, algorithms are now playing a new role in the social-historical creation of societies, a contest between heteronomy and autonomy. Three interesting and very different books explore their potential use across a wide array of possibilities, from human domination to human liberation.
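
To see what that inheritance looks like in modern form, here is a small illustrative sketch of the standard quadratic formula in Python. The example is mine; al-Khwarizmi worked in words and geometry, not code:

```python
import cmath  # complex square root, so negative discriminants also work

def quadratic_roots(a, b, c):
    """Solve a*x**2 + b*x + c = 0 using the quadratic formula."""
    d = cmath.sqrt(b**2 - 4*a*c)  # square root of the discriminant
    return (-b + d) / (2*a), (-b - d) / (2*a)

# x**2 - 3x + 2 = 0 factors as (x - 1)(x - 2), so the roots are 2 and 1.
print(quadratic_roots(1, -3, 2))  # -> ((2+0j), (1+0j))
```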

In his book The Master Algorithm, Pedro Domingos, a professor of computer science and engineering, provides an exhaustive overview of five rival orientations toward algorithms: 1) the Symbolists, who view learning as the inverse of deduction and take ideas from philosophy, psychology, and logic; 2) the Connectionists, who aspire to reverse-engineer the brain and are inspired by neuroscience and physics; 3) the Evolutionaries, who simulate evolution on the computer and draw on genetics and evolutionary biology; 4) the Bayesians, who believe that learning is a form of probabilistic inference and have their roots in statistics; and 5) the Analogisers, who learn by extrapolating from similarity judgments and are influenced by psychology and mathematical optimisation. In his search for the Master Algorithm, Domingos declares his ultimate desideratum: a single algorithm that combines the key features of them all. This matters, he argues, because if it exists, “the Master Algorithm can derive all knowledge in the world — past, present, and future — from data.”
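
To give a flavour of just one of these camps, here is a toy sketch of the Bayesians’ core move: updating a belief in light of evidence via Bayes’ rule. The spam-filter numbers are invented for illustration and do not come from Domingos’s book:

```python
def bayes_update(prior, likelihood, evidence):
    """Bayes' rule: posterior = prior * likelihood / evidence."""
    return prior * likelihood / evidence

# Toy spam filter: 20% of mail is spam, the word "free" appears in 60% of
# spam and in 25% of all mail overall.
posterior = bayes_update(prior=0.2, likelihood=0.6, evidence=0.25)
print(posterior)  # -> 0.48: seeing "free" raises P(spam) from 20% to 48%
```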

Domingos’s book not only sheds light on the inner technical workings of the different types of algorithms that Amazon, Netflix, Facebook, Google and other platform capitalists use to shape our modern heteronomous experiences; it also provides a sample algorithm — “Alchemy” — to take for a test drive. His proposal for a Master Algorithm is rooted in pragmatic debates in the field, as well as ideas for how to move them forward.


However, the focus on abstract models distracts from discussion of this technology’s real-world negative impacts. For instance, Domingos’s discussion of “overfitting,” a problem where an algorithm “finds a pattern in the data that is not actually true in the real world,” seems a woefully insufficient acknowledgment of the dangerous consequences of unaccountable algorithms, their data inputs and code, and the disastrous impact they can have on people and communities: as when the postal code you live in helps determine your credit score and whether or not you qualify for a student or home loan. The book provides a window onto what is possible with a Master Algorithm in a general sense, but it delivers a techno-optimistic message when today’s world of vast inequality and global precarity urgently demands that we ask how to leverage such technology for positive social change toward a classless world.
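
Overfitting itself is easy to demonstrate. In the following minimal sketch, which uses invented toy data rather than any example from the book, a high-degree polynomial matches noisy training points almost perfectly yet fails badly on fresh data:

```python
import numpy as np

rng = np.random.default_rng(0)

# The "real world" is y = 2x; the training data adds measurement noise.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, 10)
x_test = np.linspace(0, 1, 10) + 0.05  # fresh points the model never saw
y_test = 2 * x_test

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")

# The degree-9 polynomial drives training error to ~0 by memorising the
# noise, but its test error explodes: a "pattern" that is not actually
# true in the real world.
```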

Weapons of Math Destruction by data scientist Cathy O’Neil offers a more sceptical view of algorithms, focussed on their negative social costs and consequences. O’Neil documents how algorithms — her titular “weapons of math destruction,” or WMDs — can punish the poor and elevate the privileged in a cycle that worsens capitalism’s class and racial disparities. In the US, for instance, it is widely believed that non-white prisoners from poor neighbourhoods are not only more likely to commit crimes but also poised to commit additional crimes and land themselves back in prison.

Recidivism models, which track the tendency of a convicted criminal to reoffend, suggest that these people are more likely to be jobless, to lack a high school diploma, and to have had previous run-ins with the police, as have their friends. Another way of looking at the same data, however, is that these people come from poor areas with terrible schools and little opportunity. “So the chance that an ex-convict returning to that neighborhood will have another brush with the law is no doubt larger than that of a tax fraudster who is released into a leafy suburb.” In this system, O’Neil observes, “the poor and non-white are punished more for being who they are and living where they live.”
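
A toy scoring function makes the mechanism visible. The weights and features below are invented for illustration; this is not O’Neil’s model or any real recidivism instrument, but it shows how a proxy input such as a postal code can outweigh a person’s actual record:

```python
# Invented weights for a toy "risk score". Note how heavily the proxy
# feature (living in a flagged postal code) counts against a person.
WEIGHTS = {"prior_convictions": 1.0, "jobless": 0.5, "flagged_postcode": 2.0}

def risk_score(person):
    """Sum the weights of whichever features apply to this person."""
    return sum(WEIGHTS[k] for k, v in person.items() if v)

tax_fraudster_in_leafy_suburb = {
    "prior_convictions": True, "jobless": False, "flagged_postcode": False}
first_offender_in_poor_area = {
    "prior_convictions": False, "jobless": True, "flagged_postcode": True}

print(risk_score(tax_fraudster_in_leafy_suburb))  # -> 1.0
print(risk_score(first_offender_in_poor_area))    # -> 2.5, riskier "on paper"
```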

Weapons of Math Destruction provides many insightful examples of how algorithms can be deployed as an invisible yet powerful tool to dominate people’s everyday lives. The book is also better written than you might expect from a quant. O’Neil’s human sources provide vivid material, illuminating stories of the pain and suffering that credit companies’ algorithm-powered predatory lending strategies inflict on people.

Drawing from her experience as a data scientist, she reveals problems with the data fed into algorithms and explains how these problems can lead to the destruction of whole communities, from teachers subjected to automated evaluations to job seekers whose resumes are screened by machines. Her thought experiments, imagining how algorithms could inform police tactics in white wealthy neighbourhoods and combat white-collar crime, bring to the surface the privilege these communities enjoy and their immunity from the consequences of poverty. She also reviews examples of how algorithms could be audited and held to standards of accountability. O’Neil’s book has many strengths, highlighting both the structural problems of algorithms as a technology and today’s obscene reality in which the wealthy are still processed by people while the vast majority are increasingly managed by machines.


Taking a broader view of how algorithms impact societies, Homo Deus: A Brief History of Tomorrow by historian Yuval Noah Harari considers the paradigm-shifting proposal that life is all about data processing and that all organisms are machines for making calculations and taking decisions. In this analogy, not only are beehives, bacteria colonies and forests data-processing systems, but so too are individuals and human societies. Through this lens, your biochemical algorithms would process an image of George Clooney by collecting data on his facial features, such as hair and eye colour and the proportions of nose and cheekbone, to produce feelings of attraction, indifference or repulsion.

On a larger scale, whole economies can be seen as data-processing systems: mechanisms for gathering data about desires and abilities and turning that data into decisions. Drawing historical comparisons, Harari writes: “According to this view, free-market capitalism and state-controlled communism aren’t competing ideologies, ethical creeds or political institutions. At bottom, they are competing data-processing systems. Capitalism uses distributed processing, whereas communism relies on centralized processing.”

This argument relegates to ghosts wandering history’s graveyard the old debates about technology and equality, such as the “calculation debates” of the 1920s and 30s — between socialists, who believed that a central authority could use all available knowledge to arrive at the best possible (in their minds) economic plan for society, and free marketeers, who countered that, because the problems of modern society are so complex, economic planning is impossible and only markets can coordinate economic activity.

Harari’s book does not so much make comparative critiques of capitalism and communism based on their desirability, equity or class structures as lead readers to an altogether different set of questions: are organisms really just algorithms, and is life really just data processing? More broadly, Harari considers the possibility that technology could emerge to displace existing ideologies and form a new Data Religion. This new religion places all authority in data-driven decision-making, displacing religions that place all authority in God; liberal humanism, which places authority in the individual self and free will; state communism, which places authority in the party and state trade unions; and evolutionary humanism, which places authority in the survival of the fittest.

In this view, data is the new heteronomous force, and technology, in particular the power of algorithms to process data in intelligent ways, could render the ideological foundations of society as we know them obsolete, regardless of what we think of them: “As data-processing conditions change again in the twenty-first century, democracy might decline and even disappear. As both the volume and speed of data increase, venerable institutions like elections, parties and parliaments might become obsolete — not because they are unethical, but because they don’t process data efficiently enough.” This argument raises questions about the role of technology in an unethical world built on vast and unjust disparities in power and privilege. Harari seems to avoid such concerns, since addressing them would mean a reversion to humanism. But the technology does seem to present new utopian possibilities.


So-called “Smart Cities” and the Internet of Things are indicators that data-processing power and algorithms are gaining traction in our seemingly breakneck stampede into the future, but the question remains whether this technology will facilitate the self-conscious creation of societies that produce equitable outcomes or enable new and worse configurations of old injustices.

Incredible computer processing power is being applied to heteronomous governance. In the 2012 US presidential election, the Obama campaign collected tremendous amounts of data to create voter models and, using these models, ran 66,000 simulations per night to help determine the optimal campaign strategy. Many are concerned about the similar application of computer processing power to automation of work and the impact on jobs and joblessness.
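
In miniature, such nightly runs resemble Monte Carlo simulation, as in the sketch below. The states, win probabilities and vote counts are invented; nothing here comes from the campaign’s actual models:

```python
import random

# Invented win probabilities and electoral votes for three swing states.
STATES = {"Ohio": (0.55, 18), "Florida": (0.50, 29), "Virginia": (0.60, 13)}
SAFE_VOTES = 237  # electoral votes assumed already locked in
NEEDED = 270

def simulate_election():
    """Play out one possible election night and report a win or loss."""
    votes = SAFE_VOTES
    for win_prob, electoral_votes in STATES.values():
        if random.random() < win_prob:
            votes += electoral_votes
    return votes >= NEEDED

wins = sum(simulate_election() for _ in range(66_000))  # one "night" of runs
print(f"Estimated win probability: {wins / 66_000:.1%}")
```

Re-running the simulations after nudging one state’s probability shows which investments of time and money move the estimate most, which is how such models can steer strategy.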

Imagine instead this technology applied to facilitate autonomous governance by helping determine the best economic inputs and outputs for a classless and ecologically friendly society. While proposals for a universal basic income dominate debates across the left and right, there are no longer any technological excuses inhibiting the revolutionary possibility of self-governing, directly democratic and scalable autonomous societies. Models such as the one in Castoriadis’s 1957 Workers’ Councils and the Economics of a Self-Managed Society, which proposed a “Plan Factory” relying on computers to work out how the material means of life would be produced, distributed and consumed, would no longer need economic planners and managers.
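
As a rough illustration of the kind of arithmetic a “Plan Factory” would automate, here is a toy feasibility check. The goods, labour costs and hours are all invented; this is my own sketch, not anything Castoriadis specified:

```python
# Toy "Plan Factory": check whether a proposed output plan fits within
# the labour hours that people have collectively offered.
LABOUR_PER_UNIT = {"bread": 0.5, "housing": 40.0, "bicycles": 8.0}  # hours
AVAILABLE_HOURS = 10_000

def plan_feasible(targets):
    """Return (feasible, total_hours_required) for a proposed plan."""
    total = sum(LABOUR_PER_UNIT[good] * qty for good, qty in targets.items())
    return total <= AVAILABLE_HOURS, total

proposal = {"bread": 4_000, "housing": 100, "bicycles": 400}
print(plan_feasible(proposal))  # -> (True, 9200.0): within the hours offered
```

A real planning system would iterate proposals and amendments across councils; the point is only that the bookkeeping itself is mechanical.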

Such a system, and others like it, could allow algorithms — in the form of blockchain-style Decentralised Autonomous Organisations (DAOs) and “smart contracts” — to do the heavy lifting of economic planning, and other rote jobs, so that people could get on with enjoying the free social and individual time that an autonomous society would allow. The question now is whether algorithm-driven artificial intelligence will come to know society and people better than we know ourselves, or whether it will empower self-organisation and direct democracy in ways that treat the whole context of human flourishing and dignity as the foundation of a better world.


Chris Spannos is digital editor at New Internationalist.
