
AI, domination, and moral character

Meta’s chief AI scientist believes that intelligence is unrelated to a desire to dominate others, which seems reasonable. I don’t know enough on a technical level to judge whether that’s true or false, but it’s interesting from an ethical point of view.

He then extrapolates this to AI, arguing not only that we are a long way off from a situation of genuine existential risk, but also that such systems could be encoded with ‘moral character’.

I think that the latter point about moral character is laughable, given how quickly and easily people have managed to get around the safeguards of various language models. See the recent Thought Shrapnel posts on stealing ducks from a park, or how 2024 is going to be a wild ride of AI-generated content.

Fears that AI could wipe out the human race are “preposterous” and based more on science fiction than reality, Meta’s chief AI scientist has said.

Yann LeCun told the Financial Times that people had been conditioned by science fiction films like “The Terminator” to think that superintelligent AI poses a threat to humanity, when in reality there is no reason why intelligent machines would even try to compete with humans.

“Intelligence has nothing to do with a desire to dominate. It’s not even true for humans,” he said.

“If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither,” he added.

Source: Fears of AI Dominance Are ‘Preposterous,’ Meta Scientist Says | Insider

A steampunk Byzantium with nukes

John Gray, philosopher and fellow son of the north-east of England, is probably best known for Straw Dogs: Thoughts on Humans and Other Animals. I confess to not yet having read it, despite (or perhaps because of) its being published in the same year I graduated with a degree in Philosophy, 21 years ago.

This article by Nathan Gardels, editor-in-chief of Noema Magazine, is a review of Gray’s latest book, entitled The New Leviathans: Thoughts After Liberalism. Gray is a philosophical pessimist who argues against free markets and neoliberalism. In the book, which is another one I’ve yet to read, he argues for a return to pluralism, citing Thomas Hobbes’ idea that there is no ultimate aim or highest good.

Rather than insisting on one version of the good life, Gray suggests, liberalism must acknowledge that the good life is a contested notion. This has far-reaching implications, not least for current rhetoric around challenging the idea of universal human rights. I’ll have to get his book; it sounds like a challenging but important read.

The world Gray sees out there today is not a pretty one. He casts Russia as morphing into “a steampunk Byzantium with nukes.” Under Xi Jinping, China has become a “high-tech panopticon” that keeps the inmates under constant surveillance lest they fail to live up to the prescribed Confucian virtues of order and are tempted to step outside the “rule by law” imposed by the Communist Party.

Gray is especially withering in his critique of the sanctimonious posture of the U.S.-led West that still, to cite Reinhold Niebuhr, sees itself “as the tutor of mankind on its pilgrimage to perfection.” Indeed, the West these days seems to be turning Hobbes’ vision of a limited sovereign state necessary to protect the individual from the chaos and anarchy of nature on its head.

Paradoxically, Hobbes’ sovereign authority has transmuted, in America in particular, into an extreme regime of rights-based governance, which Gray calls “hyper-liberalism,” that has awakened the assaultive politics of identity. “The goal of hyper-liberalism,” writes Gray, “is to enable human beings to define their own identities. From one point of view this is the logical endpoint of individualism: each human being is sovereign in deciding who or what they want to be.” In short, a reversion toward the uncontained subjectivism of a de-socialized and unmediated state of nature that pits all against all.

Source: What Comes After Liberalism | NOEMA

AIs and alignment with human values

This is a fantastic article by Jessica Dai, cofounder of Reboot. What I particularly appreciate is the way that she reframes the fear about Artificial General Intelligence (AGI) as being predicated upon a world in which we choose to outsource human decision-making and give AIs direct access to things such as the power grid.

In many ways, Dai is arguing that, just as the crypto-bros tried to imagine a world where everything is on the blockchain, so those fearful of AI are actually advocating for a world where we abdicate everything to algorithms.

In a recent NYT interview, Nick Bostrom — author of Superintelligence and core intellectual architect of effective altruism — defines “alignment” as “ensur[ing] that these increasingly capable A.I. systems we build are aligned with what the people building them are seeking to achieve.”

Who is “we”, and what are “we” seeking to achieve? As of now, “we” is private companies, most notably OpenAI, one of the first-movers in the AGI space, and Anthropic, which was founded by a cluster of OpenAI alumni.

[…]

To be fair, Anthropic has released Claude’s principles to the public, and OpenAI seems to be seeking ways to involve the public in governance decisions. But as it turns out, OpenAI was lobbying for reduced regulation even as they publicly “advocated” for additional governmental involvement; on the other hand, extensive incumbent involvement in designing legislation is a clear path towards regulatory capture. Almost tautologically, OpenAI, Anthropic, and similar startups exist in order to dominate the marketplace of extremely powerful models in the future.

[…]

The punchline is this: the pathways to AI x-risk ultimately require a society where relying on — and trusting — algorithms for making consequential decisions is not only commonplace, but encouraged and incentivized. It is precisely this world that the breathless speculation about AI capabilities makes real.

[…]

The emphasis on AI capabilities — the claim that “AI might kill us all if it becomes too powerful” — is a rhetorical sleight-of-hand that ignores all of the other if conditions embedded in that sentence: if we decide to outsource reasoning about consequential decisions — about policy, business strategy, or individual lives — to algorithms. If we decide to give AI systems direct access to resources, and the power and agency to affect the allocation of those resources — the power grid, utilities, computation. All of the AI x-risk scenarios involve a world where we have decided to abdicate responsibility to an algorithm.

Source: The Artificiality of Alignment | Reboot