I don’t know enough on a technical level to judge whether this is true or false, but it’s interesting from an ethical point of view. Meta’s chief AI scientist believes that intelligence is unrelated to a desire to dominate others, which seems reasonable.

He then extrapolates this to AI, arguing not only that we are a long way off from genuine existential risk, but that such systems could be encoded with ‘moral character’.

I think the latter point about moral character is laughable, given how quickly and easily people have managed to get around the safeguards of various language models. See the recent Thought Shrapnel posts on stealing ducks from a park, or on how 2024 is going to be a wild ride of AI-generated content.

Fears that AI could wipe out the human race are “preposterous” and based more on science fiction than reality, Meta’s chief AI scientist has said.

Yann LeCun told the Financial Times that people had been conditioned by science fiction films like “The Terminator” to think that superintelligent AI poses a threat to humanity, when in reality there is no reason why intelligent machines would even try to compete with humans.

“Intelligence has nothing to do with a desire to dominate. It’s not even true for humans,” he said.

“If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither,” he added.

Source: Fears of AI Dominance Are ‘Preposterous,’ Meta Scientist Says | Insider