Strong AI and Utilitarianism

The article begins with a basic idea: that “doing good” implies proliferating the happiness or pleasure of conscious creatures and eliminating their pain. This is straight-up utilitarianism – by no means a perfect moral theory, but about as good as we’ve got.

I mention how hard it is to project the long-term consequences of a “good” action. For example: how much suffering and pleasure was created by helping to build this library, or by volunteering to run a kids’ soccer camp in the summer? The butterfly effects are impossible to track, and it’s easy to deceive ourselves into justifying any of our actions with a “utilitarian” belief that is in fact false.

However, it’s probably somewhat better than having no moral compass at all – or one bent on something other than utilitarian good. Think of a perfectly “virtuous” society (good luck with whatever that means) that was also miserable all the time. Or think of a society whose members all believe in the “right” God (again, good luck with whatever that means) yet are miserable all the time.

Levels at Which Artificial Intelligence Might “Do Good”

The structure of the article roughly follows what we might consider the “gradients” of artificial intelligence’s influence on the moral good, from the most near-term (and smallest) to the most long-term (and greatest):

a) AI as a Tool for Doing Good:

We’ve done a good deal of coverage of the “altruistic” applications of AI (see our article on “AI for Good”). It should be noted that I by no means think that nonprofit AI is the only “good” AI. There might be companies that generate massive profits from optimizing farming with AI or diagnosing cancer with AI – and by golly, they may well “do” plenty of “good” in the process. I move past this topic quickly, as it’s not what the talk is ultimately about.

b) AI as a Gauge Towards Moral Goodness Itself:

If maximizing pleasure and eliminating pain is the ultimate goal we’re after, that’s good to know – but it’s hard (basically impossible) to measure. I can guess that by being a mailman instead of a heroin dealer, I’ll have a more positive net impact on the world. If I donate to feeding children in Africa instead of buying the latest iPhone, then maybe – again – I can guess that I’m “doing good.” But it’s all guesses, and it’s all bad guesses.

If an AI system could in some way measure sentient pain and sentient pleasure, and correlate those factors to actions, behaviors, public policy, etc… all with the ability to project those impacts into the future with better predictive ability than any team of human scientists – then indeed that might be the most morally meaningful invention of all time.

This would involve the following (sketched as toy code after the list):

  • Understanding consciousness
  • Measuring well-being and its opposite in (basically all) living things
  • Somehow modeling that sentient measurement along with a near-infinite number of other variables in the real world
  • Extrapolating sentient well-being somewhat accurately into the future
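To make the shape of such a system concrete, here is a deliberately toy Python sketch of that pipeline. Everything in it – the function names, the `WellBeingEstimate` type, the geometric discount – is my own hypothetical illustration, not anything proposed in the talk; each stub stands in for a problem that is currently unsolved (and possibly unsolvable):

```python
# Toy sketch of the "moral barometer" pipeline described above.
# Component (1), understanding consciousness, is a precondition that
# doesn't even fit in code; components (2)-(4) appear as stubs below.

from dataclasses import dataclass

@dataclass
class WellBeingEstimate:
    value: float        # net well-being on an arbitrary scale
    uncertainty: float  # how little we should trust that number

def measure_well_being(population: list) -> WellBeingEstimate:
    """Component (2): measure well-being (and its opposite) across
    living things. Stubbed: nobody knows how to do this."""
    return WellBeingEstimate(value=0.0, uncertainty=float("inf"))

def model_world(action: str, estimate: WellBeingEstimate) -> dict:
    """Component (3): fold the sentient measurement into a model with a
    near-infinite number of other variables (here: exactly two)."""
    return {"action": action, "baseline": estimate.value}

def extrapolate(world: dict, years: int, discount: float = 0.99) -> float:
    """Component (4): project well-being into the future. A geometric
    discount stands in for real predictive modeling."""
    return world["baseline"] * sum(discount ** t for t in range(years))

def utilitarian_impact(action: str, years: int = 50) -> float:
    """The single number a 'moral barometer' machine would output."""
    estimate = measure_well_being(["basically all living things"])
    world = model_world(action, estimate)
    return extrapolate(world, years)

if __name__ == "__main__":
    # Compare two of the article's example actions.
    for action in ("help build a library", "run a kids' soccer camp"):
        print(action, "->", utilitarian_impact(action))
```

As written, every action scores 0.0 with infinite uncertainty – which is, roughly, the current state of the art, and why the paragraph above calls this basically impossible to measure.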

Frankly, I consider it more likely that AI of post-human intelligence will arrive before this kind of “moral barometer” machine is invented, as the ability to detect the neural activity of all living things seems much closer to “impossible” than the creation of superintelligence (itself a gargantuan challenge).

c) Artificial Intelligence (and Nonbiological Sentience) as Goodness Itself:

It seems safe to say that there is no “good” or “bad” without living, experiencing creatures. With no “experiencer”, there is no “good” or “bad” “experience”.

I’ve argued that the “moral value” of an entity seems to tie directly to the depth and richness of its consciousness (i.e., how wide a range of pains and pleasures that entity can experience).

Imagine the greatest pleasures and pains of a rodent, and compare those with the range of pains and pleasures that a human being can experience. A human clearly has many orders of magnitude more sentient potential and range (losing a child, growing a business, reading a poem, oil painting, nostalgia, humor, etc…), vastly beyond the experience of any rodent.

Let’s imagine that an average rodent’s total “sentient range” score is a 10 (an arbitrary number, but stick with me here), and an average human’s total “sentient range” score is a 200. We might ask: what would a creature be able to experience with a “sentient range” score of 50,000? If a creature of that kind could be created (or somehow “enhanced” from existing biological life), it might be hard to deny its moral preeminence above humanity – a troubling idea to contemplate.
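Purely to make the arithmetic of the thought experiment explicit (the three scores are the arbitrary ones from the paragraph above; the comparison itself is my own illustration):

```python
# Arbitrary "sentient range" scores from the thought experiment above.
RODENT, HUMAN, POST_HUMAN = 10, 200, 50_000  # POST_HUMAN is hypothetical

print(f"Human vs. rodent:     {HUMAN / RODENT:.0f}x the experiential range")
print(f"Post-human vs. human: {POST_HUMAN / HUMAN:.0f}x the experiential range")
# Prints 20x and 250x: by these numbers, the hypothetical creature would
# stand further above humans than humans stand above rodents.
```

That asymmetry – a 250-fold gap above us versus a 20-fold gap below us – is what makes the entity’s “moral preeminence” so hard to deny.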

What Do We Do? (or “The Point”)

Indeed this is the big question. This is the purpose of my life, and the purpose of TechEmergence as an extension thereof.

The point of this presentation had little to do with talking about how AI is helping with farming or diagnosing disease (though these applications are important and should be pursued). Rather, the point was to talk about what “moral end-game” we’re striving for as a species.

  1. Are we to simply improve technology and encourage peace, and consider those aims good enough for the coming millennia of Homo sapiens?
  2. Can we possibly build an AI system to detect or predict the utilitarian impact of actions… maybe 10x or 100x better than human beings can now?
  3. Can we build an intelligence to determine a better, deeper, more robust moral understanding than that of utilitarianism?
    1. The impacts of such a system might be terrible, and there may be no “good” to determine outside of relative benefit. I think it’s quite likely that there is in fact no moral bedrock to eventually stand on, and I believe that “goodness” will likely remain subjective, contextual, and elusive even for superintelligent machines.
  4. Can we use technology and superintelligence to somehow calibrate society, law, and biology to proliferate (or “engineer”) well-being itself? (While I’m rather sympathetic to this idea, it may mean fighting not only our own flawed nature but nature itself – see the quote below.)
    1. Quote from the talk: “If we’re talking about well-being here… if that’s the point… to ‘make the world a better place’… then maybe we are not only fighting our own flawed nature, but maybe we’re also fighting nature itself. It seems safe to say that Nature herself cares not for the happy species, but cares only for the surviving one.”
  5. Is the ideal moral aim to create an entity that is not only infinitely more intelligent, but also infinitely more blissful, than ourselves?
    1. In this case, humanity, and almost all biological life (which is nearly entirely predicated on violence and suffering, from the lowest to the highest levels), should “bow out” nicely – making way for more morally worthy entities who can not only understand the universe in vastly greater depth, but who might be able to do so indefinitely at a level of conscious bliss that is positively unimaginable by humans.
    2. Quote from the talk: “A beacon of super-intelligent super-bliss that could populate the galaxy.”

It would seem a shame if monkeys had overtly decided that they (one random species) were the chosen species of the universe, and that indeed no species beyond them should ever be developed. What a shame it would have been to never have language, poetry, law, art, space travel, or the internet – all because of an arbitrary barrier to development imposed by one selfish species.

The question arises: what new dimensions of experience, of art, of knowledge, of moral and scientific understanding are we holding back if we envision “man as he is” (flawed as he is) as the great and ultimate aim of the universe?

In absolutely no way am I eager to run beyond humanity toward something “better”. Rather, I see this transition as inevitable, and I believe that navigating it without terrible (maybe extinction-level) consequences will involve an incredible amount of global collaboration and ethical forethought – a process that should begin now.

The great importance of AI and neurotechnology – and the whole class of technologies that might create or enhance intelligence and sentience itself – is that they not only pose an existential threat to humanity (i.e., they could be misused as destructive forces to snuff out life on earth)… but that they also could imply a great proliferation of moral value and “life”… vastly beyond what exists on this planet – or maybe anywhere in the universe.

For this reason, I’m of the belief (and have been since 2012) that determining the trajectory of intelligence and sentience itself is the preeminently important moral concern of our species.

This will need to be an open-minded, interdisciplinary process, and one that – for better or for worse – I think will require global steering efforts (to ensure that “team humans” is on the same page about what kind of next-level intelligence we’re trying to create here), and global transparency efforts (to ensure that nobody is tinkering with intelligence and consciousness in ways that seem likely to cause massive and unnecessary conflict).

TechEmergence was created for one reason:

To proliferate the conversation about the post-human transition.

Currently, we mostly cover the industry impact and applications of AI. We do that because:

  • It’s valuable, and it allows us to sustain TechEmergence as a growing business without needing to ask for donations or handouts.
  • It draws the attention of business and government leaders – exactly the folks who will be helping to develop and adopt AI technologies that will shape our world (I unabashedly aim to eventually draw these same leaders into a conversation beyond business implications, into the discussion of the future of intelligence itself – and how humanity will manage that transition).
  • It provides a platform for us to gather consensus-level thought about not just the business implications of AI, but its moral and social impact (see our previous examples of this with our “Conscious AI” researcher poll, or our “AI Risk” researcher poll), something we’ll be doing more and more of.

Source: https://www.techemergence.com/can-artificial-intelligence-make-the-world-a-better-place/