Artificial intelligence is predicted to have an unprecedented impact on society, and possibly on humanity itself.

On one end of the spectrum are the optimists, like Mark Zuckerberg, who are confident the technology will bring positive change to our world: replacing humans in dangerous jobs, automating transportation, tackling climate change, improving home care, healthcare and its management, surveillance and farming, to name just a few of the realms and industries that will be profoundly changed by AI and robotics.

And at the other end of the spectrum, renowned personalities and AI insiders – Elon Musk, CEO of SpaceX and Tesla, Bill Gates of Microsoft, the late physicist Stephen Hawking, and Apple co-founder Steve Wozniak – are warning against AI’s power.

Our friend and client Mo Gawdat, former Chief Business Officer at Google [X], finds himself on both ends of the spectrum. According to Mo, who has spent the last 15 years or so at the core of AI’s most advanced research labs, AI can be either an unprecedented opportunity or the start of a doom scenario.

A keynote speaker at the World Happiness Summit in Miami, Mo made a major announcement by way of a mini docu-statement (produced by Positive Solutions), revealing his resignation from Google to concentrate on his own moonshot: spreading happiness to one billion people and turning it into a global movement – #onebillionhappy – that will prioritize happiness and help draw a safe path for artificial intelligence’s development.

How do we deter super-intelligent machines from becoming malicious?

“We are replicating human intelligence with machine learning. Just like an 18-month-old infant, machines are learning by observing a modern world full of illusions, greed, obsessions, ego and disregard for other species. We need to fill the world with compassion and kindness if this is what we want to pass on to future generations.” – Mo Gawdat

No matter which side you stand on, the technology is so powerful that it ought to be thought through, and the sooner the better: once these machines are smarter than us in all domains (give or take 10-15 years), it might be too late to contain or control how they behave. Recent developments in neural networks that simulate human neurons, allowing machines to process data and information on their own at a speed and level of sophistication unattainable by humans, prompt us to reassess not only the ethical, social, cultural and technological impacts and risks of AI, but also the “value system” imparted to these machines while they are still in their infancy.

Indeed, beyond the ethical question of whether human intelligence should be replicated at all (and enhanced a billion-fold within 30 to 40 years), and beyond the question of good versus evil that greets every new technology, for the first time in human history we are building a technology that will outpace our own intelligence, at a furious pace and magnitude, within a generation. This deserves more than a few rules, regulations and firewalls.

“It is fair to imagine that AI might be the last technology we humans invent, because our artificially intelligent newborn infants, the computers that we’re teaching now, once they are smart enough, they will solve the next problem on our behalf. Give them a problem like global warming and ask them to solve it. They will see a much larger data set, much larger set of opportunities, they can mix physics, maths, biology in so many different ways that we may not be able to comprehend as a single human brain. But we need to make sure that those machines work on our side.” – Mo Gawdat

Chinese business mogul Jack Ma, founder of e-commerce giant Alibaba, delivered a warning along these lines at the recent World Economic Forum in Davos, about machines replacing humans on an unprecedented scale (and in unlikely places):

“Wisdom is from the heart […] The machine intelligence is by the brain […] You can always make a machine to learn the knowledge. But it is difficult for machines to have a human heart […] We have the responsibility to have a good heart and do something good. Make sure that everything you do is for the future.”

The late Stephen Hawking also warned last year that AI could “re-design itself at an ever-increasing rate” and thus be “the worst event in our civilization, unless society finds a way to control the development.” But many – even among AI’s defenders – agree that attempting to control superintelligence, whether through technology or legislation, may well not be enough.

“How do you contain these artificially intelligent machines? You don’t. The best way to raise wonderful children is to be a wonderful parent,” says Mo Gawdat, who has announced he is dedicating the rest of his life and resources to #onebillionhappy.

His motivation: not only to honor his dear son, Ali, who passed away unexpectedly during a routine medical procedure in 2014, but also to model AI on values that lean toward the positive and have humanity’s best interests at heart.

Could Artificial Intelligence spark World War III or lead to our demise? 
If artificially intelligent machines reach the point of singularity, as mentioned by Mo Gawdat in our video, some scientists and tech experts believe that Artificial General Intelligence – as opposed to AI designed for specific tasks, like self-driving cars or Siri – and the even smarter Artificial Superintelligence could take over economies, financial markets, healthcare and transport systems, likely causing global mayhem.

“It is predicted that by 2029 the intelligence of the machines we’re building will surpass our own human intelligence and, believe me, machines are going to capture our value system. We now have a choice – Are we going to value greed, selfishness and fear? Or love, compassion and trust? This is no longer the responsibility of opinion leaders, of technologists, of government leaders, it’s the responsibility of you and I.” – Mo Gawdat


Today is World Happiness Day.

Which do you choose?

We choose happiness. We choose to help spread it worldwide.

Join the movement at www.onebillionhappy.org