Ethics for Artificial Intelligence

Many experts believe that artificial intelligence (AI) might lead to the end of the world. Recently, Russian President Vladimir Putin said that the nation that leads in AI ‘will be the ruler of the world’. And Facebook co-founder Mark Zuckerberg clashed with Elon Musk over such doomsday talk, saying:

“I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible.”

“Whenever I hear AI is going to hurt people in the future, I think yeah, you know, technology can generally always be used for good and bad, and you need to be careful about how you build and how it is going to be used.”

Musk replied on Twitter: “I’ve talked to Mark about this. His understanding of the subject is limited.”

On the other hand, Microsoft co-founder Bill Gates says we shouldn’t panic about AI. Meanwhile, debate over the threats AI poses to our future grows day by day; unemployment and the weaponization of AI are just two of the many ethical issues being raised.

So there may be a solution we can all agree on: establishing ethical or legal standards for artificial intelligence.

Suggested Article: What is DeepMind’s AI?

Recently, Google’s AI subsidiary DeepMind launched a new research team devoted to the ethics of artificial intelligence.

The problems it aims to address include managing AI bias, the coming economic impact of automation, and the need to ensure that any intelligent systems we develop share our ethical and moral values.

DeepMind Ethics & Society (or DMES) will publish research on these topics and others starting in early 2018.

The team’s co-leads, Verity Harding and Sean Legassick, say that DMES will help DeepMind “explore and understand the real-world impacts of AI.”

The group has eight full-time members at the moment, but DeepMind plans to expand it to 25 within the year. It also includes six unpaid external members and will partner with academic groups conducting similar research, including the AI Now Institute at NYU and the Leverhulme Centre for the Future of Intelligence.

Ideally, these ethical standards will be set out to promote good values for the world of the future.

Although we already know that technology can be used for either good or bad, the choice is ours to make. So let’s hope for a better future; it is up to us to figure out what that future will be, or whether there will be a future at all.
