The real problem of AI

I’m going to cut right to the chase here. There’s a growing problem: advanced AI algorithms are taking over human tasks while possessing little to no ethical framework to assess their actions.

Think of these AI systems as small children. They learn about the world through their senses, and, depending on their experiences, they draw conclusions about it. In the same manner, what we feed into our AI systems will determine how they think and how they act. What happens, then, when a robot acts in a way that is advantageous to its program but disadvantageous to the welfare of people?

Suppose an autonomous truck is travelling along a road. It detects a small object. We would see it for what it is: a child, probably one who has run away from home, lying down in this desolate road. The truck senses the size of the object but doesn’t think it poses any risk to its driving. It can’t veer around it because the road is narrow, and it has a high-priority override to deliver express packages by the evening. The truck judges the risk the object poses to the truck to be minuscule, so it proceeds, ending the life of the child.
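To make the failure concrete, here is a minimal sketch, in Python, of the kind of decision rule such a truck might be running. Every name, class, and threshold here is invented for illustration; no real autonomous-vehicle system is implied. Notice what’s missing: the rule weighs risk to the vehicle and its schedule, but harm to whatever the object might be never enters the calculation.

```python
# Hypothetical sketch of a naive obstacle-decision rule.
# All names and thresholds are invented for illustration;
# this is not any real autonomous-vehicle stack.

from dataclasses import dataclass


@dataclass
class DetectedObject:
    size_m: float   # apparent size of the object in metres
    on_path: bool   # is it in the vehicle's lane?


def should_proceed(obj: DetectedObject, express_override: bool) -> bool:
    """Naive rule: proceed unless the object threatens the *vehicle*."""
    # Only "big enough to damage us" counts as risk here.
    risk_to_vehicle = obj.size_m > 1.5 and obj.on_path
    if express_override:
        # High-priority delivery: tolerate anything that won't stop the truck.
        return not risk_to_vehicle
    return not obj.on_path or not risk_to_vehicle


# A small object lying on the road. A child, as far as this rule can tell,
# is just "a small object that poses no risk to driving".
child_sized = DetectedObject(size_m=1.1, on_path=True)
print(should_proceed(child_sized, express_override=True))  # True: it proceeds
```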

This is one example of what could happen if we don’t properly train our AI to make good moral judgements. A proper judgement requires full knowledge of the situation, so as to assess all the facts. It requires a conception of good and evil, in order to pursue the good and prevent the evil, even at the cost of economic gain. It also requires the capacity to weigh circumstances. These and many other considerations need to be built into the algorithms of our AI.

I mentioned something extraordinarily important just now. So important, and so striking, that you probably missed it: the concept of imbuing machines with notions of “good” and “evil.” These notions are necessary because they are the final check on our judgement when we decide to act. I might be asked to collaborate on an insider-trading scheme. I could feel pressure to go along, because if I don’t, my manager could fire me. I could feel the allure of the money I would make. However, I know in my heart that this would be the wrong thing to do. And so, refusing all personal gain and instead facing everything to lose, I refuse to cooperate, because I sense it is wrong.

Similar judgement should be imparted to our robots as they slowly take over more complex, personal jobs from us. AI systems are designed to maximize productivity. But where harm could be done, they must recognize the consequences of their actions and act in the way that is best for the prosperity and flourishing of their creators. This judgement should be the last line of defense for anyone living near an AI system, so that they are assuredly protected from unintended harm.
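I can’t offer the real thing, but as a gesture at what that last line of defense might look like, here is a hedged sketch: a veto wrapper that sits between the planner and the actuators and refuses any action whose estimated harm to people exceeds a threshold, no matter what productivity it promises. Every name and number is hypothetical, and the harm estimator is a stub; estimating harm honestly is exactly the hard part.

```python
# Hypothetical sketch of a harm-veto "last line of defense".
# estimate_harm_to_humans is the unsolved part: here it is just a stand-in,
# and every name and threshold is invented for illustration.

from typing import Callable

HARM_VETO_THRESHOLD = 0.01  # arbitrary: even small expected harm blocks the action


def with_harm_veto(
    plan_action: Callable[[dict], str],
    estimate_harm_to_humans: Callable[[dict, str], float],
) -> Callable[[dict], str]:
    """Wrap a planner so that harmful actions are replaced by a safe stop."""
    def guarded(world_state: dict) -> str:
        action = plan_action(world_state)
        expected_harm = estimate_harm_to_humans(world_state, action)
        if expected_harm > HARM_VETO_THRESHOLD:
            # Forfeit the delivery; protect the person.
            return "emergency_stop"
        return action
    return guarded


# Usage with stand-in planner and harm model:
planner = lambda state: "proceed"
harm_model = lambda state, action: 0.9 if state.get("object_might_be_human") else 0.0

safe_planner = with_harm_veto(planner, harm_model)
print(safe_planner({"object_might_be_human": True}))   # "emergency_stop"
print(safe_planner({"object_might_be_human": False}))  # "proceed"
```

The point of the wrapper shape is that the veto is unconditional: no productivity term, no delivery deadline, can outbid it.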

I cannot say how the concept of good and evil will manifest in an algorithm. It seems so much more intuitive than a set of logical parameters. I might try to tackle this in another post.
