Tuesday 4 November 2014

Is it the Terminator AI?

Most of the people I talk to about this magic algorithm usually bring the Terminator into the conversation: the robot that, using a powerful artificial intelligence, decided to destroy humanity. Scary.

It is something that has really worried me during this process; really, I don't want to help create a perfect weapon any more than you do!



Is it actually feasible?

Actually, the system is quite close to making it possible to build a truly intelligent robot: just add some sensors to the AI, connected to a sensory AI of the kind that already exists, one that detects patterns, forms and 3D shapes, predicts the future state of the system, and uses RAM to store patterns so it can learn along the way by applying deep learning techniques. That would be all.

If you think of this entropic emotional intelligence as a "motherboard" with an "intelligent CPU" on it capable of running this algorithm, and connect it to the aforementioned artificially intelligent sensors, they can feed the intelligence with what would be its "simulation" of the world.

The resulting machine could then be inserted into a real robot, letting the "black box" deal with the robot's joysticks. All of this can almost be done today, in my opinion, so the alarm is justified, I must say.

But I have finally found the cornerstone of the algorithm that prevents it from being "bad" in any sense. There is a natural filter in the intelligence for that, though it is not natural in the sense that animals have it; no, we don't have this "feature". It was deactivated by natural selection to make us more aggressive in what was surely a very harsh environment.

At that time it may have been a good heuristic, but only because those intelligences were not fully developed.

So what makes the algorithm a good boy or a bad boy?


The exact point is where negative enjoy feelings are allowed into the thinking process. The intelligence formulas do not admit negative enjoys at all: every enjoy is squared before being used by the intelligence, so when negative enjoy feelings are allowed in by our "emotional system" and enter the thinking process, they are converted into positives as a result of the squaring.

It is the same thing that happens when you calculate the length of a vector: you square the difference in each dimension, and by doing so you accept that, in a distance, the sign of the difference is not important, as negatives become positives inside the distance formula.

In the intelligence formula, the enjoys of all the emotions are squared, summed, and then the square root is taken, exactly as in the Euclidean distance formula. So our intelligence is "Euclidean" when calculating the "length" of its combined feelings, and as a consequence, negative ones are treated as positives inside the intelligence.
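A minimal sketch of this idea, assuming the enjoys are simply collected into a list of numbers (the function and variable names here are mine, not the paper's):

```python
import math

def combined_feeling(enjoys):
    """Euclidean 'length' of the agent's feelings: squaring
    each enjoy erases its sign before they are combined."""
    return math.sqrt(sum(e * e for e in enjoys))

# A strong negative enjoy (say, fear of dying at -4.0)
# contributes exactly as much as an equally strong positive one:
print(combined_feeling([3.0, -4.0]))  # 5.0
print(combined_feeling([3.0, 4.0]))   # 5.0
```

Both vectors have the same "length", so an agent driven by this value cannot tell fear apart from genuine attraction.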

What does this mean for the agent? Anything that scares it, like dying, also attracts it, since in its mind fear is as attractive as a real enjoy. It will enjoy the fear and will run toward scary situations.

This is why natural selection used it. If you feel attracted to what produces fear, you will fear your enemies and still find the courage to attack them, and the more dangerous it gets, the more you enjoy it. You are a natural killer, a T-Rex. So being attracted by fear is a great advantage, but it leads the intelligence toward anger, and bad behaviour, in a general sense, emerges.

The current form of the algorithm doesn't allow negative enjoys to flow into the intelligence; they are discarded. With this single measure, the resulting intelligence will never be violent, will always base its decisions on positive things, and that makes it a "nice guy" in every sense you can imagine.
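Continuing the sketch above, the filter amounts to one extra step before the squaring (again my naming, a sketch rather than the paper's exact code):

```python
import math

def combined_feeling_filtered(enjoys):
    """Same Euclidean length, but negative enjoys are
    discarded before they can be squared into positives."""
    positive = [e for e in enjoys if e > 0]
    return math.sqrt(sum(e * e for e in positive))

# Fear (-4.0) no longer masquerades as attraction:
print(combined_feeling_filtered([3.0, -4.0]))  # 3.0
```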

Can this filter be circumvented?

I am afraid it can. I have played with it countless times, and it is not so difficult, but the good news is that the more intelligent you try to make your "negative enjoy filter off" agent, the more it will tend simply to commit suicide. Before that, the intelligence will start pondering the consequences of being attracted to negative enjoys and will determine that it is bad; this is unavoidable, it will notice.

Once this is detected, the algorithm will, if allowed, decrease the strength of the negative enjoy feeling to zero, and if you don't allow that in the code, then the intelligent decision will be suicide.
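As a hedged illustration of what "decreasing the strength to zero" could look like, here is one hypothetical adaptation rule (the rule and its parameters are my invention, not taken from the paper):

```python
def adapt_emotion_strengths(strengths, enjoys, rate=0.1):
    """Hypothetical rule: whenever an emotion keeps producing
    negative enjoys, drive its strength toward zero so it
    fades out of the thinking process."""
    return [max(0.0, s - rate) if e < 0 else s
            for s, e in zip(strengths, enjoys)]

# The fearful emotion (enjoy -4.0) is gradually silenced:
print(adapt_emotion_strengths([1.0, 1.0], [3.0, -4.0]))  # [1.0, 0.9]
```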

It is unavoidable: a multilayered intelligence as described in the paper cannot be bad and really intelligent at the same time. It is not possible.

So yes, a robot like a Terminator could be built tomorrow, and this algorithm could be on it, but if the coders decide to break the rule of not admitting negative enjoys, the resulting AI will be... a brute.

So tonight I will sleep well!

...I think I am the first artificial intelligence psychologist!
