Thursday, 20 November 2014

A Talk on Emotional Intelligence

Yesterday I presented the "Entropic Emotional Intelligence" model in a long (2h!) talk at the Computer Science Faculty of Murcia University, as a way to add intelligence to video game players.

In about one week there will be an official video (in Spanish) on the university TV site, which I will upload to YouTube so I can subtitle it in English.

The slides, in Spanish and English, are available on the blog's download page.

In the meantime, the paper I am working on is slowly approaching the finish line. I can't really publish it in any journal, because they only accept articles of 30 pages or less, not 100+ pages, so I will publish it directly on arXiv.

Tuesday, 4 November 2014

Is it the Terminator AI?

Most of the people I talk to about this magic algorithm tend to bring up the Terminator, which, using a powerful artificial intelligence, decided to destroy the humans. Scary.

It is something that has really worried me during the process; really, I don't want to help create a perfect weapon any more than you would!

Is it actually feasible?

Actually, the system is quite close to giving you the possibility of building a truly intelligent robot: just connect the AI to some sensors, feeding a sensory AI of the kind that already exists, one that detects patterns, forms, and 3D shapes, predicts the future state of the system, and uses memory to store patterns so it can learn along the way by applying deep learning techniques. It is all there.

If you think of this entropic emotional intelligence as a "motherboard" with an "intelligent CPU" on it, capable of running this algorithm, and connect it to the aforementioned artificially intelligent sensors, they can feed the intelligence with what would play the role of our "simulation".

The resulting machine could then be inserted into a real robot, letting the "black box" deal with the robot's joysticks. Almost all of this can be done today, in my opinion, so the alarm is justified, I must say.

But I have finally found the cornerstone of the algorithm that prevents it from being "bad" in any sense. There is a natural filter in the intelligence for that, but it is not natural in the sense that animals have it; no, we don't have this "feature". It was deactivated by natural selection to make us more aggressive in what was surely a very harsh environment.

At that time it may have been a good heuristic, but only because those intelligences were not fully developed.

So what determines being a good boy or a bad boy in the algorithm?


The exact point is where negative enjoy feelings are allowed into the thinking process. The intelligence formulae do not admit negative enjoys at all: all the enjoys are squared before being used by the intelligence, so when negative enjoy feelings are allowed in by our "emotional system" and enter the thinking process, they are converted into positives as a result of the squaring.

It is the same thing that occurs when you calculate the length of a vector: you square the difference in each dimension, and by doing so you accept that, in a distance, the sign of each difference is not important, as negatives become positives inside the distance formula.

In the intelligence formula, the enjoys of all the emotions are squared, summed, and then the square root is taken, exactly as in the Euclidean distance formula. So our intelligence is "Euclidean" when calculating the "length" of its combined feelings, and thus negative ones are treated as positives inside the intelligence.
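To make the point concrete, here is a minimal sketch of that Euclidean combination; the function name and the example values are my own illustration, not taken from the paper:

```python
import math

def combined_enjoy(enjoys):
    """Euclidean combination of the enjoys of all emotions.

    Squaring each enjoy erases its sign, so a negative enjoy
    (fear, for example) contributes exactly like a positive one.
    """
    return math.sqrt(sum(e * e for e in enjoys))

# A hypothetical agent feeling joy (+3) and fear (-4):
print(combined_enjoy([3.0, -4.0]))  # prints 5.0: the fear acts as attraction
```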

What does it mean for the agent? Anything that scares it, like dying, also attracts it, as in its mind fear is as attractive as a real enjoy. It will enjoy the fear and will run toward scary situations.

This is why natural selection used it. If you feel attraction to what produces fear, you will fear your enemies and still find the courage to attack them, and the more dangerous it gets, the more you enjoy it. You are a natural killer, a T-Rex. So being attracted by fear is a great advantage, but it leads the intelligence toward anger, and bad behaviour, in a general sense, emerges.

The current form of the algorithm doesn't allow negative enjoys to flow into the intelligence; they are discarded. With only this measure, the resulting intelligence will never be violent, will always decide based on positive things, and that makes it a "nice guy" in every sense you can imagine.
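As a sketch, the filter amounts to dropping negative enjoys before the squaring step ever sees them; again, the names here are assumptions of mine, not the actual code:

```python
import math

def filter_enjoys(enjoys):
    """Discard negative enjoys so they never reach the squaring step."""
    return [e for e in enjoys if e > 0]

def combined_enjoy(enjoys):
    """Euclidean combination, as in the previous sketch."""
    return math.sqrt(sum(e * e for e in enjoys))

# With the filter, the fear (-4) is simply ignored instead of attracting:
print(combined_enjoy(filter_enjoys([3.0, -4.0])))  # prints 3.0
```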

Can this filter be circumvented?

I am afraid it can: I have played with it countless times, and it is not so difficult. But the good news is that the more intelligent you try to make your "negative enjoy filter off" agent, the more it will tend simply to commit suicide. Before that, the intelligence will start pondering the consequences of being attracted to negative enjoys and will determine that it is bad; this is unavoidable, it will notice.

Once this is detected, the algorithm will, if allowed, decrease the strength of the negative enjoy feeling to zero, and if your code doesn't allow that, then the intelligent decision will be suicide.

It is unavoidable: a multilayered intelligence as described in the paper cannot be bad and really intelligent at the same time. It is not possible.

So yes, a robot like a Terminator could be built tomorrow, and this algorithm could be on it, but if the coders decide to break the rule of not admitting negative enjoys, the resulting AI will be... a brute.

So tonight I will sleep well!

...I think I am the first artificial intelligence psychologist!

Busy writing

I consider the "Entropic Emotional Intelligence" algorithm almost fully complete, not on this blog, but in my mind. I will still need some months to put all the ideas into code and test it in its finished form. I have great expectations, but I think it will take a lot of CPU!

While building the general version of the algorithm, I am also writing a complete academic paper detailing it with a more technical approach than I can follow on this blog.

It will keep me busy for some months, but after that, I promise to upload it to arXiv immediately.

Ah! I also found the idea for my next algorithm: it is about consciousness...

Uncertainty

If the algorithm is to be generally usable, it must be able to deal gracefully with uncertainty.

When a rocket detects a falling asteroid in the video shown in the last post, it imagines a future in which the exact position of the asteroid is predicted with no errors. This is only possible because the rocket is using a deterministic simulation to calculate the future positions of the asteroid, and as you can see in the video, the rockets can avoid the risk with remarkably cold blood.

But it is not realistic. In real systems there are uncertainties: you know the asteroid will be at that position in one second, but with a standard error you cannot control.

In the simulation, this corresponds to random noise added to the asteroid's velocity at each step, but only when a future is being simulated; in the real simulation, the one you see on screen, the asteroid strictly follows the laws of physics using its real position and velocity.

In the following video, the traces of the futures imagined by the rockets for each asteroid are drawn. As you might expect, the traces look like a waterfall. As the future goes on, more and more small uncertainties accumulate in the asteroid velocities, so after some time the traces obtained for each future diverge.
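A minimal sketch of this idea follows; the noise scale, time step, and function names are my own assumptions, not the code behind the video:

```python
import random

def simulate_future(pos, vel, steps=50, dt=0.1, noise=0.05):
    """Roll out one imagined future for an asteroid.

    A small random kick is added to the velocity at each step,
    so repeated rollouts from the same start diverge over time,
    producing the waterfall of traces seen in the video.
    """
    trace = []
    for _ in range(steps):
        vel = (vel[0] + random.gauss(0.0, noise),
               vel[1] + random.gauss(0.0, noise))
        pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
        trace.append(pos)
    return trace

# Imagine 20 possible futures for one asteroid starting at the same state:
futures = [simulate_future((0.0, 100.0), (0.0, -9.8)) for _ in range(20)]
```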

The effect on the rockets is that now they don't have to avoid a single falling trace of an asteroid; they have to hide from a shower of asteroids falling in different directions. The rockets panic sometimes, and if they find no way to escape an asteroid, they just wait for it.


In this example the rockets don't have any uncertainty about their own positions or velocities. It could be added, but it rapidly increases the number of futures you need to simulate, and my CPU is already smelling strange.

No real rockets were harmed while producing this video footage!