So-called "intelligent behaviour" can be defined in purely thermodynamic language, using just "entropy". The formulas look pretty intimidating, but once you get the idea, coding it into a working AI is quite simple.
Fractalizing the same idea takes the entropy calculation away from the AI and makes it work much better.
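To make "quite simple" concrete, here is a minimal sketch of an entropy-based decision rule (my own toy code, not the original implementation): simulate a bunch of random futures per option, coarse-grain their end points on a grid, and pick the option whose futures spread out the most. All the names, the grid size and the toy `simulate` world are assumptions for illustration.

```python
import math, random

def future_entropy(option, simulate, n_futures, grid, rng):
    """Score an option by the entropy of where its random futures end:
    coarse-grain the end points on a grid of cell size `grid`, then take
    the Shannon entropy of the resulting histogram."""
    counts = {}
    for _ in range(n_futures):
        end = simulate(option, rng)                  # one random future
        cell = tuple(int(c // grid) for c in end)    # which grid cell it ends in
        counts[cell] = counts.get(cell, 0) + 1
    return -sum((c / n_futures) * math.log(c / n_futures)
                for c in counts.values())

def choose(options, simulate, rng):
    """Entropic decision rule: prefer the option with the most diverse futures."""
    return max(options, key=lambda o: future_entropy(o, simulate, 200, 1.0, rng))

def simulate(option, rng):
    """Toy world: driving into a wall squashes every future onto one point."""
    if option == "towards the wall":
        return (0.0, 0.0)
    return (rng.uniform(0, 5), rng.uniform(0, 5))    # open space: spread out

decision = choose(["towards the wall", "into open space"], simulate, random.Random(0))
print(decision)  # → into open space
```

The wall option has zero future entropy (every future ends in the same grid cell), so the rule always steers towards open space, and that is the whole trick.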

While reviewing the video of my first talk (I do it frequently, but this time I happily discovered it mentioned on a data analysis blog I was reading), I noticed automatic subtitles were available and, to my surprise, quite nicely converted to text. Wow!

So I tried the English automatic translation and, hey, you wouldn't believe it, but it was near perfect; even the little jokes were almost intact!

So if you are willing to listen to a simple explanation of this "Entropic Intelligence" algorithm, one you can easily understand and directly apply to your home-made kart simulation (or whatever it is), here you are. Just remember to activate subtitles and then, clicking on the gear, select "translate" and choose your own language; it will do it quite well.

You will notice comments are not enabled on YouTube. The video belongs to the Elche University CIO, so I have no editing rights, but feel free to comment here if you feel the need; I will be happy to answer.

Quantum physics plays a big role in developing an artificial intelligence, more than many would think at first glance.

Back in the days when I was developing the Entropic Intelligence, I needed to discard futures with similar endings before evaluating the "future entropy" of a given option. This is because entropy always involves a given minimum distance, so futures that end closer together than this must be counted as one single future. If this reminded you of a quantum principle like exclusion, you are quite right.
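That grouping step can be sketched in a few lines: keep an end point only if it lies at least the minimum distance away from every end point already kept, so any futures that land closer than that count as one. This is my own greedy toy version (the names and the strategy are assumptions), not the original code.

```python
import math

def distinct_futures(end_points, min_dist):
    """Prune futures whose end points fall within min_dist of an already
    kept end point: they are counted as one single future."""
    kept = []
    for p in end_points:
        if all(math.dist(p, q) >= min_dist for q in kept):
            kept.append(p)
    return kept

# Three futures, two of which end almost at the same spot:
ends = [(0.0, 0.0), (0.05, 0.0), (3.0, 4.0)]
kept = distinct_futures(ends, min_dist=0.5)
print(len(kept))  # → 2: the two nearby endings collapse into one
```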

Entropic Intelligence grouping future end points

When I started the conversion into a fractal intelligence, I first thought I could get rid of this ugly step of deleting some futures. This post is to explain that I could NOT do it. I still need to perform this "pruning" of the fractal so its branches don't condense into a single small zone.

Let's see an example of a "good fractal" that doesn't condense on a single point; this is how all the fractals used by the AI should look:

As you can see, the fractal density is not too high at any given point, and this is why the fractal can spread to almost half a lap ahead of the kart's position.

But without an exclusion principle, and as you will see in the video below, things can go terribly wrong with the fractal, making it condense in not-so-good zones and misleading the AI into easy-to-avoid traps:

Fractal condensing in a zone, making the AI blind to other, better options.

This is why I tried to simulate quantum physics using fractals before jumping into the fractal intelligence. What I learned from that adventure has proven to be critical to really applying fractal intelligence optimally.

The bottom line is quite surprising: in the karts example of fractal AI -but also in any other form of fractal growth algorithm I have tried- agents need to check for collisions with other karts on the track to detect and avoid those possibilities... but they also need to check for collisions with themselves!

Colliding with yourself will "automagically" dissolve any accumulation that may tend to appear, as the fractal growth laws dictate that collided branches (I may call a "future" a "branch" or a "spark" at times, depending on the mental example I am using at the moment: a plant that grows by bifurcating its branches, or a lightning bolt formed by a myriad of electric sparks moving around) will move to the position of another randomly chosen branch -or future-. This simple fractal growth rule -there are some more- makes the whole problem vanish.
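Read literally, that rule could be sketched like this (1-D positions for brevity; the fixed detection snapshot and every name here are my own choices, not the actual implementation): any branch that finds a sibling closer than the exclusion distance is respawned at the position of a randomly chosen branch.

```python
import random

def resolve_self_collisions(branches, min_dist, rng):
    """Apply the self-exclusion rule: a branch that 'collides' with a
    sibling (closer than min_dist) jumps to the position of another
    randomly chosen branch, dissolving accumulations over time."""
    snapshot = list(branches)          # detect collisions against a fixed copy
    out = list(branches)
    for i, p in enumerate(snapshot):
        crowded = any(j != i and abs(p - q) < min_dist
                      for j, q in enumerate(snapshot))
        if crowded:
            out[i] = snapshot[rng.randrange(len(snapshot))]
    return out

# Two branches piled up near 0 collide and get respawned;
# the isolated branch at 5 is left untouched.
out = resolve_self_collisions([0.0, 0.01, 5.0], min_dist=0.1, rng=random.Random(0))
print(out)
```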

But there is more fun to it: if you apply this self-collision detection at the beginning of all the futures being imagined in a frame, they will all collide with each other in a chaos, as they all start at the same initial point.

This is a problem nature solved in quantum physics with what Richard Feynman called "tiny little arrows", also known as the "wave function's phase angle". This quantum "artifact" means that a couple of photons leaving an electron will not likely collide with each other, as their "tiny little arrows" point in almost the same direction, so the difference has an almost zero length.

The length of such a sum of arrows is called the "amplitude" of the event. The probability of the event (the collision) is then this length squared, an even smaller number.

Again, to safely apply Pauli's exclusion principle to fractal intelligence, I am afraid I will ultimately need to account for some "future's phase angle", so that a couple of future positions will more likely collide when they have raced slightly different lengths and their arrows are no longer "in phase".
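As a toy illustration of those "tiny little arrows" (nothing like real QED; the wavelength and all the names are made up for the example), give each future a unit arrow whose angle grows with the distance it has raced; the collision amplitude is the length of the arrows' difference, and the probability is that length squared:

```python
import cmath

def collision_probability(length_a, length_b, wavelength):
    """Two futures that raced almost the same length carry arrows pointing
    almost the same way, so their difference -- and the collision
    probability, its length squared -- is almost zero."""
    arrow = lambda length: cmath.exp(2j * cmath.pi * length / wavelength)
    amplitude = abs(arrow(length_a) - arrow(length_b))
    return amplitude ** 2

print(collision_probability(10.0, 10.0, 1.0))  # in phase → ~0
print(collision_probability(10.0, 10.5, 1.0))  # half a wavelength apart → ~4
```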

I don't know about you, but I am really impressed by how similar quantum mechanics is to fractal AI, not to mention my other two fractal algorithms for maximizing/minimizing scalar functions. Yes, they are all slowly converging into the same quantum physics!

Let us go now to this new fractal AI video. Compared to the last one, points 8 and 9 of the list are now coded into the AI, but, as I commented, only to discover I still need exclusion and maybe phase, so I keep it labeled as "beta".

In the first lap, debug thinking was off, and everything seemed to go OK; not optimal, but quite OK. In the second lap, debug thinking is turned on, and then we can see how the fractal sometimes concentrates in small regions, leading to bad decisions, while at other times it spreads nicely to half a lap ahead, as it was intended to do.

When trying to judge how optimal the AI is now, please keep in mind that the kart is not trying to win any race; its only goal is to run "as fast as possible". This is why it usually prefers to open wide on curves: not because it is good for winning the race -it is not- but because it allows it to keep a high speed on the turns.

Finally, a word about the silly backwards-driving ending: after the bad decision halfway through the second lap, the kart starts driving backwards to un-trap itself -that was intelligent- but then it keeps going backwards until it finally crashes and breaks the toy. But why?

The answer is the same as before: the lack of an exclusion principle. When most of the futures go backwards, if you don't dissolve those accumulations with an exclusion principle, the AI will focus only on that possibility, being again blind to other less probable -but maybe more interesting- options; namely, driving forward again to speed up in the long term, something it was supposed to be able to do, as it was thinking 30 seconds ahead.

This blog was devoted to "Entropic Emotional AI" until some weeks ago. Now I call that "old" implementation of the AI the "linear" or "entropic" version, while my attention has shifted to "Fractal Emotional AI".

Why? What does this change in wording mean? Is "Fractal" any better than "Entropic"?

The short answer is that Fractal is much better: more powerful and simpler. It is the big brother of the now-weak Entropic AI. It performs much better than the previous one in terms of the emerging intelligent behaviours and, more importantly (or not), it is far more flexible and extendable than the old one ever was.

Let's compare the same frame of a similar simulation with both versions. The fractal and linear versions both used 500 futures, but the fractal was thinking 50 seconds ahead, while the linear version only allowed up to 20 seconds. Anyway, using more than 20 is a total waste of CPU, as you will see.

Let's start with the fractal version:

Fractal AI - 500 futures - 50 seconds

The most important difference is that the fractal version is capable of thinking 50 seconds ahead and more; there is no limit. The fractal growth is made in such a way that the whole track can be "scanned" in one single instant.

Now let's see how the linear/entropic version did:

Linear (entropic) AI - 500 futures - 20 seconds

The linear version is not capable of thinking even 20 seconds ahead; most of the futures it imagines end a few seconds after they start. The futures can't even reach a 5- or 6-second time horizon! That is the main difference.

In the linear version, a "crazy monkey" was in charge of driving the kart during those endless 20 seconds, moving the joystick randomly here and there. But none of the monkeys was able to keep the kart running for more than 5 or 6 seconds.
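How quickly those monkeys die is easy to reproduce with a toy 1-D stand-in for the kart (my own illustration, not the simulation's code): a random joystick is just a random walk, and a random walk wanders off the road in a handful of steps, no matter how long the horizon is.

```python
import random

def monkey_rollout(horizon_steps, rng, road_half_width=3.0, step=1.0):
    """Drive with random joystick moves and report how many steps the
    kart survives before leaving the road."""
    x = 0.0                               # lateral position on the road
    for t in range(horizon_steps):
        x += rng.uniform(-step, step)     # the crazy monkey at work
        if abs(x) > road_half_width:      # off the road: this future ends
            return t
    return horizon_steps

rng = random.Random(42)
lifetimes = [monkey_rollout(600, rng) for _ in range(500)]
print(max(lifetimes))  # none of the 500 monkeys reaches the 600-step horizon
```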

Crazy monkeys were never a good idea anyhow -I noted it in my first talk- but without the help of fractal growth, what else do you have?

These are the main differences:

Detecting the outcomes of your actions any number of seconds ahead means you can decide not to pass another kart because you don't like champagne: you can foresee that this action would lead you to win the race and then have to drink it... no way! The power of this unlimited time horizon is the fractal magic doing its work.

The fractal size -or its weight, if you think of it as a plant growing up- tells you how good or bad a given option is. This means there is no need for a strange "mood" or complicated "gain" feelings; just positive enjoy feelings and simple gain feelings.

The fractal growth is not governed by differential changes in the agent's state, so there is no need to store the "old" values of all the parameters.

Not using strange feelings means there is no need to calculate strange "stimuli" derived from the real one associated with each feeling. For goals to work in the linear version, I needed up to 3 of them. Now they are all gone.

Fractals are known for scanning a space optimally. To get a similar effect, if even possible, you would need a number of linear futures orders of magnitude bigger than with a fractal. Fractal AI is CPU-friendly, as CPU time depends linearly on the time horizon used. No NP problem here.

The "shape" of the fractal (or its laws of growth) has more degrees of freedom than the lines had. This means the fractal AI can adapt to different environments much better than the linear one ever could.

Some other secondary "artifacts" I needed to add to the linear version are now obsolete. The main ones are the grid size and the deletion of similar-ending futures (the entropic part of the algorithm), but also others like ReactionTime or Urgency in the gain feelings.

If you don't need to prune the repeated futures of each option, you don't need options at all. No need to decide which decisions will be your arbitrarily chosen options.

Also, there is no more need to control joystick "sensitivity": as there are no options to choose from, you are free to try out any random action just by having a fractal grow in that particular direction.

All this means that the fractal AI, although much more advanced and powerful than the entropic one was, is also much simpler. Fractals are simpler than lines, but goals, feelings and the rest of the ingredients are now also simplified to a minimum or even deleted from the code.

To get an idea of how good it can be, have a look at the "oldie but goldie" kart performing an almost perfect lap on the standard test track using an unfinished version of the fractal AI (items 8 and 9 are still not implemented; that is why it crashes at the end):

Some day items 8 and 9 will be ready; we will leave the beta stage at that moment and get to version 1.0, but there are more things on the way for V2.0, of course!

If you wish to compare with a similar "old AI" video, check this one. The time limit of 5-6 seconds is quite obvious on difficult sections, as the karts are not aware of the problems behind the next curve; they cannot "see" them at all:

Tonight four of my new "fractal-minded" rockets have been playing "Ninja fight". As before, you can compare it with the "linear/entropic" version of the algorithm here.

The rules are simple: if rocket "A" touches "B" with the tip of its main thruster flame, it takes energy from it.

There are no new "goals"; nothing instructed them to avoid others' flames or use theirs to fight and gain energy, in the same way no one told them how to land to fill the tank: they decided what to do and how to behave on their own, based only on how good or bad getting or losing energy is (a basic goal all players have).

The resulting behaviour, in my highly biased opinion (IMHBO?), is near perfect. The parameters were: 300 futures to scan the future consequences, thinking 20 seconds ahead. It was not quite enough: the yellow one needed a little more to survive a hard situation, and when, at the end of the video, I relaxed it to 200 futures and 15 seconds, the winner ended up crashing. My fault!

I will try again today with more futures -500 will be a sweet spot, I expect- so I can check "visually" for the behaviour and determine whether it "looks optimal" to me or not.

This is a very hard problem for any AI; they follow two really conflicting goals: take energy from others to avoid starving, but avoid being killed by them.

As they all share the exact same parameters, no one is better than another. I think it is an extreme MCDM (Multiple Criteria Decision Making) problem, so solving it right is a way to show an effective, general way to crack that family of (basically open) problems.

In the last third of the video, I switch on the visual debugging options for the white rocket, showing, superimposed, the traces of the paths it is thinking about (fractal paths) and red and green spots where the AI predicts good or bad outcomes.

Good outcomes correspond to paths in which it can take energy from others, as gaining energy is a basic goal, while red dots correspond to events in which the rocket loses energy (another rocket hits it) or places where it crashes into the walls.

It was a pity I relaxed the AI parameters after only one rocket was left. I wanted it to land just to have a "nice ending" for the video, but I shouldn't have relaxed them at all, as they were slightly too low for it to land. My condolences to the rocket family.

I have also kept some frames of the video so you can inspect in some detail what is going on under the hood.

Image 1 - This first image shows the white rocket's lose/gain feelings. As commented, red spots correspond to expected future losses, so it will actively avoid them, while green ones are gain feelings (getting energy is always a gain for it), so it will actively try to get to the green zones.

Image 1 - Gain and Lose Feelings

Image 2 - This one is quite a mess! On top of the red/green feeling spots, you see the fractal paths the intelligence is following. The most important thing to notice here is that the fractal gets denser in green areas, so better options are scanned more deeply in a very natural way.

Image 2 - Fractal paths

Image 3 - This one is interesting, as it shows how the fractal shape adapts in the hardest situations, like here, where the rocket will most probably crash into the ground if it doesn't react quickly. Compare it with the previous "relaxed" frame to spot the differences.

Basically, the branching/cloning process has sped up a lot, meaning the fractal bifurcates many more times per second. Each bifurcation is marked with a red and black dot, and this is what makes the paths in this frame so dense: they are bifurcating at almost every pixel.

This "adaptive" behaviour of the fractal itself is key to making a fractal algorithm do something useful, but the cloning process is not the only thing that needs dynamic balancing; there are other parameters that need something similar.

Image 3 - Stressing the rocket

Before going on with the algorithm development (I have some ideas to expand it a little further into... consciousness?), I plan to make some more videos: a better fight scenario (dropping bombs, maybe) and a cooperative scenario (a hive of bees working for the community and fighting hive enemies on their own).

Ah! And I still have to show you all a little about the "quantum physics" fractal, one that tries to mimic the QED of our friend Feynman. It is quite beyond my level of understanding of quantum physics, so don't expect "real physical behaviour", just something close to it (watching the Feynman lectures did help a little).

Yesterday I found a video that really caught my attention: starting from the fractal shape of our brain, the speaker, Wai H. Tsang, derives the fractal working of intelligence (hey! like my fractal intelligence!) and proposes a fractal way to model intelligence... a 90% match with my actual work!

The second part of the video goes through consciousness, the great missing part in my actual schema (ouch!), and from this point it jumps to the universe, religions, god... I have walked down this road too, with almost exactly the same results, except I used entropic principles to make the walk, while he used fractals. My trip was months before I jumped into fractals; that is why there is a difference. Anyhow, I agree with Wai 100%.

If you liked my fractal intelligence videos and want to know what makes this idea work so great (and be great by itself), please take this journey; you won't regret it.

This weekend I was fixing the fractal version of the AI to avoid those "silly decisions" that the first, alpha version was clearly making now and then.

The exponential growth of the fractals was badly defined, so after some days of thinking about it, I changed it and... voilà! I was right this time!

Now the fractal version of the emotional AI really shines, making the linear one pale in comparison.

Have a look at some fractal rockets dealing with 30 falling asteroids, all with uncertainty in their positions, so the scenario is comparable to the one used in the last post.

Have a look at the third one, for instance: it avoids the first falling rocks, then lands in an apparently risky place and, before being destroyed by the next rocks, jumps away safely... The funny part is that it really knew in advance that it was going to have time to fill up the fuel tank and fly away before being hit! With 20 seconds to think ahead, the AI can take such decisions seamlessly... and 20 seconds is only a chosen number; there is no problem with thinking 5 or 20 minutes ahead.

Well, to be honest, there is a small problem with thinking 5 minutes ahead: the actual version is quite inefficient (this video took more than 24 hours to generate on a very modest PC, using a single CPU thread, by the way), so before I delve into those interesting possibilities, I will need to slim down the process a little bit.

The good news is that, now that the fractal algorithm is as reliable as it is, I can move on to presenting more complex scenarios to the AI and watch it solve them. My next milestone will be about new goals to build a really complex behaviour; in particular, I plan to simulate a honey bee hive with bugs that try to steal the honey.

Bees are supposed to work together to keep the hive healthy, so they will react as a group to stop the bugs from getting the honey... committing suicide if it helps with this "transcendental" goal.

So how does this "Fractal Emotional Intelligence" really compare with the previous model, the "Entropic Emotional Intelligence"?

Yesterday it was worse, much worse; today it is almost as intelligent... or more. Intelligence is difficult to measure; the only way for me is to record a couple of long videos, watch them carefully and then use my intuition. A poor method, but what else is available?

You should start by watching the old "linear" intelligence dealing with a dangerous meteor shower in the previous post before going on. To be fair, I would suggest you watch the "Uncertainty" video, as it mimics the conditions used in the next videos (falling asteroids are seen by the rockets with some uncertainty, so where they will fall is fuzzy for them).

The linear intelligence was well developed when that video was made, and the resulting behaviour was excellent (from my highly biased point of view). The fractal version is still in alpha/beta; some internals are not 100% converted and tested in the new model, so keep in mind that this fractal version still has to grow up a little before you judge it.

That said, here you have a video of the same circumstances, but using fractal intelligence. Thinking debug is "on" in the first few seconds, so you can stop the video and inspect the fractal path the rocket is using (it is too chaotic to be "on" all the time).

It was cheating! There are fewer asteroids and fewer rockets, I know, but the idea with this first video was to show a little of the fractal paths being used. Take a closer look at one frame of the video: the lines bifurcate to create a fractal path that can be very long (in time) without problems (the linear version was limited to 5-10 seconds only; this video uses 20 s, but more can be used without problems).

Here you have a second video without those disturbing lines (oops, sorry, it is a "fractal entity"). It is still not a real match for the "linear" version, but it gets nearer.

As you can see, it works to some extent better than before, but the rockets also make stupid decisions. The first deaths are probably caused by uncertainty: rockets cannot predict exactly where the meteors are falling, so they can't avoid getting hit.

After that, there are no energy drops around, so they end up hungry -well, I depleted the energy levels manually at some point- and they needed to land urgently, but there was not much free space to land, and they disturb each other (and are still not "mature"), so more crashes arise.

This final phase is critical for the rockets, the hardest part, as they are low on energy and urgently need to attempt landings in difficult places.

Even with that in mind, the previous "linear" model would probably have done better, quite a bit better, but hey, it is a beta!

I planned to upload only one video per day until I had no more things to show you about those fractals, but I cannot start this series of posts without showing you one of my favourites: a fractal whose growth laws were designed to make it "sniff" towards the function's global maximum, go there, and then start a deep scan to find a better place.

A "maximizing plant" if you prefer.

I uploaded this function because it is special: most functions used to test these optimizing algorithms are designed to be continuous, and the best algorithms also need the function to be differentiable once or twice.

This fractal algorithm can surely be beaten on those functions by standard algorithms, but it can scan any kind of function (even random noise can be maximized to some degree) in any number of dimensions.

The function shown is clearly not continuous, and I deliberately placed a gap between the initial position and the global best, but the particle still converges quite directly to the best.
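The gap trick is easy to reproduce. Below is a much simpler gradient-free stand-in for the fractal scan (entirely my own toy code: scatter samples around the current best and recenter on the best one found); the point it shares with the video is that nothing requires continuity, so the search hops across the gap without trouble.

```python
import random

def f(x):
    """Deliberately discontinuous 1-D test function: a 'gap' of terrible
    values separates the start region from the global maximum at x = 6."""
    if 2.0 < x < 4.0:
        return -100.0              # the gap: no smooth path across it
    return -(x - 6.0) ** 2

def scatter_search(start, radius, rounds, samples, rng):
    """Scatter candidates around the current best and recenter on any
    improvement; no gradients, no continuity assumptions."""
    best = start
    for _ in range(rounds):
        for _ in range(samples):
            cand = best + rng.uniform(-radius, radius)
            if f(cand) > f(best):
                best = cand
    return best

best = scatter_search(start=0.0, radius=3.0, rounds=50, samples=20,
                      rng=random.Random(1))
print(round(best, 2))  # lands near 6.0 despite the gap at (2, 4)
```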

But what about convergence? Can you start from any initial point and still converge to the global optimum? Well, the fractal does quite a nice job scanning the state space, so the most I can say is: it will try hard to converge from any initial point.

It is good at converging, but I cannot formally prove it in any way; that is the naked truth (fractals are not so nice for proving things). Just have a look at the following video.

I will show you 400 initial positions and their convergence to the global best (blue dots correspond to a slow fractal growth, while yellow ones use a more aggressive approach):

As you can see, all points converge, slower or faster, to the right position... and the algorithm can be easily parallelized.

I hope you enjoyed watching!

UPDATE: I have added a benchmark of this algorithm in this post.

I am back writing after some months without activity, and it was not because I was busy writing a paper about the entropic intelligence (it is still unfinished); it was because I have been working hard on a "fractal" version of the intelligence engine.

Fractals are to algorithms what Bluetooth is (was?) to devices: everything works better and faster. The dark side of fractals is that they are hard to define and too chaotic, at least if you want to make real calculations with them (instead of generating nice fractal images).

In the process of converting all the ideas the entropic intelligence was based on into fractal equivalents, I have learned some tricks about dealing with them, and before presenting any video about fractal intelligence (it is still in beta, but works about as nicely as the previous "linear" version did), I would like to first show you some early experiments with fractals.

The first thing I tried to solve using just fractals was optimizing a function: given f(x,y,z)=r (all reals), find the point where the function reaches its lowest/highest value.

The funny thing is that fractals ended up being perfect for finding global maxima, not just local ones. In a very organic way, fractals can adapt their shape to look further or closer depending on the part of the function they are traveling through at each moment.

So, here you have a video of my first "fractal experiment":

As you can see, this "thing" travels towards the global maximum at an almost constant speed. It is far from perfect, basically because the radius of the "thing" doesn't yet change dynamically as it should, but it illustrates quite clearly some basic ideas I have used.

The first thing to note is that the fractals are "projected" from the actual position in several directions. Then they grow as plants would do, taking the needed energy from the soil (the function value serves as "nutrients") and branching or cloning when they are full of energy.

There are some more "growth laws" involved -like the number of branches being a constant- but basically, after some time passes, the fractals are weighed as you would weigh real plants, and the central position is moved toward the most grown-up zones.
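Those growth laws can be caricatured in a few lines (a loose 1-D reading of the description above, with my own simplifications: the branch count is constant, the function value plays the nutrient role, and "weighing the plants" becomes an energy-weighted average of the branch tips):

```python
import random

def fractal_step(center, f, n_branches, step, rng):
    """One growth cycle: project branches from the center in random
    directions, let each harvest 'nutrients' (the function value at its
    tip), then move the center toward the energy-weighted average tip."""
    tips = [center + rng.uniform(-step, step) for _ in range(n_branches)]
    energies = [f(t) for t in tips]
    floor = min(energies)
    weights = [e - floor + 1e-9 for e in energies]   # nutrients, kept >= 0
    return sum(w * t for w, t in zip(weights, tips)) / sum(weights)

f = lambda x: -(x - 3.0) ** 2      # toy soil: richest at x = 3
rng = random.Random(7)
center = -5.0
for _ in range(200):
    center = fractal_step(center, f, n_branches=16, step=1.0, rng=rng)
print(round(center, 2))  # the 'plant' has crawled near the maximum at 3
```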

I will not delve into the fine details now; I will just try to make videos of some of my experiments and comment a little on them. Surely there will be time to come back to those "intermediate" algorithms, as they look quite promising for some tasks.

On my list of things that could be tried with this kind of fractal are: globally optimizing functions (I have a nice candidate), optimally connecting cities with roads (early beta), fractal emotional intelligence (90% working, a V2 will come later) and, who knows, maybe fractally growing neural networks (learning by modifying the internal structure, not only the axon weights) or, in my wildest dream (but kind of working), simulating "simplified" QED physics with clouds of photons (this one would make a very nice screen saver, at least!). If you love Feynman diagrams, maybe you will love these videos too!

But all those fractal algorithms are currently unfinished -some in alpha, some in beta, some in nowhere land- so I will just stick to commenting on videos that show something "real" and running.

If using fractals to solve problems seems interesting to you, or if you already use them, feel free to comment here or contact me; I will be glad to know!