So-called "intelligent behaviour" can be defined in a purely thermodynamic language, using just "entropy". The formulas look pretty intimidating, but once you get the idea, coding it into a working AI is quite simple.
Fractalizing the same idea takes the entropy calculation away from the AI and makes it work much better.

Controlling an arbitrarily complex agent so it behaves "intelligently" using the thermodynamic concept of causal entropic forces is possible with a special kind of fractal decision tree, the "One Way" fractal algorithm I have commented on here before.

But controlling a group of agents in an intelligent way is not that simple. I have always managed to make several agents evolve at the same time in the same environment, but it was done just by giving each agent its own personal intelligence, while considering the rest of the agents as mobile parts of the environment: obstacles to be avoided in order to survive.

Here is a real "swarm intelligence" controlling a group of agents as one:

Finding a way for the fractal futures to travel back in time, as in Feynman diagrams and integrals, has proved to be a little tricky, but I feel I am quite near to solving it.

In the meantime, I want to share a "dead end" attempt to accomplish this by using successive layers of futures sent into the future at successive times, so they act as a series of concentric wave fronts that travel into the future reinforcing each other.

Let's start by showing a nice video of this idea at work:

Just after reading my own last post, I wondered why I hadn't let both algorithms solve the maze under similar conditions, so I could have a clear idea of how much better one is compared to the older one.

Wow, I didn't remember the linear "Entropic" AI being so limited! I always tested it in the open field, as I knew it didn't perform well in maze-like environments, but it looks like the Google car racing against an F1.

So just have a look at the fractal "One Way" version of the last post solving the maze:

Today I consider the "One Way" version of the fractal AI to be officially finished: I cannot do it any better!

I have recorded a short video directly from my screen so you can watch it at work in slow motion, generated in real time on a very slow PC (no GPU, no CUDA, no parallelization, no optimisation, just old and dirty standard code) so you can watch the fractal as it grows.

What you will see in the video are the "tips" of the branches of the fractal as it evolves in time (visit the post about "Fractal algorithm basics" for more info about what those "branches" are), like a "wave front" of imaginary futures scanning all the possibilities in front of you.

It really acts like a flock of silly birds, but a very special one: all the birds are totally blind, and they change direction and velocity totally at random, as if a crazy monkey were pushing the joysticks, but when one bird crashes, it is cloned to a safer position, the position of one of the "best birds" in the current flock. This process of cloning and collapsing is what defines the "fractal growth" commented on in the last post, and it makes it possible for the futures to reach the exit of a complicated maze quite automagically.
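In rough Python, that clone-on-crash rule could look something like this (an illustrative sketch only, with made-up names, not the actual code):

```python
import random

def step_flock(futures, is_crashed, reward):
    """One tick of the blind flock: every crashed future is cloned to
    the position of one of the 'best birds' still alive, chosen at
    random among the top half of the surviving flock by reward."""
    alive = [p for p in futures if not is_crashed(p)]
    if not alive:                  # everything crashed: nothing to clone from
        return futures
    best = sorted(alive, key=reward, reverse=True)[:max(1, len(alive) // 2)]
    return [p if not is_crashed(p) else random.choice(best) for p in futures]
```

Repeating this step while the surviving futures keep moving randomly is what makes the "flock" pour itself through the corridors of the maze.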

The video starts by showing you the final fractal paths used to solve the maze; then I switched the app into "Slow motion debug" mode and made a new step of the thinking process, so you can see the fractal forming. The params were 1000 futures and 100 seconds.

This post is meant to be the first in a series about the basic workings of what I now call "Fractal Growth based algorithms", including not only the "Fractal AI" I am working on, but also methods for function optimisation or even my biggest experiment, the "quantum physics simulator" shown in previous posts.

Why are those methods relevant?

I have a strong feeling that those methods could be revolutionary in several aspects and serve as a base for new mathematical tools built on the power of fractals: I think they have the power to dilute some NP problems into P and, even more interestingly, into O(n), making some hard problems "not so hard" anymore.

Nature is made up of fractals, as Mandelbrot showed us: a tree is a fractal, the coastline is a fractal, a mountain is also a fractal, etc. But we only use fractals for drawing fractals... they are the "lovely cat generator" of modern maths.

An algorithm is only "promising" until you perform some benchmarks against other similar state-of-the-art algorithms aimed at the same kind of problems.

Benchmarking a general AI algorithm is not easy, as it is supposed to generate "intelligent behaviour", which is not clearly defined. But I have also developed some other fractal algorithms aimed at finding the global minimum of a real function, i.e. "optimising" functions, and in that case the algorithm is easily comparable with others, so here we go.

Fractal AI is quite mature by now; it has reached quite a big milestone, being the first "finished" fractal AI I have been able to produce to date. It is still not "completed" in a general sense, but within some prefixed limitations, it is finished.

I stressed it with 30 asteroids using only 100 futures (in previous tests I needed 300 or 500 futures to get something like this) and it was no problem for the rocket at all. Then I tried 40 asteroids. Again no problem, just more stress.

50 was too much for it. The sky was full of asteroids and I didn't expect it to survive at first, but it did. It didn't manage to fly away from the shower, but it survived! It is a new record.

With more futures (about 200) it could have made it; in fact, it got quite near to escaping, but ended up badly landed, although alive. I will try to pass the 60 barrier, but I am not sure it is possible...

UPDATE: I tried with 200 futures, and it did the trick:

Once the Fractal AI is formally finished (regardless of how many improvements I could still implement on it), it is time to mention some of my wildest projects for the long term.

Long fractal shots

All of them are long shots, things I feel can be done using fractals in the exact same way I am applying them in the Fractal AI. Some of them are already "half done" but will need a deep rethinking; others are just ideas at a very early stage of maturity.

1) Fractal growing of neural networks
This one is quite interesting, as it could automate the process of building the network itself (adding neurons and connections fractally as needed, no more need to choose how many layers to use) at the same time as it learns from the examples it is exposed to. I have some sketches of it, but it is still not mature enough to start coding. I call it "Fractal Deep Learning".

2) Globally optimizing real functions of any kind
Finding local minima is tricky, more so if you don't impose the restriction that the function be continuous, but searching for the global minimum is even trickier.

Actually, I have three of them; none is optimal yet, but they can beat the most advanced optimizing algorithms I could find. I expect a great improvement once I code the ideas used in the latest Fractal AI into any of them.

3) Simulating Quantum Physics
Fractal intelligence is really a quantum algorithm translated down to standard computers with the use of fractals, retaining all the magic of quantum computing down to earth, as the computational cost scales linearly with the different parameters you can change to make the intelligence better.

So, when all those additions are implemented into the actual quantum physics (QED, more precisely), I expect the simulation to fit quantum physics quite closely.

This physics fractal is, at the same time, the perfect one for doing global optimization of functions: the best way to find the highest point in a city is to produce a storm over it and watch the lightning bolts fall. They will show you where the highest point is. This is the idea behind the algorithm, and that is why, in this code, agents are called "bolts" and futures "sparks" (but the underlying ideas are the same).
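As a toy illustration of the storm metaphor (my own minimal sketch, not any of the three real algorithms), a "bolt" surrounded by random "sparks" can walk across a function's landscape:

```python
import random

def bolt_minimize(f, start, sparks=50, radius=1.0, steps=200, seed=0):
    """Toy 'bolt and sparks' search: each step, throw random sparks
    around the bolt and move the bolt to the best spark found.
    The spark cloud shrinks when no progress is made, like a dying storm."""
    rng = random.Random(seed)
    bolt = list(start)
    for _ in range(steps):
        cloud = [[b + rng.uniform(-radius, radius) for b in bolt]
                 for _ in range(sparks)]
        best = min(cloud, key=f)
        if f(best) < f(bolt):
            bolt = best          # the lightning falls on the best point seen
        else:
            radius *= 0.9        # no progress: tighten the cloud
    return bolt
```

The real algorithms are, per the post, fractal growth processes rather than this single-bolt walk, but the "sparks guide the bolt" intuition is the same.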

In QED, bolts correspond to particles, while sparks correspond to virtual photons forming an electrically charged cloud around the particle. Clouds of different particles interact, so attraction, repulsion, the exclusion principle, etc. emerge from the rules of interaction used.

4) Connecting cities with a minimum cost
The old problem of optimally connecting cities with different productions using a minimum length of road is also perfect for these fractals. Fungi already do it this way!

Spain badly connected with fractals

I have some very old code about this; it was one of the first things I tried out with fractals. The small changes I needed to make it work properly are now quite clear.

This problem represents a whole class of NP-hard problems. For instance, the travelling salesman problem can be adapted to this same idea. This adaptation is already done and working, but it uses the weakest of the three optimizing algorithms, so it is not as impressive as you may think. I expect to be able to adapt it to the best of the three, the quantum one, and have a second try at it.

Random thoughts

-As the fractal base algorithm scales linearly, I expect all those problems to be solved in linear (so happily polynomial) time; no more NP problems, as long as you can adapt them to fractals.

-Once a problem is "fractalized", it naturally becomes a quantum algorithm, so it can be easily solved using real quantum computers (check for availability first!). I expect a massive number of algorithms to be adapted to the next wave of quantum computers and, who knows, maybe this is the easiest way to do it!

-If you don't have a quantum computer at hand, you can still easily parallelize the fractal algorithm and use net computing to apply as much CPU power as you can get to solving the problem. Not the same thing, but the best match.

-I know Richard Feynman knew about all this long before I did; he just didn't have the right tool at the time: the notion of "fractal" Mandelbrot brought to us. Feynman's "tiny little arrows" are what he left behind for others to follow (apart from the quite interesting "ant" episode). Thanks a lot, Richard!

-There are also other, even longer shots in mind I still haven't cracked, but they are not ready to leave the labs!

Real examples of the same fractal

Nature uses it for everything, big or small, and you can find these exact fractals everywhere:

Lightning tattooed this fractal on her! Is she a mathemagician?

A lightning strike imprints the same fractal.

Lightning or fractal? (by Diamond Hoo Ha Man)

Ant tracks are also fractals (but more advanced ones!)

Today I will show you what I consider to be the first finished version of the "Fractal Emotional Artificial Intelligence".

The Fractal AI is now parameter-less, meaning all those params I needed to set manually, like the "evapore ratio", are now governed by the AI.

Nice fractal thinking paths.

Before telling you the history behind it, please watch a video of Fractal AI V2.0 (2.0 for no particular reason except that it follows 1.9). It is just the same old boring pair of rockets trying to collect energy from drops as fast as possible, but this time they sport the newest version of the AI (visible from the last third of the video).

Change log

So what has changed since the last post? I was telling you about the implementation of an exclusion principle to avoid dense areas. I finished it, and I didn't need to use momentums -velocities- at all; the fractal nature of the algorithm makes them irrelevant.

But the resulting AI was not what I expected. It was terribly conservative; watching it, you would say it was so clever it didn't want to risk itself. The exclusion principle on its own was not enough.

Then I turned to the "evapore ratio" searching for a solution. This evapore ratio defines how many futures are replaced by others each second or, inversely, how long a future will last on average. If evapore is set to zero, the average life of futures tends to infinity, so they trace simple lines.
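Concretely, the rule can be read like this (my own minimal sketch; "replacing" a future here would be the cloning step described earlier):

```python
import random

def evaporate(futures, evapore_ratio, dt, rng=random.random):
    """Split the futures into survivors and evaporated ones for this tick.
    Each future evaporates with probability evapore_ratio * dt, so an
    evapore ratio of 0 means futures live forever and trace simple lines,
    while higher ratios give shorter, more fractal lifetimes."""
    survivors, evaporated = [], []
    for i, _ in enumerate(futures):
        (evaporated if rng() < evapore_ratio * dt else survivors).append(i)
    return survivors, evaporated
```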

So I tried to make this evapore ratio automatically adjusted by the AI, expecting it to solve this excess of prudence. It didn't work.

Then I realized something fundamental was missing. A second AI parameter was needed: the "sensibility" to the feelings that the agent's goals trigger.

It sounds strange at first, but it makes quite a lot of "sense" (pun intended): initially, I had an "Entropic Intelligence" that was linear and goal-less, just using "blind" entropic forces. Then I added feelings, and things changed for the better. Now I have added fractals as a substitute for linear paths, again for the better.

So, if a parameter called "Evaporation" was changing the paths from linear ones, when the evapore ratio was set to zero, to more and more fractally shaped paths, then I needed a second parameter, symmetrically defined, to control how "emotional" the algorithm goes.

I called it "Sense", and it basically makes the weight of all feelings be augmented or diminished in the decision-taking process. Using evapore and sense simultaneously really made the magic.

I started playing with different combinations of "Evapore" and "Sense" values, noticing I could simulate quite a range of different behaviours:

Evapore=0 and Sense=0
It was the old and first "Common Sense" linear algorithm I came to in my first posts.

Evapore=1 and Sense=0
It is a new kind of "Common Sense", one much better than the linear one, as it replaces boring lines with nice fractals.

Evapore=1 and Sense=1
It is the "Fractal Emotional Artificial Intelligence" in its pure form, the best of all four.

Finally, I tried to make the AI automatically adjust this "Sense" for me and, voila, it worked fine this time. The introduction of the "Sense" factor allowed me to fine-tune the AI and get a nice video.

But evaporation is not an easy parameter to choose manually; playing around with the fractality was dangerous, as more of it was better, up to a point at which it becomes dangerous. The same goes for the "Sense" parameter. There was a "sweet spot" for each situation.

The final step was to make BOTH of the parameters be simultaneously adjusted by the AI. It closed the loop. The AI was parameter-less for the first time.

I will show you some frames of another discarded video showing the effect of each parameter. Please note the two level meters on the player's left; they represent the evapore and sense ratios used at each moment, both ranging from 0 to 5 (again, the limit of 5 is for no particular reason).

Evapore ratio at its maximum.

Green dots account for good things to come, points where the fractal cloned, usually after a good point (a big drop) is reached, while red ones account for places where a future decided to "suicide" because the neighbourhood was too dense for it (the "exclusion principle" in action), quite like in a "game of life".

There are also red points where the future ends because of the agent's death, but they are another kind of "red point", as not all systems need to have a "dead state".

Goal-less fractal "Common Sense"

This last frame corresponds to Sense=0. The AI continuously goes into this "Common Sense" configuration when the situation gets too dangerous. It switches off the feelings, fixes the survival problem and, when done, switches the feelings back on and actively pursues the goals again. It is a "Fractal Common Sense", a safe autopilot that automatically engages when needed.

What a Fractal AI is

You can think of it in several different ways, all correct.

Personally, I think of it as a "fractalized" version of my previous "Entropic Emotional Intelligence". In the fractal version, all the complex heuristics added to make the linear version work are nicely gone.

Also, it is a fractal "Conway's Game of Life", a cellular automaton that decides which cell clones and which one evaporates using simple rules: nice places tend to make futures clone, while too-dense areas tend to make them evaporate. The resulting evolution of the cellular automaton is the way the futures spread into the future, exploring the possibilities optimally. Ah! And the exact rules used adapt continuously, so the growth is always maximized.
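Those two rules can be sketched minimally like this (illustrative Python with invented names and thresholds, not the actual implementation):

```python
def ca_step(cells, niceness, density, clone_above=0.8, evaporate_above=3):
    """One 'fractal game of life' update over a set of future positions:
    futures in nice places clone themselves, futures in overcrowded
    places evaporate. niceness(p) is a score in [0, 1]; density(p) is
    the number of neighbouring futures near p."""
    out = []
    for p in cells:
        if density(p) > evaporate_above:
            continue                  # too dense: this future evaporates
        out.append(p)
        if niceness(p) > clone_above:
            out.append(p)             # nice spot: the future clones itself
    return out
```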

But it also performs as a truly evolutionary algorithm, as the fractals travelling from different initial directions evolve by evaporating the worst-positioned futures while cloning -and mutating- the best ones, making decisions based on how fast each fractal engulfed the others.

Just to make it better, the algorithm is actually easily and naturally parallelizable, in a sense that really fits the paradigm of quantum computing. The algorithm is indeed a natural quantum algorithm, just downgraded to make it work on non-quantum computers, like my PC. If you wanted to "revert" it into its natural form of a quantum algorithm, some of the code would just need to be wiped out, but nothing added.

So basically the Fractal AI algorithm is a perfect mix of several "magic algorithms": an emotional one, a cellular automaton, a fractal one, and an evolutionary algorithm, all packed together into a single quantum algorithm simpler than any of the above alone.

It is worth noting that the resulting algorithm has a very low computational cost; the CPU time needed is low and, more importantly, it grows LINEARLY with all the parameters involved. Even when the AI controls several agents, the collision detection phase, the one that is usually O(n²), is also linear, O(n), in this case, thanks to quantum computing or, equivalently, thanks to its fractal nature.

I will add a last video where all those "debugging options" are switched on so you can see, in real time, how the fractal evolves, "sniffing" the possible outcomes of the different decisions. This time only 100 futures were used, so the AI is not capable of deciding optimally when there are a few dispersed drops.

What is next?

Being "perfect" doesn't mean you can't improve it. It means that, given the fixed parameters you supplied, it performs as well as can be done.

In those videos, I had to choose the following parameters:

1- Seconds to think in advance (20 or 30 seconds for instance).
2- The number of futures to use for each agent (200 for instance).
3- The precision or FPS (Frames per Second) of the simulation (20 FPS).
4- The goal set the agent has to follow (keep health, keep energy, get drops and love speeding).
5- The relative strengths of the feelings coming from the goals (1, 1, 5 and 0.5 in this example).
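The five choices above can be collected in a small configuration record (a hypothetical sketch; the field names are mine, and the defaults are just the example values from the list):

```python
from dataclasses import dataclass

@dataclass
class FractalAIParams:
    seconds_ahead: float = 20.0     # 1- how far into the future to think
    futures_per_agent: int = 200    # 2- how many imaginary futures per agent
    fps: int = 20                   # 3- precision of the simulation
    goals: tuple = ("health", "energy", "drops", "speed")    # 4- goal set
    goal_strengths: tuple = (1.0, 1.0, 5.0, 0.5)             # 5- strengths

    def ticks(self):
        """Simulated ticks per decision: seconds ahead times FPS."""
        return int(self.seconds_ahead * self.fps)
```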

One to three are related to how many resources you can invest in this AI, so the more the better, and it is OK for them to be manually set. The fourth is the goal set you want the AI to follow; again, you have to define it somehow manually. But the fifth one, the relative strength of the goals, can be directly controlled by the AI, and it will be in some days or weeks.
So the next intended steps are:

1- Make use of the fourth "transcendental feeling" to simulate deep collaboration, a "hive" of agents. I don't remember commenting about it on the blog, so I will have to explain it when I have the first videos.

2- Make the relative strengths of the current goals change automatically so the fourth feeling is maximized. It is similar to the old idea of "layered entropic intelligence" from my distant past, but now, under the umbrella of fractals, it is way simpler to code and use, and more CPU friendly too!

3- Try to make the low-level goals automatically created by the AI. This will be a tricky part, as there are a lot of combinations of sensorial data you can use, so a proper "evolutionary" solution will be needed. I already thought about it when I was developing the Entropic version and, again, using fractals makes it all much simpler to implement, so maybe I will be able to do it (but I am not quite sure now).

When all those points are coded, it will allow us something great: you define the problem by giving a simulation of it, then set some CPU limits (number of seconds and futures) and finally a high-level "transcendental" goal like "Win the race", "Take care of the colony" or "Bring me beer" (my personal favourite).

The AI will then construct the optimal goal set for you (an optimal base of it, using degree-one polynomials of the sensor outputs), give them the optimal relative strengths, adjust the AI's internal params of Evapore and Sense, again optimally for the given problem, and then use as many agents as it is controlling (hundreds of drones or just a single robot) to follow your high-level order, without any human help, in a deeply collaborative and intelligent way.

Today I had some spare time to work on this, and now I am presenting you the first videos of this exclusion at work. It improved the AI quite a lot! Just see the video and judge:

Please note that the track was quite wet and that I ordered it to suicide at the end. Also, this AI is still in a "beta" stage.

For instance, the actual coding of this "exclusion principle" only uses the agent's X and Y positions to detect dense areas. It is a position-only exclusion principle, but the complete one should also take angles and velocities into account, as it is the position and momentum of the particles that need to be "excluded".

Why is this necessary in an AI implementation? Well, we are not in a physical simulation, so I would only choose to implement it if I think it is for the better, as it is in this case: imagine two futures that started by choosing opposite initial directions. Both futures will initially diverge, as one kart will turn left while the other turns right. But they can then change direction, and both futures' traces can then cross at a given point. When future 1 is at this position, so is future 2, as they cross there, but they will be driving in different directions.
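The full exclusion test would therefore compare futures in phase space, not just in position; something like this (a hypothetical sketch with made-up thresholds):

```python
import math

def should_exclude(f1, f2, pos_eps=1.0, vel_eps=1.0):
    """Two futures count as 'the same' (so one can be excluded) only if
    they are close in position AND in velocity. Two traces that merely
    cross each other while heading in different directions are both kept.
    Each future is represented as ((x, y), (vx, vy))."""
    (p1, v1), (p2, v2) = f1, f2
    return math.dist(p1, p2) < pos_eps and math.dist(v1, v2) < vel_eps
```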

If I detected a crash in this situation, I would delete one of them, losing a genuine "different course" of the future, never knowing what would have happened "if".

Maybe future 1 was about to crash at the next moment, while future 2 was not. So, if in this situation I delete future 2, both futures will disappear: the good future 2 first, and then the other, crashing one.

I preferred to implement exclusion in two phases just to be able to tell how important momentum or angle is for the AI. Surely they both will be, but not as much of an improvement as the position exclusion alone is, I expect.

So the AI in the video still lacks some features needed to show 100% of its power:

a) The exclusion principle is only applied to positions, not to momentums or angles as it should be.

b) The implementation of position exclusion uses a hand-chosen distance parameter. It has to be the AI that chooses it automatically in the next versions.

c) The cloning rate is poorly chosen now; a simple heuristic is used. The AI will do it in the next versions, with the same solution as in b).

All that said, the kart is "almost" optimally driven for my highly biased taste, and when the above is addressed, I think it will drive almost optimally, given only one single parameter for the AI, the "evaporation rate", and some CPU power, the more the better.

But, to be honest, this was not the first video I recorded today... it was the third one.

The first one was a little disaster! I used 100% exclusion, meaning all futures were colliding with all the rest in the first seconds, along with a poor implementation of it. This made almost all of the futures collapse, and only 3 or 4 of them "survived" the first second or so.

Those few surviving futures were then the only ones being scanned deeper into the future, so the kart became blind to most of the available future outcomes, and driving blind is never a good idea.

Watch this "truly first" video before I delete it!

And here is video 2. It is nice, and shows some nice fractals, but the track had quite a lot more friction, so driving it was not as challenging as in the third video I showed you first.

This time I want to show you the Quantum Electrodynamics "fractal simulator" in a 3D environment, so you can compare it with the previous 2D simulation video.

Again, the proton is not a real proton. If it were, it would escape from the trap immediately, but I didn't want that to happen, so I kept it on screen by changing it a little.

I have redefined the colors this time and also added a soundtrack, as YouTube user Saelikho suggested to me. Good choice, Saelikho!

At some point in the video I switched to a top view, just to have a small 3D "sensation". After some time, I switch off the sparks on screen, so only the "macro" particles are visible.

It is funny to note that, even though those "big" particles can attract, repel or orbit around others, they don't really "exist" in the fractal definition; they don't play any significant role, they just represent the averaged position of all the little sparks in their cloud.

I am NOT a quantum physicist, not even a decent amateur, but I have a slight idea of how Quantum Electrodynamics works -thanks to Feynman's lectures and books- so I was tempted to try this out: can I simulate QED using only fractals?

Well, it was not that way that the idea started... it was really the opposite: I was trying different ways to maximize a function with fractals, then I got something that resembled QED a lot -to me- once I was able to undress it of all the "macro" misconceptions I was using.

Finally, the QED simulation made a decent function "solver" (maximizing or minimizing), but that is another story. The reason I am talking about QED is just to show you a simple video of how this strange thing behaves.

In the video, an electron and a particle similar to a proton (sorry, real protons are still in beta) are trapped in a 2D electromagnetic trap (the function being minimized is x²+y²+z²), so they attract each other, but at the same time they cannot be in the same spot, so a repulsion appears at small distances (I didn't place it there, I swear!) that makes them stay a little apart.

A cloud of "virtual photons" dances around the particles in the video, forming nice spirals and reinforcing one another into a stable configuration where both spirals rotate in opposite directions, in a nice and strange dance.

If you know more about QED than I do -not difficult at all- surely you will laugh a little at the simplicity of my approach. Please do comment about it, I am willing to be ashamed of my deep ignorance ;-)

Note: The wave equation's phase was off for this video for no particular reason. I will try to simulate a diffraction pattern using it and upload the video some day soon.

This example runs in a 2D Euclidean space, but other options allow me to simulate it over a 3D space or even a 4D Minkowski space, where time is just another dimension. The resulting shapes are not so different from those in this video, but it sounds terribly nice to use a "Minkowski space", doesn't it?

Tech note: This video was not recorded in "real time" from my computer, but it could have been; the simulation is lightning fast and could be parallelized if I weren't so lazy!

While reviewing the video of my first talk (I do it frequently, but this time I happily discovered it mentioned on a data analysis blog I was just reading), I noticed automatic subtitles were available, and quite nicely converted to text, to my surprise. Wow!

So I tried the automatic English translation and hey, you wouldn't believe it, but it was near perfect; even the little jokes were almost intact!

So if you were willing to listen to a simple explanation of this "Entropic Intelligence" algorithm, one you can easily understand and directly apply to your home-made kart simulation (or whatever it is), here you are. Just remember to activate the subtitles and then, clicking on the gear, select translate and choose your own language; it will do it quite well.

You will notice comments are not enabled on YouTube. This video belongs to the Elche University CIO, so I have no editing rights, but feel free to comment here if you feel the need; I will be happy to answer.

Quantum physics plays a big role in developing an artificial intelligence, more than many might think at first glance.

Back in the days when I was developing the Entropic Intelligence, I needed to discard similar-ending futures before evaluating the "future entropy" of a given option. This was because "entropy" always involves using a given minimum distance, so futures that end closer together than this must be considered one single future. If this reminded you of some quantum principle like exclusion, you are quite right.

Entropic Intelligence grouping future end points
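That grouping step is essentially this (a rough greedy sketch, not the actual code; the metric and the distance are whatever the problem dictates):

```python
import math

def distinct_endings(endpoints, min_dist):
    """Greedily group future end points: any end point closer than
    min_dist to an already accepted representative counts as the same
    future. The survivors are what the entropy measure gets to count."""
    kept = []
    for p in endpoints:
        if all(math.dist(p, q) >= min_dist for q in kept):
            kept.append(p)
    return kept
```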

When I started the conversion into a fractal intelligence, I first thought I could get rid of this ugly step of deleting some futures. This post is to explain that I could NOT do it. I still need to perform this "pruning" of the fractal so its branches don't condense into a single small zone.

Let's see an example of a "good fractal" that doesn't condense at a single point; this is how all fractals used by the AI should look:

As you can see, the fractal density is not too high at any given point, and this is why the fractal can spread to almost half a lap ahead of the kart's position.

But without an exclusion principle, and as you will see in the video below, things can go terribly wrong with the fractal, making it condense in not-so-good zones, misleading the AI into easy-to-avoid traps:

Fractal condensing in a zone, making the AI blind to other, better options.

This is why I tried to simulate quantum physics using fractals before jumping into the fractal intelligence. And what I learned from that adventure has proven to be critical to applying fractal intelligence really optimally.

The bottom line is quite surprising: in the karts example of fractal AI -but also in any other form of fractal growth algorithm I have tried- agents need to check for collisions with other karts on the track to detect and avoid those possibilities... but they also need to check for collisions with themselves!

Colliding with yourself will "automagically" dissolve any accumulation that may tend to appear, as the fractal growth laws dictate that collided branches (I may call a "future" a "branch" or a "spark" at times, depending on the mental example I am using at the moment: a plant that grows by bifurcating its branches, or a lightning bolt formed by a myriad of electric sparks moving around) will move to the position of another randomly chosen branch -or future-. This simple fractal growth rule -there are some more- makes the whole problem vanish.

But there is more fun to it: if you apply this self-crash detection at the beginning of all the futures being imagined in a frame, they will all collide with each other in a chaos, as they all started at the same initial point.

This is a problem nature solved in quantum physics by using what Richard Feynman called "tiny little arrows", or what is also known as the "wave equation's phase angle". This quantum "artifact" mandates that a couple of photons leaving an electron will not be likely to collide with each other, as their "tiny little arrows" point in almost the same direction, so the difference has an almost zero length.

The length of such an addition of arrows is called the "amplitude" of the event. The probability of the event (the collision) is then this length squared, an even smaller number.
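In code, those arrows are just unit complex numbers. A toy sketch (my own illustration, not part of the simulator): two in-phase arrows add up to length 2, while two opposite ones cancel out almost completely.

```python
import cmath

def amplitude(phases):
    """Each path contributes one of Feynman's 'tiny little arrows':
    a unit-length arrow at its phase angle. The event amplitude is
    the length of the sum of all the arrows."""
    return abs(sum(cmath.exp(1j * phi) for phi in phases))

def probability(phases):
    """Probability of the event: the amplitude squared."""
    return amplitude(phases) ** 2
```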

Again, to safely apply Pauli's exclusion principle to fractal intelligence, I am afraid I will ultimately need to account for some "future's phase angle", so that a couple of future positions will be more likely to collide when they have raced slightly different lengths, so the arrows are not "in phase" anymore.

I don't know about you, but I am really impressed by how similar quantum mechanics is to fractal AI, not to mention my other two fractal algorithms for maximizing/minimizing scalar functions. Yes, they are slowly converging into the same quantum physics one!

Let us go now to this new fractal AI video. Compared to the last one, points 8 and 9 of the list are now coded into the AI, but, as I commented, just to discover I still need exclusion and maybe phase, so I keep it labeled as "beta".

In the first lap, debug thinking was off, and everything seemed to go OK: not optimal, but quite OK. In the second lap, debug thinking is turned on, and then we can see how the fractal sometimes concentrates in small regions, leading to bad decisions, while at other times it spreads nicely to half a lap ahead, as it was intended to do.

When trying to judge how optimal the AI is now, please keep in mind that the kart is not trying to win any race; its only goal is to run "as fast as possible". This is why it usually prefers to open wide on curves: not because it is good for winning the race -it is not- but because it allows it to keep a high speed through the turns.

Finally, a word about the silly backwards-driving ending: after the bad decision halfway through the second lap, the kart starts driving backwards to untrap itself -that was intelligent- but then it keeps going backwards until it finally crashes and breaks the toy. But why?

The answer is the same as before: the lack of an exclusion principle. When most of the futures go backwards, if you don't avoid those accumulations with an exclusion principle, the AI will focus only on this possibility, again becoming blind to other less probable -but maybe more interesting- options, namely driving forward again to speed up in the long term, something it was supposed to be able to do, as it was thinking 30 seconds ahead.

This blog was devoted to "Entropic Emotional AI" until a few weeks ago. I now call this "old" implementation of the AI the "linear" or "entropic" version, while my attention has shifted to "Fractal Emotional AI".

Why? What does this change in wording mean? Is "Fractal" any better than "Entropic"?

The short answer is that Fractal is much better: more powerful and simpler. It is the big brother of the now weak Entropic AI. It performs much better than the previous one in terms of the emerging intelligent behaviours and, more importantly (or not), it is far more flexible and extendable than the old version ever was.

Let's compare the same frame of a similar simulation with both versions. Both the Fractal and Linear versions used 500 futures, but the Fractal one was thinking 50 seconds ahead, while the Linear version was only allowed up to 20 seconds. Anyway, using more than 20 is a total waste of CPU, as you will see.

Let's start with the fractal version:

Fractal AI - 500 futures - 50 seconds

The most important difference is that the fractal version is capable of thinking 50 seconds ahead, or more: there is no limit. The fractal growth is made in such a way that the whole track can be "scanned" in one single instant.

Now let's see how the linear/entropic version did:

Linear (entropic) AI - 500 futures - 20 seconds

The linear version is not capable of thinking even 20 seconds ahead; most of the futures it imagines end a few seconds after starting. The futures can't even reach a 5 or 6 second time horizon! That is the main difference.

In the linear version, a "crazy monkey" was in charge of driving the kart during those endless 20 seconds, moving the joystick randomly here and there. But none of the monkeys was able to keep the kart running for more than 5 or 6 seconds.
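This "crazy monkey" baseline is just a random rollout. A toy sketch of why it hits a wall after a few seconds (the 1-D "track" and the numbers are my own invention, purely to illustrate the effect):

```python
import random

def monkey_rollout(state, step, crashed, horizon):
    """Drive with purely random actions for up to `horizon` steps and
    report how many steps we survived before crashing."""
    t = 0
    while t < horizon and not crashed(state):
        action = random.uniform(-1.0, 1.0)  # the monkey yanks the joystick
        state = step(state, action)
        t += 1
    return t

# Toy 1-D "track": the kart crashes once it drifts past |x| > 5.
step = lambda x, a: x + a
crashed = lambda x: abs(x) > 5.0
survival = [monkey_rollout(0.0, step, crashed, 200) for _ in range(100)]
mean_survival = sum(survival) / len(survival)
```

Even with a generous 200-step horizon, the random walk typically wanders off the track long before reaching it, which is the same reason the monkeys rarely survived past 5 or 6 seconds on a real track.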

Crazy monkeys were never a good idea anyhow -I noted it in my first talk- but without the help of fractal growth, what else do you have?

These are the main differences:

Detecting the outcomes of your actions any number of seconds ahead means you can decide not to pass another kart because you don't like champagne: you can foresee that this action would lead you to win the race and then have to drink it... no way! The power of this unlimited time horizon is the fractal magic doing its work.

The fractal size -or its weight, if you think of it as a growing plant- tells you how good or bad a given option is. This means there is no need for strange "mood" or complicated "gain" feelings: just positive enjoys and simple gain feelings.

The fractal growth is not governed by differential changes in the agent's state, so there is no need to store the "old" values of all the parameters.

Not using strange feelings means there is no need to calculate strange "stimuli" derived from the real one associated with each feeling. For goals to work in the linear version I needed up to 3 of them. Now they are all gone.

Fractals are known for scanning space optimally. To get a similar effect, if it is even possible, you would need a number of linear futures orders of magnitude bigger than with a fractal. Fractal AI is CPU friendly, as CPU time depends linearly on the time horizon used. No NP problem here.

The "shape" of the fractal (or its laws of growth) has more degrees of freedom than lines had. This means the fractal AI can adapt to different environments much better than the linear version ever could.

Some other secondary "artifacts" I needed to add to the linear version are now obsolete. The main ones are the grid size and the deletion of similar-ending futures (the entropic part of the algorithm), but also some others like ReactionTime or Urgency in the gain feelings.

If you don't need to prune the repeated futures of each option, you don't need options at all. No need to decide which decisions will be your arbitrarily chosen options.

Also, there is no more need to control the joystick "sensitivity": as there are no options to choose from, you are free to try out any random action just by having the fractal grow in that particular direction.

All this means that the fractal AI, although much more advanced and powerful than the entropic one was, is also much simpler. Fractals are simpler than lines, but goals, feelings and the rest of the ingredients are also now simplified to a minimum or even deleted from the code.
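The weighting idea above -the size of a fractal branch telling you how good its option is- suggests a very simple decision rule. The following is my own guess at a minimal sketch of it, not the author's actual code:

```python
def choose_action(futures):
    """Each imagined future remembers the first action that spawned its
    branch; the action whose branches accumulated the most total weight
    (i.e. grew the biggest sub-plant) is the one actually taken."""
    scores = {}
    for first_action, weight in futures:
        scores[first_action] = scores.get(first_action, 0.0) + weight
    return max(scores, key=scores.get)

# Toy weights: turning right grew the heaviest part of the fractal.
futures = [("left", 0.2), ("right", 1.5), ("left", 0.4), ("brake", 0.1)]
best = choose_action(futures)
```

No mood, gain or entropy calculations appear anywhere: the "how good is this option" question is answered entirely by how much each branch managed to grow.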

To get an idea of how good it can be, have a look at the "oldie but goldie" kart performing an almost perfect lap on the standard test track using an unfinished version of the fractal AI (items 8 and 9 are still not implemented; that is why it crashes at the end):

Some day items 8 and 9 will be ready; at that moment we will leave the beta stage and get to version 1.0, but there are more things on the way for V2.0, of course!

If you wish to compare with a similar "old AI" video, check this one: the time limit of 5-6 seconds is quite obvious in difficult sections, as the karts are not aware of the problems beyond the next curve; they cannot "see" them at all:

Tonight four of my new "fractal minded" rockets have been playing "Ninja fight". As before, you can compare it with the "linear/entropic" version of the algorithm here.

The rules are simple: if rocket "A" touches "B" with the tip of its main thruster flame, it takes energy from it.

There are no new "goals": nothing instructed them to avoid others' flames or to use theirs to fight and gain energy, in the same way that no one told them how to land to fill the tank. They decided what to do and how to behave on their own, based only on how good or bad gaining or losing energy is (a basic goal all players have).

The resulting behaviour, in my highly biased opinion (IMHBO?), is near perfect. The parameters were: 300 futures to scan the future consequences, thinking 20 seconds ahead. It was not quite enough: the yellow one needed a little more to survive a hard situation, and when at the end of the video I relaxed the parameters to 200 futures and 15 seconds, the winner ended up crashing. My fault!

I will try again today with more futures -500 should be a sweet spot, I expect- so I can check the behaviour "visually" and determine whether it "looks optimal" to me or not.

This is a very hard problem for any AI: the rockets follow two really conflicting goals, taking energy from others to avoid starving while avoiding being killed by them.

As they all share exactly the same parameters, no one is better than another. I think it is an extreme MCDM (Multiple Criteria Decision Making) problem, so solving it right is a way to show an effective, general way to crack this family of (basically open) problems.

In the last third of the video, I switch on the visual debugging options for the white rocket, showing superimposed the traces of the paths it is thinking about (fractal paths), plus red and green spots where the AI predicts bad or good outcomes.

Good outcomes correspond to paths in which it can take energy from others, as gaining energy is a basic goal, while red dots correspond to events in which the rocket loses energy (another rocket hits it) or places where it crashes into the walls.

It was a pity I relaxed the AI parameters after only one rocket was left. I wanted it to land just to have a "nice ending" for the video, but I shouldn't have relaxed them at all, as they were then slightly too low for it to land. My condolences to the rocket family.

I have also kept some frames of the video so you can inspect in some detail what is going on under the hood.

Image 1 - This first image shows the white rocket's lose/gain feelings. As commented, red spots correspond to expected future losses, so it will actively avoid them, while green ones are gain feelings (getting energy is always a gain for it), so it will actively try to reach the green zones.

Image 1 - Gain and Lose Feelings

Image 2 - This one is quite a mess! On top of the red/green feeling spots, you see the fractal paths the intelligence is following. The most important thing to notice here is that the fractal gets denser in green areas, so better options are scanned more deeply in a very natural way.

Image 2 - Fractal paths

Image 3 - This one is interesting, as it shows how the fractal shape adapts in the hardest situations, like here, where the rocket will most probably crash into the ground if it doesn't react quickly. Compare it with the previous "relaxed" frame to spot the differences.

Basically, the branching/cloning process has sped up a lot, meaning the fractal bifurcates many more times per second. Each bifurcation is marked with a red and black dot, and this is what makes the paths in this frame so dense: they are bifurcating at almost every pixel.
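One simple way this adaptive speed-up could be modelled is to scale the bifurcation rate by how much danger the futures are reporting. The formula and numbers below are purely my own illustration of the idea, not the actual growth law:

```python
def branching_rate(base_rate, risk, gain=4.0):
    """Illustrative only: `risk` is the fraction of imagined futures that
    currently predict a bad outcome (0 = relaxed, 1 = desperate); the more
    risk, the more bifurcations per second of imagined time."""
    return base_rate * (1.0 + gain * risk)

relaxed = branching_rate(2.0, 0.05)   # open field: branch rarely
stressed = branching_rate(2.0, 0.90)  # about to crash: branch densely
```

Under this toy law, a rocket in trouble bifurcates several times more often than a relaxed one, which is exactly the dense-paths effect visible in the frame.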

This "adaptive" behaviour of the fractal itself is key to making a fractal algorithm do something useful, but it is not only the cloning process that needs to be dynamically balanced; there are other parameters that need something similar.

Image 3 - Stressing the rocket

Before going on with the algorithm's development (I have some ideas to expand it a little further into... consciousness?), I plan to make some more videos: a better fight scenario (dropping bombs, maybe) and a cooperative scenario (a hive of bees working for the community and fighting the hive's enemies on their own).

Ah! And I still have to show you a little about the "quantum physics" fractal, one that tries to mimic the QED of our friend Feynman. It is quite beyond my level of understanding of quantum physics, so don't expect "real physical behaviour", just something close to it (watching Feynman's lectures did help a little).

Yesterday I found a video that really caught my attention: starting from the fractal shape of our brain, the speaker, Wai H. Tsang, derives the fractal workings of intelligence (hey! like my fractal intelligence!) and proposes a fractal way to model intelligence... a 90% match with my actual work!

The second part of the video goes through consciousness, the great missing part in my actual schema (ouch!), and from this point it jumps to the universe, religions, god... I have walked down this road too, with almost exactly the same results, except that I used entropic principles to make the walk, while he used fractals. My trip was months before I jumped into fractals; that is why there is a difference. Anyhow, I agree with Wai 100%.

If you liked my fractal intelligence videos and want to know what makes this idea work so well (and it is great by itself), please take this journey; you won't regret it.

This weekend I was fixing the fractal version of the AI to avoid those "silly decisions" that the first, alpha version was clearly making now and then.

The exponential growth of the fractals was badly defined, so after some days of thinking about it, I changed it and... voila! I was right this time!

Now the fractal version of the emotional AI really shines, making the linear one pale in comparison.

Have a look at some fractal rockets dealing with 30 falling asteroids, all with uncertainty in their positions, so the scenario is comparable to the one used in the last post.

Have a look at the third one, for instance: it avoids the first falling rocks, then lands in an apparently risky place and, before being destroyed by the next rocks, it jumps away safely... the funny part is that it really knew in advance that it was going to have time to fill up the fuel tank and fly away before being hit! With 20 seconds to think ahead, the AI can take such decisions seamlessly... and 20 seconds is only a chosen number; there is no problem with thinking 5 or 20 minutes ahead.

Well, to be honest, there is a small problem with thinking 5 minutes ahead: the actual version is quite inefficient (this video took more than 24 hours to generate on a very modest PC, using a single CPU thread, by the way), so before I delve into those interesting possibilities, I will need to slim down the process a little bit.

The good news is that, now that the fractal algorithm is as reliable as it is, I can move on to presenting more complex scenarios to the AI and watch it solve them. My next milestone will be about new goals to build really complex behaviour; in particular, I plan to simulate a honey bee hive with bugs that try to steal the honey.

Bees are supposed to work together to keep the hive healthy, so they will react as a group to prevent the bugs from getting the honey... committing suicide if it helps with this "transcendental" goal.

So how does this "Fractal Emotional Intelligence" really compare with the previous model, the "Entropic Emotional Intelligence"?

Yesterday it was worse, much worse; today it is almost as intelligent... or more. Intelligence is difficult to measure; the only way for me is to record a couple of long videos and watch them carefully, then use my intuition. A poor method, but what else is available?

You should start by watching the old "linear" intelligence dealing with a dangerous meteor shower in the previous post before going on. To be fair, I would suggest you watch the "Uncertainty" video, as it mimics the conditions used in the next videos (the falling asteroids are seen by the rockets with some uncertainty, so where they will fall is fuzzy to them).

The linear intelligence was well developed when that video was made, and the resulting behaviour was excellent (from my highly biased point of view). The fractal version is still in alpha/beta; some internals are not 100% converted and tested in the new model, so keep in mind that this fractal version still has to grow up a little before being judged.

That said, here you have a video of the same circumstances, but using the fractal intelligence. Thinking debug is "on" in the first few seconds, so you can stop the video and inspect the fractal paths the rocket is using (it is too chaotic to be "on" all the time).

It was cheating! There are fewer asteroids and fewer rockets, I know, but the idea with this first video was to show a little of the fractal paths being used. Take a closer look at one frame of the video: the lines bifurcate to create a fractal path that can be very long (in time) without problems (the linear version was limited to 5-10 seconds only; this video uses 20s, but more can be used without problems).

Here you have a second video without those disturbing lines (oops, sorry, it is a "fractal entity"). It is still not a real match for the "linear" version, but it gets nearer.

As you can see, it works somewhat better than before, but the rockets also make stupid decisions. The first deaths are probably caused by uncertainty: the rockets cannot predict exactly where the meteors are falling, so they can't avoid getting hit.

After that, there are no energy drops around, so they end up hungry -well, I depleted the energy levels manually at some point- so they needed to land urgently, but there was not much free space to land, and they disturb each other (and are still not "mature"), so more crashes arise.

This final phase is critical for the rockets, the hardest part, as they are low on energy and it urges them to try landing in difficult places.

Even with that in mind, the previous "linear" model would probably have done better, quite a bit better, but hey, it is a beta!

I planned to upload only one video per day until I had no more things to show you about these fractals, but I cannot start this series of posts without showing you one of my favourites: a fractal whose growth laws were designed to make it "sniff" towards the function's global maximum, go there, and then start a deep scan to find an even better place.

A "maximizing plant" if you prefer.

I uploaded this function because it is special: most functions used to test these optimizing algorithms are designed to be continuous, and the best algorithms also need the function to be differentiable once or twice.

This fractal algorithm can surely be beaten on those functions by standard algorithms, but it can scan any kind of function (even random noise can be maximized to some degree) in any number of dimensions.

The function shown is clearly not continuous, and I deliberately placed a gap between the initial position and the global best, but the particle still converges quite directly to the best.

But what about convergence? Can you start from any initial point and still converge to the global optimum? Well, the fractal does quite a nice job of scanning the state space, so the most I can say is: it will try hard to converge from any initial point.

It is good at converging, but I cannot formally prove it in any way; that is the naked truth (fractals are not so nice for proving things). But just have a look at the following video.

I will show you 400 initial positions and their convergence to the global best (blue dots correspond to a slow fractal growth, while yellow ones use a more aggressive approach):

As you can see, all points converge, slower or faster, to the right position... and the algorithm can then be easily parallelized.

I hope you enjoyed watching!

UPDATE: I have added a benchmark about this algorithm in this post.

I am back writing after some months without activity, and it was not because I was busy writing a paper about the entropic intelligence (it is still unfinished); it was because I have been working hard on a "fractal" version of the intelligence engine.

Fractals are for algorithms what Bluetooth is (was?) for devices: everything works better and faster. The dark side of fractals is that they are hard to define and too chaotic, at least if you want to make real calculations with them (instead of generating nice fractal images).

In the process of converting all the ideas the entropic intelligence was based on into fractal equivalents, I have learned some tricks about dealing with them, and before presenting any video about fractal intelligence (it is still in beta, but works about as nicely as the previous "linear" version did), I would like to first show you some early experiments with fractals.

The first thing I tried to solve using just fractals was optimizing a function: given f(x,y,z)=r (all reals), find the point where the function reaches its lowest/highest value.

The funny thing is that fractals ended up being perfect for finding global maxima, not just local ones. In a very organic way, fractals can adapt their shape to look further or closer depending on the part of the function they are travelling through at each moment.

So, here you have a video of my first "fractal experiment":

As you can see, this "thing" travels towards the global maximum at an almost constant speed. It is far from perfect, basically because the radius of the "thing" doesn't yet change dynamically as it should, but it illustrates quite clearly some basic ideas I have used.

The first thing to note is that fractals are "projected" from the actual position in several directions. Then the fractals grow as plants would do, taking the energy they need from the soil (the function value serves as "nutrients"), and branching or cloning when they are full of energy.

There are some more "growth laws" involved -like the number of branches being constant- but basically, after some time passes, the fractals are weighed as you would do with real plants, and the central position is moved toward the most grown-up zones.
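The project-grow-weigh-move cycle described above can be sketched in a few lines. This is only my own toy interpretation under stated assumptions (uniform branch projection, clipping negative "nutrients" to zero, a simple weighted centroid as the "move"), not the actual growth laws:

```python
import random

def fractal_maximize_step(f, center, n_branches=50, radius=1.0):
    """One growth cycle of a hypothetical 'maximizing plant': project
    branches around the current position, weigh each by the 'nutrients'
    (function value) it found, and move the centre toward the zones
    that grew the most."""
    branches = [(center[0] + random.uniform(-radius, radius),
                 center[1] + random.uniform(-radius, radius))
                for _ in range(n_branches)]
    weights = [max(f(x, y), 0.0) for x, y in branches]  # no negative growth
    total = sum(weights)
    if total == 0.0:
        return center  # no branch found food: stay put this cycle
    cx = sum(w * x for w, (x, _) in zip(weights, branches)) / total
    cy = sum(w * y for w, (_, y) in zip(weights, branches)) / total
    return (cx, cy)

# Toy hill with its peak at (3, 2): the centre should drift toward it.
f = lambda x, y: 10.0 - (x - 3.0) ** 2 - (y - 2.0) ** 2
center = (0.0, 0.0)
for _ in range(40):
    center = fractal_maximize_step(f, center)
```

Note that nothing here uses a gradient: the centre moves simply because branches on the uphill side grow heavier, which is why this kind of scheme can cope with discontinuous or noisy functions.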

I will not delve into the fine details now; I will just try to make videos of some of my experiments and comment a little on them. Surely there will be time to come back to these "intermediate" algorithms, as they look quite promising for some tasks.

On my list of things that could be solved using this kind of fractal are: globally optimizing functions (I have a nice candidate), optimally connecting cities with roads (early beta), fractal emotional intelligence (90% working; a V2 will come later) and, who knows, maybe fractally growing neural networks (learning by modifying the internal structure, not only the axon weights) or, in my wildest dream (but kind of working), simulating "simplified" Q.E.D. physics with clouds of photons (this one would make a very nice screen saver, at least!). If you love Feynman diagrams, maybe you will love these videos too!

But all those fractal algorithms are actually unfinished -some in alpha, some in beta, some in nowhere land- so I will just stick to commenting on videos that show something "real" and running.

If using fractals to solve problems seems interesting to you, or if you already use them, feel free to comment here or contact me; I will be glad to hear from you!

Last November 19th, 2014, I gave a talk at the University of Murcia titled "Entropic Emotional Intelligence for video games".

The talk is in Spanish, and it is surely difficult to follow for foreigners -my Spanish is a little "local"- but I will try to subtitle it in English sooner rather than later, so come back if you missed it.

The intelligence I present here is exactly the one being discussed in the ongoing paper, except for a nice addition for negative feelings introduced later in the paper, what I call "relativized feelings", which is still not covered in this blog... I am becoming lazy!

Remember that you have the slides, both in Spanish and English, on the download page.

I have gone some months now without updating the blog, so many of you could think this EEI (Entropic Emotional Intelligence) project is idle at the moment, but this is far from true, so I am writing this post just to let you know where I am focusing now and what to expect in the near future.

The first thing to comment on is the paper. I stopped writing it about halfway through, but I have restarted the task this week.

The main reason I stopped writing is that I finally found a definitive "cure" for negative feelings in the emotional intelligence. It means we can now safely play with goals that generate negative enjoy (fear, for instance) without any pathological behaviour showing up. Really good news for the stability of the algorithm, and a great limitation that disappears.

These new formulas have already been implemented in the software, and they work better than the previous approach, but some fine details are still being tuned. During the next few days they should be tested and closed; maybe I will then publish a new software version along with a blog entry about those changes.

This new approach, the "relativistic mind" as I call it, forced me to change the internals of the goals, but they are now simpler and quite a bit clearer, so it looks like a nice addition to the algorithm to me.

Once those details are OK, I will make the corresponding changes to the paper and go all the way to finishing it. It will not cover some fancy expansions of the idea; I will leave all that for a possible new article, or a small eBook, who knows.

Just to show something in this post, here are some of the drawings I am preparing for the paper:

In this case, the drawing is a little obsolete -the code is still changing its shape- and I need to redraw it.