Thursday, 18 June 2015

Using Feynman Integrals

Some posts ago I commented on the strange need to use a "Pauli Exclusion Principle" in the Fractal AI in order to make it work as expected.

It may sound strange that quantum physics plays a role in building a fractal AI, but it is actually the real basis for such an algorithm.

Today I want to comment on my latest idea: using real Feynman integrals in the Fractal AI.

First attempt at drawing the fractal

Tuesday, 16 June 2015

Passing the "Asteroids Test"


Fractal AI is quite mature by now; it has reached a big milestone, being the first "finished" fractal AI I have been able to produce to date. It is still not "complete" in a general sense, but within some fixed limitations, it is finished.


I stressed it with 30 asteroids using only 100 futures (in previous tests I needed 300 or 500 futures to get something like this) and it was no problem for the rocket at all. Then I tried 40 asteroids. Again no problem, just more stress.

50 was too much for it. The sky was full of asteroids and I didn't expect it to survive at first, but it did. It didn't manage to fly away from the shower, but it survived! It is a new record.


With more futures (about 200) it could have made it; in fact, it came quite close to escaping, but ended up badly landed, yet alive. I will try to pass the 60 barrier, but I am not sure it is possible...

UPDATE: I tried with 200 futures, and that did the trick:

Tuesday, 9 June 2015

Long Term Projects

Once the Fractal AI is formally finished (regardless of how many improvements I could still implement on it), it is time to mention some of my wildest projects for the long term.

Long fractal shots

All of them are long shots, things I feel can be done using fractals in the exact same way I am applying them in the Fractal AI. Some of them are already "half done" but will need a deep rethinking; others are just ideas in a very early stage of maturity.

1) Fractal growing of neural networks
This one is quite interesting as it could automate the process of building the network itself (adding neurons and connections fractally as needed, with no more need to choose how many layers to use) at the same time it learns from the examples it is exposed to. I have some sketches of it, but it is still not mature enough to start coding. I call it "Fractal Deep Learning".

2) Globally optimizing real functions of any kind
Finding local minima is tricky, more so if you don't impose the restriction that the function be continuous, but searching for the global minimum is even trickier.



Actually, I have three such optimizers; none is optimal yet, but they can beat the most advanced optimization algorithms I could find. I expect a great improvement once I code the ideas used in the latest Fractal AI into any of them.
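
Just to give a flavour of the scheme, here is a minimal sketch of the clone-and-evaporate idea in Python (a toy written only for illustration; it is not any of my three real optimizers, and every name in it is made up):

import math
import random

def fractal_minimize(f, dim, walkers=100, steps=500, step_size=0.1, bounds=(-10.0, 10.0)):
    # A cloud of random walkers ("futures") starts inside the search box.
    cloud = [[random.uniform(*bounds) for _ in range(dim)] for _ in range(walkers)]
    for _ in range(steps):
        # Each walker takes a small random step.
        cloud = [[x + random.gauss(0.0, step_size) for x in w] for w in cloud]
        scores = [f(w) for w in cloud]
        best, worst = min(scores), max(scores)
        span = (worst - best) or 1.0
        new_cloud = []
        for w, s in zip(cloud, scores):
            p_survive = 1.0 - (s - best) / span
            if random.random() < p_survive:      # bad walkers tend to evaporate
                new_cloud.append(w)
                if random.random() < p_survive:  # good walkers tend to clone
                    new_cloud.append(list(w))
        # Refill the population back to its original size.
        cloud = [list(random.choice(new_cloud or cloud)) for _ in range(walkers)]
    return min(cloud, key=f)

# Example: a bumpy 2D function with many local minima.
print(fractal_minimize(lambda p: p[0] ** 2 + p[1] ** 2 + 10 * math.sin(3 * p[0]), dim=2))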

3) Simulating Quantum Physics
Fractal intelligence is really a quantum algorithm translated down to standard computers with the use of fractals, bringing all the magic of quantum computing down to earth, as the computational cost scales linearly with the different parameters you can change to make the intelligence better.


So, when all those additions are implemented into the actual quantum physics (QED, more precisely), I expect the simulation to fit quantum physics quite closely.

This physics fractal is, at the same time, the perfect one for doing global optimization of functions: the best way to find the highest point in a city is to produce a storm over it and watch the lightning fall. It will show you where the highest point is. This is the idea behind the algorithm, and that is why, in this code, agents are called "bolts" and futures "sparks" (but the underlying ideas are the same).

In QED, bolts correspond to particles, while sparks correspond to virtual photons forming an electrically charged cloud around the particle. Clouds of different particles interact, so attraction, repulsion, the exclusion principle, etc. emerge from the rules of interaction used.

4) Connecting cities with a minimum cost
The old problem of optimally connecting cities with different production levels using a minimum length of road is also perfect for these fractals. Fungi already do it this way!

Spain badly connected with fractals

I have some very old code for this; it was one of the first things I tried out with fractals. The small changes I need to make it work properly are now quite clear.

This problem represents a whole class of NP-hard problems. For instance, the travelling salesman problem can be adapted to this same idea. This adaptation is already done and working, but using the weakest of the three optimizing algorithms, so it is not as impressive as you may think. I expect to be able to adapt it to the best of the three, the quantum one, and have a second try at it.
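
To give an idea of what "adapting it" means, here is a toy sketch of a salesman tour attacked with the same clone-and-evaporate trick (again just an illustration and not the working adaptation; the two-city swap mutation and all the names are made up for this sketch):

import random

def tour_length(tour, dist):
    # dist is a square matrix of pairwise city distances; the tour wraps around.
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def fractal_tsp(dist, walkers=200, steps=2000):
    n = len(dist)
    cloud = [random.sample(range(n), n) for _ in range(walkers)]   # random tours
    for _ in range(steps):
        for t in cloud:                       # mutate: swap two random cities
            i, j = random.randrange(n), random.randrange(n)
            t[i], t[j] = t[j], t[i]
        lengths = [tour_length(t, dist) for t in cloud]
        best, worst = min(lengths), max(lengths)
        span = (worst - best) or 1.0
        # Long tours evaporate, short tours clone, population size stays fixed.
        weights = [1.0 - (l - best) / span + 1e-6 for l in lengths]
        cloud = [list(t) for t in random.choices(cloud, weights=weights, k=walkers)]
    return min(cloud, key=lambda t: tour_length(t, dist))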

Random thoughts

-As the base fractal algorithm scales linearly, I expect all those problems to be solved in linear (so happily polynomial) time: no more NP problems as long as you can adapt them to fractals.

-Once a problem is "fractalized", it naturally becomes a quantum algorithm, so it can be easily solved using real quantum computers (check for availability first!). I expect a massive number of algorithms to be adapted to the next wave of quantum computers and, who knows, maybe this is the easiest way to do it!

-If you don't have a quantum computer at hand, you can still easily parallelize the fractal algorithm and use network computing to apply as much CPU power as you can get to the problem. Not the same thing, but the closest match.

-I know Richard Feynman knew about all this long before I did; he just didn't have the right tool at the time: the notion of "fractal" that Mandelbrot brought to us. Feynman's "tiny little arrows" are what he left behind for others to follow (apart from the quite interesting "ant" episode). Thanks a lot, Richard!

-There are other, even longer shots in mind that I still haven't cracked, but they are not ready to leave the labs!

Real examples of the same fractal

Nature uses it for everything, big or small, and you can find these exact fractals everywhere:

Lightning tattooed this fractal on her! Is she a mathemagician?
A lightning strike imprints the same fractal.
Lightning or fractal? (by Diamond Hoo Ha Man)


Ant tracks are also fractals (but more advanced ones!)

Physarum polycephalum is a master!

Particle collisions too.
Fractal AI is of the exact same kind.

A finished version of Fractal AI

Today I will show you what I consider to be the first finished version of the "Fractal Emotional Artificial Intelligence".

The Fractal AI is now parameter-less, meaning all those params I needed to set manually, like the "evapore ratio", are now governed by the AI itself.

Nice fractal thinking paths.

Before telling you about the history behind it, please watch a video of the Fractal AI V2.0 (2.0 for no particular reason except that it follows 1.9). It is just the same old boring pair of rockets trying to collect energy from drops as fast as possible, but this time they sport the newest version of the AI (visible from the last third of the video).


Change log

So what has changed since the last post? I was telling you about the implementation of an exclusion principle to avoid dense areas. I finished it, and I didn't need to use momenta -velocities- at all; the fractal nature of the algorithm makes them irrelevant.

But the resulting AI was not what I expected. It was terribly conservative; watching it you would say it was so clever it didn't want to risk itself. The exclusion principle on its own was not enough.

Then I turned to the "Evapore ratio" searching for a solution. This evapore ratio defines how many futures are replaced by others each second or, inversely, how long a future will last on average. If evapore is set to zero, the average life of the futures tends to infinity, so they trace simple lines.
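
In code, my own reading of that definition (not the actual implementation) is simply this: the evapore ratio is the expected number of replacements per second, so a future lives 1/evapore seconds on average and, at a given FPS, gets replaced on each simulation step with probability evapore/FPS:

import random

def replaced_this_step(evapore, fps):
    # evapore = 0: futures never evaporate, live "forever" and trace simple lines.
    if evapore <= 0.0:
        return False
    return random.random() < evapore / fps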

So I tried to make this evapore automatically adjusted by the AI, expecting it to solve the problem of this excess of prudence. It didn't work.

Then I realized something fundamental was missing. There was a need for a second AI parameter: the "Sensibility" to the feelings that the agent's goals trigger.

It sounds strange at first, but it makes quite a lot of "sense" (pun intended): initially, I had an "Entropic Intelligence" that was linear and goal-less, just using "blind" entropic forces. Then I added feelings, and things changed for the better. Now I have added fractals as a substitute for linear paths, again for the better.

So, if a parameter called "Evaporation" was changing the paths from linear ones (when the evapore ratio was set to zero) to more and more fractally shaped ones, then I needed a second parameter, symmetrically defined, to control how "emotional" the algorithm gets.

I called it "Sense", and it basically makes all feelings' weights be augmented or diminished in the decision-taking process. Using evapore and sense simultaneously really made the magic.
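
To make it concrete, this is roughly how I picture the two knobs acting together (a toy formula written only for this post, not the exact one in the code): "Evapore" decides how often futures are recycled, as in the snippet above, while "Sense" scales how much the feelings triggered by the goals weigh in the score of each future:

def future_score(entropy_gain, feelings, strengths, sense):
    # sense = 0: pure goal-less "Common Sense", only the entropic term counts.
    # sense = 1: fully emotional, the goals' feelings weigh in at full strength.
    emotional_term = sum(f * s for f, s in zip(feelings, strengths))
    return entropy_gain + sense * emotional_term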

I started playing with different combinations of "Evapore" and "Sense" values, noticing I could simulate quite a range of different behaviours:

Evapore=0 and Sense=0
It is the old, original "Common Sense" linear algorithm I arrived at in my first posts.

Evapore=0 and Sense=1
It corresponds to the linear and emotional version, what I called "Entropic Emotional Intelligence".

Evapore=1 and Sense=0
It is a new kind of "Common Sense", one much better than the linear one, as it replaces boring lines with nice fractals.

Evapore=1 and Sense=1
It is the "Fractal Emotional Artificial Intelligent" it is pure form, the best of all four.

Finally, I tried to make the AI automatically adjust this "Sense" for me and, voilà, it worked fine this time. The introduction of the "sense" factor allowed me to fine-tune the AI and get a nice video.

But evaporation is not an easy parameter to choose manually; playing around with the fractality was risky, as more of it was better up to a point at which it became dangerous. The same goes for the "Sense" parameter. There was a "sweet spot" for each situation.

The final step was to make BOTH parameters be simultaneously adjusted by the AI. That closed the loop. The AI was parameter-less for the first time.

I will show you some frames of another discarded video showing the effect of each parameter. Please note the two level meters on the player's left; they represent the evapore and sense ratios used at each moment, both ranging from 0 to 5 (again, the limit of 5 is for no particular reason).

 Evapore ratio at its maximum.


Green dots account for good things to come, points where the fractal cloned, usually after a good point (a big drop) is reached, while red ones account for places where a future decided to "commit suicide" because the neighbourhood was too dense for it (the "exclusion principle" in action), quite like in a "game of life".

There are also red points where the future ends because of the agent's death, but those are another kind of "red point", as not all systems need to have a "dead state".

Goal-less fractal "Common Sense"


This last frame corresponds to Sense=0. The AI continuously goes into this "Common Sense" configuration when the situation gets too dangerous. It switches off feelings, fixes the survival problem and, when done, switches the feelings back on and actively pursues the goals again. It is a "Fractal Common Sense", a safe autopilot that automatically engages when needed.

What a Fractal AI is

You can think of it in several different ways, all correct.

Personally, I think of it as a "fractalized" version of my previous "Entropic Emotional Intelligence". In the fractal version, all the complex heuristics added to make the linear version work are nicely gone.

It is also a fractal "Conway's Game of Life", a cellular automaton that decides which future clones and which one evaporates using simpler rules: nice places tend to make futures clone, while overly dense areas tend to make them evaporate. The resulting evolution of the cellular automaton is the way the futures spread into the future, exploring the possibilities optimally. Ah! And the exact rules used adapt continuously so that the growth is always maximized.
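
Written as a cellular-automaton-style rule, one tick would look more or less like this (an illustrative sketch with made-up names; the real thresholds are not fixed numbers but are adapted on the fly by the AI):

import random

def tick(futures, reward, local_density, clone_threshold, density_limit):
    # reward(s): how "nice" the place reached by a future is.
    # local_density(s, futures): how crowded its neighbourhood is.
    survivors = []
    for s in futures:
        if local_density(s, futures) > density_limit:
            continue                    # too dense: the future evaporates (a red dot)
        survivors.append(s)
        if reward(s) > clone_threshold:
            survivors.append(s)         # nice place: the future clones (a green dot)
    # Resample back to the original population size so the cloud never dies out.
    return random.choices(survivors or futures, k=len(futures))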

But it also performs as a truly evolutionary algorithm, as the fractals travelling from different initial directions evolve by evaporating the worst-positioned futures while cloning -and mutating- the best ones, making decisions based on how fast each fractal engulfs the others.

Just to make it better, the algorithm is easily and naturally parallelizable, in a sense that really fits the paradigm of quantum computing. The algorithm is indeed a natural quantum algorithm, just downgraded to make it work on non-quantum computers, like my PC. If you wanted to "revert" it to its natural form as a quantum algorithm, some of the code would just need to be wiped out, with nothing added.

So basically the Fractal AI algorithm is a perfect mix of several "magic algorithms": an emotional one, a cellular automaton, a fractal one and an evolutionary algorithm, all packed together into a single quantum algorithm simpler than any of the above alone.

It is worth noting that the resulting algorithm has a very low computational cost: the CPU time needed is low and, more importantly, it grows LINEARLY with all the parameters involved. Even when the AI controls several agents, the collision detection phase, the one that is usually O(n²), is also linear, O(n), in this case thanks to quantum computing or, equivalently, thanks to the fractal nature of it.

I will add a last video where all those "debugging options" are switched on so you can see, in real time, how the fractal evolves, "sniffing" the possible outcomes of the different decisions. This time only 100 futures were used, so the AI is not capable of deciding optimally when there are only a few dispersed drops.


What is next?

Being "perfect" doesn't mean you can't improve it. It means that, given the fixed parameters you supplied, it performs as good as it can be done.

In those videos, I had to choose the following parameters:

1- Seconds to think in advance (20 or 30 seconds for instance).
2- The number of futures to use for each agent (200 for instance).
3- The precision or FPS (Frames per Second) of the simulation (20 FPS).
4- The goal set the agent has to follow (keep health, keep energy, get drops and love speeding).
5- The relative strength of the feelings coming from the goals (1, 1, 5 and 0.5 in this example).
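
Put together, the hand-chosen part of the setup in those videos boils down to something like this (the parameter names are mine, the values are the ones quoted above):

from dataclasses import dataclass

@dataclass
class FractalAIConfig:
    seconds_ahead: float = 25.0              # 1- how far to think in advance (20-30 s)
    futures: int = 200                       # 2- futures per agent
    fps: int = 20                            # 3- precision of the simulation
    goals: tuple = ("health", "energy", "drops", "speed")   # 4- the goal set
    strengths: tuple = (1.0, 1.0, 5.0, 0.5)  # 5- relative strength of each feeling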

Parameters one to three are related to how many resources you can invest in this AI, so the more the better, and it is OK for them to be set manually. The fourth is the goal set you want the AI to follow; again, you have to define it somehow manually. But the fifth one, the relative strength of the goals, can be directly controlled by the AI, and it will be in some days or weeks.
So the intended next steps are:

1- Make use of the fourth "transcendental feeling" to simulate deep collaboration, a "hive" of agents. I don't remember commenting on it on the blog, so I will have to explain it when I have the first videos.

2- Make the relative strengths of the actual goals change automatically so that the fourth feeling is maximized. It is similar to the old idea of "layered entropic intelligence" from my distant past, but now, under the umbrella of fractals, it is much simpler to code and use, and more CPU-friendly too!

3- Try to make the low-level goals automatically created by the AI. This will be a tricky part, as there are a lot of combinations of sensory data you can use, so a proper "evolutionary" solution will be needed. I already thought about it when I was developing the Entropic version and, again, using fractals makes it all much simpler to implement, so maybe I will be able to do it (but I am not quite sure yet).

When all those points are coded, it will allow something great: you define the problem by giving a simulation of it, then set some CPU limits (number of seconds and futures) and finally a high-level "transcendental" goal like "Win the race", "Take care of the colony" or "Bring me beer" (my personal favourite).

The AI will then construct the optimal goal set for you (an optimal basis of it using degree-one polynomials of the sensor outputs), give the goals their optimal relative strengths, adjust the AI's internal params of Evapore and Sense, again optimally for the given problem, and then use as many agents as it is controlling (hundreds of drones or just a single robot) to follow your high-level order, without any human help, in a deeply collaborative and intelligent way.

Thursday, 4 June 2015

Fractal AI: An Almost Perfect Lap!

The main problem with fractals is that they tend to grow exponentially, so keeping this growth under control is a difficult task in many aspects.

Some posts ago I commented on the need to combine Fractal AI with some kind of "Exclusion Principle", meaning that the fractal needed to avoid high-density zones in order to work optimally.

Today I had some spare time to work on this, and now I am presenting the first videos of this exclusion at work. It improved the AI quite a lot! Just watch the video and judge:



Please note that the track was quite wet and that I ordered it to commit suicide at the end. Also, this AI is still in a "beta" stage.

For instance, the current coding of this "exclusion principle" only uses the agent's X and Y positions to detect dense areas. It is a position-only exclusion principle, but the complete one should also take angle and velocity into account, as it is the position and momentum of the particles that need to be "excluded".

Why is this necessary in an AI implementation? Well, we are not in a physical simulation, so I would only choose to implement it if I think it is for the better, as it is in this case: imagine two futures that started by choosing opposite initial directions. Both futures will initially diverge, as one kart turns left while the other turns right. But they can then change direction, and both futures' traces can cross at a given point. When future 1 is at this position, so is future 2, as they cross there, but they will be driving in different directions.

If I were to detect a collision in this situation, I would delete one of them, losing a genuine "different course" of the future, never knowing what would have happened "if".

Maybe future 1 was just about to crash in the next moment, while future 2 was not. So, if in this situation I delete future 2, both futures will disappear: the good future 2 first, and then the other, crashing one.

I preferred to implement exclusion in two phases just to be able to tell how important momentum or angle is for the AI. Surely they both will be, but not as much of an improvement as the position exclusion alone is, I expect.
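
For the curious, the position-only phase amounts to something as simple as this sketch (with the hand-chosen distance mentioned in point b) below; in the real implementation only one of the two close futures gets deleted):

def too_dense(future_xy, all_futures_xy, min_distance):
    # Position-only exclusion: a future is marked for "suicide" if another future
    # sits closer than min_distance in (X, Y); angle and momentum are ignored for now.
    x, y = future_xy
    for ox, oy in all_futures_xy:
        if (ox, oy) != (x, y) and (ox - x) ** 2 + (oy - y) ** 2 < min_distance ** 2:
            return True
    return False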

So the AI in the video still lacks some features needed to show 100% of its power:

a) The exclusion principle is only applied to positions, not to momenta or angles, as it should be.

b) The implementation of position exclusion uses a hand-chosen distance parameter. It has to be the AI that chooses it automatically in the next versions.

c) The cloning rate is poorly chosen; a simple heuristic is used for now. The AI will handle it in the next versions, with the same solution as in b).

All that said, the kart is "almost" optimally driven for my highly biased taste, and when the above is addressed, I think it will drive almost optimally, given only one single parameter for the AI, the "evaporation rate", and some CPU power, the more the better.

But, to be honest, this was not the first video I recorded today... it was the third one.

The first one was a little disaster! I used 100% exclusion, meaning all futures were colliding with all the rest in the first seconds, along with a poor implementation of it. This made almost all of the futures collapse, and only 3 or 4 of them "survived" the first second or so.

Those few surviving futures were then the only ones being scanned deeper into the future, so the kart became blind to most of the available future outcomes, and driving blind is never a good idea.

Watch this "truly first" video before I delete it!



And here is video 2. It is nice and shows some nice fractals, but the track had quite a lot more friction, so driving it was not as challenging as in the third video I showed you first.

Tuesday, 2 June 2015

3D Fractal

This time I want to show you the Quantum Electrodynamic "fractal simulator" in a 3D environment, so you can compare it with the previous 2D simulation video.

Again, the proton is not a real proton. If it were, it would escape from the trap immediately, but I didn't want this to happen, so I kept it on screen by changing it a little.


I have redefined the colours this time and also added a soundtrack, as YouTube user Saelikho suggested. Good choice, Saelikho!

At some point in the video I switch to a top view, just to get a small 3D "sensation". After some time, I switch off the sparks on screen, so only the "macro" particles are visible.

It is funny to note that, even though those "big" particles can attract, repel or orbit around others, they don't really "exist" in the fractal definition; they don't play any significant role, they just represent the averaged position of all the little sparks in their cloud.

Monday, 1 June 2015

Quantum fractal simulator

I am NOT a quantum physicist, not even a decent amateur, but I have a slight idea of how Quantum Electrodynamics works -thanks to Feynman's lectures and books- so I was tempted to try this out: can I simulate QED using only fractals?

Well, that was not the way the idea started... it was really the opposite: I was trying different ways to maximize a function with fractals, and then I got something that strongly resembled QED -to me- once I was able to strip it of all the "macro" misconceptions I was using.

In the end, the QED simulation made a decent function "solver" (maximizing or minimizing), but that is another story. The reason I am talking about QED is just to show you a simple video of how this strange thing behaves.

In the video, an electron and a particle similar to a proton (sorry, real protons are still in beta) are trapped in a 2D electromagnetic trap (the function being minimized is x²+y²+z²), so they attract each other, but at the same time they cannot be in the same spot, so a repulsion appears at small distances (I didn't place it there, I swear!) that makes them stay a little apart.

A cloud of "virtual photons" dances around the particles in the video, forming nice spirals and reinforcing one another into a stable configuration where both spirals rotate in opposite directions, in a nice and strange dance.
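
For a rough idea of the moving parts, here is a toy outline of just the trap part, with made-up names and no interaction between the two clouds (so not the actual simulator): sparks random-walk, the ones landing at lower values of the trap function survive and clone, and the "big" particle is nothing but the average of its spark cloud:

import random

def trap(x, y):
    # The trap from the post, with z fixed at 0 in the 2D case: x² + y² (+ z²).
    return x * x + y * y

def step_cloud(sparks, step=0.05):
    # Sparks random-walk; the half landing at lower trap values survives and clones,
    # the rest evaporates, so the cloud drifts towards the trap centre.
    moved = [(x + random.gauss(0, step), y + random.gauss(0, step)) for x, y in sparks]
    moved.sort(key=lambda p: trap(*p))
    best_half = moved[: len(moved) // 2]
    clones = [random.choice(best_half) for _ in range(len(sparks) - len(best_half))]
    return best_half + clones

def particle_position(sparks):
    # The "macro" particle is just the averaged position of all its little sparks.
    return (sum(x for x, _ in sparks) / len(sparks),
            sum(y for _, y in sparks) / len(sparks))

cloud = [(random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)) for _ in range(200)]
for _ in range(100):
    cloud = step_cloud(cloud)
print(particle_position(cloud))   # drifts towards the trap centre (0, 0)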

If you know more about QED than I do -not difficult at all- surely you will laugh a little at the simplicity of my approach. Please do comment on it; I am willing to be shamed for my deep ignorance ;-)


Note: the wave equation's phase was switched off for this video for no particular reason. I will try to simulate a diffraction pattern using it and upload the video some day soon.

This example runs in a 2D Euclidean space, but other options allow me to simulate it over a 3D space or even a 4D Minkowski space, where time is just another dimension. The resulting shapes are not so different from those in this video, but it sounds terribly nice to use a "Minkowski space", doesn't it?

Tech note: this video was not recorded in "real time" from my computer, but it could have been; the simulation is lightning fast and could be parallelized if I weren't so lazy!