## Friday, 6 May 2016

### Pathfinding problem

In my first contact with professor Talbi, he proposed that I try to solve the general pathfinding problem with my algorithms, as the examples I showed in my talk were quite similar to solving it.

The pathfinding problem consists of finding the shortest path from point A to point B, and it is useful not only for programming robots to go here and there; many other problems reduce to finding such a path.

So I borrowed time here and there to prepare a couple of videos, and yesterday I had a second meeting with him to show them, along with the real-time example where the agent follows the mouse. Here is the video I prepared for the meeting:

There is a very nice way to solve it when the points are limited to some "cities" and special points (cross-roads), and you can only move between points by using "roads". In this case you are finding the shortest path over a weighted directed graph, like in a GPS navigation app where you can only go from one cross-road to another using the existing road sections, and the beautiful, simple Dijkstra's algorithm solves it.
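The graph case mentioned above can be sketched in a few lines with Dijkstra's algorithm and a priority queue. A minimal sketch (the graph format, node names, and edge costs are mine, just for illustration):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path over a weighted directed graph.
    graph: dict mapping node -> list of (neighbour, edge_cost) pairs."""
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, a shorter route was already found
        for neighbour, cost in graph[node]:
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                prev[neighbour] = node
                heapq.heappush(queue, (nd, neighbour))
    # Reconstruct the path by walking back from the goal
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Toy road network: edges are (destination, cost)
roads = {
    "A": [("B", 4), ("C", 1)],
    "B": [("D", 1)],
    "C": [("B", 2), ("D", 5)],
    "D": [],
}
path, cost = dijkstra(roads, "A", "D")
# path == ["A", "C", "B", "D"], cost == 4
```

Note how the detour A → C → B (cost 3) beats the direct road A → B (cost 4), which is exactly the kind of decision a GPS app makes.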

The other case where the pathfinding problem can be solved nicely is when the geometry of the walkable zones is simple and you can triangulate it, so that the allowed area is fully covered with non-overlapping triangles. In this case you can use an any-angle path planning algorithm.

But in my code, there is no need for the areas to be triangulated; the frontiers can be quite rough or even fuzzy, as in the video, where I added gray areas where the agent can walk, but at a cost (it dislikes being on gray areas).

As you can see, the power of the algorithm is based on scanning the space: the fractal is able to cover the whole space quite easily while, at the same time, concentrating on the more promising paths until the end point is reached.

At the end of the video, the energy of the "robot" was very low, with no gas left for another go, so I halted it and added energy drops along the paths. As there is a "keep energy level high" goal always active in my agents, it detects the drops and changes its path so it can feed and refill its energy tank.

So the "shortest path" is not exactly the path being found, instead it is managing a set of goals and trying to find a path where all of them are maximised at the same time, a multi-objective friendly path.

Before going to sleep I prepared a second video where a rocket and a robot try to find the shortest path while avoiding collisions, as they cause damage (and a "keep health level high" goal is also in the pack). The collaborative code is not 100% done; strange things happen with 3 or 4 agents and need revisiting, but two agents were fine for the current implementation:

Basically, the fractal AI I am using is quite capable of solving this problem (given enough CPU and time) in a continuous way, as in this case, but a complete path can also be extracted easily from any iteration of the algorithm that reaches point B, if needed.

1. Hello, I am very interested in your research and experiments. Have you tried to use the causal entropic force formalized by AD Wissner-Gross to find paths? I would like to see the algorithm if it is possible. Thank you very much. Pierre

1. Hi Pierre and thanks for your interest.

This post is exactly about solving this pathfinding problem using "causal entropic forces", but Alexander's original formulas for entropic forces are not enough, as they only integrate over possible paths weighted by the probability of each path being taken by the system.
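For reference, the original formulation (as I recall it from Wissner-Gross and Freer's paper) defines a causal path entropy over the possible futures of horizon τ, and takes the force as its gradient:

```latex
% Causal path entropy over futures of horizon \tau:
S_c(\mathbf{X}, \tau) = -k_B \int_{x(t)} \Pr\!\big(x(t) \mid x(0)\big)\,
    \ln \Pr\!\big(x(t) \mid x(0)\big)\, \mathcal{D}x(t)

% Causal entropic force, with T_c playing the role of a temperature:
\mathbf{F}(\mathbf{X}_0, \tau) = T_c\, \nabla_{\mathbf{X}}\,
    S_c(\mathbf{X}, \tau)\,\Big|_{\mathbf{X}_0}
```

Note there is no term here for any external goal; the force only pushes the system towards states with many reachable futures, which is exactly the limitation discussed above.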

It means the resulting behaviour of this AI is focused on keeping as many different futures available to the system as possible, i.e. keeping the system up and running as long and as steadily as possible, but you cannot define any other goal for the AI to follow, such as, in this case, "try to be as near the target position as you can".

My implementation differs from Alexander's in that I added a potential associated with each system state, so I could redefine the entropy to account for this potential and make the AI follow my "orders".

In this case, for any state s, I used:

Potential(s) = 1/(1+Dist_From_Target_To(s))

This is a positive function that grows as you approach your goal, and adding it into the entropy formulas is equivalent to asking the AI to maximise this potential value over time, i.e. to find a way to reach the target point.
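The potential above is easy to sketch for 2D points; a minimal version (the Euclidean distance and the function name are my assumptions):

```python
import math

def potential(state, target):
    """Goal potential: 1 exactly at the target, decaying towards 0
    as the distance from the target grows."""
    dist = math.hypot(state[0] - target[0], state[1] - target[1])
    return 1.0 / (1.0 + dist)

p_at_target = potential((0.0, 0.0), (0.0, 0.0))  # 1.0, the maximum
p_far_away = potential((3.0, 4.0), (0.0, 0.0))   # 1/(1+5), five units away
```

Because the function is bounded in (0, 1] and strictly increases as the agent closes in, maximising it over time is the same as steering towards the target.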

My first way to add this potential was using "feelings" that grow as you get nearer to your goals, so the potential was something like an "emotional potential".