Monday 1 August 2016

What is Consciousness?

A few days ago I wrote a little about how this fractal AI works; it was not too detailed, and the really important details were intentionally left out. I promise to fill in the gaps shortly so you can try the fractals yourself, but not now.

Today I want to give an overview of what the "complete fractal AI algorithm" could look like in a few months, the ideas I am currently working on, and especially some random thoughts about consciousness.

The part I am now working on is how to add memory to the fractal AI. So far, the fractal AI has been totally memory-less, meaning it does not learn from experience at all. I now call this a pure instinct-driven or intuitive mind. When you ask this AI something, it thinks about the problem from scratch and gives you an answer that is good enough for moving through its environment intelligently: a "real time" decision-making algorithm good enough for many tasks.
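To make "deciding from scratch" a bit more concrete, here is a minimal Python sketch of a memory-less decision loop: every time an action is requested, it samples a batch of random futures and keeps the first move of the best-ending one. The helpers (`step`, `score`) and the toy task are my illustrative assumptions only, not the actual fractal algorithm:

```python
import random

# Illustrative memory-less ("intuitive") decision loop: simulate a batch
# of random futures from scratch and return the first move of the best
# one. Nothing is stored between calls, so nothing is ever learned.

def decide(state, actions, step, score, horizon=20, n_futures=200):
    best_action, best_score = None, float("-inf")
    for _ in range(n_futures):
        first = random.choice(actions)        # first move of this future
        s = step(state, first)
        for _ in range(horizon - 1):          # random rollout afterwards
            s = step(s, random.choice(actions))
        if score(s) > best_score:             # keep the best-ending future
            best_action, best_score = first, score(s)
    return best_action                        # nothing is remembered

# Toy usage: walk a 1-D position towards a target at x = 10.
action = decide(state=0.0,
                actions=[-1.0, 0.0, +1.0],
                step=lambda s, a: s + a,
                score=lambda s: -abs(10.0 - s))
print(action)  # most likely +1.0: the move towards the target
```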

But while driving a rocket in a difficult environment is hard, it can be done with just an intuitive fractal AI. Most NP-hard problems (problems where the time needed to solve them is believed to grow exponentially with the size of the problem) are not so easy to crack with pure intelligence alone; usually you need something more to guide the intuition.



Memory fractal

Trying to solve one of those hard problems, I came to the conclusion that I needed to let the algorithm play with the problem for a while so it could somehow memorise those decisions that, in retrospect, proved to be right over time, and use all those memories to help the AI decide better and better as more experiences were stored and processed.

I managed to solve the problem with a simplified version of this idea; now I am working on generalising the method into a truly fractal model of memory that will work in conjunction with the "memory-less" AI: the memory-less AI will create the memories, and the memories will help the AI by showing it paths of successive decisions that, in similar circumstances, worked fine. They act as a new goal the AI has to follow, a goal whose potential is based on how similar the current state is to those in memory, and on how good or bad those remembered outcomes were.
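To give a flavour of how such a memory-based goal could work, here is a tiny Python sketch: each memory is a (past state, outcome) pair, and the potential of the current state is a similarity-weighted average of the remembered outcomes. The Gaussian similarity and the memory format are only illustrative assumptions; the real memory fractal is not coded yet:

```python
import math

# Sketch of "memories as a new goal": the potential of a state is the
# similarity-weighted average of the outcomes of remembered states.

def similarity(a, b, bandwidth=1.0):
    """Gaussian similarity between two state vectors."""
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-d2 / (2.0 * bandwidth ** 2))

def memory_potential(state, memories):
    """memories: list of (past_state, outcome) pairs, outcome in [-1, +1]."""
    weights = [similarity(state, past) for past, _ in memories]
    total = sum(weights)
    if total == 0.0:
        return 0.0                      # nothing similar remembered yet
    return sum(w * outcome
               for w, (_, outcome) in zip(weights, memories)) / total

# Toy usage: states close to a "good" memory get a positive potential.
memories = [((0.0, 0.0), +1.0),   # things went well near the origin
            ((5.0, 5.0), -1.0)]   # things went badly near (5, 5)
print(memory_potential((0.5, 0.2), memories))   # close to +1
print(memory_potential((4.8, 5.1), memories))   # close to -1
```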

I cannot show you anything about it yet, as it is not coded for the rocket case (I use the rockets as my general test case, as it is a really complete problem for trying new ideas) and the preliminary use I am making of it is still not working properly. Basically, I need to use a fractal model of memory instead of the more classical one I am using now; until then, I only have clues about how it should work out.

But I can tell you something interesting: my "perfect fractal", which I called the "Feynman fractal" because it allowed time travelling (in Feynman integrals and Feynman diagrams, particles and paths can travel forwards or backwards in time, and both directions are equally important), is actually equivalent to a memory fractal if the memory is itself coded as another form of fractal.

It doesn't work as I expected at all: futures do not need to actually travel back in time. Instead, recalling past memories and using them to decide has the same effect, and is much simpler to manage, both mentally and in the code, than a complex time-travelling fractal.

Consciousness

Ah! The holy Grail of AI, consciousness! I have always dreamed that some day, after I came to deeply understand the fractal mind, I would naturally see what consciousness really is. I expected it to be some surprising twist to the fractal structure, like time travelling, but in some magical way I could not imagine.

But now I think it may not be such a dramatic thing after all; instead, it may just be a simple way to modify the internal parameters used in the fractal AI as it runs, a way to change your mind's own working parameters to accommodate what is to come.

Although the idea is still half-baked in my mind, the following "real brain" example could clarify it a little:

Your intelligence, whether it is a neural network or not, uses some "scale of values" to measure how good or bad something is. For instance, feeling hungry could have a relative importance to your intelligence of 0.34, while being thirsty may well be a little more important, 0.65 for instance.

That is your normal "scale of values", the one that serves you well in everyday life. But imagine you need to travel down a long river crossing a desolate region. Your mind will probably decide to lower the importance of water and raise the importance of food, as it can foresee that the lack of food will be the real problem during your journey.

This process is not actually making you smarter, or using memories to guide you along the river course; instead, it is fine-tuning your internal thinking params so the resulting behaviour is more likely to save your life.
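Here is a tiny Python sketch of this re-weighting, using the numbers from the example above; the scarcity factors and the `retune` helper are made up for illustration only:

```python
# Re-weighting the "scale of values": same drives, context-dependent
# weights. Before the river journey the mind expects water to be
# plentiful and food scarce, and retunes the weights accordingly.

everyday_values = {"hunger": 0.34, "thirst": 0.65}

def retune(values, anticipated_scarcity):
    """Scale each drive by the expected scarcity of its resource,
    then renormalise so the weights remain comparable."""
    raw = {k: v * anticipated_scarcity[k] for k, v in values.items()}
    total = sum(raw.values())
    return {k: round(v / total, 2) for k, v in raw.items()}

river_journey = {"hunger": 3.0, "thirst": 0.2}  # food scarce, water everywhere
print(retune(everyday_values, river_journey))
# {'hunger': 0.89, 'thirst': 0.11}: same drives, new priorities
```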

A process that modifies all the inner params of an AI to better adapt to a changing environment is, as I now see it, a proto-form of consciousness, and generalising this process and fusing it with the actual fractal AI could not only make it 100% parameter-less but, given enough intelligence and memory, let what we recognise as "consciousness" slowly emerge in the agent's behaviour.

So my bet is: the fewer hidden params exist in your AI algorithm that the algorithm itself cannot evaluate and change if needed, the more conscious your algorithm will be. It is not an easy trick to have an AI that can change its own workings at will, but this is the way I will try to go after the summer.

As a side note, would a black-box algorithm like a neural network prove harder to dress up with the "emperor's new clothes" of consciousness?

3 comments:

  1. Awesome post! I’m so glad you are working on this! Consider:

    - Not needing memory / history. If the system is big enough, the pattern you are looking for, or at least a smaller fractal part of it, is likely already present somewhere. It just likely has a slightly different context. It’s like searching for a chunk of content that fits in a bigger chunk of context.
    - “Scale of values” can be replaced by Return On Investment (ROI). Ultimately, we are just trying to allocate resources effectively and it appears that ROI, within a context, is the best overall way to compare options.
    - Intelligence happens mostly in the connections. Nodes can be mostly dumb. Once connection density and structure(?) pass a phase change, emergence can happen! Perhaps enough emergence equals consciousness?
    - Patterns can iterate into resonant coherence (get TUNED) and have more obvious CLARITY. Then, more amplitude (power) can flow because [illusions / beliefs] don’t DAMPEN the flow. It “rings true”. We can more easily FEEL it and we label that feeling “good”. Intelligence moves toward unique things that feel good.
    - If I understand your phrase “… the fewer hidden params exist in your AI algorithm…” Consider that nature’s operating system completely self-configures and self-adapts. And nature, like any complex adaptive system, has about 3 initial conditions: add value first, context (structured connections), nodes (a semi-permeable membrane separates what a thing is, and what it is not).
    - E = mc^2 says matter is slow energy and energy is fast matter. So everything is an amplitude plus a frequency. Do patterns of amplitude and frequency ≈ consciousness?

    More https://cpology.wordpress.com/2014/10/27/evolution-codified-for-practical-use-how-to-build-a-global-brain-incompete-draft/

    1. Hi cpology, I will try to answer all your questions:

      -Memory is needed to really improve the AI; holding a pattern of a complex environment inside the AI is rather more complex than a memory, and a memory is way more efficient to me.

      -The scale of values is ROI, but an ROI is a compound of different values glued into one, an average of many, and the weights used to mix them are what I call the "scale of values": it is a vector, not a single value.

      -I do not replicate a real brain at all; connections and nodes as parts of neurons are not in my algorithm. I try to replicate the algorithms running in that brain, but in a different and more classic "programming language"; no neural network is harmed during the experiments ;)

      -Resonance does occur, but as a form of reinforcement between past memories and the futures being considered.

      -Nature self-adapts, so yes, I agree it must be doable.

      -Photons have energy that is not "fast mass", as they cannot stop and have no rest mass, so the two are not exactly the same thing: mass is energy, but energy may or may not be mass.

      I don't think consciousness is just a frequency and amplitude pattern; any set of waves would fit your definition, and waves on the ocean don't seem conscious to me. It sounds great, but it doesn't connect for me.

  2. For more information about Fractals and Fractal Trigeometry see http://www.fractal.org
