Key principles:
- Future prediction is at the core of the human mind: it predicts the future, evaluates it “emotionally”, and then tries to alter that future to maximize pleasure & minimize discomfort.
- We consciously focus on stuff that
    - either failed our prediction,
    - or promises a pleasure/discomfort that isn’t trivially simple to reap/avoid (see the sketch below).
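To make the attention rule above concrete, here is a minimal Python sketch of the loop: predict, compare against reality, and surface only the items that either failed or carry a non-trivial pleasure/discomfort. All names, fields and thresholds (`conscious_focus`, `valence`, `effort_threshold`) are my own illustrative assumptions, not part of the original description.

```python
def conscious_focus(predictions, reality, effort_threshold=0.2):
    """Return the predicted items worth conscious attention."""
    focus = []
    for p in predictions:
        failed = p["expected"] != reality.get(p["topic"])        # the prediction failed
        non_trivial = p["valence"] != 0 and p["effort"] > effort_threshold
        if failed or non_trivial:                                # pleasure/discomfort that is not trivial to reap/avoid
            focus.append(p["topic"])
    return focus

predictions = [
    {"topic": "commute", "expected": "clear road", "valence": 0.0,  "effort": 0.1},
    {"topic": "meeting", "expected": "on time",    "valence": -0.7, "effort": 0.6},
]
reality = {"commute": "traffic jam", "meeting": "on time"}
print(conscious_focus(predictions, reality))    # ['commute', 'meeting']
```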
Failed predictions (Out-of-Context):
- Failed predictions are just as important as predicted pleasures/discomforts, because a failed prediction is probably an indication that our current set of assumptions about the world has gaps; which means that we can’t fully trust our predicted pleasures/discomforts.
- We solve failed predictions by conducting root cause analysis, i.e. finding a set of extra assumptions about our current reality that need to be turned on (or some that need to be turned off) and would thus explain the out-of-context phenomenon (see the sketch below).
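One way to read this root cause analysis is as a small search over assumption toggles: try flipping a few candidate assumptions on or off until the out-of-context observation is explained. The world model (`explains`), the assumption strings and the toggle budget below are all invented for illustration.

```python
from itertools import combinations

def explains(observation, active_assumptions):
    # Stand-in for a real world model: does this assumption set account for the observation?
    return observation == "keys missing" and "keys left at office" in active_assumptions

def root_cause(observation, current_assumptions, candidates, max_toggles=2):
    """Find the smallest set of assumptions to flip on/off that explains the observation."""
    for k in range(1, max_toggles + 1):
        for toggles in combinations(candidates, k):
            trial = current_assumptions.symmetric_difference(toggles)   # turn some on, turn some off
            if explains(observation, trial):
                return toggles
    return None   # no explanation found within the toggle budget

current_assumptions = {"keys in my pocket"}
candidates = ["keys in my pocket", "keys left at office", "bag was stolen"]
print(root_cause("keys missing", current_assumptions, candidates))   # ('keys left at office',)
```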
Maximizing pleasure / minimizing discomfort:
- We maintain a library of actions & goals that worked in the past. We remember under which conditions they brought us from state A to outcome B, and will auto-execute them if very similar conditions arise. Those auto-executions can be chained, resulting in a library of full “choreographies”. These are very useful: they let us function in known situations while focusing our conscious brain only on what really matters.
- When the desired outcome D is not directly linked from this library, we solve backwards by uncovering interim outcomes (in this example C and then B) until we link back to our current state A (see the sketch below).
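A rough Python sketch of the library plus the backward-solving step, under the A → B → C → D example used above. The state names, the library contents and `solve_backwards` are illustrative assumptions, not the author’s terms.

```python
library = {
    # condition (current state) -> (known action, the outcome it reliably produces)
    "A": ("stand up",      "B"),
    "B": ("walk to car",   "C"),
    "C": ("drive to shop", "D"),
}

def solve_backwards(current, desired):
    """Chain known actions from the desired outcome back to the current state."""
    # Invert the library: outcome -> (precondition, action)
    inverse = {outcome: (cond, action) for cond, (action, outcome) in library.items()}
    plan = []
    state = desired
    while state != current:
        if state not in inverse:
            return None                       # no known action reaches this interim outcome
        precondition, action = inverse[state]
        plan.append(action)
        state = precondition                  # uncover the next interim outcome (D -> C -> B -> A)
    return list(reversed(plan))

print(solve_backwards("A", "D"))              # ['stand up', 'walk to car', 'drive to shop']
```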
Single mechanism for both Unexpected & Unattained:
- If predictions are outcomes that are “sponsored”, and goals that maximize pleasure or minimize discomfort are also outcomes that are “sponsored”, then we can unify the two mechanisms into one: a single mechanism that identifies the biggest difference between reality & the sponsored outcome and tries to eliminate that “Differential”.
- That’s how multi-tasking is unlocked: as differentials grow or get resolved, we can always shift our attention to the biggest one (see the sketch below).
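One natural reading of this single mechanism is a max-priority queue of differentials: both unexpected events and unattained goals get pushed in, and attention always sits on the largest gap. The class name, the example differentials and the sizes are made up for illustration.

```python
import heapq

class Differentials:
    def __init__(self):
        self._heap = []                          # min-heap over negated size == max-heap over size

    def sponsor(self, outcome, size):
        """Register a gap between reality and a sponsored outcome."""
        heapq.heappush(self._heap, (-size, outcome))

    def focus(self):
        """The biggest unresolved differential, if any."""
        return self._heap[0][1] if self._heap else None

    def resolve(self):
        return heapq.heappop(self._heap)[1] if self._heap else None

attention = Differentials()
attention.sponsor("prediction failed: traffic jam", 0.8)    # unexpected
attention.sponsor("goal unattained: finish report", 0.5)    # unattained
print(attention.focus())     # the traffic jam wins the attention for now
attention.resolve()
print(attention.focus())     # once resolved, focus shifts to the report
```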
Abstracting away solutions:
- To compress information, we group similar solutions together. This allows us to reuse good solutions even if the circumstances are not identical next time. But it also
    - introduces an extra step of particularization (when we know the high-level solution but need to recall/investigate the specifics that best suit this situation)
    - results in fuzzy matching & routing when we don’t have the mental capacity to particularize the solution to the specifics of the current situation and thus just pick the default & execute it (e.g. driving towards work instead of taking a turn, because your conscious mind was occupied by a conversation). A rough sketch of both cases follows below.
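A small sketch of the abstraction/particularization trade-off, using the commute example from the list above. The solution table, the context strings and the `conscious_capacity_free` flag are illustrative assumptions.

```python
abstract_solutions = {
    "commute": {
        "default":  "drive the usual route to work",
        "variants": {"dentist day": "turn left at the lights",
                     "car in service": "take the bus"},
    },
}

def execute(situation, context, conscious_capacity_free=True):
    solution = abstract_solutions[situation]
    if conscious_capacity_free:
        # Particularize: recall the variant that best suits today's specifics.
        return solution["variants"].get(context, solution["default"])
    # Fuzzy matching & routing: no capacity to particularize, so run the default.
    return solution["default"]

print(execute("commute", "dentist day"))                                  # turn left at the lights
print(execute("commute", "dentist day", conscious_capacity_free=False))   # drive the usual route to work
```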
Ready for a diagram, accompanied by a more detailed description?