MANIC, formerly known as PMML.1, is a cognitive architecture developed by the Predictive Modeling and Machine Learning Laboratory at the University of Arkansas. It differs from other cognitive architectures in that it tries to "minimize novelty". That is, it attempts to organize well-established techniques in computer science, rather than propose any new methods for achieving cognition. While most other cognitive architectures are inspired by some neurological observation, and are subsequently developed in a top-down manner to behave in a brain-like way, MANIC is inspired only by common practices in computer science, and was developed in a bottom-up manner for the purpose of unifying various methods in machine learning and artificial intelligence.
Overview
At the highest level, MANIC describes a software agent that is intended to exhibit cognitive intelligence. The agent's artificial brain comprises two major components: a learning system and a decision-making system.
Learning system
The learning system models the agent's environment as a dynamical system. It consists of an "observation function", which maps from the agent's current beliefs to predicted observations, and a "transition function", which maps from the current beliefs to the beliefs at the next time-step. The observation function is implemented with a generative deep learning architecture and is trained in an unsupervised manner from the observations that the agent makes. The intrinsic representations of those observations become the agent's "beliefs". The transition function is trained in a supervised manner to predict the next beliefs from the current ones. The entire learning system is based loosely on a 2011 paper by Michael S. Gashler that describes a method for training a deep neural network to model a simple dynamical system from visual observations.[1]
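The two functions described above can be sketched in miniature. The following is an illustrative toy, not code from the MANIC implementation: the generative deep network is replaced by a simple linear decoder, the beliefs are two-dimensional vectors, and the transition function is fit by supervised gradient descent on consecutive belief pairs, all names being hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

BELIEF_DIM, OBS_DIM = 2, 8

# Observation function: a linear decoder standing in for the generative
# deep architecture described in the text. It maps current beliefs to
# predicted observations.
W_obs = rng.normal(size=(OBS_DIM, BELIEF_DIM))

def observe(beliefs):
    """Map the agent's current beliefs to predicted observations."""
    return W_obs @ beliefs

# Transition function: trained in a supervised manner to predict the
# next beliefs from the current ones.
W_trans = rng.normal(size=(BELIEF_DIM, BELIEF_DIM))

def transition(beliefs):
    """Map current beliefs to the beliefs at the next time-step."""
    return W_trans @ beliefs

def train_transition(pairs, lr=0.1, epochs=200):
    """Fit W_trans by gradient descent on (b_t, b_{t+1}) pairs."""
    global W_trans
    for _ in range(epochs):
        for b_t, b_next in pairs:
            err = transition(b_t) - b_next
            W_trans -= lr * np.outer(err, b_t)

# Toy dynamical system: beliefs rotate by a fixed angle each time-step.
theta = 0.3
true_step = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
states = [np.array([1.0, 0.0])]
for _ in range(20):
    states.append(true_step @ states[-1])

train_transition(list(zip(states[:-1], states[1:])))

# The trained transition function now anticipates the next beliefs,
# and the observation function predicts what they would look like.
predicted_next = transition(states[0])
predicted_obs = observe(predicted_next)
```

In the full architecture the decoder would be a deep generative network trained in an unsupervised manner from raw observations, with the beliefs emerging as its intrinsic representation; the linear stand-ins here only illustrate how the two functions fit together.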
Decision-making system
The decision-making system consists of a planning module and a contentment function. The planning module uses an evolutionary algorithm to evolve a satisficing plan. The contentment function maps from the agent's current beliefs, or anticipated beliefs, to an evaluation of the utility of being in that state. It is trained by reinforcement from a human teacher. To facilitate this reinforcement learning, MANIC provides a mechanism for the agent to generate "fantasy videos" that show anticipated observations if a candidate plan were to be executed. The idea is that a human teacher would evaluate these videos and rank them by desirability or utility, and the agent would then use that feedback to refine its contentment function.
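The interplay of the planning module and the contentment function can be sketched as follows. This is an illustrative toy, not MANIC's actual planner: a plan is a fixed-length sequence of actions, candidate plans are evolved by mutation with elitist selection, and a hand-written contentment function (distance to a goal belief state) stands in for the one a human teacher would train. All names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

GOAL = np.array([3.0, 4.0])  # toy stand-in for a "content" belief state

def transition(beliefs, action):
    """Toy transition function: actions displace beliefs directly."""
    return beliefs + action

def contentment(beliefs):
    """Utility of being in a belief state (here: closeness to GOAL)."""
    return -np.linalg.norm(beliefs - GOAL)

def anticipate(beliefs, plan):
    """Roll beliefs forward through a candidate plan."""
    for action in plan:
        beliefs = transition(beliefs, action)
    return beliefs

def evolve_plan(beliefs, plan_len=5, pop=30, gens=40, good_enough=-0.05):
    """Evolve a satisficing plan: stop once one scores well enough."""
    population = [rng.normal(size=(plan_len, 2)) for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population,
                        key=lambda p: contentment(anticipate(beliefs, p)),
                        reverse=True)
        best = scored[0]
        if contentment(anticipate(beliefs, best)) >= good_enough:
            return best  # satisficing: good enough, not provably optimal
        # Keep the top half; refill with mutated copies of the survivors.
        survivors = scored[:pop // 2]
        population = survivors + [s + 0.1 * rng.normal(size=s.shape)
                                  for s in survivors]
    return best

start = np.zeros(2)
plan = evolve_plan(start)
```

In the full architecture, each candidate plan's anticipated beliefs would also be decoded into a "fantasy video" for a human teacher to rank, and those rankings would refine the contentment function rather than it being fixed as here.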
Sentience
MANIC proposes that the learning system gives the agent awareness of its environment by modeling it, and using that model to anticipate future beliefs. It further proposes that a similar mechanism can also implement sentience. That is, it claims that awareness can be implemented with an outward-looking model, and sentience with an inward-looking model. It therefore proposes adding "introspective senses", which would theoretically enable the agent to become aware of its own inner feelings by modeling them, just as it becomes aware of its external environment by modeling it. To some extent, MANIC suggests that existing methods already in use in artificial intelligence are unintentionally creating subjective experiences like those typically associated with conscious beings.
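Mechanically, the proposal amounts to feeding the agent's internal signals into the same modeling machinery that handles its external senses. A minimal sketch, with entirely hypothetical names and values:

```python
import numpy as np

def sense(external_obs, internal_state):
    """Combine outward-looking senses with "introspective senses" so the
    same environment-modeling machinery is also turned inward."""
    return np.concatenate([external_obs, internal_state])

external = np.array([0.2, 0.7, 0.1])  # e.g. exteroceptive sensor readings
internal = np.array([0.9])            # e.g. the agent's current contentment
combined = sense(external, internal)
```

The learning system would then model the combined vector exactly as it models ordinary observations, which is the sense in which MANIC claims an inward-looking model could implement sentience.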
References
- Gashler, M. and Martinez, T., "Temporal Nonlinear Dimensionality Reduction", in Proceedings of the International Joint Conference on Neural Networks (IJCNN'11), pp. 1959–1966, 2011.
External links
- http://uaf46365.ddns.uark.edu/lab/cogarch.svg, A poster in SVG format that describes the MANIC architecture.
- https://github.com/mikegashler/manic, A Java implementation of MANIC.