Workshop on Neurodynamics and Intentional Dynamical Systems

IJCNN 2005 workshop, August 5, 2005

Organizers: Peter Andras (U Newcastle, UK), Ricardo Gutierrez-Osuna (Texas A&M U, TX), Walter J. Freeman (UC Berkeley, CA), Robert Kozma (U Memphis, TN)

 

Abstracts

 

Special Invited Talk:

 

LATE BRAIN PLASTICITY AND TACTILE SENSORY SUBSTITUTION

 

 Paul Bach-y-Rita, MD

 

 Stephen W. Kercel, PhD, PE

 

 Abstract:

 

The brain is capable of major reorganization at all stages of life, in response to functional demand. In 1963 a project to demonstrate this capacity was initiated with tactile vision substitution; it was so successful that it acquired a second goal, to develop practical substitution systems for persons with sensory loss. Numerous sensory substitution systems have since been demonstrated to be feasible. We see with the brain, not the eyes, which merely receive an optical image that is turned into a data stream carried along the optic nerve to the rest of the brain. Excellent artificial receptors (e.g., a TV camera for vision substitution; an accelerometer for vestibular substitution) are available, and appropriate human-machine interfaces (HMIs) have been developed, most recently and effectively through the tongue. In addition to practical applications, sensory substitution systems allow many theoretical issues to be addressed, and have led to brain-imaging evidence of late brain reorganization. Areas to be addressed include how the input from the tongue HMI, delivered deep into the base of the brain, can produce persisting reorganization (with effects lasting for hours or days after the input is interrupted); how reorganization can be obtained following damage with as little as two percent of the neural substrate surviving in a particular system; and what role unmasking plays in the reorganization. Brain dynamics depends on synaptic, diffusive, and glial activities. Experiments and observations indicate that synaptic and diffusive activities modify each other's morphology, that glial activity modifies both, and that synaptic activity in turn modifies glial morphology. The relationship between these three functions forms a closed-loop hierarchy of causation in brain function, and the operation of that seemingly unusual structure may account for some of the more interesting properties of cognitive function that cannot be explained by appeal to either traditional concepts of recursion or cybernetic feedback loops.

 


 
 
Understanding of Agent's Own Inner State through Voluntary Action that Forces Equilibrium in Agent-Environment Dynamics
 
Yoonsuck Choe
Department of Computer Science
Texas A&M University
College Station, TX 77843-3112
choe@tamu.edu
http://faculty.cs.tamu.edu/choe
 
How can the brain understand its own inner state, a pattern consisting of action potentials and nothing else? Current approaches to this problem require access to both the environmental stimuli and the brain activity pattern. However, such a luxury is not permitted to the brain itself, which has access only to the action potentials, a codified signal. A simple thought experiment shows that passive observation of the patterns is not enough to achieve understanding. The same exercise also suggests a solution: voluntary action can help solve the problem. This observation is in line with the view of pragmatism (and other modern approaches such as active vision) that perception is fundamentally an active process. Here, we propose a simple yet powerful criterion that can guide voluntary action toward the understanding of the brain's own inner state: the property of a particular action sequence that maintains invariance in the inner state is analogous to the environmental properties that triggered that inner state. How this criterion can be understood in terms of agent-environment dynamics and the equilibria within it, and its relationship to the idea of “eigen-behavior”, will be discussed.
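
As a hedged illustration of this invariance criterion (a toy sketch, not the author's implementation), the fragment below places an orientation-tuned internal unit over a diagonal luminance edge; among candidate gaze movements, only those aligned with the edge keep the unit's response invariant, so the invariant action direction recovers the stimulus property that triggered the internal state. All functions, filter shapes, and parameters are illustrative assumptions.

import numpy as np

# Hypothetical toy world: a 45-degree luminance edge in a 2D image.
# The "brain" sees only the scalar response of one orientation-tuned unit.
def edge_image(size=64, angle_deg=45.0):
    ys, xs = np.mgrid[0:size, 0:size]
    theta = np.deg2rad(angle_deg)
    # Pixels on one side of the oriented line are bright, the other side dark.
    return ((xs - size / 2) * np.sin(theta) - (ys - size / 2) * np.cos(theta) > 0).astype(float)

def edge_filter(size=9, angle_deg=45.0):
    ys, xs = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    theta = np.deg2rad(angle_deg)
    u = xs * np.sin(theta) - ys * np.cos(theta)              # signed distance across the preferred orientation
    return np.sign(u) * np.exp(-(xs**2 + ys**2) / (2 * 2.0**2))  # crude odd-symmetric edge detector

def unit_response(img, gaze, filt):
    r = filt.shape[0] // 2
    y, x = gaze
    patch = img[y - r:y + r + 1, x - r:x + r + 1]
    return float(np.sum(patch * filt))

img = edge_image()
filt = edge_filter()
gaze = np.array([32, 32])

# Candidate gaze movements (actions), one unit step per direction, as (dy, dx).
actions = {deg: np.array([int(round(np.sin(np.deg2rad(deg)))),
                          int(round(np.cos(np.deg2rad(deg))))])
           for deg in range(0, 360, 45)}

baseline = unit_response(img, gaze, filt)
for deg, step in actions.items():
    drift = [unit_response(img, gaze + k * step, filt) for k in range(1, 10)]
    invariance_error = max(abs(r - baseline) for r in drift)
    print(f"move along {deg:3d} deg: max response change = {invariance_error:.3f}")
# The movements with near-zero change are the ones parallel to the edge (45 or 225 deg):
# the action sequence that keeps the internal state invariant mirrors the stimulus orientation.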
 
 

 

A Complementary Hypothesis on Hippocampal Dynamics: the So-called “Place Fields” are Primarily Self-organizing “Firing Fields” and do not Depend on Environmental Visual Signals

 

Renan Vitral

NIPAN-Department of Physiology,

Federal University of Juiz de Fora

renan@icb.ufjf.br

 

Recent neurobiological data (including neurophysiology, microanatomy, and molecular neurobiology) support my hypothesis that the development of what are called “place fields” does not coincide with the arrival of environmental visual information in the hippocampus. I will show how the development of these firing fields depends on afferent information from the anterior-dorsal complex of the thalamus, on subicular projections, and on other pathways that carry vestibular and head-direction signals before visual signals arrive. These projections form self-organizing ‘islands’ of theta firing on hippocampal principal cells under the concurrent theta oscillations imposed upon the hippocampus. I will also emphasize that these initial firing fields arise independently in each sub-field: dentate gyrus, CA3, CA2, and CA1.

 

This hypothesis is complementary to the one described in the literature because, as time elapses, afferents from visual centers would add reinforcement weights to the generated firing fields, transforming them into true “place fields”. This proposal does not contradict the described participation of the hippocampus in visual navigation tasks, its multiple functions including memory formation and consolidation, or the known hippocampal dynamics, and it also opens interesting research perspectives for computational modeling. We further advocate, as a very interesting tool for computational modeling, self-organizing neural networks built with opponent processing, in which principal cells receive signals both from excitatory afferents and from inhibitory interneurons (with secondary excitatory afferents): depending on the variable used in the mathematics of the dipoles, one can reach a consistent pattern of hippocampal dynamics that matches the observed oscillations with the corresponding behavior and function.

 

 


Neurodynamics and Evolution of Intentional Communicating Agents

 

Leonid I. Perlovsky

Air Force Research Lab, Hanscom AFB, USA, Leonid.Perlovsky@hanscom.af.mil

 

José F. Fontanari

Universidade de São Paulo, São Carlos, Brazil, fontanari@ifsc.usp.br

Summary

 

In existing work on the evolution of communication, communication signs refer to objects. According to Terrence Deacon, this level of communication can be learned by animals [1]. The truly human achievement is symbolic language communication, in which phrase-level relationships among communication signs (words) correspond to relationships among objects. In this paper we formulate this human ability mathematically as the evolution and neurodynamics of two parallel hierarchies: the hierarchy of language and the cognitive hierarchy of objects and relationships. We present modelling field theory (MFT), which combines understanding of the surrounding environment through learning of its conceptual model representations with emotional evaluation of the correspondence between representations and reality [2]. Modelling field theory gives a framework in which cognitive and language hierarchies evolve in parallel. Due to communication among agents, the two types of learning mutually support each other, and relationships among words and phrases come to correspond to relationships among objects. We present results of simulations demonstrating first steps toward the evolution of symbolic communication.
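
As a hedged sketch of how an MFT-style dynamic-logic layer can learn conceptual models of its input (illustrative assumptions only, not the authors' code), the fragment below fits Gaussian concept models to data by iterating fuzzy association weights and model updates while the fuzziness is gradually reduced from vague to crisp; the variable names and the fixed variance schedule are invented for this example.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: signals generated by two hidden "object" models plus uniform clutter.
data = np.concatenate([rng.normal(-2.0, 0.3, 60),
                       rng.normal(+1.5, 0.3, 60),
                       rng.uniform(-5, 5, 30)])

H = 2                                    # number of concept models (assumed known here)
means = rng.uniform(-5, 5, H)            # model parameters, randomly initialized
sigma = 3.0                              # start "vague": high fuzziness

for step in range(50):
    # Similarity between each datum and each concept model.
    like = np.exp(-(data[None, :] - means[:, None]) ** 2 / (2 * sigma ** 2)) / sigma
    # Fuzzy association weights: how much datum n supports model h.
    f = like / like.sum(axis=0, keepdims=True)
    # Move each model toward the data it is (fuzzily) associated with.
    means = (f * data[None, :]).sum(axis=1) / f.sum(axis=1)
    # Dynamic-logic schedule: concepts sharpen from vague toward crisp.
    sigma = max(0.3, sigma * 0.9)

print("learned concept models:", np.round(means, 2))  # should approach [-2.0, 1.5] (order may vary)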

At the first level of the cognitive hierarchy, agents learn to recognize objects and, in parallel, they learn signs (words) corresponding to the objects. In parallel with the first level, the second level emerges: agents learn to recognize relationships among objects and, in parallel, they learn signs (phrases) corresponding to relationships among the first-level signs (words).

The uniqueness of human language is probably one of the few scientific ideas that still resist the corrosive effects of what Dennett [3] calls the “universal acid” of Darwin’s concept of evolution through natural selection. The notion of a “language organ” exclusive to the human species, originally designed to carry out combinatorial calculations [4], and the exaggerated emphasis on the role of cultural evolution, as opposed to genetic evolution, in the development of language [5,6] are often invoked to support the claim that we are the only species capable of genuine symbolic thinking and communication [1]. This anthropocentric view is usually criticized by ethologists [7,8], who seek to demonstrate that the gap between human and non-human languages is not that large and is actually magnified by our ignorance about the basic elements used in the communication of non-human animals [8]. Nonetheless, up to now ethologists have failed to provide clear evidence of, say, syntax in non-human languages. In fact, those languages are typically non-syntactic, i.e., signals refer to whole situations, in contrast to human language, which is characterized by a hierarchy of signals in which components at higher levels have their own lower-level meanings. Together with the language organ, that compositionality allows us to take advantage of combinatorics and, as linguists put it, to “make infinite use of finite means.”

In the 1990s, the ethological approach to the evolution of communication received a rather unexpected ally: computer simulations of large communities of simple finite-state machines endowed with the capacity to emit as well as to respond to signals. This in silico approach, termed synthetic ethology by its founder Bruce MacLennan [9], aimed at realizing experiments on the evolution of communication in completely controlled and transparent set-ups, a goal well beyond the empirical capabilities of contemporary ethology. We combine this framework with MFT to address the present challenge of developing self-organizing systems with dynamic intentional behaviour. We also address a challenge for future research on intentional systems: to demonstrate the emergence of parallel language and cognitive hierarchies in which (1) higher-level models are syntactically composed from lower-level models, (2) models at every level are meaningful, (3) at every level the meaning of language models corresponds to the meaning of cognitive models, and (4) learning of language models and cognitive models mutually supports each other.
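
The following is a hedged, much-simplified sketch in the spirit of such synthetic-ethology experiments (it is not MacLennan's original simulation [9]): agents carry evolvable emission and response tables, a hearer is rewarded for responding to a signal with the speaker's private state, and a crude genetic algorithm selects on that score. All data structures, mutation rates, and parameters are illustrative assumptions.

import random

random.seed(1)
N_STATES, N_SIGNALS, POP, GENS = 4, 4, 40, 200

def random_agent():
    # emit[state] -> signal emitted; act[signal] -> state guessed by the hearer
    return {"emit": [random.randrange(N_SIGNALS) for _ in range(N_STATES)],
            "act":  [random.randrange(N_STATES) for _ in range(N_SIGNALS)]}

def fitness(agent, others, trials=30):
    score = 0
    for _ in range(trials):
        partner = random.choice(others)
        state = random.randrange(N_STATES)      # speaker's private "local environment"
        signal = agent["emit"][state]           # speaker posts a signal
        if partner["act"][signal] == state:     # hearer responds correctly -> cooperation reward
            score += 1
    return score

population = [random_agent() for _ in range(POP)]
for gen in range(GENS):
    ranked = sorted(population, key=lambda a: fitness(a, population), reverse=True)
    parents = ranked[:POP // 2]
    children = []
    for p in parents:
        child = {"emit": p["emit"][:], "act": p["act"][:]}
        if random.random() < 0.2:               # point mutation of one emission entry
            child["emit"][random.randrange(N_STATES)] = random.randrange(N_SIGNALS)
        if random.random() < 0.2:               # point mutation of one response entry
            child["act"][random.randrange(N_SIGNALS)] = random.randrange(N_STATES)
        children.append(child)
    population = parents + children

best = max(population, key=lambda a: fitness(a, population))
print("evolved emission table (state -> signal):", best["emit"])
# After evolution, emission tables tend to become (near-)injective and response tables tend to
# invert them: a shared, conventionalized sign system emerges from selection on communication.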

Acknowledgments

 

Effort sponsored by the Air Force Office of Scientific Research, Air Force Materiel Command, USAF, under grant number FA8655-04-1-3045. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.

References

 

[1] T. W. Deacon, The Symbolic Species, New York: W.W. Norton & Company, 1997.

 

[2] L. I. Perlovsky,  Neural Networks and Intellect: using model-based concepts.  Oxford University Press, New York, NY, 2001.

 

[3] D. C. Dennett, Darwin’s Dangerous Idea, New York: Simon & Schuster, 1995.

 

[4] J. Fodor, The Modularity of Mind, Cambridge: MIT Press, 1983.

 

[5] S. Blackmore, The Meme Machine, Oxford: Oxford University Press, 1999.

 

[6] K. Smith, S. Kirby and H. Brighton, “Iterated Learning: a framework for the emergence of language,” Artificial Life 9, 371-386, 2003.

 

[7] R. Dunbar, Grooming, Gossip, and the Evolution of Language, Cambridge: Harvard University Press, 1996.

 

[8] M. D. Hauser, The Evolution of Communication, Cambridge: MIT Press, 1996.

 

[9] B. J. MacLennan, “Synthetic ethology: an approach to the study of communication,” Artificial Life II, SFI Studies in the Sciences of Complexity, vol. X, 631-658, Addison-Wesley, 1991.

 

[10] G. M. Burghardt, “Defining Communication,” in Communication by Chemical Signals, edited by J. W. Johnston Jr., D. G. Moulton and A. Turk, New York: Appleton-Century-Crofts, 1970.

 

[11] J. Noble and D. Cliff, “On simulating the evolution of communication,” Proceedings of the 4th International Conference on Simulation of Adaptive Behavior, Cambridge, MA, MIT Press, 1996.

 

[12] M. Mitchell, An Introduction to Genetic Algorithms, Cambridge: MIT Press, 1996.

 

[13] S. A. Boorman and P. R. Levitt, The Genetics of Altruism, New York: Academic Press, 1980.

 

[14] M. A. Nowak, N. L. Komarova and P. Niyogi, “Evolution of Universal Grammar,” Science 291, 114-118, 2001.

 

[15] L. I. Perlovsky, “Integrating Language and Cognition,” feature article, IEEE Connections, 2(2), 8-12, 2004.

 

[16] M. A. Nowak, J. B. Plotkin and V. A. A. Jansen, “The Evolution of Syntactic Communication,” Nature 404, 495-498, 2000.

 

[17] S. Kirby, “Spontaneous Evolution of Linguistic Structure: an iterated learning model of the emergence of regularity and irregularity,” IEEE Trans. Evol. Comput. 5, 102-110, 2001.

 


An Interpretative Recurrent Neural Network to Improve Learning Capabilities

 

Colin Molter, Utku Salihoglu, Hugues Bersini

 

Abstract

 

The main interest pursued by the authors is to find a mechanism to store information efficiently in the dynamics of fully connected recurrent neural networks. By studying small fully connected networks, the authors have shown in previous work how randomly generated synaptic matrices allow the exploitation of a huge number of static and, above all, cyclic attractors for information encoding. More recently, an iterative supervised Hebbian learning algorithm has been investigated. An essential improvement of this algorithm consists of indexing the “attractor information items” by means of external stimuli, rather than by using only initial conditions as originally proposed by Hopfield. Modifying the stimuli mainly results in a change of the entire internal dynamics, leading to an enlargement of the set of attractors and potential “memory bags”. The impact of the learning on the network's dynamics is the following: the more information is to be stored as limit-cycle attractors of the neural network, the more chaos becomes unavoidable as the background dynamical regime of the net. In fact, the background chaos spreads widely and adopts a very unstructured shape similar to white noise. In the end, the network is no longer able to manage anything and turns out to be fully and strongly chaotic.
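
As a hedged, much-simplified illustration of storing a cyclic attractor in a fully connected network with a Hebbian rule (the classic asymmetric Hebbian prescription, not the authors' iterative algorithm), the sketch below wires each pattern in a cycle to its successor and then replays the stored sequence from a noisy start. Network size, cycle length, and noise level are illustrative.

import numpy as np

rng = np.random.default_rng(2)
N, L = 100, 4                                    # neurons, length of the limit cycle to store

# A cyclic sequence of random binary (+/-1) patterns: p0 -> p1 -> p2 -> p3 -> p0 -> ...
patterns = rng.choice([-1.0, 1.0], size=(L, N))

# Asymmetric Hebbian storage: each pattern is wired to recall its successor.
W = np.zeros((N, N))
for t in range(L):
    W += np.outer(patterns[(t + 1) % L], patterns[t]) / N

# Start from a corrupted version of p0 and iterate the network synchronously.
state = patterns[0] * np.where(rng.random(N) < 0.15, -1.0, 1.0)   # 15% of bits flipped
for step in range(12):
    state = np.sign(W @ state)
    overlaps = patterns @ state / N              # similarity to each stored pattern
    print(f"step {step:2d}: closest stored pattern = p{int(np.argmax(overlaps))}, "
          f"overlap = {overlaps.max():+.2f}")
# The printout should cycle p1 -> p2 -> p3 -> p0 -> p1 ..., i.e. the stored limit cycle acts
# as an attractor that the corrupted state falls onto and then traverses indefinitely.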

 

In this workshop, we propose to introduce an unsupervised learning task that is more plausible from a biological point of view: the network has to learn to react to an external stimulus by cycling through a sequence which is not specified a priori. In this view, the semantics of the attractors to be associated with the feeding data is left unprescribed; the network generates its own relevant information through a self-organized dynamical process. Compared to its supervised counterpart, huge enhancements in both storage capacity and computational cost have been observed. Moreover, the unsupervised learning, by being more “respectful” of the network's intrinsic dynamics, maintains much more structure in the resulting chaos: it is still possible to observe traces of the learned attractors in the chaotic regime. This complex, but still very informative, regime is referred to as “frustrated chaos”.


 

Bifurcating Recursive Node Networks and Multi-assembly Structures as Tools for Intentional Dynamics Modeling

 

Emilio Del-Moral-Hernandez

Polytechnic School of the University of São Paulo

 

Richness of dynamical behavior, sensing of asynchronous changes in the environment, and the production of coherent spatio-temporal patterns are essential mechanisms for the modeling and emulation of complex biological intelligent systems. We show how artificial neural networks composed of Recursive Processing Elements (RPEs), nodes governed by first-order parametric recursions, naturally exhibit these key ingredients and are therefore powerful tools for the modeling of intentional dynamic systems. The coupling between recursive processing elements provides for the interaction between several parametric recursions and allows for the emergence of collective phenomena such as phase-locking among nodes and the clustering of dynamical activity within an assembly. Instead of computing with fixed points, these networks have as their natural mode of operation periodic multidimensional attractors of several kinds: oscillations are the basic ingredient for the representation and processing of information. As one sample functional unit that can be built within this framework, we address associative modules that are able to deal with different loads of internal memories, represented through the preferential spatio-temporal patterns that emerge during the network's evolution. Association and hetero-association are among the functions performed by such modules, which can be coupled with one another to produce more complex structures. It is shown how assemblies of coupled bifurcating recursive nodes can sense different modalities of external variables, such as logical variables and analog quantities, as well as time-varying versions of these, and how they can produce multidimensional coherent attractors of different periodicities and oscillation amplitudes. Illustrative scenarios of mappings from sensed environment to action are discussed, in the contexts of robotics and of biological intelligence modeling.
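
As a hedged numerical sketch of the kind of node described above, the fragment below couples a small ring of recursive processing elements, each governed by a first-order parametric recursion (the logistic map is used here as an assumption, since the abstract does not name the specific recursion), and reports how the collective activity settles into a low-period oscillation or into irregular dynamics depending on the bifurcation parameter. The coupling scheme and all parameters are illustrative.

import numpy as np

COUPLING = 0.15

def run_rpe_ring(a, n_nodes=8, steps=2000, seed=3):
    """Ring of RPEs: x_i(t+1) = (1-c)*f(x_i) + c*(mean of the two neighbors' f(x))."""
    rng = np.random.default_rng(seed)
    x = rng.random(n_nodes)
    for _ in range(steps):
        fx = a * x * (1.0 - x)                       # first-order parametric recursion per node
        neighbors = 0.5 * (np.roll(fx, 1) + np.roll(fx, -1))
        x = (1.0 - COUPLING) * fx + COUPLING * neighbors
    return x

def apparent_period(a, tol=1e-6, max_period=16):
    """Estimate the period of node 0 after transients (0 means no short period found)."""
    x = run_rpe_ring(a)
    trace = []
    for _ in range(4 * max_period):                  # short post-transient trace of node 0
        fx = a * x * (1.0 - x)
        neighbors = 0.5 * (np.roll(fx, 1) + np.roll(fx, -1))
        x = (1.0 - COUPLING) * fx + COUPLING * neighbors
        trace.append(x[0])
    for p in range(1, max_period + 1):
        if all(abs(trace[i] - trace[i + p]) < tol for i in range(len(trace) - p)):
            return p
    return 0

for a in (3.2, 3.5, 3.9):                            # bifurcation parameter of each node
    p = apparent_period(a)
    print(f"a = {a}: collective activity period = {p if p else 'aperiodic / long'}")
# Lower a values tend to give coherent low-period oscillations across the assembly, while
# higher a values push the coupled nodes toward quasi-periodic or chaotic background regimes.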


 

A Hardware Continuous Time Recurrent Neural Network: Design and Applications

 

John C. Gallagher1,2, Sanjay K. Boddhu1, Saranyan Vigraham1

Department of Computer Science and Engineering1

Department of Electrical Engineering2

Wright State University

Dayton, OH 45435

{jgallagh, sboddhu,svigrha}@cs.wright.edu

 

Extended Abstract

 

Continuous Time Recurrent Neural Networks (CTRNNs) are potentially fully connected networks of Hopfield continuous-model neurons without the zero-diagonal constraint on their weight matrix. In principle, they are capable of approximating any smooth dynamics. They have therefore been proposed in many circles to solve practical problems in associative memory, pattern recognition, computation, and control. They are also at least somewhat biologically defensible and have consequently been used in various biological modeling and computational neuroscience efforts.
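
As a hedged software sketch of the standard CTRNN node equation, tau_i * dy_i/dt = -y_i + sum_j w_ij * sigma(y_j + theta_j) + I_i, and emphatically not of the authors' analog hardware, the fragment below integrates a two-neuron network with forward Euler steps. The weights, biases, and time constants are assumed illustrative values.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_ctrnn(w, theta, tau, ext_input, y0, dt=0.01, steps=5000):
    """Forward-Euler integration of tau_i * dy_i/dt = -y_i + sum_j w_ij*sigma(y_j + theta_j) + I_i."""
    y = np.array(y0, dtype=float)
    trajectory = [y.copy()]
    for _ in range(steps):
        dydt = (-y + w @ sigmoid(y + theta) + ext_input) / tau
        y = y + dt * dydt
        trajectory.append(y.copy())
    return np.array(trajectory)

# Illustrative two-neuron network (assumed parameters, not the hardware's settings).
w = np.array([[4.5, -1.0],
              [1.0,  4.5]])
theta = np.array([-2.75, -1.75])
tau = np.array([1.0, 1.0])
I = np.array([0.0, 0.0])

traj = simulate_ctrnn(w, theta, tau, I, y0=[0.1, 0.2])
# Depending on the weights and biases, the trajectory settles to a fixed point or a limit cycle;
# inspecting the tail of the state history shows which regime these illustrative parameters produce.
print(traj[-5:])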

 

We have recently proposed CTRNNs for use as the reconfigurable hardware component of evolvable hardware control systems. In this paper, we will present a complete design for a digitally programmable analog CTRNN computer built from commercially available parts. Although this effort is a stepping stone toward an eventual full VLSI implementation, the prototype is fully functional and is being used to create practical neural controllers for electro-mechanical devices. In addition to examining the design and implementation of the device, we will also discuss how it is used to suppress thermoacoustic instability in a model combustion chamber. Particular attention will be given to the neurodynamical basis of effective control in this context. It is expected that the design itself may be useful to other researchers intending to field analog neural hardware, and it is hoped that the discussion of neurodynamics in the evolved systems can help further our collective ability to construct explanations of both natural and artificial neural systems.

 

Modeling the Evolution of Decision Rules in the Human Brain

Daniel S. Levine

Department of Psychology

University of Texas at Arlington

Arlington, TX 76019-0528

levine@uta.edu

www.uta.edu/psychology/faculty/levine

 

Abstract: A neural network theory is proposed for how the brain develops decision rules about classes of behaviors.  The cortical-subcortical networks proposed for these functions join four previous research streams.  The first stream is Eisler and Levine’s (2002) separate brain networks for conflicting behavior patterns — with the orbitomedial prefrontal cortex playing a crucial role in selecting between these patterns.  The second is Newman and Grace’s (1999) work on motor gates (in the basal ganglia) influenced by contextual signals (from hippocampus) and affective signals (from amygdala).  The third is Cloninger’s (1999) clinical schema for interacting character and temperament dimensions, which influence contextual tendencies toward behavior patterns.  The fourth is Levine’s (1994) mathematical theory, using simulated annealing in a Cohen-Grossberg module, of switches among attractors in personality space.

            My theory synthesizes all this research into an adaptive resonance network with top-down and bottom-up interactions between representations of general rules and of specific behavior tendencies.  Implications are discussed for change in the course of a person’s life, such as occur in psychotherapy.
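
As a hedged illustration of the fourth research stream mentioned above (noise-assisted switching among attractors in a Cohen-Grossberg-style module), and not of Levine's actual model, the toy sketch below runs a small Hopfield-type network with stochastic updates under an annealing temperature: at high temperature the state does not commit to either stored attractor, and as the temperature is lowered it condenses onto one of them. The patterns, temperature schedule, and sizes are invented for illustration.

import numpy as np

rng = np.random.default_rng(4)
N = 60

# Two stored attractors (random binary patterns standing in for behavior-pattern profiles).
P = rng.choice([-1.0, 1.0], size=(2, N))
W = (np.outer(P[0], P[0]) + np.outer(P[1], P[1])) / N
np.fill_diagonal(W, 0.0)

state = P[0].copy()
T = 2.0                                          # annealing temperature: high = exploratory
for sweep in range(60):
    for i in rng.permutation(N):                 # asynchronous stochastic (Glauber) updates
        h = W[i] @ state
        p_on = 1.0 / (1.0 + np.exp(-2.0 * h / max(T, 1e-6)))
        state[i] = 1.0 if rng.random() < p_on else -1.0
    T *= 0.93                                    # cooling schedule
    if sweep % 10 == 0:
        overlaps = P @ state / N
        print(f"sweep {sweep:2d}, T = {T:.2f}, overlap with attractor 0/1 = "
              f"{overlaps[0]:+.2f} / {overlaps[1]:+.2f}")
# At high T the overlaps stay small (no commitment to either attractor); as T cools, the state
# condenses onto one of them, a toy analogue of annealing-driven switching among attractors.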

 

(Note: Most of this work appears in Levine, D. S., Angels, devils, and censors in the brain, submitted to ComPlexus.)

 

 

Related events

            2004 Nonlinear spatio-temporal neural dynamics workshop

            2003 Nonlinear spatio-temporal neural dynamics workshop

            2002 Complex nonlinear neural dynamics workshop

            2001 Complex nonlinear neural dynamics workshop