Analysis of the Neurodynamic Substrate of the Action-Perception Cycle –
Experiments and Modeling
Workshop at IEEE/INNS IJCNN 2007 Conference
August 17, 2007
Organizers: Robert Kozma (rkozma<@>memphis.edu), Colin Molter (cmolter<@>brain.riken.jp) and Peter Andras (Peter.Andras<@>ncl.ac.uk)

Abstracts
Title: Neurodynamic Principles of
Intentionality
Author: Robert Kozma
Summary
Various approaches are reviewed that are used for the generation and utilization of knowledge in human cognitive activity and in artificially intelligent designs.
We present a dynamical approach to higher cognition and intelligence based on
the model of intentional action-perception cycle. In this model, meaningful
knowledge is continuously created, processed, and dissipated in the form of
sequences of oscillatory patterns of neural activity distributed across space
and time, rather than via the manipulation of a fixed symbol system. The
oscillatory patterns can be viewed as intermittent representations of
generalized symbol systems, with which brains compute. However, these dynamical
symbols are not rigid but flexible and they disappear very soon after they have
been generated through spatio-temporal phase
transitions, at the rate of 4-5 patterns per second in human brains. Human
cognition performs a granulation of the seemingly homogeneous temporal
sequences of perceptual experiences into meaningful and comprehensible chunks
of concepts and complex behavioral schemas, which are
accessed during future action selection and decisions. The proposed
biologically-motivated computing through dynamic patterns provides an
alternative for solving the notoriously hard symbol grounding problem.
We employ the hierarchical K-set theory to describe increasingly complex neural systems at the microscopic, mesoscopic, and macroscopic levels. At the top level we use the KIV system, which models multiple
cortical areas having the components for multi-sensory perception including exteroception, interoception, and
proprioception. KIV is an example of an intentional dynamic system realizing the intention-perception-action cycle. The developed
adaptive learning and control system has been implemented in various
computational and robot environments.
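As an illustration of the lowest level of this hierarchy, a K0 set (the non-interactive neural population at the base of K-set theory) can be sketched as a second-order ODE; the rate constants and the Euler integration below are illustrative assumptions, not the full KIV implementation.

```python
import numpy as np

# Minimal sketch of Freeman's K0 set: a second-order ODE for the
# pulse-to-wave dynamics of a non-interactive neural population.
# Rate constants a, b (1/ms) follow values commonly quoted in the
# K-set literature; treat them as illustrative assumptions.
a, b = 0.22, 0.72            # open-loop rate constants (1/ms)
dt, n_steps = 0.01, 2000     # Euler step (ms) and number of steps

def k0_impulse_response(a, b, dt, n):
    """Integrate x'' + (a+b)x' + a*b*x = a*b*u with an impulse input."""
    x, v = 0.0, 0.0
    out = np.empty(n)
    u = np.zeros(n)
    u[0] = 1.0 / dt          # unit impulse at t = 0
    for i in range(n):
        acc = a * b * (u[i] - x) - (a + b) * v
        v += dt * acc
        x += dt * v
        out[i] = x
    return out

resp = k0_impulse_response(a, b, dt, n_steps)
peak_idx = int(np.argmax(resp))
# The response rises to a single peak and decays back toward zero,
# the characteristic open-loop impulse response of the population.
```

Coupling such units with excitatory and inhibitory connections yields the KI and KII oscillators of the higher levels.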
Title: A Multilayer Network That
Can Maximize Knowledge or
Minimize Effort
Author: Daniel S. Levine
Department of Psychology -
Summary
Human
processing of complex cognitive information, and decision making based on that
information, is governed by two drives that often contradict one another. The first is the drive to maximize coherent
knowledge of the environment, what Perlovsky has
called the knowledge instinct. The second is the drive to minimize effort by
the use of short cuts or heuristics, many of which have been illuminated by the
pioneering psychological experiments of Tversky and Kahneman.
How
does the brain circuitry incorporate both the knowledge maximizing and effort
minimizing tendencies, both of which have adaptive value in different
situations? And how does the brain’s
executive system decide which tendency to activate in which contexts? Recent brain imaging studies on the
differences between more and less heuristic-bound decision makers suggest a
possible brain-based network model of these interactions. The proposed network combines two interacting
adaptive resonance modules with vigilance levels that differ widely between
individuals, and between contexts within the same individual. Further experimental tests of this
hypothesis, involving tasks that require logical resolution of cognitive
dissonance, are in the planning stages.
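One hedged way to picture the role of vigilance is a toy ART-1 style categorizer: high vigilance forms many fine categories (knowledge maximizing), low vigilance lumps inputs into few coarse ones (effort minimizing). The greedy matching rule and parameter values below are illustrative assumptions, not the proposed two-module network.

```python
import numpy as np

# Hypothetical sketch (not the author's model): an ART-1 style
# vigilance test on binary patterns, using greedy first-match
# category selection instead of a full choice function.

def art1_categorize(patterns, vigilance):
    """Greedy ART-1: binary patterns -> list of category prototypes."""
    prototypes = []
    for I in patterns:
        placed = False
        for j, w in enumerate(prototypes):
            match = np.logical_and(I, w).sum() / I.sum()
            if match >= vigilance:           # resonance: accept and learn
                prototypes[j] = np.logical_and(I, w)
                placed = True
                break
        if not placed:                       # reset: open a new category
            prototypes.append(I.copy())
    return prototypes

rng = np.random.default_rng(0)
patterns = (rng.random((40, 16)) < 0.5).astype(int)
n_low = len(art1_categorize(patterns, vigilance=0.2))
n_high = len(art1_categorize(patterns, vigilance=0.9))
# Higher vigilance yields at least as many (typically far more) categories.
```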
Title: A
far-from-equilibrium thermodynamic model of the action-perception cycle based
in nonlinear brain dynamics
Author: Walter
J Freeman
Department of Molecular & Cell Biology -
Summary:
Cognitive
neurodynamics describes the process by which brains
direct the body into the world and learn by assimilation from the sensory
consequences of the intended actions. Repetition of the process constitutes the
action-perception cycle by which knowledge is accumulated in small increments.
Each new step yields a freshly constructed frame that is updated by input to
each of the sensory cortices (Freeman, 2004a,b). The
continually expanding knowledge base is expressed in attractor landscapes in
each of the cortices. The global memory store is based in a rich hierarchy of
landscapes of increasingly abstract generalizations (Freeman, 2005). At the
base is the landscape of attractors for the primary categories of sensory
stimuli in each modality, for example, the repertoire of odorant substances
that an animal can seek, identify, and respond to at any one stage of its
lifelong experience. Each attractor is based in a nerve cell assembly of
cortical neurons that have been pair-wise co-activated in Hebbian
association and sculpted by habituation and normalization. Its basin of attraction
is determined by the total subset of receptors that has been accessed during
learning. Convergence in the basin to the attractor gives the process of
abstraction and generalization to the category of the stimulus. This
categorization process holds in all sensory modalities (Freeman, 2006). The
convergence to and holding of a cortical state by an attractor gives a frame of
cortical action that includes the entire primary sensory cortex and lasts about
a tenth of a second. The action-perception cycle includes 3-6 frames repeating
at rates in the theta range (3-7 Hz).
Cortex
is bistable, having a receiving phase during which
the landscape is latent, and a transmitting phase during which the landscape is
brought on line by the sensory receptor input during inhalation. Selection by sensory input of one of the
basins of attraction precipitates spontaneous symmetry breaking (Freeman and Vitiello, 2006) in the form of a phase transition (Kozma et al., 2005) from the receiving phase to the
transmitting phase. Another phase transition returns the bulb to the receiving
phase. These properties are schematized by adapting the phase diagram for
water, which is the static relation between energy and entropy at equilibrium,
to the relation between the rate of increase in order
(negentropy) and power (rate of energy dissipation).
The order parameter is indexed by the inverse of the Euclidean distance between
successive digitizing steps in 64-space (He(t)); small
steps indicate high order. Power is estimated from mean square
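A minimal sketch of this order-parameter index, on synthetic data standing in for the 64-channel recording, might look like:

```python
import numpy as np

# Sketch of the order parameter described above: the inverse of the
# Euclidean distance between successive digitizing steps of a
# 64-channel signal. Small steps in 64-space indicate high order.
# The data below are synthetic stand-ins, not ECoG recordings.

def order_index(x, eps=1e-12):
    """x: (timesteps, 64) array -> 1/D_e(t) for successive steps."""
    steps = np.diff(x, axis=0)              # successive differences
    d_e = np.linalg.norm(steps, axis=1)     # Euclidean step length in 64-space
    return 1.0 / (d_e + eps)

rng = np.random.default_rng(1)
# A slowly varying "ordered" frame versus unstructured white noise
slow = np.sin(np.linspace(0, 2 * np.pi, 200))[:, None] * np.ones((1, 64))
noisy = rng.standard_normal((200, 64))
mean_order_slow = order_index(slow).mean()
mean_order_noisy = order_index(noisy).mean()
# Coherent activity takes small steps in 64-space, hence high order.
```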
The
critical point that governs the cortical system is identified with a non-zero
point attractor, which is maintained by mutual excitation among neurons in very
large numbers. The interaction is modeled by
nonlinear ordinary differential equations. The refractory periods of the
neurons limit the forward gains, providing a soft boundary condition.
Perturbation in the near-linear range gives impulse responses that decay to the
steady state exponentially at a rate proportional to the evoked amplitude.
Extrapolation to threshold gives zero decay rates, which implies
self-stabilization at unity gain. This gives the steady state excitatory bias
that is necessary for oscillation by negative feedback. The power in background
activity increases with arousal under brain stem neurohumoral
control, yet at all levels it is self-stabilized at unity gain.
Linearization
of the dynamics around the stable operating point reveals that a closed loop
pole at the origin of the complex plane governs the steady state (Freeman,
1975/2004). In Fig. 1 this pole corresponds to the critical point DSOC.
The imaginary axis in the complex plane is seen as the phase boundary between
the receiving and transmitting phases (Freeman, 1975/2004; 2007). This neural
mechanism depends on robust maintenance by cortex of its scale-free dynamics
near criticality (Linkenkaer-Hansen et al, 2001;
Freeman, 2006), where all frequencies and wavelengths of activity are
simultaneously expressed, as revealed in the power-law distributions of spectra
and other properties. Scale-free dynamics can explain the fact that the first
step in a cortical phase transition is reduction in amplitude, not the surge in
dissipation expected following sensory impact. The decrease in power is
inherent in the background activity in the beta and gamma ranges, which is
interrupted by null spikes as seen in Rayleigh noise, at which background
activity approaches zero (Freeman, 2006, 2007). A phase transition to an
attractor-guided cortical output pattern selected by input occurs at the
coincidence of a null spike with a sensory sample brought in under limbic
control: a sniff, saccade or whisk. The low signal-to-noise ratio in the null
spike or vortex explains how weak but expected sensory stimuli can capture an
entire primary sensory cortex in a time window lasting only a few milliseconds.
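The null-spike idea can be illustrated on synthetic band-limited noise, whose analytic (Rayleigh-distributed) amplitude repeatedly dips far below its mean; the FFT-based Hilbert transform and the gamma-band parameters below are illustrative assumptions, not the analysis pipeline of the cited papers.

```python
import numpy as np

def analytic_amplitude(x):
    """|x + i*H(x)| via an FFT-based Hilbert transform (even-length x)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0        # double positive frequencies
    h[n // 2] = 1.0          # Nyquist bin kept once
    return np.abs(np.fft.ifft(X * h))

rng = np.random.default_rng(2)
fs, n = 500.0, 5000
t = np.arange(n) / fs
# Narrow-band "gamma range" noise: cosines at 30-50 Hz, random phases
freqs = np.arange(30.0, 50.0, 0.5)
phases = rng.uniform(0, 2 * np.pi, len(freqs))
x = np.cos(2 * np.pi * freqs[:, None] * t + phases[:, None]).sum(axis=0)

amp = analytic_amplitude(x)
# Null spikes: moments at which the envelope dips far below its mean,
# the proposed windows of susceptibility to weak expected input.
depth = amp.min() / amp.mean()
```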

Fig. 1. (figure omitted)
References
W. J. Freeman, Mass Action in the Nervous System, Academic Press (1975/2004).
W. J. Freeman, Origin, structure, and role of background EEG activity. Part 1. Analytic amplitude. Clin. Neurophysiol. 115, 2077-2088 (2004a).
W. J. Freeman, Origin, structure, and role of background EEG activity. Part 2. Analytic phase. Clin. Neurophysiol. 115, 2089-2107 (2004b).
W. J. Freeman, Origin, structure, and role of background EEG activity. Part 3. Neural frame classification. Clin. Neurophysiol. 116(5), 1118-1129 (2005).
W. J. Freeman, Origin, structure, and role of background EEG activity. Part 4. Neural frame simulation. Clin. Neurophysiol. 117(3), 572-589 (2006).
W. J. Freeman, Proposed cortical ‘shutter’ in cinematographic perception. Invited chapter in: Neurodynamics of Cognition and Consciousness, R. Kozma and L. Perlovsky (eds.).
W. J. Freeman and G. Vitiello, Nonlinear brain dynamics as macroscopic manifestation of underlying many-body field dynamics. Physics of Life Rev. 3, 93-118 (2006).
R. Kozma, M. Puljic, P. Balister, B. Bollobás and W. J. Freeman, Phase transitions in the neuropercolation model of neural populations with mixed local and non-local interactions. Biol. Cybern. 92, 367-379 (2005).
K. Linkenkaer-Hansen, V. M. Nikouline, J. M. Palva and R. J. Ilmoniemi, Long-range temporal correlations and scaling behavior in human brain oscillations. J. Neurosci. 21, 1370-1377 (2001).
Title: Multi-Scale
Adaptive Dynamic Modularity and Cognitive Function
Author: Ali
Minai1, Simona Doboli2
1: Department of Electrical & Computer Engineering -
2: Department of Computer Science -
Summary:
In
the last decade, experiments using multi-electrode arrays and brain imaging
have provided a wealth of information on the neural basis of perception,
cognition and action. This information has, in turn, driven a great deal of
computational modeling seeking to understand the
functioning of the brain at a systemic level. The picture emerging from these
investigations is that of a continually adapting, multi-scale, networked
dynamical system that interacts with the information flowing through it to
produce the phenomena of memory, intention, cognition, behavior
and consciousness. This view stands in stark contrast with the classical idea
of the brain as the body’s information processor. In this presentation, we link
this dynamical view to our previous work on latent attractors and to recent
work in systems biology, leading to a generic conception of emergent
organization and novelty in biological systems. In particular, we look at the
structures and processes underlying cognition as a specific instance of a
broader paradigm that is ubiquitous in biology.
An
idea implicit in much recent work on cognition is the notion of emergent
response systems: functional networks that arise as a result of – and shape the
response to – the afferent stimulus stream in the context of modulatory signals. We focus on possible mechanisms for
this using the formulation of interacting modules similar to latent attractors.
In particular, we consider how the scope of possible response networks may be
controlled in a way that maximizes efficiency using prior learning without
resorting to implausibly explicit design processes. To this end, we describe a
conceptual model that allows control, flexibility and robustness in the
emergent configuration of response networks – for both internal tasks (e.g.,
memory recall or idea generation) and external ones (i.e., behavior).
The
model we present comprises the following components:
The
system works through the rapid emergent coordination and tonic activation of a
core network at the appropriate scale, creating a broader “pool” of network
elements in what Crick and Koch might term the “penumbra” of this core.
Modulation, interacting with the inherently non-homogeneous modular
connectivity structure of the elements in the selected pool, produces a rapid
search leading to the emergence of a functional network and an acceptable
response – typically a pattern of activity or a sequence of such patterns. This
network persists until the core is destabilized by a combination of salient new
input and specific recurrent activation patterns. The ability to select core
networks at different levels of specificity/scale allows the system to balance
“exploitation” based on prior learning and “exploration” motivated by novel
contexts.
In
a generic sense, our model is similar to several others, but we focus primarily
on the following specific issues:
How
can diverse, context-dependent, reliable functional networks be elicited from
the same underlying structural network?
What
kind of neural architecture and neural processes can simultaneously facilitate
rapid production of familiar responses and efficient discovery of novel ones?
To
what extent (if at all) can this model explain the phenomenology of “mundane
creativity” (e.g., ideation, coherent speech, writing, etc.) and “inspired
creativity” (e.g., musical composition, poetic composition, art, etc.)?
In
particular, we make an explicit attempt to integrate ideas from models of motor
control and theories of cognition, using interacting metastable
attractors as an enabling mechanism. The key conceptual elements of our
framework are: 1) Multi-scale modularity
allows response networks to be constructed rapidly through a selective rather
than constructive process; 2) The connectivity structure of the modular
substrate network allows the system to balance responsiveness and robustness;
and 3) As in evolution, modularity enables the emergence of virtually limitless
novelty while preserving and using previously developed structures – a
neurobiological equivalent of evolvability. We
propose that defining and studying optimal modular architectures is a major
open research issue for cognitive science, and may also lead to superior neural
architectures for engineering applications.
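One minimal, hypothetical reading of such context-selected functional networks is a Hopfield-style module whose retrieval is biased by a latent context signal, so that the same structural network yields different functional networks in different contexts. The architecture and parameters below are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Hypothetical sketch: a Hopfield module with an additive "latent
# context" bias field that selects which stored pattern an ambiguous
# cue is resolved into. All parameters are illustrative.

rng = np.random.default_rng(5)
n = 64
patterns = np.sign(rng.standard_normal((4, n)))   # stored binary patterns

def hopfield_weights(pats):
    W = pats.T @ pats / pats.shape[1]             # Hebbian outer products
    np.fill_diagonal(W, 0.0)
    return W

W = hopfield_weights(patterns)

def retrieve(cue, bias, steps=20):
    """Synchronous recall under an additive contextual bias field."""
    s = np.sign(cue + 1e-9)
    for _ in range(steps):
        s = np.sign(W @ s + 0.3 * bias + 1e-9)
    return s

# The same ambiguous cue, resolved under two different latent contexts
cue = np.sign(patterns[0] + patterns[1] + 0.5 * rng.standard_normal(n))
out0 = retrieve(cue, bias=patterns[0])
out1 = retrieve(cue, bias=patterns[1])
overlap00 = out0 @ patterns[0] / n                # recall quality, context 0
overlap11 = out1 @ patterns[1] / n                # recall quality, context 1
```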
Title: Rich
Dynamics and Bifurcation in Populations of Spiking Model Neurons
Author: Emilio
Del-Moral-Hernandez
Summary:
Spiking
model neurons are a natural scenario for the emergence of dynamic phenomena.
The presence of dynamics playing an important role in neural and neural-assembly functionality appears at the most detailed level, in the Hodgkin and Huxley model describing the generation and propagation of action potentials in the active membrane; in the oscillatory behavior of neurons under sustained stimulation; and at the level of synaptic activity and post-synaptic signals.
In
addition, when we consider a population of neurons composing a neural assembly, we also observe the emergence of important global dynamic behavior that is central to achieving complex functionality. Non-linearity plays an important role in promoting rich dynamic behavior by allowing shifts between stability and instability of attractors, which gives rise to bifurcation phenomena and a diversity of dynamic behavior. Non-linearity and
rich bifurcation allow for the emergence of rich attractor behavior
even from very simple neural networks (i.e., networks with a small number of
neurons). It also allows for the blend of ordered behavior
and chaotic dynamics, as well as for the presence of fractality
and self-similarity in the landscape of dynamic attractors.
Model
neurons based on the integrate and fire model and
their electronic counterparts (spiking relaxation oscillators) are an
interesting example of a simple structure with emergent richness associated with its behavior. When subjected to appropriate
modulation of the stimulation signal, an isolated integrate and fire model
neuron can generate several different dynamic behaviors
with well known richness of bifurcation and cascading to chaotic dynamics, such
as the sine-circle recursive map, the logistic map, and tent map, as well as
many other dynamic systems with rich behavior. We can show that any first-order recursive behavior can emerge from the operation of the integrate-and-fire model neuron under proper modulation of the extinction voltage of its
associated electronic relaxation oscillator. In such a formulation of the integrate and fire model neuron, the bifurcation between
two different dynamic modalities can in principle be exercised through the change
of several alternative bifurcation parameters. For example, the amplitude of a periodical
stimulation received by the model neuron can be changed to promote the bifurcation
between different modalities of oscillatory behavior.
At the same time, the change of the average level of stimulation can also
generate a similar effect. The fact that we have different ways of reaching the
switching between different dynamic behaviors enlarges
the possibilities of devising different mechanisms for bridging the emergence
of rich behavior in model neurons to the biological
neuron phenomenology.
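As a concrete instance of one such first-order recursion, the sine-circle map can be iterated directly; the parameter values below are illustrative, chosen to contrast a mode-locked orbit with a quasiperiodic one.

```python
import numpy as np

# The sine-circle map, theta_{n+1} = theta_n + Omega
#   - (K / 2*pi) * sin(2*pi*theta_n)  (mod 1),
# one of the first-order recursions mentioned above for the firing
# phases of an integrate-and-fire unit under periodic modulation.
# Parameter values are illustrative.

def circle_map(theta0, omega, k, n):
    theta = theta0
    out = np.empty(n)
    for i in range(n):
        theta = (theta + omega
                 - k / (2 * np.pi) * np.sin(2 * np.pi * theta)) % 1.0
        out[i] = theta
    return out

# Mode-locked regime: at Omega = 1/2, K = 1 the orbit settles onto
# a short cycle (a superstable period-2 orbit through 0 and 1/2).
locked = circle_map(0.1, omega=0.5, k=1.0, n=600)[-100:]
# Weak coupling at the golden-mean Omega: quasiperiodic orbit that
# never repeats, filling the circle densely.
quasi = circle_map(0.1, omega=(np.sqrt(5) - 1) / 2, k=0.1, n=600)[-100:]

n_locked = len(np.unique(locked.round(6)))   # few distinct phases
n_quasi = len(np.unique(quasi.round(6)))     # many distinct phases
```

Sweeping K or Omega traces out the Arnold tongues and, beyond K = 1, the cascade to chaos referred to in the text.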
An
even more complex and richer scenario, in terms of produced dynamics, can be
created through the coupling of several units with rich behavior
such as the ones based on spiking neuron oscillators with bifurcation and
chaotic phenomena. The phenomenology of rich repertoire of dynamic attractors
with diverse features that appears at the single neuron level reflects in
similar richness at the level of global behavior that
emerges in structures built through the coupling of several neurons. At this
network level, the understanding of the emergent phenomena requires the use of
tools targeting more macroscopic measures such as entropy measures, average flow
of information among the nodes in the network, as well as the use of measuring
and visualization tools which are appropriate for dealing with multidimensional
attractors: the dimension of the state variable of the dynamic systems being
studied and characterized grows now with the number of neurons in the network. A
macroscopic phenomenon that relates to the global behavior of a coupled structure composed of several neurons with rich dynamics is the interplay between ordered and disordered behavior. In many circumstances, we can view the evolution of the assembly’s state variables as switching between
situations of ordered behavior, which in principle
represent meaningful information or the completion of a pattern recognition
task, for example, and situations of apparently erratic behavior,
particularly during the process of network search for stored patterns. With multi-neuron assemblies, several complex functionalities can be implemented by exploiting the multidimensional nature of the state variables. These potentially include image understanding, processing of multiple sensory streams, multidimensional logical reasoning, and complex motor control. Memory, association, hetero-association and pattern recognition are also functions that can be implemented in such assemblies.
Title: What
Neural Processes Allow Prediction of A Bistable
Percept?
Author: Hualou Liang,
Summary:
Bistable
visual stimuli such as Rubin’s vase/face or the Necker cube refer to the
phenomena of spontaneously alternating percepts
despite constant retinal inputs.
Such
stimuli have provoked considerable interest in neuroscience research, because
the stimulus is effectively dissociated from the percept, hence it provides a
unique opportunity for addressing the question as to whether or not neurons
responding to a particular perceptual interpretation will alter their responses
when a percept emerges. Exploiting its ambiguous nature, single-cell studies
have found that the spiking activity from individual neurons is to a certain
degree correlated with an animal’s perceptual report. More recently, a few
studies revealed that the local field potential (LFP) at certain frequency
bands can also be used to predict these perceptual judgments. Although both
neural processes correlate with perception, the relationship between extracellular field potential and spiking activity remains
enigmatic.
We
have studied, with behaving monkeys, whether spikes and LFP provide
complementary information for perceptual discrimination, and their relative
importance in resolving the perceptual ambiguity during bistable
stimulation using ambiguous structure-from-motion (SFM), a powerful stimulus
that allows reconstruction of an object in depth from motion cues alone.
I
will present new data in which we directly compare spiking activity and LFP
response during ambiguous structure-from-motion in monkey area MT (middle
temporal). In general, and consistent with previous work, we found that LFP at
various frequency bands provided only a modest ability to predict
trial-by-trial fluctuations of the monkey’s percept, less than that obtained by
spiking activity. However, we report that, on the very same data, the
integration of several band-limited LFP signals resulted in significant improvement in predictability. Additionally, when spiking activity was
considered, the average success rate approached 72% correct. Further, when the
neural signals from multiple electrodes were combined, the perceptual states
could be accurately predicted on 80% of the ambiguous trials. I will discuss the implications of these
results for the neural concomitants of visual awareness, as well as the
relative role of different neural processes in bistable
perception.
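A hedged sketch of this kind of multi-signal decoding, on synthetic trials rather than the recorded MT data, combines several weak band-limited features with a stronger spike-count feature in a Fisher-style linear readout; the feature names and effect sizes below are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch (synthetic data, not the MT recordings): a
# linear readout pooling band-limited LFP features and a spike count
# to predict a binary percept, in the spirit of the decoding above.

rng = np.random.default_rng(3)
n_trials = 400
percept = rng.integers(0, 2, n_trials)         # 0/1 perceptual report

def make_feature(labels, separation, noise=1.0):
    """Feature whose mean shifts with the percept by `separation`."""
    return labels * separation + rng.normal(0, noise, len(labels))

features = np.column_stack([
    make_feature(percept, 0.5),    # e.g. gamma-band LFP power (weak)
    make_feature(percept, 0.5),    # e.g. beta-band LFP power (weak)
    make_feature(percept, 1.0),    # e.g. spike count (stronger)
])

train, test = np.arange(0, 200), np.arange(200, 400)

def fit_linear_readout(X, y):
    """Fisher-style discriminant: w = Cov^-1 (mu1 - mu0)."""
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    cov = np.cov(X.T) + 1e-6 * np.eye(X.shape[1])
    w = np.linalg.solve(cov, mu1 - mu0)
    b = -0.5 * w @ (mu0 + mu1)
    return w, b

w, b = fit_linear_readout(features[train], percept[train])
pred = (features[test] @ w + b > 0).astype(int)
accuracy = (pred == percept[test]).mean()
# Pooling several weak features beats any single one in expectation.
```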
Title: Activation
and delay in functional MRI brain signals of selective activation
Author: M.
Fabri1, G. Mascioli1,3,
A.M. Perdon2, G. Palonara3, S. R. Viola1,2,
U. Salvolini3 and T. Manzoni1
1Dipartimento di Neuroscienze
2Dipartimento di Ingegneria
Informatica, Gestionale e dell’Automazione
3Istituto di Radiologia
Università
Politecnica delle
Summary:
This
paper deals with estimating delay in functional MRI derived time series
representing the hemodynamic responses of somato-sensory cortex. Functional MRI is a non-invasive
imaging technique that provides a series of brain images showing the brain activity
of one or more cerebral areas. The detection is obtained by looking at the hemodynamic response of a particular area. Usually, fMRI output signals (hereinafter, responses) are related to a specific input signal (hereinafter, the stimulus) that is expressed by a binary step function. In this work, we focus on a motif discovery approach for time series based on fMRI images. The problem of delay estimation has been extensively addressed in the signal processing literature. Signal delay has been defined with respect to two different signals, that is, the input signal and the output signal, in terms of the closeness in time of certain patterns relating the two signals [1, 2, 5]. These approaches work well when all the signals are quite homogeneous, both regarding the shapes of different signals and regarding the periodicities within the same signal.
When such homogeneity requirement cannot be assumed, alternative approaches
should be investigated for characterizing the differences within the same
series and between two or more series. One approach considered in the literature to detect interesting knowledge from sequence data is motif discovery, which focuses on extracting from series or, more generally, sequence data previously unknown and frequently appearing patterns, called “motifs” [3, 6]. Within bioengineering and the modelling of biological phenomena, it has been applied to the detection of novel and interesting knowledge in muscular movement data.
The stimulus is given by a series of couples {⟨s1, t1⟩, ..., ⟨sk, tk⟩}, for k = 1, ..., 101, in which the values si belong to the set {0, 1} according to the different time intervals. The patterns of interest (hereinafter, activation patterns) are here given by trends that the stimulus and the response have in common. The response is given by a time series {⟨r1, t1⟩, ..., ⟨rk, tk⟩}, for k = 1, ..., 101,
in which each value indicates the state of activation in that particular area.
A frame is obtained at a time interval of three seconds. Each series is made by
the response values to five subsequent intervals of stimulation. We are
interested in investigating the delay according to which the response starts.
In particular, we are interested in detecting, as accurately as possible, the frame in which the response starts. To do that, we have to deal with variability due to individual differences; moreover, the presence of noise in the signals has to be taken into account and addressed by means of sufficiently robust approaches; finally, we have to assume that the responses within the same series may change, and for this reason, in order to reduce the complexity of the problem, we are working on decomposing it. To do so, we consider the responses as made up of subintervals (five subintervals) that correspond to a non-null stimulus. The motif discovery approach considers
episodes in sequences [4]. Given a set E of event types, an event is defined as a pair (X, k), where X ∈ E and k ∈ N is the step of occurrence. An event sequence s of length n is a triple (s, Ks, Ke), where s = {(s1, k1), ..., (sn, kn)} is an ordered sequence of events all belonging to E, that is, si ∈ E for all i and ki ≤ ki+1 for all i = 1, ..., n−1; Ks and Ke are called, respectively, the starting point and the ending point, with Ks, Ke ∈ N, Ks ≤ ki ≤ Ke for all i = 1, ..., n−1, and n = Ke − Ks. Every session is considered an event sequence s in which every event type si ∈ E is a random variable assuming one of the discrete values in E with unknown probability distribution function (PDF). Episodes, indicated by Greek letters, are defined to be partially ordered collections of events occurring in a given order, and are modeled as directed acyclic graphs whose random vertices are the items of the sequences. In particular, in this work we search for serial episodes, defined as the ones for which the partial order is not trivial. In serial episode detection the focus is put on the empirical frequency f(si = ei, k; sj = ej, k+1) of the occurrence of object ei at step k together with object ej at step k+1, for all k ≤ n, irrespective of the absolute value of k, that is, of the absolute position of the item inside the sequence. For episode detection a sliding window W(s, win) of size win, starting at step s, is used; at every next step, W(s+1, win), the starting point s shifts by one position to the right, while win does not vary, until s + win = n,
that is, the end of the sequence, is reached. The window size has proved to be
critical for delay detection. Several experiments show that a too large window
may lead to no detection of the activation pattern, especially when the
response is not intense (e.g. for the thorax and leg). For finding the episodes in which a response starts, we are interested in finding the subsequence within the series in which the values are monotonically increasing with respect to the starting-point value. This criterion suggests mapping the numerical values of each series by comparing sequentially every observation in the series to all the subsequent observations. More formally, given a series of length n, we derive n−1 discrete series by mapping each value xi, i = 1, ..., n−1, belonging to the series using the following step function: s(xi, xj) = +1 iff xi < xj, s(xi, xj) = 0 iff xi = xj, and s(xi, xj) = −1 iff xi > xj, for j = 2, ..., |win|. Because of the variability within the signals, we define four kinds of activation patterns: 1) a slowly ascending response, given by two subsequent positive values of s; 2) a quick and intense response, given by two subsequent values of s whose distance is equal to two; 3) a moderate response, given by two subsequent values of s whose distance is less than two and greater than one; 4) a small response, given by two subsequent values of s whose distance is equal to one. We evaluate the patterns found by means of manual exploration and by means of correlation with
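A simplified sketch of this step-function mapping (comparing successive values only, rather than every value against all subsequent values within the window) and of the detection of an ascending activation pattern might read:

```python
import numpy as np

# Sketch of the step-function mapping described above, simplified to
# successive comparisons: each value is compared with the next one,
# yielding a +1/0/-1 series whose runs of +1 mark monotonically
# increasing stretches, i.e. candidate activation onsets. The
# response values are synthetic stand-ins for the fMRI series.

def step_map(series):
    """s(x_i, x_j) = +1 if x_i < x_j, 0 if equal, -1 if x_i > x_j."""
    x = np.asarray(series, dtype=float)
    return np.sign(x[1:] - x[:-1]).astype(int)

def first_activation_frame(series, run=2):
    """First index starting `run` consecutive increases (+1 steps)."""
    s = step_map(series)
    for i in range(len(s) - run + 1):
        if np.all(s[i:i + run] == 1):
            return i        # frame where the response starts rising
    return None             # no activation pattern detected

# Synthetic hemodynamic-like response: flat baseline, then a slow rise
response = [0.0, 0.1, -0.1, 0.0, 0.5, 1.2, 2.0, 2.4, 2.5]
onset = first_activation_frame(response)
```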
REFERENCES
[1] Azaria, M. and Hertz, D. (1984). “Time delay estimation by generalized cross-correlation methods”. IEEE Trans. on Acoustics, Speech and Signal Processing, ASSP-32(2):280-285.
[2] Knapp, C. and Carter, G. (1976). “The generalized correlation method for estimating time delay”. IEEE Trans. on Acoustics, Speech and Signal Processing, 24(4):320-327.
[3] Chiu, B., Keogh, E., and Lonardi, S. (2003). “Probabilistic Discovery of Time Series Motifs”. Proc. 9th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining.
[4] Mannila, H., Toivonen, H., and Verkamo, A. I. (1997). “Discovery of Frequent Episodes in Event Sequences”. Data Mining and Knowledge Discovery, 1.
[5] Mueller, M. (1975). “Signal Delay”. IEEE Trans. on Communications, COM-23, pp. 1375-1378.
[6] Patel, P., Keogh, E., Lin, J., and Lonardi, S. (2002). “Mining motifs in massive time series databases”. Proc. 2nd IEEE Int. Conf. on Data Mining.
Title: Dynamic
Logic: Neurodynamics of Perception and Consciousness
Author: Leonid
I. Perlovsky
Summary:
Neural dynamics of perception evolves from vague, fuzzy and less
conscious states to more concrete and conscious states. The talk will compare a
dynamic logic description of this process with chaotic neurodynamics
observed in EEG data, where high-dimensional chaotic states transition into
low-dimensional, “less” chaotic states. Perception and cognition are described
as interaction of mechanisms for concepts,
emotions, instincts, imaginations, and intuitions; these mechanisms are mathematically
described. Mathematical theory is related to the knowledge instinct, which
drives the mind to understand the world. This instinct is even more important
than sex or food. Mathematics of neurodynamics is
connected to the mind, the high to the mundane. I briefly discuss engineering
applications (detection, financial predictions, Internet search engines); and
present results demonstrating orders of magnitude improvement in classical
detection and tracking in noise. Future research directions are reviewed: roles
of the beautiful, music, sublime in the mind, cognition, consciousness, and
evolution of cultures. The current “East vs. West” confrontation turns out to be related to differences in grammar between English and Arabic.
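The vague-to-crisp process of dynamic logic can be sketched as an annealed, EM-like estimation in which the fuzziness of the data-to-model associations is gradually reduced; the one-dimensional toy data and the schedule below are illustrative assumptions, not Perlovsky's exact equations.

```python
import numpy as np

# Hedged sketch of the dynamic-logic "vague to crisp" process: model
# parameters are estimated with fuzzy data-to-model association
# weights whose fuzziness (a shared sigma) is gradually reduced, so
# vague associations crystallize into near-crisp ones.

rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(-2, 0.3, 100), rng.normal(2, 0.3, 100)])
means = np.array([-0.5, 0.5])                  # vague initial models

for sigma in [4.0, 2.0, 1.0, 0.5, 0.3]:        # decreasing fuzziness
    # association weights f(m|n): normalized Gaussian similarities
    d2 = (data[:, None] - means[None, :]) ** 2
    f = np.exp(-d2 / (2 * sigma ** 2))
    f /= f.sum(axis=1, keepdims=True)
    # parameter update: association-weighted means (one step per sigma)
    means = (f * data[:, None]).sum(axis=0) / f.sum(axis=0)

# 0 when associations are maximally vague (0.5/0.5), 1 when crisp
crispness = np.abs(f - 0.5).mean() * 2
```

As sigma shrinks, the model means migrate from the vague initial guesses to the two true cluster centers, mirroring the transition from less conscious to more conscious states described above.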
Title: Two functional roles for the hippocampal
dynamics of theta phase precession; spatial representation and memory formation.
Author: Colin Molter
Summary:
In rats, the hippocampus is known to play crucial
roles in spatial representation and in memory formation. Spatial representation
is linked to the presence of hippocampal place cells
and the presence of entorhinal grid cells, located one synapse upstream of the hippocampus. Memory formation is related to the
ability to create stable representations, which can be recovered from partial
cues. Additionally, these representations can be transferred to other brain
areas for long term storage.
To fulfill these two cognitive roles, we demonstrate here the essential role played by theta phase precession. First we show that entorhinal
theta phase precession is necessary to explain the online formation of hippocampal place cells from entorhinal
grid cells; it demonstrates a causal relationship between spike timing and
spatial representations. Additionally, the distinction between dentate gyrus and CA3 place cells representation is discussed here.
Second, the hippocampal theta phase precession leads
to online memory formation of the trajectories, leading to the formation of a
cognitive map of the environment. During sharp waves events, in agreement with
biological observations, fast replay of behavioral activities are observed in our
model. These events can explain memory consolidation. In summary, this work
points out two fundamental roles played by the theta phase precession mechanism
in an integrative view of the entorhinal-hippocampal
network.
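The precession phenomenon at the heart of this abstract can be illustrated with a toy calculation (this is not the model presented in the talk, and all names and parameters below are illustrative assumptions): as the animal traverses a place field, the theta phase at which the cell fires advances roughly linearly with position, from late to early phases of the theta cycle.

```python
# Toy sketch of theta phase precession (illustrative only, not the
# authors' model): firing phase advances linearly with the fraction
# of the place field the animal has already traversed.

def spike_phase(position, field_start, field_width,
                entry_phase=360.0, exit_phase=0.0):
    """Theta phase (degrees) of firing at a given position in the field."""
    frac = (position - field_start) / field_width  # 0.0 .. 1.0 across field
    frac = min(max(frac, 0.0), 1.0)                # clamp outside the field
    return entry_phase + frac * (exit_phase - entry_phase)

# Phase advances as the animal moves through a 20 cm field
phases = [spike_phase(x, field_start=0.0, field_width=20.0)
          for x in (0.0, 5.0, 10.0, 15.0, 20.0)]
print(phases)  # [360.0, 270.0, 180.0, 90.0, 0.0]
```

Because phase here is a function of position rather than time, spike timing relative to theta carries spatial information, which is the property the abstract exploits for place-cell formation.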
Title: Exploring the mechanisms of neural synchronization
Author: J.P. Thivierge,
Département de Physiologie - Université de Montréal
jean-philippe.thivierge@umontreal.ca
Summary:
Neural synchronization is of
wide interest in neuroscience, and has been argued to form the substrate for conscious
attention to stimuli, movement preparation, and the maintenance of
task-relevant representations in active memory. It is well known that different
patterns of neural connectivity can impose inherent limitations on the
repertoire of computations that a neural system can perform (Thivierge & Marcus, 2007), and influence the processing
of synchronized events. In this symposium, I will explore interdependencies
between neural organization and information processing by examining, through a
computational approach, the consequences of functional connectivity on the
propagation of synchronized spike activity across
neural pathways. The foremost challenge in developing realistic models of
large-scale synchronization is the apparent lack of periodicity with which
network spikes (NSs) occur, as evidenced
experimentally both in vitro and in vivo. This aperiodicity cannot be explained
by several widespread accounts of synchronization, including both fixed-point
attractors and limit-cycle oscillators. In recent years, new classes of models
based on chaotic destabilization of the state space have emerged as promising
candidates to explain the aperiodic phase of NSs. By linking these models to known physiological principles,
it is possible to offer novel insights into the key mechanisms of neural
dynamics responsible for generating coherent states of synchrony. In
particular, I will address the central role of voltage-gated Ca2+ channels,
NMDA receptors, and dopamine, each associated with a unique role in the
initiation and dissolution of NSs. Through computer-based simulations, I will demonstrate ways in which
aperiodic synchronization can arise naturally with
only a minimal set of assumptions, including heterogeneous cell properties and
a non-linear rise and fall of intracellular calcium concentrations. Crucially,
by varying the functional connectivity of neuronal networks, computational
modeling can demonstrate the direct implications of different forms of network
topologies on the propagation of NSs. For instance,
one well-documented form of neuro
References
Thivierge, J.P., & Marcus, G.F. (in press). The topographic brain: From
neural connectivity to cognition. Trends in Neurosciences.
Thivierge, J.P., Rivest, F., & Monchi, O. (2007). Spiking neurons, dopamine,
and plasticity: Timing is everything, but concentration also matters.
Synapse, 61, 375-390.
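The mechanism sketched in the abstract above, aperiodic network spikes arising from heterogeneous cell properties, recurrent excitation, and a nonlinearly accumulating calcium-like variable, can be caricatured in a few lines. This is a minimal toy sketch under stated assumptions, not the speaker's model; the function name `simulate_network_spikes` and every parameter value are assumptions chosen only to produce irregular population bursts.

```python
import random

def simulate_network_spikes(n_cells=100, steps=3000, seed=1):
    """Toy network (illustrative assumptions throughout): heterogeneous
    thresholds, noisy drive, all-to-all recurrent excitation, and a slow
    activity-dependent 'calcium-like' variable that suppresses the drive.
    Returns the population spike count at each time step."""
    rng = random.Random(seed)
    # Heterogeneous firing thresholds (assumed uniform spread)
    thresholds = [0.9 + 0.3 * rng.random() for _ in range(n_cells)]
    v = [0.0] * n_cells   # membrane potentials
    ca = 0.0              # slow calcium-like suppression variable
    counts = []
    for _ in range(steps):
        spikes = 0
        for i in range(n_cells):
            drive = 0.05 + 0.1 * rng.random()        # noisy external input
            v[i] = 0.9 * v[i] + drive / (1.0 + 5.0 * ca)  # leaky integration
            if v[i] >= thresholds[i]:
                v[i] = 0.0                           # reset after a spike
                spikes += 1
        kick = 0.02 * spikes                         # recurrent excitation
        for i in range(n_cells):
            v[i] += kick
        # Calcium rises nonlinearly with population activity, decays slowly;
        # its slow decay sets an irregular refractory period between bursts
        ca = max(0.0, ca + 0.01 * spikes ** 2 / n_cells - 0.005 * ca)
        counts.append(spikes)
    return counts

counts = simulate_network_spikes()
print("mean count:", sum(counts) / len(counts), "peak count:", max(counts))
```

With heterogeneity and noise, bursts terminate when the calcium-like variable quenches the drive and recur only after it decays, so the inter-burst intervals are irregular rather than limit-cycle periodic, which is the qualitative point of the abstract.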
Title: Developmental Neuropathology as a Paradigm to Study
Complex Nonlinear Neurodynamical Systems
Author: Renan W.F. Vitral (1,2)
(1) NIPAN - Center of Computational Intelligence, Adaptive Systems and
Neurophysiology, Dept. of Physiology, Biological Sciences Institute, Federal
University of Juiz de Fora, BR.
(2) ICONE - LSI, Dept. of Electronic Systems, School of Electrical
Engineering, EP-USP.
Summary:
This presentation intends to
open a discussion about memory and learning in visual navigation tasks, in the
context of neuroplasticity and supported by adaptive neural networks and gene
expression. Our main concern is the need for stronger challenges, as expressed
by the critiques and ethics of computational intelligence scientists, and the
ability to build multidisciplinary teams, so extensive is the scope covered by
cognition, which draws on tools as diverse as complex nonlinear dynamical
systems, adaptive gene expression, adaptive behavior, and the necessary timing.
To this end, we use a mouse model of developmental neuropathology as a
paradigm. As we will show, the presented hypotheses are supported by our
previous studies of neurobiology and behavior in mice exposed to a whole-body
dose of 3 Gy from an X-ray source on the sixteenth gestational day (E16),
which produces many deficits in adulthood, essentially in the visual circuits
and systems, neocortex, and hippocampus.
Besides their significant reproducibility, the most interesting effects are
the following: 1) absence of primary visual cortex; 2) callosal agenesis;
3) prefrontal, periventricular, and hippocampal clusters of ectopic neurons;
4) extensive loss of retinal displaced amacrine cells in the ganglion cell
layer; and 5) extensive loss (about 75%) of the dorsal lateral geniculate
nucleus (DLGN). Behaviorally, we find: 1) normal visual acuity (including
normal acquisition on both a black-versus-white discrimination task and a
vertical-versus-horizontal discrimination task); 2) normal visual reference
memory acquisition in a water-escape test; and 3) a severe deficit in visual
working memory, such that the visual navigation task in the Lashley III maze
is never learned. It is also shown that the great reduction of the DLGN occurs
not as a primary effect of ionizing radiation, but secondarily, coinciding
with the usual period of programmed cell death, i.e., within the first 5
postnatal days. A full description of these data can be found in our previous
publications.
These experiments lead to the
conclusion that, even with the above morphological changes, visuo-spatial
reference memory is only mildly affected and the animals can learn a visual
navigation path. On the other hand, we suggest that callosal agenesis and
prefrontal cortical ectopias act jointly to disturb navigation planning, with
a profound effect on learning acquisition dependent on visuo-spatial working
memory.
These results can be interpreted in distinct ways: 1) the parallel visual
memory systems may work in an independent but additive way; 2) because the
cell death induced by ionizing irradiation occurs as apoptosis, cerebral
reorganization could open specific routes in which plasticity appears as a
robust adaptation during synaptogenesis, functional in some systems and
non-functional in others, and even promotes abilities not shown in normal
development; and 3) the adaptive behavior of the remaining cells, such as
radial glia and early post-mitotic neurons, may work favorably to restore
normal primary visual functions, supporting performance on visual reference
memory, while the defects in the prefrontal cortex and corpus callosum remain
less amenable to a functional pattern of developmental neuroplasticity.
These results distinguish the two main visuo-spatial memory systems, reference
memory and working memory, which show distinct patterns in the expression of
plasticity capabilities.
Finally, I bring these
hypotheses forward for an open discussion that will certainly contribute to a
better understanding not only of the memory systems acting on visuo-spatial
memory, but also of the set of behaviors at work in these kinds of tasks, such
as saccades, head-direction movements, planning, learning, ontogenetic
adaptive behavior, and others, like visual attention and oculomotor control.
Support: NIPAN, UFJF, Fapemig, CNPq, Finep, Faperj.
References to Presentation
Vitral, R.W.F., Vitral, C.M., & Dutra, M.L.V. (2006). Callosal agenesis and
absence of primary visual cortex induced by prenatal X rays impair
navigation's strategy and learning in tasks involving visuo-spatial working
but not reference memory in mice. Neuroscience Letters, 395, 230-243.
Vitral, RWF, Araujo,
GF,
Schmidt, SL, Vitral, RWF and
Previous workshops in this series:
2006 Nonlinear spatio-temporal neural dynamics workshop
2005 Nonlinear spatio-temporal neural dynamics workshop
2004 Nonlinear spatio-temporal neural dynamics workshop
2003 Nonlinear spatio-temporal neural dynamics workshop
2002 Complex nonlinear neural dynamics workshop
2001 Complex nonlinear neural dynamics workshop