A couple of days ago, I was reading a document about a "complex system." The author was actually describing a complicated system, but hardly a complex one. It seems to me the casual term "complex" has become so broad as to be almost meaningless, which is increasingly unfortunate. Years ago, Dr. Seth Lloyd (no relation) collected definitions of the term - interesting and enlightening, but ultimately short of the goal. Still, Lloyd's collection does reveal some interesting patterns.
I suggest we start with some characterizations of patterns of complexity. The first is "subjective complexity" - aspects of complexity caused by incompleteness, lack of understanding or information, uncertainty, and probabilistic behavior. This "simple" aspect of complexity is the focus of most books on the subject. Compare those aspects with "objective complexity" - aspects that emerge from three or more mutual, non-linear couplings (sometimes referred to as non-linear recurrence). Careful study shows that the subjective and objective aspects are themselves coupled into what becomes a foundation for potentially complex systems. Yet even this is incomplete.
A major contribution came from Steven Strogatz in his book "Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering" (Perseus, 1994). The problem was the word "chaos," which Strogatz attributed to systems of three or more mutually coupled, non-linear equations. Strogatz's chaotic systems are only complex and chaotic at some energetic or thermodynamic non-equilibrium.
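To make the "three or more mutually coupled, non-linear" criterion concrete, here is a minimal Python sketch of the Lorenz equations, one of the standard examples from Strogatz's book. The Euler integrator, step size, and trajectory length are my own rough choices for illustration, not anything from the text:

```python
# A minimal sketch of the Lorenz system: three mutually coupled,
# non-linear ODEs. The parameters (sigma=10, rho=28, beta=8/3) are the
# standard chaotic-regime values.

def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one (crude) Euler step."""
    dx = sigma * (y - x)      # x is coupled to y
    dy = x * (rho - z) - y    # y is coupled, non-linearly, to x and z
    dz = x * y - beta * z     # z is coupled, non-linearly, to x and y
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(x0, y0, z0, steps=50000):
    x, y, z = x0, y0, z0
    path = [(x, y, z)]
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
        path.append((x, y, z))
    return path

# Sensitive dependence: two trajectories starting 1e-9 apart decorrelate.
a = trajectory(1.0, 1.0, 1.0)
b = trajectory(1.0 + 1e-9, 1.0, 1.0)
divergence = abs(a[-1][0] - b[-1][0])
```

Running the two trajectories side by side shows the point: a perturbation at the ninth decimal place eventually produces macroscopically different states, even though every coupling is deterministic.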
Chaos becomes associated with the exchange of entropy, information, or energy at some "distance from equilibrium" (1) that acts upon the potentially complex system. Contextually, chaos has some relationship with stability, in that systems at or near informational, entropic, and energetic equilibrium are not ultimately complex, regardless of how complicated they are (how many parts exist). For example, we may speak of a potential instability that is currently at equilibrium (an egg balanced on its end).
When we couple subjective incompleteness with the energy, entropy, and informational dynamics of three or more mutually coupled, non-linear objective systems, the potentially complex system becomes a realized complex system - or simply a "complex system." The coupling of these aspects implies that complex systems are potentially meta-complex.
(1) See the Brussels-Austin Group and the pioneering work of Ilya Prigogine.
Friday, March 18, 2011
Thursday, July 15, 2010
Sensemaking of Complex Systems
Sensemaking seems very much related to pattern recognition - which obviously assumes you are cognizant of having seen that pattern before. Note that this is not the same as saying, "I have seen this exact phenomenon before." For example, one might have seen collective swarming behavior in fish, in birds, in ants, even in people - there is a pattern to which we have given the name "swarm," characterized by some combination of synchrony, orientation (direction), attraction, bifurcation (n-furcation), and dispersion.
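As a toy illustration of that pattern, here is a boids-style sketch in Python. The three rules and their weights are my own hypothetical stand-ins for the attraction, orientation, and dispersion components named above, not any standard library or model:

```python
import random

# A toy one-dimensional swarm sketch: each agent's velocity is nudged by
# three rules -- attraction (toward the group centroid), orientation /
# synchrony (align with the mean heading), and dispersion (back away from
# crowded neighbours). Weights and thresholds are arbitrary illustrations.

def step(positions, velocities, w_attract=0.01, w_align=0.1, w_disperse=0.05):
    n = len(positions)
    centroid = sum(positions) / n
    mean_heading = sum(velocities) / n
    new_vel = []
    for i in range(n):
        p, v = positions[i], velocities[i]
        v += w_attract * (centroid - p)          # attraction
        v += w_align * (mean_heading - v)        # orientation / synchrony
        for j in range(n):                       # dispersion
            if j != i and abs(positions[j] - p) < 0.5:
                v -= w_disperse * (positions[j] - p)
        new_vel.append(v)
    new_pos = [p + v for p, v in zip(positions, new_vel)]
    return new_pos, new_vel

random.seed(1)
pos = [random.uniform(-10.0, 10.0) for _ in range(20)]
vel = [random.uniform(-1.0, 1.0) for _ in range(20)]
for _ in range(200):
    pos, vel = step(pos, vel)
```

Nothing here is specific to fish or birds - which is exactly the point: the "swarm" pattern lives in the rules, not in the substrate.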
How people go about the business of sensemaking is often quite different from the way artificial intelligence goes about it. By this I mean that humans often have much more sensory and contextual information upon which to base their classifications. This differs from raw information, such as that stored in computer memory, in that human contextual information is encoded in highly coupled networks - the real neural network. Computer memory is discrete, rank and file - the substrate being independently and identically distributed (iid).
How would an artificial neural network go about sensemaking regarding the swarming behavioral pattern? The "sense" would have to be made in terms of many other contextual patterns. One such contextual pattern is the "shape" of a connective Compositional Pattern Producing Network (CPPN), as found in HyperNEAT. However, that is just one context, and we need a network of contexts for sensemaking. Moreover, these contexts exist at various timescales, from nearly instantaneous to universally constant.
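For a rough feel of the CPPN idea, here is a minimal Python sketch. The fixed two-node composition below is purely illustrative - a real HyperNEAT CPPN has an evolved topology, and nothing here is the actual HyperNEAT API:

```python
import math

# A hypothetical CPPN sketch: a small fixed composition of periodic and
# symmetric activation functions, queried over pairs of substrate
# coordinates to yield connection weights -- the role a CPPN plays in
# HyperNEAT. Real CPPN topologies are evolved, not fixed like this one.

def gaussian(x):
    return math.exp(-x * x)

def cppn(x1, y1, x2, y2):
    """Map a source/target coordinate pair to a connection weight."""
    h1 = math.sin(2.0 * (x1 - x2))   # periodic node: repeated structure
    h2 = gaussian(y1 - y2)           # symmetric node: bilateral symmetry
    return math.tanh(h1 + h2)        # output node, bounded in (-1, 1)

# Query the CPPN over a small 2-D substrate to build a weight pattern.
coords = [(i / 2.0, j / 2.0) for i in range(3) for j in range(3)]
weights = {(a, b): cppn(*a, *b) for a in coords for b in coords}
```

The "shape" I refer to above is visible even in this cartoon: the geometry of the substrate, filtered through the composed functions, determines the connectivity pattern.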
Some equate thought with computation. I'm not sure I agree. There is a composition between networked computation and linear computation (serial and/or parallel) that seems necessary for categorical sensemaking. And, of course, just because something makes sense doesn't mean it is true or the right thing to do. That takes some interstitial experimentation - or meta-computing - with comparison to real-world data.
So, I will go back to my Chinese Room and continue working on that categorical composition.
Saturday, March 27, 2010
Complexity vs. Chaos
Complexity and chaos seem to be distinct phenomena, a difference I would like to explore in this post. First, I admit to a Brussels-Austin perspective on thermodynamics. The measure of chaos seems to be distance from thermodynamic equilibrium. In open systems, this is the distance - in degrees - above absolute zero. In closed systems, it is the thermodynamic difference (in heat and other kinetic motion) between the system elements.
Compare this to objective complexity, which is simply defined by three (or more) degrees of mutual, non-linear coupling between system elements. This three-body problem defines the simplest form of complexity.
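A minimal numerical sketch of that simplest case, in Python - the units, masses, initial conditions, and softening term are arbitrary illustrations (G = 1):

```python
# A planar three-body sketch: each body feels a non-linear
# (inverse-square) pull from the other two, so all three are mutually
# coupled. The softening term eps keeps the crude Euler integrator from
# blowing up at close encounters.

def accelerations(pos, masses, G=1.0, eps=0.01):
    acc = []
    for i, (xi, yi) in enumerate(pos):
        ax = ay = 0.0
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r3 = (dx * dx + dy * dy + eps) ** 1.5
            ax += G * masses[j] * dx / r3
            ay += G * masses[j] * dy / r3
        acc.append((ax, ay))
    return acc

def simulate(pos, vel, masses, dt=0.01, steps=1000):
    for _ in range(steps):
        acc = accelerations(pos, masses)
        vel = [(vx + ax * dt, vy + ay * dt)
               for (vx, vy), (ax, ay) in zip(vel, acc)]
        pos = [(x + vx * dt, y + vy * dt)
               for (x, y), (vx, vy) in zip(pos, vel)]
    return pos, vel

# Start all three bodies at rest -- the "nothing is moving" case.
pos0 = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8)]
vel0 = [(0.0, 0.0)] * 3
pos, vel = simulate(pos0, vel0, [1.0, 1.0, 1.0])
```

Even starting from rest, the mutual coupling alone produces motion - a crude stand-in for potential complexity being realized once the system is no longer static.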
We can contrast complexity and chaos by considering the three-body problem at thermodynamic equilibrium. Nothing is moving; there are no mutual orbits. The coupling mechanism may be underdetermined (subjective complexity), but it may be considered as a thermodynamic perturbation (even though the perturbation has nothing to do with the heat component). As the distance from thermodynamic equilibrium increases, motion emerges in the three-body system. The complexity within the system moves from "potential" complexity to "actual" complexity - a form of realization.
Of course there is subjective complexity, referring to the uncertainty, stochasticity, or lack of knowledge in systems - but this occurs in simple systems as well as complex ones.
The term complexity is often used indiscriminately to describe both complexity and the coupling between complexity and chaos. If it were up to me, I would create a different word for the complex/chaos coupled system - something like "chomplexity" - but I hate neologisms, so I merely add a footnote to distinguish the two.
It makes sense that the Inuits have many words for snow. We have overloaded our one word, complexity, almost to the breaking point. Maybe it's time to rethink our lexicon.
Saturday, February 27, 2010
Evolutionary Complexity
I've been working on a massive project to determine possible evolutionary paths in complex systems development. Some see this as engineering, while others argue that engineering is more deterministic than this characterization permits.
The difference is fundamental. Engineering applies (hopefully) scientific principles to a problem domain at specific points in time. The possibility space, however, extends from the past, through the present, and into the future. The evolution of the possibility space over time changes the nature of the problem, and therefore its engineering solutions. The result resembles an ensemble of network paths - an entanglement of path trajectories, some of which will almost certainly turn out to be wrong. From a practical perspective, how can we transition from what we discover is an incorrect path to a more correct path, without having to "backtrack"? The answer seems to be understanding the evolution of the possibility space contemporaneously with the paths we have actually taken.
This relegates the Markov model (and, I contend, even hidden Markov models) to a partial, historical artifact - incomplete. The reason is this: the effect of new information upon the prior changes the entire nature of the model - the understanding of its history, the understanding of its present states, and the trajectory of evolutionary "realization" it is on.
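The effect of new information on the prior can be sketched with Bayes' rule. The two candidate models and their likelihoods below are arbitrary numbers for illustration:

```python
# A sketch of the point about priors: new evidence does not just extend a
# state history, it revises beliefs about which model generated that
# history in the first place.

def bayes_update(prior, likelihoods):
    """Posterior over models, given each model's likelihood for one observation."""
    unnorm = {m: prior[m] * likelihoods[m] for m in prior}
    total = sum(unnorm.values())
    return {m: p / total for m, p in unnorm.items()}

# Two candidate models of the same observed path, initially equally credible.
prior = {"model_a": 0.5, "model_b": 0.5}

# A new observation that model_b explains far better than model_a...
posterior = bayes_update(prior, {"model_a": 0.1, "model_b": 0.9})

# ...shifts not just the next-step prediction but the interpretation of
# everything already observed: the history is re-read under model_b.
```

A Markov chain conditions only on the current state; the update above changes which chain we believe we are in, which is the sense in which new information rewrites the model's past as well as its future.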
What is evolving is not only the system under investigation, but the contexts in which it is evaluated. This demands that engineering, likewise, be an evolutionary flow through possibility spaces, with the realized products and processes becoming artifacts of that evolution.
The term "artifacts" includes the myriad errors and omissions embedded in the data, information, and knowledge gained.
Labels:
complex,
context models,
evolution,
system models
Sunday, November 1, 2009
Dr. Strogatz, Meet Dr. Lawvere
There are interesting ways of categorically mapping the structure, behavior, and evolution (morphisms) of dynamical systems, à la Steven Strogatz in "Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry and Engineering" (Perseus Publishing). This has been detailed by Category Theory (CT - Eilenberg, Mac Lane, Lawvere, among many others) through the use of functors - a programmatic analog acting on state evolutions.
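For readers who have not met functors, here is a toy Python illustration of the idea (the list functor): a map on states is lifted to a map on structured collections of states, preserving identity and composition. It is only a cartoon of the categorical machinery, not a model of any particular dynamical system:

```python
# fmap lifts a morphism f : A -> B to act on a structured state space
# (here, lists of states). The two checks at the bottom are the functor
# laws, verified on this small example.

def fmap(f, states):
    """Lift f to act on a whole collection of states."""
    return [f(s) for s in states]

def compose(g, f):
    return lambda x: g(f(x))

# Two toy "state evolution" morphisms.
step = lambda x: x * 2        # one evolution step
relabel = lambda x: x + 1     # a re-labelling of states

states = [1, 2, 3]

# Functor laws: identity is preserved, and mapping a composite equals
# composing the mapped pieces.
law_id = fmap(lambda x: x, states) == states
law_comp = fmap(compose(relabel, step), states) == fmap(relabel, fmap(step, states))
```

The appeal for dynamical systems is exactly these laws: if the mapping respects composition, then categorizing how states evolve is the same as categorizing how evolutions compose.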
Rather than the set-theoretic approach, with its limitations of first-order logic, CT provides a more accurate logical/analytical framework in which to understand complex, dynamical systems and to categorize (or classify) their patterns of change.
Curious. This particular technique seems grossly under-utilized in Complex Systems Theory, and in understanding complex, dynamical systems, their processes, growth and evolution. Furthermore, it seems that these known techniques are actually ignored.
If someone has a good explanation for this phenomenon, I'd sure like to hear it - or am I not looking for exemplars in the right places?
Tuesday, August 25, 2009
Parallel Paths
First, for all my life I have been a champion of many educational issues. Education is far too important to be left solely in the hands of educators. It is not a commodity. By this I mean that education is both a community AND an individual personal responsibility.
The process of education runs in parallel with the process of training. These are very different functions, although they are often confused. Often, training and education are in conflict.
Which brings me to the point of this posting: In software engineering, we are trained to view software either from a serial perspective, or a parallel perspective, or maybe some combination of the two. It is obvious that we can run several serial tasks in parallel, and that parallel paths often consist of individual serial task lanes. Nice, neat, very deterministic.
We are taught to unroll loops and partition data for parallel processing - and that is a good thing to learn. But I wonder: is the way we are trained to think about computer science and software engineering - in exclusive terms of serial and parallel fabrics - preventing us from understanding the value of other computational fabrics? In other words, has our training overly constrained the purposes of our education?
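For concreteness, those two trained habits look like this in a Python sketch - the four-wide unroll factor and the thread-pool worker count are arbitrary choices:

```python
from concurrent.futures import ThreadPoolExecutor

# Two standard parallel-fabric habits: unrolling a loop (several
# iterations per pass) and partitioning data into chunks that
# independent workers reduce, then combining the partial results.

def unrolled_sum(xs):
    """Sum with the loop body unrolled four-wide, plus a remainder loop."""
    total, i, n = 0, 0, len(xs)
    while i + 4 <= n:
        total += xs[i] + xs[i + 1] + xs[i + 2] + xs[i + 3]
        i += 4
    while i < n:
        total += xs[i]
        i += 1
    return total

def partitioned_sum(xs, workers=4):
    """Split xs into chunks, sum each chunk in a worker, combine."""
    step = max(1, len(xs) // workers)
    chunks = [xs[i:i + step] for i in range(0, len(xs), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, chunks))

data = list(range(1000))
```

Nice, neat, very deterministic - and entirely within the serial/parallel fabric. That neatness is precisely what makes other fabrics hard to see.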
Many of the systems I deal with exhibit a property called complexity. Often this is considered a "problem" to be avoided or simplified, rather than as offering an excellent solution strategy. From a computational point of view, complex networks offer extremely efficient encoding capabilities, and sometimes (but not always) extremely efficient computational capabilities.
In putting together a proposal for a VLS compute cluster, I see how the dissonance between training and education affects how many highly trained professionals conceptualize computation on clusters. Is this an educational opportunity? Or does training rule the day?
Stay tuned ...
Monday, August 10, 2009
Models of Complexity
OK, I apologize for the implication that models are a paradigm of complexity.
The issue I've been discussing for the past several days with several colleagues is: "What do you mean by a model?" This stems from the fact that the term "model" means something different in science (esp. physics), math and engineering.
The problems stem from the fact that each domain has excellent reasons for adopting its unique definition. Each definition is right in some respects, yet each is wrong in others. I have proposed a definition of a model extending from mathematical concepts. Everyone jumped on me for being "too theoretical." And here I thought I had some very practical, easy-to-use reasons for a math-centric foundation that would extend easily and map across the domains.
That's what I get for thinking ...