Complexity and a Pragmatic Theory of Knowledge
Researchers in the new but growing discipline of complexity theory have evolved opposing viewpoints on the role of hierarchical structure. Simon is the forerunner of those who picture complexity as the underlying structure of the universe. Others, of whom Robert Rosen is the best known, put forward strong arguments that complexity is something we impose on nature for our own heuristic needs. The import of this is that the two approaches are liable, even when applied to the same structure, to develop descriptions that are different and possibly even contradictory.
Those who identify hierarchical levels as the basic structure of the universe see them as clearly demarcated semiautonomous entities. Those who insist that they are simply modes of description we impose on nature see them as having arbitrary boundaries selected for our own purposes. For them, simplicity and complexity are not properties of natural things; they are simply descriptions of the way we interact with them. In this paper I will show that, by invoking John Dewey's "Immediate Empiricism," both of these outlooks can be combined to form the basis for a general theory of knowledge at least as powerful as any that has been proposed to date.
Complexity theory is an example of what Ervin Laszlo called "second order systems," or systems of theories about systems. They are attempts to classify qualities of natural events which have in common certain organizational properties. However, as in most theoretical pursuits, the more formal these theories become the more they lose touch with the systems they were designed to explain. The contradiction between these two approaches to complexity lies in their differing interpretations of complex structure. Those who follow Simon believe that complexity is the underlying structure of the real universe, that hierarchical levels are discernible entities like atoms and molecules. Those who, like Rosen, see complex modes as arbitrary simplifications we create for our own heuristic needs point out that hierarchical levels in social and economic systems often lack clear demarcations. Simon sees the universe as a set of Chinese boxes, each box containing a hundred smaller boxes, each of those a hundred smaller yet, and so on into infinity. Rosen sees the universe as a great variety of simple interactions out of which we abstract those our instruments are capable of differentiating.
The difficulty with formulating second order systems is that they tend to become removed from the world they are supposed to explain and end up explaining only themselves. The import of John Dewey's Immediate Empiricism is that it forces us to relate whatever we theorize back to the nature from which the problem was derived. It reminds us that the only reason for science or philosophy is to solve problems encountered in our interaction with nature.
Simply stated, John Dewey's "Immediate Empiricism" says that the real meaning of an experience is the experience itself. But this simplification masks the sophisticated import of Dewey's implementation. Dewey explained this with the aid of two analogies. The first was of an unexpected and unexplained fearsome sound which on later examination turned out to be the tapping of a shade on a window. The second concerned the illusion of Zöllner's lines, which, though they appear to converge, are really parallel. The point he made to show the import of immediate empiricism is that the experience of the unexpected and unexplained sound was fearsome. Learning later that it was only the shade tapping on the window does not change that experience. If later the same sound is experienced and is recognized, it is a different kind of experience. The same can be said of Zöllner's lines, for as they are seen without the realization of what they are, they do not just seem to converge; they are seen as lines that do converge. The later realization that it was an optical illusion does not change the original experience.
Immediate empiricism expands the scope of philosophical analysis to include superstition, wonder, and religious experience by recognizing that the meaning of these experiences is the experiences themselves. To understand that, we must examine them, and criticize our examination of them, with the assumption that they are the only real interaction we have with the world outside. To accomplish this we construct a model using the interaction of these serial experiences with the cognitive processes within our mind. The existence or non-existence of reals outside the mind can only be detected from the result of these cognitive experiences as they relate to this model. Thus immediate empiricism deals with every imaginable kind of experience, but that which relates to the development of knowledge, that is, the creation of accurate models of reality within the cognitive processes of our minds, is only one particular kind of experience, which we call the cognitive experience.
Frederick Woodbridge has pointed out that while the cognitive is only one kind of experience, which must be separated and clearly demarcated from the others, it can be applied to the others in order to tell what kind of experience they are. This is where theories of complexity, those of Simon and those of Rosen, unite with immediate empiricism to form the basis for a general theory of knowledge. Implicit in immediate empiricism is the simple realization that the only communication that exists between the world constructed in our minds and the existential world is immediate experience, the moment-by-moment interactions through our bodily senses between the outer real world and our inner, cognitively developed, model. The question, though, is whether these theories of complexity attempt to explain the existential world, or only the model, or, as I intend to show, whether they in fact explain both and in doing so develop seemingly contradictory concepts.
Returning to the two concepts of complexity, the differences we find are elemental. Rosen began by defining complexity as the way we interact with things. As he put it, we find out what a system is and what it does by using measuring instruments, each of which gives us a limited amount of information about the system. Since we cannot work with an infinite number, we are forced to use a subset of all of the instruments that would be required, and as a result we develop an incomplete description of the system in question. Different people using different subsets will gather different sets of information. Rosen called these "relative descriptions." Thus, there are a large number of such subsets, each of which gives rise to a different mode of description for the same phenomenon. Therefore, there are many ways we can interact with the system, depending on which of the subsets we choose to make use of.
Rosen explained that we cannot just combine all of these subsets to obtain even a unified, much less a complete description. In his words, "In general merely juxtapositioning alternate modes of description does not in general lead to a more comprehensive description which embraces all of the components." This gives rise to two different mechanisms, both of which are directly related to his approach to complexity. The first is variability, the multiplicity of modes of description produced by different subsets of measuring instruments. The second is error, or the difference between the system as cognized under any specific mode, or combination of modes, and the system as it exists in reality.
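Rosen's relative descriptions, and the failure of mere juxtaposition to recover a comprehensive description, can be sketched in a few lines of code. Everything here is invented for illustration: the "system" is a hypothetical full state, each "instrument" reads out a single property, and the property names and values are arbitrary.

```python
# A hypothetical system with more properties than any one observer measures.
full_state = {"mass": 1.2, "charge": -1.0, "spin": 0.5, "color": "red"}

def describe(state, instrument_subset):
    """One "relative description": the partial picture produced by a
    particular subset of measuring instruments."""
    return {prop: state[prop] for prop in instrument_subset}

# Two observers, two subsets, two different modes of description.
mode_a = describe(full_state, ["mass", "charge"])
mode_b = describe(full_state, ["charge", "spin"])

# Juxtaposing the two modes is still incomplete: no chosen instrument
# ever measured "color", so the error the subsets carry remains.
juxtaposed = {**mode_a, **mode_b}
```

The "error" mechanism is then simply whatever the union of the chosen subsets leaves out, and the "variability" mechanism is the multiplicity of distinct modes the different subsets produce.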
The positive aspect of these two mechanisms is particularly important for Rosen. Since we seldom need to consider every conceivable property of a system, by selecting the modes of description for which the errors are inconsequential for the problem at hand, we can essentially simplify the system by creating a model with fewer degrees of freedom that faithfully models the characteristics of the system that are important to us. For example, we normally interact with a modern professional camera in very different modes depending on whether we are photographers, optical engineers, or camera designers, abstracting out any characteristics of the camera that have little impact on photographs, lens design, or aesthetic appearance respectively.
Rosen's description of complexity is implicit in his definition of the word. On the other hand, a camera not only can be dealt with in different modes; it actually consists of several subassemblies which are made up of individual parts. If we listed the modes in which different people interact with the camera, the list would include those that correspond with the levels the camera is made up of as well as levels with heuristic or arbitrary boundaries. This means that some of Rosen's modes mirror those of Simon. The difference between the two approaches lies in their two very different conceptions of the basic structure of reality. The "Architecture of Complexity," according to Simon, is hierarchical, each level being made up of the levels below and helping to make up the level above. Levels, or modes, according to Rosen, are artifacts of our measuring instruments. Rosen's modes are never the same as Simon's levels, though they may mirror them. However, in the very characteristics which separate the two, we can detect a hidden likeness. That is, examined critically, Simon's levels often turn out to be arbitrary, and many of Rosen's modes are in fact hierarchical.
Simon's Chinese boxes, seen from the top down, form a tree. However, such a description is necessarily historical. It seems that while it is possible to determine how one level developed from the level below, it is seldom possible to determine why it emerged with the particular properties it has. This is because, beginning at the bottom and working up, it is impossible to predict with any degree of certainty from the interactions at one level which of them will assume an active role in the emergence of the new level above. Language is an obvious example. Given everything that can be known about any particular language, i.e. its structure, its syntax, and even its idiosyncratic elements, you could still never predict the emergence of a particular poem. You might predict the emergence of poetry if you included some information about the culture in which the language developed. But the reason you could not predict the emergence of a particular poem is that it represents one of a great many equally probable and therefore arbitrarily possible emergent forms. A poem becomes a permanent expression of a language only after it has, through immediate aesthetic experience, formed an associative relationship with the cumulative values of the society in which it has emerged. A literary work is not just words arranged according to a set of rules of grammar and syntax. It is an expression of an individual world-view that has merged with the culture itself as much through discovery as through planned action. This discovery could happen only in a system with an excess of variety. Simon himself recognized this when he said that most hierarchical levels are examples of frozen accidents, that more often than not they are examples of arbitrary structures chosen by opportunity rather than by inevitability.
Returning to Rosen's view, we must realize that if we can abstract out modes of description that are appropriate to one particular problem, then these modes must reflect some semi-autonomous characteristic of the system itself. For example, an ethnic group in an urban setting is an integral part of the urban scene and cannot be removed from it without causing major changes in both the group and the setting. However, we can develop a mode of description of the group which is abstracted from its environment and therefore has a certain amount of semi-autonomy. If we do this for each separate group within the urban context, these form the elements of a hierarchical order which emerges into a reasonably authentic model of the urban scene itself. Our two different approaches to complexity, then, are hopelessly intertwined. It is only because of the hierarchical structure of reality that Rosen's approach can isolate simple models of complex processes, and while the modes of description Rosen uses do not necessarily represent real structural hierarchies, they provide a wealth of insight when they do.
There is, it appears, considerable agreement, even among those who contemplate theories of reality that do not take the properties unique to complexity into consideration, that the existential world is hierarchically organized. Few question that things are made of molecules which are made of atoms, for example. Where they differ is in their conceptions of the relationships that exist between levels and in the way levels are demarcated. While most would agree that in a complex system each successively higher level emerges later in time through organization of the levels below, to a reductionist the emergence of a new higher level in a system is the direct result of the forces acting on the elements of the lower level and the interactions and coactions of the elements themselves. In complexity theory, the emergent level is derived from the interplay between the environment and the system's internal dynamics, increasing those interactions which favor organization and decreasing those that hinder it. The difference is that while the former is direct and exhaustive, leading toward equilibrium, the latter includes interactions with arbitrary elements, interactions that are modified over time so that they lead instead toward homeostasis or complex adaptive systems which change their structure over time. As a result, typical reductionist techniques fail because the system's lower level interactions with external forces may simply be irrelevant to the emerging system and are as likely to result in polymorphisms as in predictable emergent systems predetermined by conditions of eventual equilibrium.
As this is true of the existential world, so it must be of our internal models, or else our simplifications would be consistently off the mark. In order to illustrate how this occurs we need to look at some of the special properties of the memory system of the human brain. While the internal operation of the human brain is still very much a mystery, the work of psychologists like Herbert Simon has given us some insight into its general structure. A proper appreciation of that structure requires an understanding of the two different kinds of networks involved in human memory. The first is similar to the simplest of the modern microprocessors in use today. It is called a serial stack operation. This is similar to the kind of system that is used by our short term memory. Items to be memorized are taken in chunks (identifiable sets of data) which are placed in succession into memory. Without getting more involved in the operation of this little understood process, and indeed Simon suggests that it may be variable among human beings, I want to suggest that this amounts to a transition from a serial last-in first-out stack mechanism to a parallel three-dimensional neural network. The method of discriminating between information is quite different in the two. In the serial stack, the information is arranged in chunks which the mind has been trained to recognize and which are stored in the order in which they occur to the senses. In the neural network, the information is stored according to its relationship to information already stored in the brain. This accounts for the success of memory improvement programs, which simply improve the ability of the individual to recognize associations between the data being memorized.
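The contrast between the two storage schemes can be sketched with two toy data structures: a last-in first-out stack that preserves arrival order, and an associative store keyed by related content. The "chunks" and keys below are invented examples; this is an illustration of the distinction being drawn, not a model of memory.

```python
# Serial stack: chunks stored in the order they reach the senses,
# retrieved last-in first-out.
chunks = []
chunks.append("area code")      # first chunk in
chunks.append("phone number")   # second chunk in
last = chunks.pop()             # last in, first out

# Associative storage: new data is filed by its relationship to
# data already stored, not by arrival order.
associations = {}
associations.setdefault("telephone", []).append("area code")
associations.setdefault("telephone", []).append("phone number")
recalled = associations["telephone"]   # retrieved by association
```

Memory-improvement techniques, on this picture, work by supplying richer keys: the more associations attached to a chunk, the more routes there are to retrieve it.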
We can illustrate some of the properties of a neural network using the simplest possible neural structure, a cubical matrix of binary elements arranged three by three by three. This would represent a single "neighborhood," the neighborhood of the central element M(2,2,2). Each element of the matrix is a threshold logic element. By this I mean a binary device with a number of inputs, the state of which is determined in such a way that if the sum of the inputs exceeds a value called the threshold value, the state of the device will be one; otherwise the state will be either that of the last change or zero. In our simple model of a neural net each node (element) receives an input from each of the other nodes in its neighborhood that is a reproduction of the state of that node. In other words, in our simplest case each node has twenty-six inputs, and assuming for the sake of simplicity that the threshold value is thirteen, then if more than thirteen of the nodes in its neighborhood are zeros it will be, or change to, a zero. Neighborhoods are continuous. This means a change in one neighborhood affects each of its adjacent neighborhoods. Information is stored in a neural net much the same as it is stored in the more familiar serial nets we use in our modern digital computers: through patterns of binary states. However, change in a neural net can be much more dramatic. In fact there are only two true states of equilibrium in a neural network, all ones or all zeros, and both represent a freezing up of the network, or what we might call brain death. The existence of patterns of states results in nodes whose inputs are close to their threshold value, and thus the change of a single node can start a reaction that could result in the transformation of many neighborhoods.
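A minimal sketch of the single neighborhood just described: a 3×3×3 cube of binary threshold elements, with the update rule applied to the central node. The function names are my own; the twenty-six inputs, the threshold of thirteen, and the two frozen equilibria (all ones, all zeros) come directly from the description above.

```python
def make_grid(fill=0):
    """A 3x3x3 cube of binary nodes: one "neighborhood"."""
    return [[[fill] * 3 for _ in range(3)] for _ in range(3)]

def update_center(grid, threshold=13):
    """Apply the threshold rule to the central element M(2,2,2).

    The centre has 26 inputs, one per neighbouring node. If more than
    `threshold` inputs are ones it becomes one; if more than `threshold`
    are zeros it becomes zero; otherwise it keeps its last state
    (it is sitting at the threshold, primed to flip).
    """
    center = grid[1][1][1]
    ones = sum(grid[x][y][z]
               for x in range(3) for y in range(3) for z in range(3)) - center
    zeros = 26 - ones
    if ones > threshold:
        return 1
    if zeros > threshold:
        return 0
    return center

# The only true equilibria: all ones or all zeros ("brain death").
assert update_center(make_grid(1)) == 1
assert update_center(make_grid(0)) == 0
```

A neighborhood with roughly thirteen ones among its neighbours sits exactly at the threshold, which is the condition under which the change of a single node can cascade into adjacent neighborhoods.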
As we are describing the process in our brains not just as a neural net, even one considerably more complex than our simple example, but as the interface between a serial net and a neural net, we need another mechanism for our model to approach the kind of dynamics we are discussing. We can simulate this by considering each node in our neural net to also be an element in a serial network. Thus, the state of each node might also be determined by the output of a temporal sequence of binary activities that represent the output of a serial transformation. The susceptibility to such a transformation might be a factor of the ratio of the inputs to a particular node and its threshold value, making the resulting pattern a controlling or constraining factor of the serial events.
I don't suggest that this is a realistic description of the human brain, but as a description of the interface between a simple serial network and an elementary neural network it does illustrate some of the mechanisms that create the activities peculiar to complex change. Returning to our description of the nature of the cognitive experience, we first assign the role of the serial component to immediate empiricism, or the accumulation of data about our world through the moment-by-moment interactions we have with our environment through our physical senses. Now, we have already determined that information is encoded in our brains according to association. At the same time, since this is in a neural net, the information is distributed; that is, it is fitted into the matrix by modifying the information already there as well as by creating new nodes. We never just memorize new data; we add it to our model of reality by modifying that model to include the information. The new model differs from the old by more than just the inclusion of more data. It includes the changes in the model created by its adjustment to the new information. Also, since this implies a system in constant flux, every interaction with the outside world is with a different model.
If we were nothing but receptacles for new knowledge, the resulting changes would be complex enough, but because we are constantly faced with problems we lack sufficient data to solve, we create in our minds models of reality that are to some extent guesses about what that reality is. Science is one system we have developed to improve those guesses, but it is effective only when it is directed toward those interactions that the scientific paradigm was developed to deal with. Cognization in general is applied to all experiences and not just to those things that are considered scientific. Every experience is a two-way interaction with the world outside. Our internal model interacts with the existential universe through immediate experience such that when elements of that experience can be associated with structures developed in our internal model, that model is expanded to accept the new data, and this is true regardless of whether this data is empirical fact or the results of a vivid imagination.
In order to understand how this translates into a comprehensive theory of knowledge, we must return to our two different approaches to complexity. When we study complex natural phenomena we normally choose a subset of instruments with which we make measurements over a period of time. This provides, as Rosen suggests, a series of relative descriptions of the phenomena. If we need a fuller description we repeat the series using a different subset of instruments, perhaps through several such repetitions. It is from the parallel interaction of these serial data that our improved cognition of the phenomena emerges, so that in our minds we are experiencing the transition of a set of linear transitions into a parallel neural network. In order to tell whether the resulting mental construct does or does not model real-life phenomena, we must test it against reality. We accomplish this by inventing tests we can apply mentally to our model and directly to the real phenomena through immediate experience, to determine if the model faithfully reproduces the dynamics of the real phenomena.
This is where the relationship between cognization and knowledge becomes important. It matters not one whit to the neural processes of the brain whether a structure within the internal model was developed with a close resemblance to the real world or purely out of imagination. The result is the same. Ideological concepts developed without any regard to the existence of contrary forces in reality have the same force in the act of cognization as well-developed and tested scientific or artistic models; therefore data collected during immediate experience that associates with these structures does strengthen them by providing empirical corroboration. There is only one way to separate false mental pictures of the outside world from true ones, and that is to provide clear tests, to be accomplished through immediate experience, that will demarcate what is clearly imagined from what is clearly represented in reality.
Science has traditionally come as close to accomplishing this as is possible by limiting its subject matter to those things that can be clearly demarcated, but the greater detail a scientist includes in his observations, the more difficult it becomes to clearly mark off the real from the imagined until all that is left are mathematical structures which describe the dynamics of reality at the cost of structure. When we leave the domain of science, we enter a world where the structure of reality may not consist of physical entities, but rather of relations between entities, not of things but of organizations. To make matters worse, these organizations may not be the efficient result of specific causes. They may include arbitrary organizations which create their own necessity and become their own causes.
As an example of this kind of organization we can examine Pitirim Sorokin's concept of the "Logico-Meaningful Union," or the cultural mentality of an integrated culture. Sorokin claims that if a culture is integrated, as opposed to being simply a congeries of individuals, all of its aesthetic output will reflect a single central idea, or cultural mentality.
Sorokin stated that the heterogeneity of individual experiences, together with other factors, leads to a multiplicity of modes of perception of the same phenomena by different persons. For some, he points out, reality is only that which can be perceived by the organs of sense. At the other extreme are those persons for whom true reality lies hidden from the sense organs. The former try to adapt themselves to those conditions that appear to the sense organs; the latter try to adapt to a true reality that is beyond experience, whether it is called God, nirvana, or ultimate reality. Sorokin's cultural mentality is derived from those mechanisms that develop from the interactions that are successful in increasing aesthetic experience. It was through an analysis of the aesthetic output of Western culture that Sorokin developed his logico-meaningful union. It was his contention that integration occurs only when this cultural mentality is reflected in a pure form of one of three varieties of mentality: the ideational, or pure spiritual; the sensate, or pure carnal; and the idealistic, or a perfectly balanced combination of both. The purpose of his great masterpiece, "Social and Cultural Dynamics," was to show that these three forms could be clearly demarcated through a study of the art, philosophy, and architecture of Western Civilization as it evolved from the Greek to the modern period. We do not necessarily need to accept Sorokin's idealized models to see the importance of his concept: that from shared attitudes toward the very basic questions of reality, a level of constraints emerges that limits the variety of cultural meanings to those in concert with these constraints. As Sorokin pointed out, the aesthetic output from such a culture reflects this union and in doing so reinforces it. The sets of shared attitudes are essentially serial networks evolving over time in each individual autonomously within the limits of these constraints.
The logico-meaningful union is a parallel associative neural network that forms a common model of life that is shared by members of the culture and through which they gain heightened aesthetic pleasure.
Applying the principles I have laid out in this paper, the serial moment-by-moment experience of individuals in a culture forms a two-dimensional network which changes in time. The interrelationships among the members form a three dimensional parallel network of which a neural net is but a crude analogy. If, from these interactions of this three-dimensional network, its environment, and the general characteristics of the serial events, there emerges an arbitrary structure which constrains the serial events in such a way that it increases the probability of further structural development, then this will result in the formation of a logico-meaningful union and an integrated culture. The cause of the culture will be an emergent property of that logico-meaningful union. It is teleological in the sense that the end creates its own cause, but it does not constitute a true telic structure because it does not determine its own constitution, only the direction of its change. It is a feed-forward system, not a feedback system. Such a system is goal directed, but toward a goal that exists in an uncertain future. To accomplish this the model of reality we carry in our minds includes a future as well as a past. That future is a projection of our imagination from the raw material of our past.
If, as I have suggested, the aesthetic experience reinforces those elements of our model which pertain to our cultural heritage, then the cognitive experience must reinforce those elements which successfully model the reals that exist outside the brain. However, if a structure, such as a social or cultural system, exists only as relationships between things, and the organization of those relationships is to a great extent arbitrary, then how can we determine whether our models are of reality or of an imagined structure we impose on reality? The answer can be found in the relationship between Simon, Rosen, and Dewey. We develop arbitrary models of these complex relationships and impose them on the outside world; we set up a Rosen-type hierarchical structure; then we test it against nature through immediate empiricism, the critical appraisal of immediate experience. To the extent that these tests show that the dynamics of our model mirror the dynamics of real world events, to that extent the models we have developed are in fact models of the real world, and they indicate just how well the real universe is hierarchically structured as Simon tells us it is. However we develop, imagine, or invent these arbitrary structures, it is only by testing them against our only reliable communication channel with reality, immediate experience, that we can improve them and in doing so improve our internal model of the world outside, which is the ultimate goal of knowledge.