
November 05, 2018

Parallelism

Just as objects exist despite quantum physics, free will exists despite determinism. These terms refer to different planes of explanation.

October 29, 2018

Spectra

I want to murderize the term "objective truth". Because mostly it just means "This is one of my core beliefs". By scrapping that term, and instead saying "All beliefs are subjective, but some have more evidence in their foundations than others, and this is a difference of degree, not of kind", we can begin to stop fortifying our misconceptions with language. Those things are well-entrenched enough as it is. [...] A fact is whatever is out there in the world, the Ding an sich. Facts are objective, but you don't have any. Nobody has access to facts. We have beliefs about facts. And those beliefs are formed exclusively from the evidence available to us, as subjects. That evidence can be wrong or misleading. You have no facts. All we have is what you so weaselly call "opinions" (a better word is beliefs). And some beliefs are better founded than others. Our best-founded beliefs, however, are not discernible to us from our core beliefs. They all feel "objective" to us; the individual has no way to know which "objective fact" (actually a subjective belief) is strongly held because of evidence, and which is strongly held because of cultural values one happens to hold. So we need to drop that bullshit and instead start lifting our burdens of evidence, even for the stuff our amygdala says is "objective".

Is 1+1=2 a fact? Of course not; it never was. Maths is an artificial system. 1+1=2 because we define the system so as to give that result. Of course, arithmetic was designed to mimic certain features of reality, but this was done as an abstraction of an abstraction of an abstraction. Each abstraction makes the result less real -- but easier to work with. The way we built arithmetic is perfectly analogous to how each child learns arithmetic. First you look at real objects and produce a theory of kinds; then you learn to count real objects by grouping them into various kinds, abstracting away the individuality of the objects. Then you progress to imagined objects. Then you progress to removing all the remaining remnants of the reality you based it on, working instead with just numbers. -- Andreas Geisler
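The construction Geisler describes -- numbers as layered abstractions whose behaviour follows from the definitions we chose -- can be sketched in a few lines of Python, with zero and a successor function standing in for Peano-style definitions. All the names here are illustrative, not from the quote:

```python
# A minimal sketch: numbers built from zero and a successor function,
# with addition defined by recursion. "1 + 1 = 2" then holds because
# the definitions make it hold, not because it is a brute fact.

ZERO = ()

def succ(n):
    """The successor of n, represented as one extra layer of nesting."""
    return (n,)

def add(a, b):
    """Addition defined recursively: a + 0 = a, a + succ(b) = succ(a + b)."""
    if b == ZERO:
        return a
    return succ(add(a, b[0]))

ONE = succ(ZERO)
TWO = succ(ONE)

# The equation is a consequence of the system's own rules.
assert add(ONE, ONE) == TWO
```

Changing the definition of `add` would change which "facts" the system yields, which is the point of calling arithmetic an artificial system.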

June 23, 2016

And even more maps

It's like trying to find the treasure in the pirate's map.



June 20, 2016

March 06, 2016

Detachment


I often remind people of the word "orrery". An orrery is a model that works perfectly well; its use results in highly accurate predictions… within its bounds of application. The difference between an orrery and a theory is that an orrery is not causally supported by foundational understandings of the universe. An orrery is internally consistent but externally disassociated. The classic orrery, of course, is the painted balls on bent wires, articulating around a set of gears and cams on spindles, that classical scholars produced to explain the motion of the sun, the moon, and other celestial bodies. Obviously, such models predict the behavior of the system, but they don't come near explaining the causal hierarchy that leads to that behavior: planets and stars aren't hooked to huge rods. It is theoretically possible to concoct an infinite number of orreries -- causally false models -- to explain and model any one system or domain. -- Randall Lee Reetz

December 22, 2015

Hubris

We decide on the amount and on the quality of the information we possess. The easiest case is when the problem outcome is already known (the coin flip was heads, now what?); here we do not need to predict anything, since the event already occurred. The next case is when we do not know the outcome but know its probabilities (this is a biased coin, with a 55% chance of heads). We know how to solve this type of problem, and call it risk management. The hardest case is when not even the probabilities are known (oops, this coin might even have two equal faces...). The literature usually calls this uncertainty, which is a different, uglier beast than risk. And humans do not like uncertainty. We try to tame it using what we know about risk, by simplifying our models -- and possibly forgetting that we did so -- so that we can analyse the problem and compute its solution with the tools (like statistics) and assumptions (like normality) we understand. Sometimes our assumptions are not that far off, and our decisions are good approximations/predictions of what happens. Sometimes we miss the mark and the chosen action results in disaster. Black swans, an expression coined by Nassim Taleb, are examples of this: we assume that extraordinary events are impossible, and one of them occurs nonetheless. This is the price of recklessness or laziness or both; the hubris of assuming too much.
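The middle case above -- risk, with known probabilities -- can be illustrated with a short simulation. The 55% bias comes from the text's example; the trial count and everything else are an illustrative sketch:

```python
# Risk vs. uncertainty, sketched: a biased coin with a *known* 55% chance
# of heads. Because the probability is known, a simulation converges on
# it; under true uncertainty there would be no known value to converge to.

import random

P_HEADS = 0.55          # known bias: this is "risk", not uncertainty
TRIALS = 100_000

random.seed(42)
heads = sum(random.random() < P_HEADS for _ in range(TRIALS))
estimate = heads / TRIALS

# With known probabilities, the empirical frequency tracks the known value.
assert abs(estimate - P_HEADS) < 0.01
```

The trap the passage warns about is treating the third case as if it were this one: plugging an assumed `P_HEADS` into the machinery of risk when no such number is actually known.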

December 18, 2015

Quantum Randomness

@Eamon: "I wonder what a *genuine* stochastic process *would* look like if not the ones I mentioned. In other words, what possible observational state of affairs would, for you, serve as evidence for genuine indeterminism?"

Good question. I think I would be willing to accept indeterminism despite my gut feeling that it is a mistake, if accepting indeterminism answered more questions than it raised. The prima facie reason for accepting indeterminism is the examples you cited (radioactive decay, etc.), but that interpretation has the following problem.

- Pretty much everybody accepts the reality of superposition, which is in effect Many-Worlds.
- Then, in order to explain the fact that our "thread of consciousness" sees only one discrete outcome, the Copenhagen interpretation posits a wavefunction collapse -- essentially, all superpositions instantaneously "die," except the one containing my "thread of consciousness."
- But this collapse is the ONLY known phenomenon in physics that is inherently random/acausal, along with a bunch of other weird properties like non-locality, non-CPT-symmetry, faster-than-light influence...
- And then we get to the crux of the matter: what is this queer collapse theory trying to explain in the first place?
- Answer: why our "thread of consciousness" sees only one outcome, which outcome is apparently not predictable in principle.
- So the question becomes: what happens if we drop our intuitive idea of a single unique "thread of consciousness?"
- Answer: there is nothing left to explain. All branches of the wavefunction remember their own history, and to all of them it prima facie appears that their "thread" made it while the others didn't.

Summary: quantum randomness/collapse "explains" why our "thread of consciousness" sees only one outcome, but this is explained equally well by no collapse and no randomness. Quantum randomness does ZERO explanatory work (although at first intuitive glance, it seems to). -- Ian Pollock [link]

November 25, 2015

Rabbit Hole

"At bottom there are no things, and hence not even protons, quarks or strings, there are only structures. These structures generate patterns, and science is in the business of describing such patterns. At one level, the pattern can best be captured by talk of protons and electrons; at another level (i.e., for material science, and of course for our everyday experience) they are captured by objects like tables. Tables, then, are not illusions at all, at least no more than protons and electrons are illusions; rather, they are the most appropriate way to describe a certain stable pattern." -- Massimo Pigliucci

November 17, 2015

Allocation Practice

Democracy is an efficient method to allocate assent to available power holders. Capitalism is an efficient method to allocate resources to available uses. And Science is an efficient method to allocate truth to available hypotheses. They all have plenty of defects and need vigilance. But the alternatives -- autocracy, communism, or mythical explanations -- have proven themselves to be much worse.

June 18, 2015

Its from Bits

A distinction is often made between theories based upon explicit mechanisms of causation versus theories based upon statistical or other seemingly non-mechanistic assumptions. Evolution is a mechanistic theory in which the mechanism is selection and heritability of traits acting in concert. In a nutshell, stressful forces upon organisms that differ genetically select those individuals possessing genes that confer on the individual and its offspring the greatest capacity to reproduce under those stresses.

Consider next a statistical explanation of the observation that the heights of a large group of children in an age cohort are well described by a Gaussian (aka normal) distribution. Invocation of the central limit theorem (CLT) provides a statistical explanation; but the question remains as to why that theorem applies to this particular situation. The applicability of the theorem hinges on the assumption that each child’s height is an outcome of a sum of random influences on growth. So are we not also dealing here with a mechanistic explanation, with the mechanism being the collection of additive influences on growth that allow the applicability of the CLT? If the influences on growth were multiplicative rather than additive, we might witness a lognormal distribution. Is it possible that all scientific explanation is ultimately mechanistic? Let us look more carefully at the concept of mechanism in scientific explanation, for it is not straightforward.
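Harte's additive-versus-multiplicative point can be checked with a quick simulation. The number of "influences" and the growth factors below are invented for illustration, not drawn from any data in the book:

```python
# Sketch: summing many small random influences gives a roughly normal
# (symmetric) outcome, while multiplying them gives a right-skewed,
# lognormal-like outcome -- the CLT applies to the sum, not the product.

import random
import statistics

random.seed(0)
N_CHILDREN = 10_000
N_INFLUENCES = 50

# Additive influences: each outcome is a sum of small random effects.
additive = [sum(random.uniform(0, 2) for _ in range(N_INFLUENCES))
            for _ in range(N_CHILDREN)]

# Multiplicative influences: each effect scales the outcome by a factor.
multiplicative = []
for _ in range(N_CHILDREN):
    x = 1.0
    for _ in range(N_INFLUENCES):
        x *= random.uniform(0.9, 1.15)
    multiplicative.append(x)

# A normal distribution is symmetric (mean ~ median); a lognormal one is
# right-skewed, so its mean sits noticeably above its median.
def mean_over_median(xs):
    return statistics.mean(xs) / statistics.median(xs)

assert abs(mean_over_median(additive) - 1.0) < 0.02
assert mean_over_median(multiplicative) > 1.02
```

This is exactly the "mechanism behind the statistics" question in the text: whether the influences compose additively or multiplicatively decides which statistical theory applies.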

In everyday usage, we say that phenomenon A is explained by a mechanism when we have identified some other phenomenon, B, that causes, and therefore explains, A. The causal influence of B upon A is a mechanism. However, what is accepted by one investigator as an explanatory mechanism might not be accepted as such by another. [...] Does the search for mechanism inevitably propel us into an infinite regress of explanations? Or can mechanism be a solid foundation for the ultimate goal of scientific theory-building? Consider two of the best established theories in science: quantum mechanics and statistical mechanics. Surprisingly, and despite their names, these theories are not actually based on mechanisms in the usual sense of that term. Physicists have attempted over past decades to find a mechanism that explains the quantum nature of things. This attempt has taken bizarre forms, such as assuming there is a background "aether" comprised of tiny things that bump into the electrons and other particles of matter, jostling them and creating indeterminacy. While an aether can be rigged in such a way as to simulate in matter the behavior predicted by Heisenberg's uncertainty principle, and some other features of the quantum world, all of these efforts have ultimately failed to produce a consistent mechanistic foundation for quantum mechanics. Similarly, thermodynamics and statistical mechanics are mechanism-less. Statistical arguments readily explain why the second law of thermodynamics works so well. In fact, it has been shown that information theory in the form of Maximum Entropy provides a fundamental theoretical foundation for thermodynamics.

If we pull the rug of mechanism out from under the feet of theory, what are we left with? The physicist John Archibald Wheeler posited the radical answer “its from bits,” by which he meant that information (bits)—and not conventional mechanisms in the form of interacting things moving around in space and time—is the foundation of the physical world (its). There is a strong form of “its from bits,” which in effect states that only bits exist, not its. More reasonable is a weaker form, which asserts that our knowledge of “its” derives from a theory of “bits.”

[...] Mechanistic explanations either lead to an infinite regress of mechanism within mechanism, or to mechanism-less theory, or perhaps to Wheeler’s world with its information-theoretic foundation. What is evident is that as we plunge deeply into the physical sciences, we see mechanism disappear. Yet equally problematic issues arise with statistical theories; we cannot avoid asking about the nature of the processes governing the system that allow a particular statistical theory to be applicable. In fact, when a statistical theory does reliably predict observed patterns, it is natural to seek an underlying set of mechanisms that made the theory work. And when the predictions fail, it is equally natural to examine the pattern of failure and ask whether some mechanism can be invoked to explain the failure. -- John Harte, Maximum Entropy and Ecology, pp.8--11

December 21, 2014

Georgescu-Roegen - The Entropy Law & the Economic Process (1973)

[M]y point is not that arithmetization of science is undesirable. Whenever arithmetization can be worked out, its merits are above all words of praise. My point is that wholesale arithmetization is impossible, that there is valid knowledge even without arithmetization, and that mock arithmetization is dangerous if peddled as genuine. Let us also note that arithmetization alone does not warrant that a theoretical edifice is apt and suitable. As evidenced by chemistry -- a science in which most attributes are quantifiable, hence, arithmomorphic -- novelty by combination constitutes an even greater blow to the creed "no science without theory." (p.15)

The verdict is indisputable: no social science can subserve the art of government as efficaciously as physics does the art of space travel, for example. Nevertheless, some social scientists simply refuse to reconcile themselves to this verdict and, apparently in despair, have come out with a curious proposal: to devise means which will compel people to behave the way "we" want, so that "our" predictions will always come true. The project, in which we recognize the continual striving for a "rational" society beginning with Plato's, cannot succeed (not even under physical coercion, for a long time) simply because of its blatant petitio principii: the first prerequisite of any plan is that the behavior of the material involved should be completely predictable, at least for some appreciable period. But aggressive scholarship will never run out of new plans for the "betterment of mankind." Since the difficulties of making an old society behave as we want it can no longer be concealed, why not produce a new society according to our own "rational" plans? (p.16)

It is fashionable nowadays to indulge in estimating how large a population our earth can support. Some estimates are as low as five billion, others as high as forty-five billion. However, given the entropic nature of the economic process by which the human species maintains itself, this is not the proper way to look at the problem of population. Perhaps the earth can support even forty-five billion people, but certainly not ad infinitum. We should therefore ask "how long can the earth maintain a population of forty-five billion people?" And if the answer is, say, one thousand years, we still have to ask "what will happen thereafter?" All this shows that even the concept of optimum population conceived as an ecologically determined coordinate has only an artificial value. [...] Man's natural dowry, as we all know, consists of two essentially distinct elements: (1) the stock of low entropy on or within the globe, and (2) the flow of solar energy, which slowly but steadily diminishes in intensity with the entropic degradation of the sun. But the crucial point for the population problem as well as for any reasonable speculations about the future exosomatic evolution of mankind is the relative importance of these two elements. For, as surprising as it may seem, the entire stock of natural resources is not worth more than a few days of sunlight! [...] In a different way than in the past, man will have to return to the idea that his existence is a free gift of the sun. (p.20ff)

Anatomically, theoretical science is logically ordered knowledge. A mere catalog of facts, as we say nowadays, is no more science than the materials in a lumber yard are a house. Physiologically, it is a continuous secretion of experimental suggestions which are tested and organically integrated into the science's anatomy. In other words, theoretical science continuously creates new facts from old facts, but its growth is organic, not accretionary. Its anabolism is an extremely complex process which at times may even alter the anatomic structure. We call this process "explanation" even when we cry out "science does not explain anything." Teleologically, theoretical science is an organism in search of new knowledge. (p.37)

There can be no doubt that the decumulation of a machine is not a mechanical spreading in time of the machine, as is the case with the stock of provisions of an explorer, for instance. When we "decumulate" a machine we do not separate it into pieces and use the pieces one after another as inputs until all parts are consumed. Instead, the machine is used over and over again in a temporal sequence of tasks until it becomes waste and has to be thrown away. A machine is a material stock, to be sure, but not in the sense the word has in "a stock of coal." If we insist on retaining the word, we may say that a machine is a stock of services (uses). But a more discriminating (and hence safer) way of describing a machine is to say that it is a fund of services.

The difference between the concept of stock and that of fund should be carefully marked, lest the hard facts of economic life be distorted at everyone's expense. If the count shows that a box contains twenty candies, we can make twenty youngsters happy now or tomorrow, or some today and others tomorrow, and so on. But if an engineer tells us that one hotel room will probably last one thousand days more, we cannot make one thousand roomless tourists happy now. We can only make one happy today, a second tomorrow, and so on, until the room collapses. [...] The use of a fund (i.e., its "decumulation") requires a duration. Moreover, this duration is determined within very narrow limits by the physical structure of the fund. We can vary it only a little, if at all. If one wishes to "decumulate" a pair of shoes, there is only one way open to him: to walk until they become waste. (Of course, one may sell the shoes. But this would mean decumulation of the shoes as a stock, not decumulation of the shoes as a fund of services.) In contrast with this, the decumulation of a stock may conceivably take place in one single instant, if we so wish. And to put the dots on all significant i's, let us also observe that the "accumulation" of a fund, too, differs from the accumulation of a stock. A machine does not come into existence by the accumulation of the services it provides as a fund: it is not obtained by storing these services one after another as one stores winter provisions in the cellar. Services cannot be accumulated as the dollars in a savings account or the stamps in a collection can. They can only be used or wasted.

Nothing more need be said to prove that also the use of the term "flow" in connection with the services of a fund is improper if "flow" is defined as a stock spread over time. In fact, the generally used expression "the flow of services" tends to blur -- at times, it has blurred -- the important differences between two mechanisms, that by which the prices of services and that by which the prices of material objects are determined. The inevitable trap of this ambiguous use of "flow" is that, because a flow can be stored up, we find it perfectly normal to reason that services are "embodied" in the product. Only the materials that flow into a production process can be embodied in the product. The services of the tailor's needle, for example, cannot possibly be embodied in the coat -- and if one finds the needle itself embodied there it is certainly a regrettable accident. The fact that in certain circumstances the value of services passes into the value of the product is to be explained otherwise than by simply regarding a machine as a stock of services that are shifted one after another into the product. 

The difference between flow and service is so fundamental that it separates even the dimensionalities of the two concepts. For this reason alone, physicists would not have tolerated the confusion for long. The amount of a flow is expressed in units appropriate to substances (in the broad sense) -- say pounds, quarts, feet, etc. The rate of flow, on the other hand, has a mixed dimensionality, (substance)/(time). The situation is entirely reversed in the case of services. The amount of services has a mixed dimensionality in which time enters as a factor, (substance) x (time). If a plant uses one hundred workers during a working day (eight hours), the total of the services employed is eight hundred man x hour. If, by analogy with the rate of flow, we would like to determine the rate of service for the same situation, by simple algebra the answer is that this rate is one hundred men, period. The rate of service is simply the size of the fund that provides the service and consequently is expressed in elemental units in which the time factor does not intervene. (p.226ff)
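The dimensional point can be restated as a tiny calculation, using the text's own numbers (one hundred workers, an eight-hour working day):

```python
# Georgescu-Roegen's dimensional contrast, in code: the *amount* of a
# service carries the dimension (substance) x (time), while the *rate*
# of service is just the size of the fund, with no time factor at all.

workers = 100        # size of the fund (elemental units: men)
hours = 8            # duration of the working day

service_amount = workers * hours   # man x hour: 800 man-hours
rate_of_service = workers          # "one hundred men, period"

assert service_amount == 800
assert rate_of_service == 100
```

Compare this with a flow, whose rate would instead be a quantity divided by time; the two concepts cannot even share units.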

A leading symptom is that purists maintain that thermodynamics is not a legitimate chapter of physics. Pure science, they say, must abide by the dogma that natural laws are independent of man's own nature, whereas thermodynamics smacks of anthropomorphism. And that it does so smack is beyond question. But the idea that man can think of nature in wholly nonanthropomorphic terms is a patent contradiction in terms. Actually, force, attraction, waves, particles, and, especially, interpreted equations, all are man-made notions. Nevertheless, in the case of thermodynamics the purist viewpoint is not entirely baseless: of all physical concepts only those of thermodynamics have their roots in economic value and, hence, could make absolutely no sense to a nonanthropomorphic intellect.

A nonanthropomorphic mind could not possibly understand the concept of order-entropy which, as we have seen, cannot be divorced from the intuitive grasping of human purposes. For the same reason such a mind could not conceive why we distinguish between free and latent energy, should it see the difference at all. All it could perceive is that energy shifts around without increasing or decreasing. It may object that even we, the humans, cannot distinguish between free and latent energy at the level of a single particle where normally all concepts ought to be initially elucidated.

No doubt, the only reason why thermodynamics initially differentiated between the heat contained in the ocean waters and that inside a ship's furnace is that we can use the latter but not the former. But the kinship between economics and thermodynamics is more intimate than that. Apt though we are to lose sight of the fact, the primary objective of economic activity is the self-preservation of the human species. Self-preservation in turn requires the satisfaction of some basic needs -- which are nevertheless subject to evolution. The almost fabulous comfort, let alone the extravagant luxury, attained by many past and present societies has caused us to forget the most elementary fact of economic life, namely, that of all necessaries for life only the purely biological ones are absolutely indispensable for survival. The poor have had no reason to forget it. And since biological life feeds on low entropy, we come across the first important indication of the connection between low entropy and economic value. For I see no reason why one root of economic value, existing at the time when mankind was able to satisfy hardly any non-biological need, should have dried out later on.

Casual observation suffices now to prove that our whole economic life feeds on low entropy, to wit, cloth, lumber, china, copper, etc., all of which are highly ordered structures. But this discovery should not surprise us. It is the natural consequence of the fact that thermodynamics developed from an economic problem and consequently could not avoid defining order so as to distinguish between, say, a piece of electrolytic copper -- which is useful to us -- and the same copper molecules when diffused so as to be of no use to us. We may then take it as a brute fact that low entropy is a necessary condition for a thing to be useful. (p.277ff)

The corresponding symptoms in analytical studies are even more definite. First, there is the general practice of representing the material side of the economic process by a closed system, that is, by a mathematical model in which the continuous inflow of low entropy from the environment is completely ignored. But even this symptom of modern econometrics was preceded by a more common one: the notion that the economic process is wholly circular. Special terms such as roundabout process or circular flow have been coined in order to adapt the economic jargon to this view. One need only thumb through an ordinary textbook to come across the typical diagram by which its author seeks to impress upon the mind of the student the circularity of the economic process.

The mechanistic epistemology, to which analytical economics has clung ever since its birth, is solely responsible for the conception of the economic process as a closed system or circular flow. As I hope to have shown by the argument developed in this essay, no other conception could be further from a correct interpretation of facts. Even if only the physical facet of the economic process is taken into consideration, this process is not circular, but unidirectional. As far as this facet alone is concerned, the economic process consists of a continuous transformation of low entropy into high entropy, that is, into irrevocable waste or, with a topical term, into pollution. The identity of this formula with that proposed by Schrödinger for the biological process of a living cell or organism vindicates those economists who, like Marshall, have been fond of biological analogies and have even contended that economics "is a branch of biology broadly interpreted." The conclusion is that, from the purely physical viewpoint, the economic process is entropic: it neither creates nor consumes matter or energy, but only transforms low into high entropy. (p.281)

Low entropy is a necessary condition for a thing to have value. This condition, however, is not also sufficient. The relation between economic value and low entropy is of the same type as that between price and economic value. Although nothing could have a price without having an economic value, things may have an economic value and yet no price. For the parallelism, it suffices to mention the case of poisonous mushrooms which, although they contain low entropy, have no economic value. (p.282)

[W]e cannot mine the stock of solar energy at a rate to suit our desires of the moment. We can use only that part of the sun's energy that reaches the globe, at the rate determined by its position in the solar system. With the stocks of low entropy in the earth's crust we may be impatient -- as indeed we are -- with their transformation into commodities that satisfy some of the most extravagant human wants. But not so with the stock of the sun's energy. Agriculture teaches, nay, obliges man to be patient -- a reason why peasants have a philosophical attitude toward life pronouncedly different from that of industrial communities. (p.297)

In a broad perspective we may say that mankind disposes of two sources of wealth: first, the finite stock of mineral resources in the earth's crust which within certain limits we can decumulate into a flow almost at will, and second, a flow of solar radiation the rate of which is not subject to our control. In terms of low entropy, the stock of mineral resources is only a very small fraction of the solar energy received by the globe within a single year. More precisely, the highest estimate of terrestrial energy resources does not exceed the amount of free energy received from the sun during four days! [...] because the low entropy received from the sun cannot be converted into matter in bulk, it is not the sun's finite stock of energy that sets a limit to how long the human species may survive. Instead, it is the meager stock of the earth's resources that constitutes the crucial scarcity. Let S be this stock and r the average rate at which it may be decumulated. Clearly, S = r x t, where t stands for the corresponding duration of the human species. This elementary formula shows that the quicker we decide to decumulate S, the shorter is t. Now, r may increase for two reasons. First, the population may increase. Second, for the same size of population we may speed up the decumulation of the natural resources for satisfying man-made wants, usually extravagant wants.
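The elementary formula in the passage, S = r x t, rearranges to t = S / r, and the trade-off it expresses can be made concrete in a couple of lines. The stock figure below is purely illustrative, not an estimate of anything:

```python
# Georgescu-Roegen's formula S = r x t: a finite stock S decumulated at
# average rate r lasts t = S / r. Doubling the rate halves the duration.

S = 1_000.0   # finite stock of terrestrial resources (arbitrary units)

def duration(stock, rate):
    """Duration t implied by S = r x t, i.e. t = S / r."""
    return stock / rate

# The quicker we decide to decumulate S (larger r), the shorter t is.
assert duration(S, rate=2.0) == 500.0
assert duration(S, rate=4.0) == 250.0
```

Both routes by which r rises in the text (population growth and faster per-capita decumulation) shorten t through this same division.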

The conclusion is straightforward. If we stampede over details, we can say that every baby born now means one human life less in the future. But also every Cadillac produced at any time means fewer lives in the future. Up to this day, the price of technological progress has meant a shift from the more abundant source of low entropy -- solar radiation -- to the less abundant one -- the earth's mineral resources. True, without this progress some of these resources would not have come to have any economic value. But this point does not make the balance outlined here less pertinent. Population pressure and technological progress bring, ceteris paribus, the career of the human species nearer to its end only because both factors cause a speedier decumulation of its dowry. The sun will continue to shine on the earth, perhaps almost as brightly as today, even after the extinction of mankind, and will feed with low entropy other species -- those with no ambition whatsoever. For we must not doubt that, man's nature being what it is, the destiny of the human species is to choose a truly great but brief, not a long and dull, career.

"Civilization is the economy of power [low entropy]," as Justus von Liebig said long ago, but the word economy must be understood as applying rather to the problems of the moment, not to the entire life span of mankind. Confronted, in the distant future, with the impending exhaustion of mineral resources (which caused Jevons to become alarmed about the coal reserves), mankind -- one might try to reassure us -- will retrace its steps. The thought ignores that, evolution being irrevocable, steps cannot be retraced in history. (p.303ff)

[T]he usual denunciation of standard economics on the sole ground that it treats of "imaginary individuals coming to imaginary markets with ready-made scales of bid and offer prices" is patently inept. Abstraction, even if it ignores Change, is "no exclusive privilegium odiosum" of the economic science, for abstraction is the most valuable ladder of any science. In social sciences, as Marx forcefully argued, it is all the more indispensable since there "the force of abstraction" must compensate for the impossibility of using microscopes or chemical reactions. However, the task of science is not to climb up the easiest ladder and remain there forever distilling and redistilling the same pure stuff. Standard economics, by opposing any suggestion that the economic process may consist of something more than a jigsaw puzzle with all its elements given, has identified itself with dogmatism. And this is a privilegium odiosum that has dwarfed the understanding of the economic process wherever it has been exercised. (p.319)

The question is why a science interested in economic means, ends, and distribution should dogmatically refuse to study also the process by which new economic means, new economic ends, and new economic relations are created. (p.320)

[T]he immense satisfaction which Understanding derives from arithmomorphic models should not mislead us into believing that their other roles too are the same in both social and natural sciences. In physics a model is also "a calculating device, from which we may compute the answer to any question regarding the physical behavior of the corresponding physical system." [Bridgman, The Nature of Physical Theory] The same is true for the models of engineering economics. The specific role of a physical model is better described by remarking that such a model represents an accurate blueprint of a particular sector of physical reality. But [...] an economic model is not an accurate blueprint but an analytical simile. Economists are fond of arguing that since no model, whether in physics or economics, is accurate in an absolute sense we can only choose between a more and a less accurate model. Some point out also that after all how accurate we need to be depends on our immediate purpose: at times the less accurate model may be the more rational one to use. All this is perfectly true, but it does not support the further contention -- explicitly stated by Pareto -- that it is irrelevant to point out the inaccuracy of economic models. Such a position ignores an important detail, namely, that in physics a model must be accurate in relation to the sharpest measuring instrument existing at the time. If it is not, the model is discarded. Hence, there is an objective sense in which we can say that a physical model is accurate, and this is the sense in which the word is used in "accurate blueprint." In social sciences, however, there is no such objective standard of accuracy. Consequently, there is no acid test for the validity of an economic model. And it is of no avail to echo Aristotle, who taught that a model is "adequate if it achieves that degree of accuracy which belongs to its subject matter." One may always proclaim that his model has the proper degree of accuracy.
Besides, the factors responsible for the absence of an objective standard of accuracy also render the comparison of accuracy a thorny problem. (pg.332ff)

From the deterministic viewpoint, the notion of "rational behavior" is completely idle. Given his tastes, his inclinations, and his temperament, the person who smokes in spite of the warning that "smoking may be hazardous to your health" acts from a definite ground and, hence, cannot be taxed as irrational. And if we accept the conclusions biologists have derived from the study of identical twins, that every man's behavior is largely determined by his genotype, then criminals and warmongers are just as "rational" as the loving and peaceful people. But for a determinist even nurture (whether ecological, biotic, or cultural) cannot be otherwise than what it is: together with nature, nurture holds the individual in a predetermined and unrelenting grip. This is probably why, when a social scientist speaks of irrational behavior, he generally refers to a normative criterion. Take the villagers in some parts of the world who for the annual festival kill practically all the pigs in the village. They are irrational -- we say -- not only because they kill more pigs than they could eat at one feast but also because they have to starve for twelve months thereafter. My contention is that it is well-nigh impossible to name a behavior (of man or any other living creature) that would not be irrational according to some normative criterion. This is precisely why to an American farmer the behavior of a Filipino peasant seems irrational. But so does the behavior of the former appear to the latter. The two live in different ecological niches and each has a different Weltanschauung. The student of man should know better than to side with one behavior or another. The most he can do is to admit that the two behaviors are different, search for the reasons that may account for the differences, and assess the consequences. (p.345ff)

Like the social insects, man lives in society, produces socially and distributes the social product among his fellows. But, unlike the social insects, man is not born with an endosomatic code capable of regulating both his biological life and his social activity. And since he needs a code for guiding his complex social activity in a tolerable manner, man has had to produce it himself. This product is what we call tradition. By tradition man compensates for his "birth defect," for his deficiency of innate social instincts. So, man is born with an endosomatic (biological) code but within an exosomatic (social) one. It is because of the endosomatic code that a Chinese, for example, has slanted eyes and straight hair. It is because of the exosomatic code that a Filipino peasant cultivates his fields in the manner all Filipino peasants do, participates in the extravagant festivals held by his village at definite calendar dates, and so on. A biological process sees to it that the pool of genes is transmitted from one generation to another. Tradition does the same for what we call "values" or, more appropriately, "institutions," i.e., the modes by which every man acts inside his own community. (p.359)

November 25, 2014

Bateson - Mind & Nature (1979)

[N]othing has meaning except it be seen as in some context. [...] Without context, words and actions have no meaning at all. [...] It is the context that fixes the meaning. (p.14ff)

There is a parallel confusion in the teaching of language that has never been straightened out. Professional linguists nowadays may know what's what, but children in school are still taught nonsense. They are told that a "noun" is the "name of a person, place, or thing," that a "verb" is "an action word," and so on. That is, they are taught at a tender age that the way to define something is by what it supposedly is in itself, not by its relation to other things. Most of us can remember being told that a noun is "the name of a person, place, or thing." And we can remember the utter boredom of parsing or analyzing sentences. Today all that should be changed. Children could be told that a noun is a word having a certain relationship to a predicate. A verb has a certain relation to a noun, its subject. And so on. Relationship could be used as basis for definition, and any child could then see that there is something wrong with the sentence "'Go' is a verb." (p.17)

Science sometimes improves hypotheses and sometimes disproves them. But proof would be another matter and perhaps never occurs except in the realms of totally abstract tautology. We can sometimes say that if such and such abstract suppositions or postulates are given, then such and such must follow absolutely. But the truth about what can be perceived or arrived at by induction from perception is something else again.

Let us say that truth would mean a precise correspondence between our description and what we describe, or between our total network of abstractions and deductions and some total understanding of the outside world. Truth in this sense is not obtainable. And even if we ignore the barriers of coding, the circumstance that our description will be in words or figures or pictures but that what we describe is going to be in flesh and blood and action -- even disregarding that hurdle of translation, we shall never be able to claim final knowledge of anything whatsoever. (p.27)

Knowledge at any given moment will be a function of the thresholds of our available means of perception. The invention of the microscope or the telescope, or of means of measuring time to the fraction of a nanosecond or weighing quantities of matter to millionths of a gram -- all such improved devices of perception will disclose what was utterly unpredictable from the levels of perception that we could achieve before that discovery. [...] Science probes; it does not prove. (p.29)

All experience is subjective. (p.31)

The division of the perceived Universe into parts and wholes is convenient and may be necessary but no necessity determines how it shall be done [...] Explanation must always grow out of description, but the description from which it grows will always necessarily contain arbitrary characteristics. (p.38)

If I throw a stone at a glass window, I shall, under appropriate circumstances, break or crack the glass in a star-shaped pattern. If my stone hits the glass as fast as a bullet, it is possible that it will detach from the glass a neat conical plug called a cone of percussion. If my stone is too slow and too small, I may fail to break the glass at all. Prediction and control will be quite possible at this level. I can easily make sure which of three results (the star, the percussion cone, or no breakage) I shall achieve, provided I avoid marginal strengths of throw.

But within the conditions which produce the star-shaped break, it will be impossible to predict or control the pathways and the positions of the arms of the star. 

Curiously enough, the more precise my laboratory methods, the more unpredictable the events will become. If I use the most homogeneous glass available, polish its surface to the most exact optical flatness, and control the motion of my stone as precisely as possible, ensuring an almost precisely vertical impact on the surface of the glass, all my efforts will only make the events more impossible to predict.

If, on the other hand, I scratch the surface of the glass or use a piece of glass that is already cracked (which would be cheating), I shall be able to make some approximate predictions. For some reason (unknown to me), the break in the glass will run parallel to the scratch and about 1/100 of an inch to the side, so that the scratch mark will appear on only one side of the break. Beyond the end of the scratch, the break will veer off unpredictably.

Under tension, a chain will break at its weakest link. That much is predictable. What is difficult is to identify the weakest link before it breaks. The generic we can know, but the specific eludes us. Some chains are designed to break at a certain tension and at a certain link. But a good chain is homogeneous, and no prediction is possible. And because we cannot know which link is weakest, we cannot know precisely how much tension will be needed to break the chain. (p.41)

[G]radual growth in a population, whether of automobiles or of people, has no perceptible effect upon a transportation system until suddenly the threshold of tolerance is passed and the traffic jams. The changing of one variable exposes a critical value of the other.

A pure description would include all the facts (i.e., all the effective differences) immanent in the phenomena to be described but would indicate no kind of connection among these phenomena that might make them more understandable. For example, a film with sound and perhaps recordings of smell and other sense data might constitute a complete or sufficient description of what happened in front of a battery of cameras at a certain time. But that film will do little to connect the events shown on the screen one with another and will not by itself furnish any explanation. On the other hand, an explanation can be total without being descriptive. "God made everything there is" is totally explanatory but does not tell you anything about any of the things or their relations.

In science, these two types of organization of data (description and explanation) are connected by what is technically called tautology. Examples of tautology range from the simplest case, the assertion that "If P is true, then P is true," to such elaborate structures as the geometry of Euclid, where "If the axioms and postulates are true, then Pythagoras' theorem is true." Another example would be the axioms, definitions, postulates, and theorems of Von Neumann's Theory of Games. In such an aggregate of postulates and axioms and theorems, it is of course not claimed that any of the axioms or theorems is in any sense "true" independently or true in the outside world.

Indeed, Von Neumann, in his famous book, expressly points out the differences between his tautological world and the more complex world of human relations. All that is claimed is that if the axioms be such and such and the postulates such and such, then the theorems will be so and so. In other words, all that the tautology affords is connections between propositions. The creator of the tautology stakes his reputation on the validity of these connections.

Tautology contains no information whatsoever, and explanation (the mapping of description onto tautology) contains only the information that was present in the description. The "mapping" asserts implicitly that the links which hold the tautology together correspond to relations which obtain in the description. Description, on the other hand, contains information but no logic and no explanation. For some reason, human beings enormously value this combining of ways of organizing information or material. [...] An explanation has to provide something more than a description provides and, in the end, an explanation appeals to a tautology.

Now, an explanation is a mapping of the pieces of a description onto a tautology, and an explanation becomes acceptable to the degree that you are willing and able to accept the links of the tautology. If the links are "self-evident" (i.e., if they seem undoubtable to the self that is you), then the explanation built on that tautology is satisfactory to you. (p.81ff)

Information consists of differences that make a difference. If I call attention to the difference between the chalk and a piece of cheese, you will be affected by that difference, perhaps avoiding the eating of the chalk, perhaps tasting it to verify my claim. Its noncheese nature has become an effective difference. But a million other differences -- positive and negative, internal and external to the chalk -- remain latent and ineffective.

Bishop Berkeley was right, at least in asserting that what happens in the forest is meaningless if he is not there to be affected by it.

We are discussing a world of meaning, a world some of whose details and differences, big and small, in some parts of that world, get represented in relations between other parts of that total world. A change in my neurons or in yours must represent that change in the forest, that falling of that tree. But not the physical event, only the idea of the physical event. And the idea has no location in space or time -- only perhaps in an idea of space or time. (p.99)

January 17, 2014

The impossibility of realism

"What we would all like [...] is an understanding of the fundamental processes that govern the Universe, an understanding that is not just useful for calculation but an understanding that is true in some deeper sense. Typically, a scientist sees the latter point as either obvious and important, or else completely irrelevant. I would like to argue that we don’t have a choice; there is some very clear sense in which truth is not what is returned by any finite scientific investigation; all that is returned is plausibilities (some of which become very very high), and those plausibilities relate not directly to the truth of the hypotheses in question, but rather to their use or value in describing the data. 

The fundamental reason scientific investigations can’t obtain literal truth is that no scientific investigator ever has an exhaustive (and mutually exclusive) set of hypotheses. Plausibility calculations are calculations of measure in some space, which for our purposes we can take to be the space formed by the union of every possible set of scientific hypotheses, with their parameters and adjustments set to every possible set of values." -- David Hogg, Is cosmology just a plausibility argument?.

January 15, 2014

Reductionism

[...] reductionism is not so much a positive hypothesis, as the absence of belief—in particular, disbelief in a form of the Mind Projection Fallacy. [...] we use entirely different models to understand the aerodynamics of a 747 and a collision between gold nuclei in the Relativistic Heavy Ion Collider.  A computer modeling the aerodynamics of a 747 may not contain a single token, a single bit of RAM, that represents a quark. So is the 747 made of something other than quarks?  No, you're just modeling it with representational elements that do not have a one-to-one correspondence with the quarks of the 747.  The map is not the territory. [...] As the saying goes, "The map is not the territory, but you can't fold up the territory and put it in your glove compartment."  Sometimes you need a smaller map to fit in a more cramped glove compartment—but this does not change the territory.  The scale of a map is not a fact about the territory, it's a fact about the map.

[...]

To build a fully accurate model of the 747, it is not necessary, in principle, for the model to contain explicit descriptions of things like airflow and lift.  There does not have to be a single token, a single bit of RAM, that corresponds to the position of the wings.  It is possible, in principle, to build an accurate model of the 747 that makes no mention of anything except elementary particle fields and fundamental forces.
"What?" cries the antireductionist.  "Are you telling me the 747 doesn't really have wings?  I can see the wings right there!"
The notion here is a subtle one.  It's not just the notion that an object can have different descriptions at different levels.
It's the notion that "having different descriptions at different levels" is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory.

It's not that the airplane itself, the laws of physics themselves, use different descriptions at different levels—as yonder artillery gunner thought.  Rather we, for our convenience, use different simplified models at different levels.

[...]

So when your mind simultaneously believes explicit descriptions of many different levels, and believes explicit rules for transiting between levels, as part of an efficient combined model, it feels like you are seeing a system that is made of different level descriptions and their rules for interaction.

But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level.  The airplane is too large.  Even a hydrogen atom would be too large.  Quark-to-quark interactions are insanely intractable.  You can't handle the truth.

But the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces.  You can't handle the raw truth, but reality can handle it without the slightest simplification.  (I wish I knew where Reality got its computing power.)

The laws of physics do not contain distinct additional causal entities that correspond to lift or airplane wings, the way that the mind of an engineer contains distinct additional cognitive entities that correspond to lift or airplane wings.

This, as I see it, is the thesis of reductionism.  Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.  Understanding this on a gut level dissolves the question of "How can you say the airplane doesn't really have wings, when I can see the wings right there?"  The critical words are "really" and "see". -- Yudkowsky (http://lesswrong.com/lw/on/reductionism/)

DanielLC, replying to another comment ("One minor quibble; how do we know there is any most basic level?"): Levels are an attribute of the map. The territory only has one level. Its only level is the most basic one. Let's consider a fractal. The Mandelbrot set can be made by taking the union of infinitely many iterations. You could think of each additional iteration as a better map. That being said, either a point is in the Mandelbrot set or it is not. The set itself only has one level.
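DanielLC's fractal example can be made concrete with a short escape-time sketch: the iteration cap is a property of our "map" (raising it only sharpens the picture), while membership itself is a fixed property of the "territory". The function name and the choice of cap below are illustrative, not from the original comment.

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Escape-time test: c is (approximately) in the Mandelbrot set
    if z -> z*z + c stays bounded (|z| <= 2) for max_iter steps."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False  # escaped: definitely outside the set
    return True  # never escaped: inside, up to this iteration depth

# The cap max_iter refines the map; the set itself has only one level.
print(in_mandelbrot(0 + 0j))   # the origin stays at 0 forever
print(in_mandelbrot(1 + 0j))   # 0 -> 1 -> 2 -> 5: escapes quickly
```

Note that a finer map (a larger `max_iter`) can only move points from "apparently inside" to "definitely outside"; the answer for any given point was never up to the map.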

March 08, 2013

Ontology & Epistemology

I tend to gravitate around "the Map is not the Territory" concept in the ontology/epistemology discussion. I understand «territory» as the event generator, aka reality. We are only able to measure events indirectly using our senses and tech (events with no effects are non-existent for all purposes). The 'map' is a tangled web of shared and private beliefs that we, Humanity, build and maintain for centuries. The «Map» is the meaning generator (I'm dropping the guillemets now).

The terms objective/subjective imho only make sense in the Map. Objective beliefs are those not dependent on personal mind states; those that are dependent are subjective (this is more a spectrum than a boolean feature, but let's keep it simple). Beliefs not dependent on private or social features (even if they are known only because of specific historical contexts), and which are known through logic/evidence/reason, are (more) objective, like Math. This does not mean that objective beliefs are necessarily 'true' (that depends on the semantics of the word 'true'), but they are not, or should not be, dependent on person X's or Y's state of mind. Nor does it mean that objective beliefs are necessarily better than subjective ones (that requires a value judgement which is context-dependent). Anyway, this is why I think that, say, my liking of ice-cream is subjective. It is a private belief that would not exist if I did not exist. It depends on my current mind state. On the other hand, the theory of evolution by natural selection or the Central Limit Theorem are beliefs that do not depend on any one person's mind. But, whether objective or subjective, all are map denizens. Even scientific models are just that: maps; not intrinsically true or false, just more or less adequate to the known relevant evidence and current knowledge.

However, this way of classifying beliefs is just one way, not the way. Thinking about the divide between public and private beliefs is as important as seeing them as objective or subjective (Ethics and Politics seem a much more interesting and important subject than Ontology and Epistemology, but that's my perspective).

One more thing: in a subtle way, every belief belongs to the Territory -- human beliefs are caused by certain neuro-electric impulses, and those are measurable events -- which is a trivial fact and not that interesting (even if it is important, because it protects this model of ontology against the charge of dualism: the Map is not independent of the Territory). The meaning of those brain impulses only makes sense in the Map. Without humans -- the makers and keepers of the Human Map -- the only thing that would exist would be the physical phenomena that we label with words and inject with meaning. Without a 'Map' there would be no stars, no colors or sounds, no art or philosophy or love. There would only exist 'meaningless indifferent stuff' (for lack of better words).

January 16, 2013

Outside the circuit

Outside academia, outside what counts as established knowledge, there is a large body of works of varying value. The history of science, philosophy, and mathematics offers many examples of people who defended ideas outside the system -- some of them ostracized, some even driven to suicide -- whose ideas eventually prevailed and became respected even within the system itself. Cantor's infinities, Boltzmann's statistical interpretation of Thermodynamics, and Nietzsche's ideas are good examples of this.


But it would be arrogant to assume that the development of human knowledge has no false positives (prevailing ideas that are not the best among those available) and no false negatives (ideas wrongly rejected out of prejudice, ignorance, or lack of evidence). Among the false positives, I would single out Classical Economics (whose axiom of the rational agent has already been falsified empirically, e.g., cf. Kahneman, Thinking, Fast and Slow) and Classical Statistics, whose main rival, Bayesian Inference, enjoyed a renaissance with the advent of computers but remains persona non grata in academia (it was worse a few decades ago). Among the false negatives we will, in some cases, remain forever in doubt about whether a given author was in fact right, with the vague feeling that we are missing something important. Of course, this does not mean a rejected conceptual system cannot be partially rescued. Society may absorb pieces and make them its own, without having to digest everything the respective author defended.

Among the authors of these potential false negatives, I find the following especially interesting:

Julian Jaynes: The Origin of Consciousness in the Breakdown of the Bicameral Mind. This book defends the thesis that the birth of human consciousness is a historically recent phenomenon (around Ancient Greece, between Homer and Pericles) and that humans before that era were not yet persons. I cannot do the book justice here, but it is very interesting and well written -- and how interesting it would be to find out that Jaynes had been right after all!


Thomas Szasz: The Myth of Mental Illness (among other books by this author). Szasz is a physician who has for decades criticized the way the psychiatric profession categorizes mental illness. Some of his theses have been partly absorbed by the mainstream, and his current position is probably more extreme than today's procedures would justify. Even so, a look at the DSM, which classifies transvestism, for example, as a mental disorder, leaves one thinking that there may still be much work to do on this front.


Alfred Korzybski: Science and Sanity. This book is the source of the aphorism "the map is not the territory" and of the systematic treatment of the idea that human theories and concepts have no existence beyond ourselves. In a way, despite the ostracism he suffered (the book really is a bit unhinged), these ideas prevailed and are very present in our culture. Nowadays it is easier than it once was to say that a theory is "merely" the map of a recognized pattern. Let us not forget that at the beginning of the 20th century it was still debated at the highest scientific level whether light had a corpuscular or a wave nature, as if light really were either a particle or a wave.

September 06, 2012

"A well-made language is no indifferent thing; not to go beyond physics, the unknown man who invented the word heat devoted many generations to error. Heat has been treated as a substance, simply because it was designated by a substantive, and it has been thought indestructible". The Foundations of Science, Henri Poincaré

March 15, 2012

Definitions and Abuses

[A] definition can be anything we choose. But the arbitrariness of definitions doesn’t make truth arbitrary. Rather, it just means that in order to understand which proposition it is whose truth we’re being asked about, we need to know what the words mean. Once again, it is just a matter of pinning down the meaning in order to pin down the truth. [...] whenever something substantive seems to depend on a choice of definition—for example, if whether to take a contemplated action seems to depend on whether the action falls within the scope of some proposed definition of right—we should suspect that a tacit definition is being smuggled in, and a sleight-of-hand substitution of the tacit definition for the explicit one is occurring. Here’s a good diagnostic technique: define some made-up word in place of the familiar one that is being defined, and see what apparent difference that substitution makes. [...] A definition is just an arbitrary association between a symbol and a concept; it has nothing to do with what is true or false about the world. [...] If concepts yielded to our attempts to equate them just by our proclaiming definitions in that manner, then definitions would be like magic spells, capable by their mere incantation of somehow rearranging the substantive facts of the world. Obviously, definitions have no such power. [We need arguments, not definitions] [e.g. ownership] A supporter of libertarian capitalism may argue that you are morally entitled to use your own property for your exclusive benefit, because such entitlement is the very definition of the word own. But by that definition, you have not established that anything is your own until you have (somehow) established that you are morally entitled to use it for your exclusive benefit. However, there is another definition of own that is often implicitly smuggled in—roughly, that if you have obtained an item by purchasing it, inheriting it, building it, and so forth, then you own it. 
Sleight-of-hand alternation between the explicit and implicit definition creates the illusion of having established that whatever you build, purchase, inherit, and so forth, you are necessarily entitled to use for your exclusive benefit. [You need to argue that the latter implies the former]. - Gary L Drescher, Good and Real.

March 12, 2012

Attribution

A scientific model is not true or false. It can be adequate to the relevant evidence, capable or not of making predictions, internally coherent and, one hopes, compatible with the related scientific corpus. It is on these properties that we should evaluate a given model, not on whether it corresponds to the truth. Any model, any concept derived from it, any structured belief, is a narrative, a set of socially constructed and biologically limited cognitive constructs. The same cannot be said of their referents, of which we have only indirect impressions, incomplete projections of the effects they produce. This is the real component we try to understand, and about which there is little to say (I imagine epistemology as a body of knowledge immensely larger than that of ontology). And this understanding is sought either through subjective impressions of what truth is -- through respect for tradition, through the various convictions collectively called faith -- or through the empirical and rationally critical approach we call the scientific method. Or, quite often, through a mixture of the two.

February 16, 2012

The Mind Projection Fallacy

In his 1929 book Process and Reality, Alfred North Whitehead presented what became known as The Fallacy of Misplaced Concreteness:
neglecting the degree of abstraction involved when an actual entity is considered merely so far as it exemplifies certain categories of thought (p.11).
This is a warning against the error of confusing the abstract with the concrete. The idea goes by several names; perhaps the best known is Alfred Korzybski's "the map is not the territory".

E.T. Jaynes called this same problem the Mind Projection Fallacy. In the following text Jaynes uses it to discuss the interpretations of quantum theory and how the confusion between these two levels -- between ontology and epistemology -- may lie at the origin of the famous "God does not play dice" disagreement between Einstein and Bohr:

The failure of quantum theorists to distinguish in calculations between several quite different meanings of 'probability', between expectation values and actual values, makes us do things that don't need to be done; and to fail to do things that do need to be done. We fail to distinguish in our verbiage between prediction and measurement. For example, the famous vague phrases: 'It is impossible to specify...'; or 'It is impossible to define...' can be interpreted equally well as statements about prediction or statements about measurement. Thus the demonstrably correct statement that the present formalism cannot predict something becomes perverted into the logically unjustified (and almost certainly false) claim that the experimentalist cannot measure it!

We routinely commit the Mind Projection Fallacy: supposing that creations of our own imagination are real properties of Nature, or that our own ignorance signifies some indecision on the part of Nature. It is then impossible to agree on the proper place of information in physics. This muddying up of the distinction between reality and our knowledge of reality is carried to the point where we find some otherwise rational physicists, on the basis of the Bell inequality experiments, asserting the objective reality of probabilities, while denying the objective reality of atoms! These sloppy habits of language have tricked us into mystical, pre-scientific standards of logic, and leave the meaning of any QM result ambiguous. Yet from decades of trial-and-error we have managed to learn how to calculate with enough art and tact so that we come out with the right numbers!

The main suggestion we wish to make is that how we look at basic probability theory has deep implications for the Bohr-Einstein positions. Only since 1988 has it appeared to the writer that we might be able finally to resolve these matters in the happiest way imaginable: a reconciliation of the views of Bohr and Einstein in which we can see that they were both right in the essentials, but just thinking on different levels.

Einstein's thinking is always on the ontological level traditional in physics; trying to describe the realities of Nature. Bohr's thinking is always on the epistemological level, describing not reality but only our information about reality. The peculiar flavor of his language arises from the absence of all words with any ontological import. J. C. Polkinghorne (1989, pp. 78,79) came independently to this same conclusion about the reason why physicists have such difficulty in reading Bohr. He quotes Bohr as saying:
"There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature."
[...] Bohr would chide both Wigner and Oppenheimer for asking ontological questions, which he held to be illegitimate. Those who, like Einstein (and, up until recently, the present writer) tried to read ontological meaning into Bohr's statements, were quite unable to comprehend his message. This applies not only to his critics but equally to his disciples, who undoubtedly embarrassed Bohr considerably by offering such ontological explanations as "Instantaneous quantum jumps are real physical events." or "The variable is created by the act of measurement.", or the remark of Pauli quoted above, which might be rendered loosely as "Not only are you and I ignorant of x and p; Nature herself does not know what they are."

We disagree strongly with one aspect of Bohr's quoted statement above; in our view, the existence of a real world that was not created in our imagination, and which continues to go about its business according to its own laws, independently of what humans think or do, is the primary experimental fact of all, without which there would be no point to physics or any other science.

The whole purpose of science is learn what that reality is and what its laws are. On the other hand, we can see in Bohr's statement a very important fact, not sufficiently appreciated by scientists today as a necessary part of that program to learn about reality. Any theory about reality can have no consequences testable by us unless it can also describe what humans can see and know. For example, special relativity theory implies that it is fundamentally impossible for us to have knowledge of any event that lies outside our past light cone. Although our ultimate goal is ontological, the process of achieving that goal necessarily involves the acquisition and processing of human information. This information processing aspect of science has not, in our view, been sufficiently stressed by scientists (including Einstein himself, although we do not think that he would have rejected the idea).

Although Bohr's whole way of thinking was very different from Einstein's, it does not follow that either was wrong. In the writer's present view, all of Einstein's thinking (in particular the EPR argument) remains valid today, when we take into account its ontological purpose and character. But today, when we are beginning to consider the role of information for science in general, it may be useful to note that we are finally taking a step in the epistemological direction that Bohr was trying to point out sixty years ago.

But our present QM formalism is not purely epistemological; it is a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature - all scrambled up by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble. Yet we think that the unscrambling is a prerequisite for any further advance in basic physical theory. For, if we cannot separate the subjective and objective aspects of the formalism, we cannot know what we are talking about; it is just that simple. -- E.T. Jaynes, Probability in Quantum Theory (1996).