
November 20, 2015

Private Belief and Public Knowledge

To believe incorrectly is never a crime, but simply to believe is never to have knowledge.

In other words, liberal science does not restrict belief, but it does restrict knowledge. It absolutely protects freedom of belief and speech, but it absolutely denies freedom of knowledge: in liberal science, there is positively no right to have one's opinions, however heartfelt, taken seriously as knowledge. Just the contrary: liberal science is nothing other than a selection process whose mission is to test beliefs and reject the ones that fail. A liberal intellectual regime says that if you want to believe the moon is made of green cheese, fine. But if you want your belief recognized as knowledge, there are things you must do. You must run your belief through the science game for checking. And if your belief is a loser, it will not be included in the science texts. It probably won't even be taken seriously by most respectable intellectuals. In a liberal society, knowledge - not belief - is the rolling critical consensus of a decentralized community of checkers, and it is nothing else. That is so, not by the power of law, but by the deeper power of a common liberal morality.

Of course, if your belief is rejected by the critical consensus, you are free to reject the consensus and keep believing. That's freedom of belief. But you are not entitled to expect that your belief will be taught to schoolchildren or accepted by the intellectual establishment as knowledge. Any school curriculum is necessarily restrictive. It cannot not be restrictive. My point is that the right way to set a curriculum is to insist that it teach knowledge, and that this knowledge should consist only of claims which have been thoroughly checked by no person (or group) in particular. We should never teach anything as knowledge because it serves someone's political needs. We should teach only what has checked out. [...] academic freedom consists in freedom to doubt, to inquire, to check, and to believe as you like. It does not consist in the freedom of one party or another to reset the rules for inquiry or checking. Someone who wants to insist that the theory of relativity is false and that some other theory is true is, of course, entitled to do so; but passing laws or using intimidation to make teachers (or anyone else) take him seriously has nothing to do with the freedom to inquire. It has to do with the centralized regulation of knowledge. If the consensus of critical checkers holds that evolution checks out but creationism does not, and clearly it does hold this, then that is our knowledge on the subject.

And who decides what the critical consensus actually is? The critical society does, arguing about itself. That is why scholars spend so much time and energy "surveying the literature" (i.e., assessing the consensus so far). Then they argue about their assessments. The process is long and arduous, but there you are. Academic freedom would be trampled instead of advanced by, say, requiring that state-financed universities put creationists on their biology faculties or give Afrocentrists rebuttal space in their journals. When a state legislature or a curriculum committee or any other political body decrees that anything in particular is, or has equal claim to be, our knowledge, it wrests control over truth from the liberal community of checkers and places it in the hands of central political authorities.

And that is illiberal. If the principle is ever established that political bodies can say what our knowledge is or is not, or which ideas are worth taking seriously, then watch out. Everyone with an opinion would be busy lobbying legislatures for equal-time laws, demanding that biology books describe prayer as an alternative treatment for cancer, picketing universities for astrology departments, suing journals for rebuttal space, demonstrating for proportionate representation in footnote citations. We would find ourselves in a world where knowledge was made by voting and agitating. Then we really would find ourselves living Bertrand Russell's nightmare, where "the lunatic who believes that he is a poached egg is to be condemned solely on the ground that he is in the minority." In that case, those of us who believe in science had better hope that we can persuade a majority and round up a quorum - and whether we can do so is not at all clear on issues like astrology.

One cannot overemphasize: intellectual liberalism is not intellectual majoritarianism or egalitarianism. You do not have a claim to knowledge either because 51 percent of the public agrees with you or because your "group" was historically left out; you have a claim to knowledge only to the extent that your opinion still stands up after prolonged exposure to withering public testing. Now, it is true that when we talk about knowledge's being a scientific consensus we are talking about a majority of scientists. But we are not talking about a mere majority. For a theory to go into a textbook as knowledge, it does not need the unanimity of checkers' assent, but it does need far more than a bare majority's. It should be generally recognized as having stood up better than any competitor to most of the tests that various critical debunkers have tried. [...] Because space and time in textbooks and classrooms are limited, each of those groups will make demands at the expense of others. And that is how creed wars begin. 

[...] only after an idea has survived checking is it deserving of respect. Not long ago, I heard an activist say at a public meeting that her opinion deserved at least respect. The audience gave her a big round of applause. But she and they had it backwards. Respect was the most, not the least, that she could have demanded for her opinion. Except insofar as an opinion earns its stripes in the science game, it is entitled to no respect whatever. This point matters, because respectability is the coin in which liberal science rewards ideas that are duly put up for checking and pass the test. You may not get rich by being shown to be right, you may not even become famous, and you almost certainly will not be loved; but you will be paid in the species of respectability. That is why it is so important that creationists and alien-watchers and radical Afrocentrists and white supremacists be granted every entitlement to speak but no entitlement to have their opinions respected. They should expect, if they scoff at the rules by which the game of science is played, to have their beliefs scoffed at; they should expect, if for any reason (including minority status) they refuse to submit their ideas for checking by public criticism, that their opinions will be ignored or ridiculed - and rightly so. Respect is no opinion's birthright. People, yes, are entitled to a certain degree of basic respect by dint of being human. But to grant any such claim to ideas is to raid the treasury of science and throw its capital to the winds.

Let us remember, then, that the proposition "We must all respect others' beliefs" is nowhere near as innocent as it sounds. If it is enshrined in policies or practices giving "rights" to minority opinions, the damage it causes is immediate and severe. Liberal science cannot exert discipline if it cannot use its tool of marginalization to drive unsupported or bogus beliefs from the agenda. When you pass laws requiring equal time for somebody's excluded belief, you effectively make marginalization illegal. You say, "In our society, a belief is respectable - and will be taught and treated respectfully - if the politically powerful say it is." Once you have said that, you face a very stark choice. You can open the textbooks only to those "oppressed" beliefs whose proponents have political pull. Or you can take the principled egalitarian position, and open the books and the schools to all sincere beliefs. If you do the former, then you have replaced science with power politics. If you do the latter, then you have no principled choice but to teach, for example, "Holocaust revisionism" (the claim that the Holocaust didn't happen) as an "alternative theory" held by an "excluded minority" - which means, in practice, not teaching twentieth-century history at all. Either way, you have taken in hand silly and even execrable opinions and ushered them from the fringes of debate to the very center. At a single stroke, you have disabled liberal society's mechanism for marginalizing foolish ideas, and you have sent those ideas straight to the top of the social agenda with a safe-conduct.

Is the liberal standard for respectability fair? That, really, is the big question today. If you believe that a society is just only when it delivers more or less equal outcomes, you will think liberalism is unfair. You will insist on admitting everyone's belief into respectability as knowledge. Or at least you will insist on admitting the beliefs of people whom you regard as oppressed - affirmative action for knowledge. Personally, I cannot think of anything good about that kind of standard for knowledge. It is bound to lead to fights over who gets what. Groups will appoint leaders, and leaders will negotiate, and when negotiations break down schism or intellectual warfare will ensue; or if negotiations are successful, then certain beliefs will be locked in place by delicate compromise, and a knowledge-making system whose greatest virtue is its adaptiveness will turn sclerotic.

Kindly Inquisitors, Jonathan Rauch


June 18, 2015

Its from Bits

A distinction is often made between theories based upon explicit mechanisms of causation versus theories based upon statistical or other seemingly non-mechanistic assumptions. Evolution is a mechanistic theory in which the mechanism is selection and hereditability of traits acting in concert. In a nutshell, stressful forces upon organisms that differ genetically select those individuals possessing genes that confer on the individual and its offspring the greatest capacity to reproduce under those stresses. 

Consider next a statistical explanation of the observation that the heights of a large group of children in an age cohort are well described by a Gaussian (aka normal) distribution. Invocation of the central limit theorem (CLT) provides a statistical explanation; but the question remains as to why that theorem applies to this particular situation. The applicability of the theorem hinges on the assumption that each child’s height is an outcome of a sum of random influences on growth. So are we not also dealing here with a mechanistic explanation, with the mechanism being the collection of additive influences on growth that allow the applicability of the CLT? If the influences on growth were multiplicative rather than additive, we might witness a lognormal distribution. Is it possible that all scientific explanation is ultimately mechanistic? Let us look more carefully at the concept of mechanism in scientific explanation, for it is not straightforward.
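
The contrast between additive and multiplicative influences is easy to check numerically. Below is a minimal sketch (Python with NumPy; the number and size of the growth influences are illustrative assumptions) showing that summed influences produce a roughly symmetric, Gaussian-like distribution, while compounded ones produce a right-skewed, lognormal-like one:

```python
# Sketch: sums of many small independent influences vs. products of them.
import numpy as np

rng = np.random.default_rng(0)
n_children, n_influences = 100_000, 200  # illustrative sizes

def skew(x):
    """Sample skewness: ~0 for a Gaussian, positive for a lognormal."""
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

# Additive model: height deviation = sum of many small effects (CLT -> Gaussian).
additive = rng.uniform(-1, 1, size=(n_children, n_influences)).sum(axis=1)

# Multiplicative model: many small percentage effects compound (-> lognormal).
factors = 1 + 0.05 * rng.uniform(-1, 1, size=(n_children, n_influences))
multiplicative = factors.prod(axis=1)

print(skew(additive))                # ~0: symmetric, Gaussian-like
print(skew(multiplicative))          # >0: right-skewed
print(skew(np.log(multiplicative)))  # ~0: the CLT applies to the *log* of a product
```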

In everyday usage, we say that phenomenon A is explained by a mechanism when we have identified some other phenomenon, B, that causes, and therefore explains, A. The causal influence of B upon A is a mechanism. However, what is accepted by one investigator as an explanatory mechanism might not be accepted as such by another. [...] Does the search for mechanism inevitably propel us into an infinite regress of explanations? Or can mechanism be a solid foundation for the ultimate goal of scientific theory-building? Consider two of the best established theories in science: quantum mechanics and statistical mechanics. Surprisingly, and despite their names, these theories are not actually based on mechanisms in the usual sense of that term. Physicists have attempted over past decades to find a mechanism that explains the quantum nature of things. This attempt has taken bizarre forms, such as assuming there is a background “aether” comprised of tiny things that bump into the electrons and other particles of matter, jostling them and creating indeterminacy. While an aether can be rigged in such a way as to simulate in matter the behavior predicted by Heisenberg’s uncertainty principle, and some other features of the quantum world, all of these efforts have ultimately failed to produce a consistent mechanistic foundation for quantum mechanics. Similarly, thermodynamics and statistical mechanics are mechanism-less. Statistical arguments readily explain why the second law of thermodynamics works so well. In fact, it has been shown that information theory in the form of Maximum Entropy provides a fundamental theoretical foundation for thermodynamics.
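
To make the Maximum Entropy remark concrete: among all distributions over a fixed set of energy levels with a prescribed mean energy, the entropy-maximizing one is the Boltzmann/Gibbs distribution p_i ∝ exp(-βE_i). Here is a minimal numerical sketch (the energy levels and the target mean energy are illustrative assumptions; SciPy is assumed for the root-finder):

```python
# MaxEnt under a mean-energy constraint recovers the Gibbs distribution.
import numpy as np
from scipy.optimize import brentq

E = np.array([0.0, 1.0, 2.0, 3.0])  # hypothetical energy levels
target_mean = 1.2                   # constraint: <E> = 1.2

def mean_energy(beta):
    w = np.exp(-beta * E)           # unnormalized Gibbs weights
    return (w @ E) / w.sum()

# Solve for the Lagrange multiplier beta that matches the constraint.
beta = brentq(lambda b: mean_energy(b) - target_mean, -50, 50)
p = np.exp(-beta * E)
p /= p.sum()

print("beta =", beta)
print("p =", p, " mean energy =", p @ E)       # matches target_mean
print("entropy =", -(p * np.log(p)).sum())     # maximal given the constraint
```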

If we pull the rug of mechanism out from under the feet of theory, what are we left with? The physicist John Archibald Wheeler posited the radical answer “its from bits,” by which he meant that information (bits)—and not conventional mechanisms in the form of interacting things moving around in space and time—is the foundation of the physical world (its). There is a strong form of “its from bits,” which in effect states that only bits exist, not its. More reasonable is a weaker form, which asserts that our knowledge of “its” derives from a theory of “bits.”

[...] Mechanistic explanations either lead to an infinite regress of mechanism within mechanism, or to mechanism-less theory, or perhaps to Wheeler’s world with its information-theoretic foundation. What is evident is that as we plunge deeply into the physical sciences, we see mechanism disappear. Yet equally problematic issues arise with statistical theories; we cannot avoid asking about the nature of the processes governing the system that allow a particular statistical theory to be applicable. In fact, when a statistical theory does reliably predict observed patterns, it is natural to seek an underlying set of mechanisms that made the theory work. And when the predictions fail, it is equally natural to examine the pattern of failure and ask whether some mechanism can be invoked to explain the failure. -- John Harte, Maximum Entropy and Ecology, pp. 8-11

September 22, 2014

Orthogonal quotes

"What happens, happens," Carla offered gnomically. "Everything in the Cosmos has to be consistent. All we get to do is talk about it in a way that makes sense to us"

---

[...] imagine the time when wave mechanics powers every machine and everyone takes it for granted. Do you really want them thinking that it fell from the sky, fully formed, when the truth is that they owe their good fortune to the most powerful engine of change in history: people arguing about science?

---

The cosmos is what it is. The laws of optics and mechanics and gravity are simple and elegant and universal... but a detailed description of all the things on which those laws play out seems to be nothing but a set of brute facts that need to be discovered individually. I mean, a 'typical' cosmos, in statistical terms, would be a gas in thermal equilibrium filling the void, with no solid objects at all. There certainly wouldn't be steep entropy gradients. We've only been treating the existence of such gradients as a 'law' because it was the most prominent fact in our lives: time came with an arrow distinguishing the past from the future.

Greg Egan, Orthogonal (books II & III)

September 01, 2014

Primitive man, aware of his helplessness against the forces of Nature but totally ignorant of their causes, would try to compensate for his ignorance by inventing hypotheses about them [...]  For one who has no comprehension of physical law, but is aware of his own consciousness and volition, the natural question to ask is not: "What is causing it?", but rather: "Who is causing it?"  [...] The error [Mind Projection Fallacy] occurs in two complementary forms, which we might indicate thus: (a) My own imagination => Real Property of Nature, and (b) My own ignorance => Nature is indeterminate.

The philosophical difference between conventional probability theory and probability theory as logic is that the former allows only sampling distributions, interprets them as physically real frequencies of "random variables", and rejects the notion of probability of an hypothesis as being meaningless. We take just the opposite position: that the probability of an hypothesis is the fundamental, necessary ingredient in all inference, and the notion of "randomness" is a red herring, at best irrelevant. [...] by "probability theory as logic" we mean nothing more than applying the standard product and sum rules of probability theory to whatever propositions are of interest in our problem.
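
Since "probability theory as logic" is just the product and sum rules applied to propositions, here is a minimal worked instance (the two-urn setup is an illustrative assumption): the sum rule marginalizes over the unknown urn, and the product rule, rearranged as Bayes' theorem, updates the plausibility of each hypothesis:

```python
# Propositions: H1 = "urn 1 was chosen", H2 = "urn 2 was chosen",
#               D  = "the drawn ball is white".
p_H1, p_H2 = 0.5, 0.5     # prior information: either urn equally plausible
p_D_given_H1 = 3 / 4      # urn 1 holds 3 white, 1 black
p_D_given_H2 = 1 / 4      # urn 2 holds 1 white, 3 black

# Sum rule (marginalization): P(D) = P(D|H1)P(H1) + P(D|H2)P(H2)
p_D = p_D_given_H1 * p_H1 + p_D_given_H2 * p_H2

# Product rule rearranged (Bayes): P(H1|D) = P(D|H1)P(H1) / P(D)
p_H1_given_D = p_D_given_H1 * p_H1 / p_D

print(p_D, p_H1_given_D)  # 0.5, 0.75 -- no 'randomness' invoked, only known vs. unknown
```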

We do not seek to explain "statistical behavior" because there is no such thing; what we see in Nature is physical behavior, which does not conflict in any way with deterministic physical law.

[...] as in any other problem of inference, we never ask, "Which quantities are random?" The relevant question is: "Which quantities are known, and which are unknown?"


January 17, 2014

The impossibility of realism

"What we would all like [...] is an understanding of the fundamental processes that govern the Universe, an understanding that is not just useful for calculation but an understanding that is true in some deeper sense. Typically, a scientist sees the latter point as either obvious and important, or else completely irrelevant. I would like to argue that we don’t have a choice; there is some very clear sense in which truth is not what is returned by any finite scientific investigation; all that is returned is plausibilities (some of which become very very high), and those plausibilities relate not directly to the truth of the hypotheses in question, but rather to their use or value in describing the data. 

The fundamental reason scientific investigations can’t obtain literal truth is that no scientific investigator ever has an exhaustive (and mutually exclusive) set of hypotheses. Plausibility calculations are calculations of measure in some space, which for our purposes we can take to be the space formed by the union of every possible set of scientific hypotheses, with their parameters and adjustments set to every possible set of values." -- David Hogg, Is cosmology just a plausibility argument?.

February 18, 2013

Alternatives

Evidence for a model (or belief) must be considered against alternative models. Let me describe a neutral (and very simple) example: assume I say I have Extrasensory Perception (ESP) and tell you that the next die throw will be 1. You throw the die and I am right. That is evidence for my claim of ESP. However, there is an alternative model ('just a lucky guess') that also explains it, and it is much more likely to be the right model (because ESP requires many more assumptions, many of them in conflict with accepted facts and theories). This is a subject of statistical inference. It is crucial to consider the alternatives when we want to put our beliefs to the test.
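
A minimal sketch of that comparison in numbers (the prior odds assigned to ESP are an illustrative assumption; the point is only that one correct call moves the odds by a factor of 6, nowhere near enough):

```python
# Posterior odds for ESP vs. 'lucky guess' after one correct die prediction.
p_data_given_luck = 1 / 6   # chance of naming the throw correctly by luck
p_data_given_esp = 1.0      # ESP taken at face value (most favorable case)

prior_odds_esp = 1e-6       # illustrative: ESP conflicts with accepted physics

# Odds form of Bayes' theorem: posterior odds = Bayes factor x prior odds.
bayes_factor = p_data_given_esp / p_data_given_luck   # = 6
posterior_odds_esp = bayes_factor * prior_odds_esp

print(bayes_factor, posterior_odds_esp)  # 6.0, 6e-06 -> 'lucky guess' still wins
```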

February 11, 2013

Never ending stories

Arguments and topics that treat data as irrelevant become exercises in Aesthetics, which is not intrinsically bad but may turn into a never-ending discussion. On the other hand, I'm not sure we should easily dismiss a good argument that goes against current evidence. Perhaps the evidence is not good enough, and the argument can show new ways to search for new knowledge. Heliocentrism comes to mind.

September 24, 2012

Models

What is a random variable? That’s easy. It’s a measurable function on a probability space. What’s a probability space? Easy too. It’s a measure space such that the measure of the entire space is 1. 

Probability theory avoids defining randomness by working with abstractions like random variables. This is actually a very sensible approach and not mere legerdemain. Mathematicians can prove theorems about probability and leave the interpretation of the results to others. 

As far as applications are concerned, it often doesn’t matter whether something is random in some metaphysical sense. The right question isn’t “is this system random?” but rather “is it useful to model this system as random?” Many systems that no one believes are random can still be profitably modeled as if they were random. 

Probability models are just another class of mathematical models. Modeling deterministic systems using random variables should be no more shocking than, for example, modeling discrete things as continuous. For example, cars come in discrete units, and they certainly are not fluids. But sometimes it’s useful to model the flow of traffic as if it were a fluid. (And sometimes it’s not.) 

Random phenomena are studied using computer simulations. And these simulations rely on random number generators, deterministic programs whose output is considered random for practical purposes. This bothers some people who would prefer a “true” source of randomness. Such concerns are usually misplaced. In most cases, replacing a random number generator with some physical source of randomness would not make a detectable difference. The output of the random number generator might even be higher quality since the measurement of the physical source could introduce a bias. -- John D. Cook
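
The point about generators being deterministic yet "random enough" can be seen directly in a few lines (standard library only; the Monte Carlo estimate of π is an illustrative use):

```python
# A pseudorandom generator is a deterministic program...
import random

a = random.Random(42)  # same seed ...
b = random.Random(42)
print([a.random() for _ in range(3)])
print([b.random() for _ in range(3)])  # ... identical stream: fully deterministic

# ...yet perfectly usable as a model of randomness, e.g. Monte Carlo for pi:
rng = random.Random(123)
n = 100_000
inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
print("pi ~", 4 * inside / n)
```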

March 21, 2012

Impotence

"Se uma pessoa decide ir contra os factos da História e contra os factos da Ciência e da Tecnologia, não há muito que possamos fazer por ela. Na maioria dos casos, sinto apenas pena por termos falhado na sua educação" -- Harrison Schmitt

February 16, 2012

Mind Projection Fallacy

Alfred North Whitehead, in his 1929 book Process and Reality, presented what became known as The Fallacy of Misplaced Concreteness:
neglecting the degree of abstraction involved when an actual entity is considered merely so far as it exemplifies certain categories of thought (pg. 11).
This is a warning against the error of mistaking the abstract for the concrete. The idea goes by several names; perhaps the best known is Alfred Korzybski's 'The map is not the territory'.

E.T. Jaynes called this same problem the Mind Projection Fallacy. In the following text, Jaynes uses it to discuss the interpretations of quantum theory and how the confusion between these two levels -- between ontology and epistemology -- may lie at the origin of the famous 'God does not play dice' disagreement between Einstein and Bohr:

The failure of quantum theorists to distinguish in calculations between several quite different meanings of 'probability', between expectation values and actual values, makes us do things that don't need to be done; and to fail to do things that do need to be done. We fail to distinguish in our verbiage between prediction and measurement. For example, the famous vague phrases: 'It is impossible to specify...'; or 'It is impossible to define...' can be interpreted equally well as statements about prediction or statements about measurement. Thus the demonstrably correct statement that the present formalism cannot predict something becomes perverted into the logically unjustified (and almost certainly false) claim that the experimentalist cannot measure it!

We routinely commit the Mind Projection Fallacy: supposing that creations of our own imagination are real properties of Nature, or that our own ignorance signifies some indecision on the part of Nature. It is then impossible to agree on the proper place of information in physics. This muddying up of the distinction between reality and our knowledge of reality is carried to the point where we find some otherwise rational physicists, on the basis of the Bell inequality experiments, asserting the objective reality of probabilities, while denying the objective reality of atoms! These sloppy habits of language have tricked us into mystical, pre-scientific standards of logic, and leave the meaning of any QM result ambiguous. Yet from decades of trial-and-error we have managed to learn how to calculate with enough art and tact so that we come out with the right numbers!

The main suggestion we wish to make is that how we look at basic probability theory has deep implications for the Bohr-Einstein positions. Only since 1988 has it appeared to the writer that we might be able finally to resolve these matters in the happiest way imaginable: a reconciliation of the views of Bohr and Einstein in which we can see that they were both right in the essentials, but just thinking on different levels.

Einstein's thinking is always on the ontological level traditional in physics; trying to describe the realities of Nature. Bohr's thinking is always on the epistemological level, describing not reality but only our information about reality. The peculiar flavor of his language arises from the absence of all words with any ontological import. J. C. Polkinghorne (1989, pp. 78,79) came independently to this same conclusion about the reason why physicists have such difficulty in reading Bohr. He quotes Bohr as saying:
"There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature."
[...] Bohr would chide both Wigner and Oppenheimer for asking ontological questions, which he held to be illegitimate. Those who, like Einstein (and, up until recently, the present writer) tried to read ontological meaning into Bohr's statements, were quite unable to comprehend his message. This applies not only to his critics but equally to his disciples, who undoubtedly embarrassed Bohr considerably by offering such ontological explanations as "Instantaneous quantum jumps are real physical events." or "The variable is created by the act of measurement.", or the remark of Pauli quoted above, which might be rendered loosely as "Not only are you and I ignorant of x and p; Nature herself does not know what they are."

We disagree strongly with one aspect of Bohr's quoted statement above; in our view, the existence of a real world that was not created in our imagination, and which continues to go about its business according to its own laws, independently of what humans think or do, is the primary experimental fact of all, without which there would be no point to physics or any other science.

The whole purpose of science is to learn what that reality is and what its laws are. On the other hand, we can see in Bohr's statement a very important fact, not sufficiently appreciated by scientists today as a necessary part of that program to learn about reality. Any theory about reality can have no consequences testable by us unless it can also describe what humans can see and know. For example, special relativity theory implies that it is fundamentally impossible for us to have knowledge of any event that lies outside our past light cone. Although our ultimate goal is ontological, the process of achieving that goal necessarily involves the acquisition and processing of human information. This information processing aspect of science has not, in our view, been sufficiently stressed by scientists (including Einstein himself, although we do not think that he would have rejected the idea).

Although Bohr's whole way of thinking was very different from Einstein's, it does not follow that either was wrong. In the writer's present view, all of Einstein's thinking (in particular the EPR argument) remains valid today, when we take into account its ontological purpose and character. But today, when we are beginning to consider the role of information for science in general, it may be useful to note that we are finally taking a step in the epistemological direction that Bohr was trying to point out sixty years ago.

But our present QM formalism is not purely epistemological; it is a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature - all scrambled up by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble. Yet we think that the unscrambling is a prerequisite for any further advance in basic physical theory. For, if we cannot separate the subjective and objective aspects of the formalism, we cannot know what we are talking about; it is just that simple. E.T. Jaynes, Probability in Quantum Theory (1996).

January 30, 2012

Evidence II


This kind of prejudice will occur sooner or later (unless the present world implodes). It will be curious to watch philosophers and scientists contort themselves to deny the growing evidence of conscious behavior on the part of our future Artificial Intelligence. Something similar is happening today in the discussions about free will against the evidence from neuroscience.

January 03, 2012

Emptying

Free will is a philosophically and scientifically sterile concept. Considering any past situation, if we repeated the same state of the world, we would always act in the same way. The existing evidence points overwhelmingly to this possibility. Unless we accept a dualist perspective - e.g., the existence of an immaterial soul - there are no grounds for claiming that the same conditions could give rise to different decisions. And nowadays dualism is considered an epistemologically useless concept, like gods, the aforementioned soul, vitalism, or the élan vital.

It is common, on this topic, to invoke the problem of determinism as a threat to free will. Its defenders therefore try to undermine the claim of determinism, raising doubts and more or less relevant arguments against its existence. Both sides can agree on one point: we will never be certain that the world is deterministic. The notion of certainty is an asymptote, an ideal we aim at as a goal but never quite reach. We will never be certain about some X, but we can, with effort and method, reach a point where we assert X far beyond reasonable doubt. This kind of certainty occurs in the body of knowledge of disciplines such as physics, chemistry, and biology. And most of that knowledge points to a deterministic world. The few cases in classical theory that appear indeterministic, such as naked singularities among other pathological situations, are so far removed from our local universe (assuming they actually exist), and so irrelevant to our daily lives, that they carry no argumentative force in the discussion of human cognition. Potentially local situations, such as the simultaneous collision of three bodies, while not deterministic in the classical context, are events of negligible probability (they correspond to sets of events of measure zero).

The last redoubt of indeterminism seems to be quantum theory. Not in the dynamics resulting from its equations, which is totally deterministic, but in the act of interference or observation. The Copenhagen interpretation is manifestly non-deterministic, but accepting it raises too many problems, and it has been the target of much controversy since its initial formulation. The many-worlds interpretation, on the other hand, holds that nothing more happens beyond the deterministic dynamics of the Schrödinger equation. To explain the apparent non-determinism resulting from the act of observation, this interpretation simply states that we live in only one of multiple branches of the quantum dynamics. The determinism already present in the equations is left untouched, at the cost of sacrificing our ultimate capacity for observation, a perspective less anthropocentric and more rational than the Copenhagen interpretation. But even assuming the Copenhagen interpretation, it is hard to imagine how a non-deterministic event in the subatomic world could rescue a concept so many levels of abstraction above it, like the decision process of a human mind (that is, how does quantum arbitrariness turn into human volition?).

It is also common to argue that the world cannot be deterministic because that would imply the collapse of the notions of freedom and individual responsibility. But this line of argument is a non sequitur. The world is what it is. A threat to civilization, however grave or imminent it may be, has no effect on the nature of the external world. And in any case, these important concepts need not be abandoned:
  • Freedom is the possibility of having more choices available and of acting according to our preferred choice. That does not depend on the world being deterministic: a person has options and acts on them, whether the decision process is deterministic or not (indeed, an excessively non-deterministic process would itself be a threat to stable social rules);
  • Responsibility is a social concept. It is judged morally and socially according to the rules of the society in question. And once again, this does not depend on the ontological nature of the external world. It depends instead on the various models - social, scientific, ethical, religious - that a society shares and uses in its everyday life. It is natural for new knowledge to be incorporated into society and to change its perspective, as with the modern notion of the non-imputability of certain mental patients. But, ultimately, societies must be able to maintain a coherent and sustainable mechanism of individual responsibility, whatever knowledge is acquired. Justice, traditionally the social body that formalizes and manages conflicts of responsibility, is an independent body. It receives help from ethics, science, and logic, but does not depend on them to be grounded and to function.
We have ever better models of the world, of the human brain, and of how the mind works and interacts. Notions such as free will and non-determinism look ever more empty or incoherent. More and more, the philosophical discussions of these topics resemble old theology's discussions about angels or souls. This is just one more field where the progress of human knowledge has emptied the discussions of meaning.

October 11, 2011

Razor

The concept of god includes a method for explaining the natural world (e.g., the creation myths). This method can never be reconciled with current methodologies, generically designated the scientific method. If an event is repeatedly observed that has no possible explanation or prediction within current scientific theories, the only scientific explanation for this fact is to admit that those theories are incomplete and need reform or, rarely, replacement. There is no room to supplement these models with a deus ex machina. This argument is, in essence, based on induction over centuries of accumulated scientific knowledge, in which every event interpreted as magical and mysterious either found a scientific model (e.g., electricity, healing herbs) or was eliminated by controlled tests and experiments (e.g., ghosts, premonition). There are no counterexamples to this trend.

September 15, 2011

Definitions

Concepts such as free will or knowledge have been the subject of philosophical discussion since Ancient Greece. Anyone attempting a definition of one of these topics quickly meets, within the philosophical community, equally solid arguments and counterexamples.

The case of knowledge is exemplary. After countless discussions, the intellectual community seemed to have reached an agreement on the definition of knowledge: person A knows B if A believes B, A has a justification for believing B, and B is true; or simply, something is knowledge if it is a justified true belief. But then 1963 arrives and Edmund Gettier publishes a short article (curiously, his only one) presenting counterexamples convincing enough to put the question of knowledge back on the agenda [1]. Fifty years on and, as far as I know, no new consensus has been reached (should the initial definition be tightened or relaxed? What is justification? And, while we are at it, what is a belief?).

A different kind of effect is the one currently happening with free will. With the scientific advances in the study of the brain and of behavior, we have witnessed confusion about what it means. When evidence is gathered that goes against a definition, there is an effort to adjust the definition until it is no longer testable (most of the time unconsciously, sometimes ideologically motivated, e.g., by the Christian notion of the soul). We might perhaps suggest the following precondition for the definition of free will: "something sufficiently vague about the individual capacity for decision that it cannot be contested by neuroscientists and psychologists". The same seems to happen with other cognitive definitions, such as consciousness or intelligence or even moral behavior, persisting as the last bastion separating us from the rest of the animal kingdom.

There is something elusive about the problem of definitions, about mapping an abstract concept onto multiple concrete situations. I wonder whether, had words like 'sadness' or 'giant' attracted the same attention for some historical or social reason, we would not also be discussing them and publishing articles and books on the subject. While definitions do play a role in resolving epistemological questions [2], taking that role to the limit (of precision, of exhausting the known cases) can have a paralyzing effect on philosophical or scientific discussion.

Let us avoid definitions that are too vague, or excessively restrictive, for the problem at hand. But finding the near-perfect definition, one that covers every possible and imaginable case (like many of the proposed counterexamples), is illusory. We should invest in an appropriate definition, incomplete yes, but with flaws we know, with which we can synchronize the semantic map of the interested community and get work done.

A recent example was the demotion of Pluto from planet to planetoid. The discovery of several trans-Neptunian objects led to the redefinition of planet. Now a planet is an object large enough for its gravity to make it spherical and to be capable of clearing its orbital neighborhood of planetesimals. Does this mean that if we discover objects accompanying the Earth's orbit (and in fact we discovered some in 2010) the Earth ceases to be a planet? Only if we demand the rigidity of an exact definition that serves no purpose, because (almost) nothing is represented by it.

[2] A definition of definition: http://plato.stanford.edu/entries/definitions/


February 25, 2011

Residue

"The third-person methods of the natural sciences suffice to investigate consciousness as completely as any phenomenon in nature can be investigated, without significant residue. What is the import of “significant” here? Simply this: If scientists were to study a single grain of sand, there would always be more that could be discovered about it, no matter how long they worked. The sums of the attractive and repulsive forces between all the subatomic particles composing the atoms composing the grain will always have some residual uncertainty in the last significant digit we have calculated to date, and backtracking the location in space-time of the grain of sand over the eons will lead to a spreading cone of indiscernibility. But our ignorance will not be significant. The principle of diminishing returns applies. My claim is that if we use the third-person methods of science to study human consciousness, whatever residual ignorance we must acknowledge “at the end of the day” will be no more unsettling, no more frustrating or mystifying, than the ignorance that is ineliminable when we study photosynthesis, earthquakes, or grains of sand. In short, no good reasons have been advanced for the popular hypothesis that consciousness is, from the point of view of third-person science, a mystery in a way that other natural phenomena are not" - Daniel Dennett, Sweet Dreams

February 16, 2011

The Emergence of Police

An economy in which people contribute to society (via taxes and fees, for example) but which has no enforcement system allows the invasion of parasitic elements (free-riders) whose strategy is to exploit the common good without collaborating (and thus obtain larger gains). It is therefore natural for punishment schemes to emerge that coerce the members of the community into participating in the joint effort. This happens even in small communities, where there is no central state producing norms and enforcing compliance. Recent laboratory experiments show evidence that even in anonymous interactions (with no reputation and no family networks to motivate and reinforce collaboration) a punishment scheme emerges naturally and successfully enforces the community's average level of collaboration. This line of research sees cooperation as a synonym of altruistic punishment [1,2,3].

There are two basic punishment schemes: revenge (technical term: peer punishment) and police (pool punishment). In a society with few parasites, revenge is economically more efficient (maintaining a police force is expensive) but it is less stable. In most real cases the police end up emerging (example: in the Wild West, one of the first community agreements in a new town was to grant someone the powers of sheriff).

The causes responsible for the emergence of police have also been studied in the laboratory [4,5,6]. In summary: maintaining a police force is costly and is seen by the community as a common good. A citizen who does not behave parasitically but who does not contribute to the effort of maintaining the police comes to be seen as a second-order parasite, on whom it is appropriate to impose a certain kind of punishment. What was observed is that when this second-order punishment is not established, the police tend to dissolve (fewer and fewer people sustain them) and the community shifts to a peer punishment scheme in which punishment by one's own hands becomes the norm. When second-order punishment is imposed on the community, pool punishment becomes the dominant strategy (resulting in a low percentage of first- and second-order parasites).
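
A minimal payoff sketch of the second-order argument (all parameter values are illustrative assumptions, not numbers taken from refs [4,5,6]): without second-order fines, cooperators who dodge the policing cost outscore the pool-punishers, so the pool erodes; with them, sustaining the pool pays.

```python
# Per-round payoffs in a public-goods game with a punishment pool.
r, c = 1.6, 1.0    # pot multiplier and contribution (illustrative)
g = 0.3            # cost of paying into the punishment pool
f1, f2 = 1.5, 1.0  # fines on 1st-order and 2nd-order free-riders

def payoffs(n_coop, n_def, n_pun, second_order):
    """Cooperators pay c only; pool-punishers pay c and g; defectors pay nothing.
    If second_order is True, cooperators are also fined for not funding the pool."""
    n = n_coop + n_def + n_pun
    share = r * c * (n_coop + n_pun) / n  # everyone's share of the common pot
    defector = share - (f1 if n_pun > 0 else 0)
    cooperator = share - c - (f2 if second_order and n_pun > 0 else 0)
    punisher = share - c - g
    return cooperator, defector, punisher

print(payoffs(5, 3, 2, second_order=False))  # cooperator beats punisher: pool erodes
print(payoffs(5, 3, 2, second_order=True))   # punisher beats both: pool is stable
```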

Refs:

February 02, 2011

"[...] many statistical results from scientific studies that showed great significance early in the analysis are less and less robust in later studies. For instance, a pharmaceutical company may release a new drug with great fanfare that showed extremely promising results in clinical trials, and then later, when numbers from its use in the general public trickle back, shows much smaller effects. Or a scientific observation of mate choice in swallows may first show a clear preference for symmetry, but as time passes and more species are examined or the same species is re-examined, the effect seems to fade.

This isn't surprising at all. It's what we expect, and there are many very good reasons for the shift.

  • Regression to the mean: As the number of data points increases, we expect the average values to regress to the true mean…and since often the initial work is done on the basis of promising early results, we expect more data to even out a fortuitously significant early outcome.

  • The file drawer effect: Results that are not significant are hard to publish, and end up stashed away in a cabinet. However, as a result becomes established, contrary results become more interesting and publishable.

  • Investigator bias: It's difficult to maintain scientific dispassion. We'd all love to see our hypotheses validated, so we tend to consciously or unconsciously select results that favor our views.

  • Commercial bias: Drug companies want to make money. They can make money off a placebo if there is some statistical support for it; there is certainly a bias towards exploiting statistical outliers for profit.

  • Population variance: Success in a well-defined subset of the population may lead to a bit of creep: if the drug helps this group with well-defined symptoms, maybe we should try it on this other group with marginal symptoms. And it doesn't…but those numbers will still be used in estimating its overall efficacy.

  • Simple chance: This is a hard one to get across to people, I've found. But if something is significant at the p=0.05 level, that still means that 1 in 20 experiments with a completely useless drug will still exhibit a significant effect.

  • Statistical fishing: I hate this one, and I see it all the time. The planned experiment revealed no significant results, so the data is pored over and any significant correlation is seized upon and published as if it was intended. See previous explanation. If the data set is complex enough, you'll always find a correlation somewhere, purely by chance.

[...] Yes, science is hard. Especially when you are dealing with extremely complex phenomena with multiple variables, it can be extremely difficult to demonstrate the validity of a hypothesis (I detest the word "prove" in science, which we don't do, and we know it; Lehrer should, too). What the decline effect demonstrates, when it occurs, is that just maybe the original hypothesis was wrong." P.Z. Myers, Science is not Dead, blog Pharyngula.
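
Two of the mechanisms listed above, simple chance and the file drawer effect, are already enough to manufacture a decline effect, and a short simulation shows it (SciPy assumed; the sample sizes are illustrative, and the simulated drug has zero true effect):

```python
# Decline effect from pure chance + a publication filter (true effect = 0).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_labs, n_subjects = 1000, 30

def experiment():
    treated = rng.normal(0.0, 1.0, n_subjects)  # the drug does nothing
    control = rng.normal(0.0, 1.0, n_subjects)
    _, p = stats.ttest_ind(treated, control)
    return treated.mean() - control.mean(), p

first_round = [experiment() for _ in range(n_labs)]
published = [eff for eff, p in first_round if p < 0.05 and eff > 0]  # file drawer

print(len(published), "of", n_labs, "null studies got 'published'")
print("mean published effect:", np.mean(published))  # spuriously large
print("mean replication effect:",
      np.mean([experiment()[0] for _ in published]))  # regresses toward 0
```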

January 04, 2011

Ontology

"[...] in our view, the existence of a real world that was not created in our imagination, and which continues to go about its business according to its own laws, independently of what humans think or do, is the primary experimental fact of all, without which there would be no point to physics or any other science. The whole purpose of science is learn what that reality is and what its laws are. [...] Any theory about reality can have no consequences testable by us unless it can also describe what humans can see and know. For example, special relativity theory implies that it is fundamentally impossible for us to have knowledge of any event that lies outside our past light cone. Although our ultimate goal is ontological, the process of achieving that goal necessarily involves the acquisition and processing of human information. This information processing aspect of science has not, in our view, been sufficiently stressed by scientists. [...] We suggest that the proper tool for incorporating human information into science is simply probability theory and not the currently taught "random variable" kind, but the original llogical inference" kind of James Bernoulli and Laplace. For historical reasons [...] this is often called "Bayesian inference". When supplemented by the notion of information entropy, this becomes a mathematical tool for scientific reasoning of such power and versatility that we think it will require Centuries to explore all its capabilities." E.T. Jaynes, Probability in Quantum Theory (1996)