
“Conscientious Thinking and the Modern Sciences”

                                                            

by David Bosworth  

Raritan (Volume 33, Number 3, Winter 2014)

 

A kind of post-modern reasoning has been reforming our sciences, with consequences we have yet to imagine. Despite the diversity of the fields involved, many of these changes, which include dramatic proposals in cosmology and a stunning revival of Lamarckian principles in evolutionary biology, are inherently like-minded. And although they do represent what Thomas Kuhn long ago labeled a “paradigm shift” in professional understanding, they also reflect a much broader reordering of cultural common sense. More than just signs of a significant revision in the preferential logic of natural philosophy (as science used to be called), they mark the further articulation of an emerging worldview, one that has been challenging the social presumptions of modernity as well.

Such radical revisions in commonsense thinking are inherently controversial, and given the multiple and often messy meanings of the terms in play, an immediate clarification is required. When using the words modern or modernity, I am referring to the broad revolution in Western thought that led to both the scientific and industrial revolutions—at its best, politically, to a revival of democracy; at its worst, to the utopian delusions that fueled mass slaughter. I am not citing the arts movement of the early twentieth century commonly called modernism (some of whose methods ironically prefigure postmodern techniques); nor will I use these words as a loose signifier for anything that is contemporary. Because the logic of modernity has calibrated America’s core institutions, it remains powerfully present in all our lives. But not only is that logic now in decline, as exemplified perhaps by the “expert” policies that generated our economy’s recent collapse, it is also being relentlessly contested by a surging set of post-modern methods, measures, and values.

Post-modern is itself, of course, a highly problematic term, much associated with and tainted by our culture wars. Here, aiming to escape those partisan associations by heightening the most literal meaning of the word, I have employed a hyphenated spelling. Worldviews do pass. Whether for good or for ill, we have been moving away from many of the presumptions of our foundational past as the world’s first truly modern society, caught up instead in an ambiguous transition now a century in the making. We are indisputably postmodern in the sense that we are living after the height of the modern era, but without possessing as yet a coherent consensus as to what ought to replace it. And nowhere is that shift more telling than in the very sciences whose achievements have been a hallmark of the modern mind.

The defining features of modern consciousness and culture began to cohere in the seventeenth century as a radical alternative to a medieval worldview long in decline. Their appearance followed from technological advances in both expert measurement and everyday knowledge, including the telescope, the microscope, and, especially, the information explosion ignited by the printing press and the spread of literacy. The logic of the phonetic alphabet, much accentuated by the psychological impact of silent reading, favored the mental habits of isolation, abstraction, and specialization; and the alphabetic strategy of “atomizing” language into its simplest parts soon led to similar theories about the underlying order of both the physical and social worlds.

The subsequent rise of skepticism (rational thought methodically segregated from emotional affiliations and traditional beliefs, as codified in Descartes’s Discourse on Method) and individualism (a new emphasis on the social atom over the community, and even the family, as the primary unit of human experience) proved deeply disturbing in both the psychological and political senses. By the mid-seventeenth century, the default logic of Western culture was under assault on all sides by the aggressive dissections of the modernizing mind. New “scientific” theories and entrepreneurial ambitions were advanced only to be squelched by ecclesiastical censors and royal fiats, and this tense rhythm of dissension and repression soon exploded into civil wars as religious schisms gave way to sectarian violence, and political revolutions erupted across the continent and England.

Only after many years of turmoil did a new cultural order evolve: a liberal modernity that, even as it endorsed individual rights, intellectual specialization, a scientific outlook, an aesthetic perspective, and an economy of entrepreneurial free agents, found the means to discipline them all with the checks and balances of new duties, standards, expressive forms, political constitutions, and contractual laws. With its fitful but profound recalibration of commonsense thinking and customary practices, this is the historical period that best illuminates the challenges we currently face as we struggle with our own transition between two sets of ruling beliefs. Only now, it is modern logic itself that is under assault on all sides by a significant shift in the prevailing patterns of everyday experience, as those patterns are being inscribed by the post-modern machinery of our digital era.

We do “make sense” through the evidence of our senses, and our radically upgraded technological devices have changed the ways that evidence is both arrayed and conveyed. Accelerating a process that began long ago with the first electronic media, our daily use of desktops, laptops, mobile phones, and tablets with their search engines, shareware, and wiki-empowered social networking sites has been revising our default expectations as to what seems natural, right, and pleasing to behold in ways that challenge (without necessarily replacing) long-standing beliefs about the true, the good, and the beautiful. Such a shift can be traced in social relations as the value of stoic reticence has given way to the importance of “sharing,” and in the visual arts as the singular, fixed, and “immortal” object (a sculpture by Rodin) has been supplemented by environmental installations and here-then-gone staged events (the effigies of Burning Man).

In a deliberate play on science, I have been calling these post-modern ways of assessing the world conscientious thinking. The etymology of that first word—marrying a prefix con, meaning with or together, to a root scientia, meaning knowledge—clarifies the kind of reasoning it describes and how that reasoning contrasts with the preferred operations of the modern mind. Where the heightened attention of modern science strove to know the world through isolation and specialization, post-modern thinking now aims to know with. It stresses con-scientious measures that can account for the “togetherness” of experience, naturally preferring relation over isolation, hybridity over purity, and the authority of consensus over individual genius. Collaborative, interdisciplinary, multisensory, multicultural: in various ways, the conscientious mind strives to marry thought with feeling, the sciences with the arts, the visual with the auditory, the familiar with the foreign, the present with the past, this medium or genre with that. Where the primary metaphor of the modern mind was the atom, the post-modern mind has widened our attention to whole fields of interactive effects.

This ongoing shift in the organizing grammar of authoritative thought can be traced in every discipline from art and anthropology to physics and genetics, and the conflicts it has spurred have not been limited to the spheres and feuds of intellectual pursuit. The contention between scientific and conscientious (modern and post-modern) modes of reasoning has erupted in nearly every precinct of American life, challenging and often changing the ways classrooms are run, friends relate, money is made, crimes are committed, mates are chosen, children are raised, and the dead are memorialized. Our default conceptions of space and time and of the proper relationship between self and society have been shifting at an ever-accelerating rate. (To provide an example of the latter: the handwritten personal diary is being supplanted today by the blog, podcast, and Facebook page, the introspective self of literate modernity giving way to the publicized self of digital interconnectivity.) Although attachment to older forms persists and many questions remain about the mature shape that the newer forms may take, this shift in emphasis away from atom, text, and figure toward field, context, and ground—from, broadly speaking, the logic of anatomy to the logic of ecology when mapping the world—now appears to be irreversible: a conclusion that I am asserting here more as an anthropological observation than an ideological endorsement.

The new emphasis on knowledge of and through “togetherness” has been accompanied by an even more contentious trend: a renewed insistence on the inherent fallibility of human reasoning, even when practiced at its very best. This insistence on the fallibility of our thought has two related dimensions: first, a profound suspicion that, contrary to the utopian presumptions of Cartesian philosophy, we will never “know it all,” never finally solve the mystery of our place and so, too, never fully control our destiny here; and second, a recognition that the very objectivity that the modern mind idealized as the means for achieving such final knowledge is far more fragile than first believed. Although substantial material progress has been made under the reign of modern thought, human imperfection hasn’t been revoked. As we shall soon see, even in rigorous laboratory research, both motives and methods can torque experimental results, calling into question how and even whether objectivity can be achieved. To marry Shakespeare to Marshall McLuhan: it is still frequently the case that “reason panders will,” and “the medium” of our thought can indeed unconsciously prefigure the likely shape of its final “message.”

Although these broader themes of conscientious thinking, its tilt toward togetherness and toward admitting fallibility, have been in play for a century now, modernity’s ruling common sense has not gone gently into the good night of cultural obsolescence. Just as a premodern allegiance to the value of collectivity and humility survived in pockets throughout the modern era, only to be revived in these new and different forms, the atomistic values and cultural construction of modernity will surely persist for many years to come. What the following survey does suggest, however, is the degree to which conscientious thinking has already been adopted, often without acknowledgment, by mainstream science.

 Physics: The modern presumption that the completion of knowledge was not only possible but near at hand reached its cultural apex at the end of the nineteenth century, only to be challenged in the ensuing three decades by a series of transformative discoveries in anthropology, physics, literature, and the arts. Artistic truths and mathematical ones cannot equate, of course, but they can be metaphorically alike. The use of multiple points of view in literary fiction and multiple perspectives in sculpture and painting; the cataloguing of multiple tribal beliefs, each with its own interpretation of the human experience; the double blow to the clockwork universe delivered by general relativity and quantum mechanics: progress in each of these fields may have been following its own developmental logic, but the changes they expressed were also profoundly akin. Each was, in its own way, calling into question the notion of fixed and final truths that, following Descartes, had calibrated the core of the scientific worldview, and their near-simultaneous emergence in the early twentieth century marked the real birth of the post-modern turn toward conscientious thinking.

The revolution in physics, for example, defied the expectations of modern reasoning in multiple ways. The strict segregation of categories inherent in Newton’s clockwork universe was undermined when Einstein proposed, and later experiments seemed to prove, that space and time themselves were interrelated, existing in a kind of dynamic continuum. Today, their inherent “togetherness” is so accepted within the field that the words themselves are commonly fused, practitioners exploring and debating the fundamental features of spacetime. Meanwhile, as physicists began to study the subatomic world, the order and behavior of material reality appeared even more alien to modern expectations. When probed with the most sensitive instruments, the structure of light became bizarrely ambivalent in the literal sense, conscientiously exhibiting qualities of both a particle and a wave. Stranger still, subatomic entities could occupy multiple places at the same time, a condition called superposition, and they could influence each other instantly from afar, exhibiting a “spooky” togetherness (Einstein’s adjective) called entanglement that defied both the logic of mechanical causality and the supposedly inviolable limit imposed by the speed of light.

The loose use of quantum mechanics’ uncertainty principle by nonspecialists like myself often irritates physicists today, especially when it is hijacked by those who are striving to save an unplumbable space for their version of the divine. But separate from the low comedy of our recent god wars, with their dueling bumper stickers, strident court cases, and best-selling polemics largely illiterate in the reasoning of the positions they scorn, the claims first made by Werner Heisenberg in the 1920s were symbolically important. They signified an honorable moment in our intellectual history when, through the rigorous use of its own best tools, modern science rediscovered the reality of limits: when it first began to know that it couldn’t “know it all”—at least not with the absolute certainty and pure objectivity it had previously presumed. Because, at the subatomic level, the act of observation became a form of participation that changed the shape of the measurable world, our assessment of the smallest units of reality couldn’t be fixed and final, after all. Each experimental protocol would influence that experiment’s results: the way we chose to study reality would predetermine the nature of reality itself.
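
For readers who want Heisenberg’s claim in its standard quantitative form, the uncertainty relation is usually written as a lower bound on the product of two measurement uncertainties; the notation below is the textbook convention, supplied here for illustration rather than drawn from the essay itself:

\[
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
\]

where \(\Delta x\) is the uncertainty in a particle’s position, \(\Delta p\) the uncertainty in its momentum, and \(\hbar\) Planck’s constant divided by \(2\pi\). However precisely one quantity is pinned down, the other cannot be known beyond this limit, at least not simultaneously.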

Unlike, say, the discovery of nuclear fission in 1938, Heisenberg’s claims had little immediate social impact: no war-winning weapons or transformative consumer products were fashioned from their view of the subatomic world; they didn’t change the course of elections or revise the moral management of domestic households. Yet for science itself, the implications of the uncertainty principle were profound. If true, it fatally undermined the modern presumption that the completion of knowledge could be achieved through the cumulative addition of small-grained certainties: as a route to finality in natural philosophy, atomism was now discredited. In the realm of elementary particles, future events might be probable but they couldn’t be certain, and it was at just this point, the reintroduction of uncertainty into natural philosophy, that the new logos of theoretical physics began to resemble the old mythos expressing the tragic sense of life. Its post-modern turn away from some of the core conceptions of the Cartesian epistemology was also a pivoting toward pre-modern notions about the inherent limits of human knowledge.

Other notable thinkers, including William James and Henri Bergson, had challenged those same core conceptions, but Heisenberg’s principle was based on experimental results in modern science’s most prestigious field at the time, theoretical physics, which made its apparent apostasy all the more disturbing. For that reason, most scientists chose to treat the uncertainty principle as a temporary mystery, an incompletion to be fixed by a more inclusive calculation, a new “theory of everything” that would not only fulfill the old promise of perfection and completion but fulfill it on familiar terms. Like the original atomists, many physicists continued to search for that one elementary unit out of whose innumerable pieces the whole of reality was presumed to be constructed. With all the recalcitrance of an unconscious habit, the atomizing strategy of the phonetic alphabet was still being projected onto the material world.

Yet, so far at least, the material world has stubbornly resisted the simplicity of that model. Through the use of ever-more sophisticated instruments, the ongoing search for that one elementary unit has uncovered instead a diverse zoo of subatomic entities: quarks, leptons, and bosons, with their various charges and forms of “charm.” All this hard-won data keeps complicating the task of perfecting a final map, defying the sort of elegant formulations that modern science prefers. Attempts to corral its diversity have led instead to exotic proposals, including the “multiverse” (the existence of multiple parallel universes) and the current reigning version of superstring theory, which requires a material world with ten spatial dimensions rather than three.

The messiness of these theories and the suspicion that their claims can never be tested experimentally have now led to a conceptual revolt against the goal itself. The theoretical biologist Stuart Kauffman, the physicists Andreas Albrecht and Lee Smolin, and the philosopher Roberto Unger have all suggested in their own way that the pursuit of a “theory of everything” is fundamentally misconceived. They argue that the modern presumption that we can locate immutable laws, true everywhere and forever, is itself mistaken, and both Smolin and Kauffman suggest that the physical order of the universe may be evolving over time, just as the biological order does. Even Stephen Hawking, the most influential physicist of the age, conceded in 2010 that no single theory of the universe is ever likely to hold true for all places and times. Whether through the superstring approach or something else, the best we are likely to achieve is “a family of interconnected theories, each describing its own version of reality.” In cosmology, then, as in pedagogy, “multilogue” has been supplanting monologue and metamorphic history challenging universal theory as the better models for authoritative thought. None of these challenges to the ruling orthodoxy is arguing that the universe is utterly unknowable, or that all theories are equally true; the implication instead is that even the best accounts produced by us can only be partial in both senses of that word: incomplete and necessarily slanted by whatever method we choose to use.

Such revisions are telling, but because discoveries in the subatomic and telescopic realms have little immediate relevance to our everyday lives, one has a hard time making a convincing case for conscientious thinking based on the findings of physics alone. Horrible as it may be to admit, the influence of that field on contemporary consciousness and culture peaked with the invention of atomic weaponry during and after World War II. Since then, with the discovery of DNA, multiple hominid fossil finds, and the success of medical research in prolonging our lives, the capacity for social influence and the prestige that goes with it have clearly shifted to the biological sciences. Here, too, though, the characteristic elements of conscientious reasoning have been revising the prevailing thinking for years. As the primary metaphor in physics has been shifting from atom to field, the emphasis in biology has been flipping from anatomy to ecology. Wherever one turns, the same trends seem to emerge. Just as “to know” is to “know with” in our post-modern era, one can’t study biology today without thinking sym-biotically.

Evolution: Shifts within evolutionary theory are illustrative of this change. Darwin’s The Origin of Species (1859) was published in a Victorian England where modernity’s logic ruled supreme. Reflecting the cultural common sense of the day, most of the theory’s early proponents—including Herbert Spencer, who coined the phrase “the survival of the fittest”—reconceived nature as the marketplace writ large, interpreting life as an agonistic struggle between biological atoms that competed for resources and the chance to reproduce. Over the last forty years, however, that stress on atomistic competition (nature “red in tooth and claw,” as envisioned by Tennyson) has been offset by a dawning recognition of the importance of cooperation in natural selection: how fitness could be achieved through strategies of togetherness. Closing the gap between human and nonhuman life, scientists were discovering that other species—not just the great apes but also dolphins, whales, wolves, and crows—were impressively intelligent (in the adaptationist ways that biologists define that word), and that the intelligence of a species was strongly correlated with its sociability. Individual competition in breeding, for example, often seemed to coexist with astute cooperation in hunting, feeding, and self-defense. Even breeding could be a collective enterprise, as, within their herds and prides, female elephants and lions were observed sharing their maternal duties.

Socialization implied intraspecies communication, which investigators, with the aid of more powerful tools, were finding to be far more prevalent than previously imagined. Like birds, whales were found to have songs that vary by group, and whale song can carry for hundreds of nautical miles. In a kind of choreographic mapping, honeybees “dance” to convey to their hive-mates where flowers can be found, and when attacked by insects, mute and stationary plants emit chemical signals that warn others of their kind to take defensive measures. Even the most primitive forms of life, those supposedly solitary microbes that constitute half of the planet’s biomass and most of its diversity, have now turned out to be avid communicators. Conversing through a biochemical process called quorum sensing, bacteria are constantly taking a census of their surrounding area, and when sufficient numbers of their kind are found to be present (when a quorum is reached), group action is taken. As with far more sophisticated animals, microbial communication leads to social cooperation, generating coordinated shifts in collective behavior that range from lighting up the ocean with bioluminescence to releasing toxins into the bloodstream of an infected host.
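
The logic of quorum sensing can be caricatured as a simple threshold rule: every cell secretes a signaling molecule, the shared concentration of that molecule serves as a running census, and a collective behavior (bioluminescence, say) switches on only once the census implies enough neighbors. The sketch below is a deliberately minimal illustration of that rule, with invented numbers rather than anything drawn from the microbiology.

```python
# Toy caricature of quorum sensing: each bacterium secretes a signaling
# molecule (an autoinducer); the shared concentration acts as a census,
# and a group behavior switches on only once enough neighbors are present.
# All numbers are invented for illustration.

SIGNAL_PER_CELL = 1.0     # arbitrary units of autoinducer secreted per cell
QUORUM_THRESHOLD = 100.0  # concentration at which the group behavior switches on

def signal_concentration(population: int) -> float:
    """Total autoinducer in the shared environment (no decay, for simplicity)."""
    return population * SIGNAL_PER_CELL

def group_behavior_active(population: int) -> bool:
    """Each cell 'votes' by secreting; the census is read as a concentration."""
    return signal_concentration(population) >= QUORUM_THRESHOLD

if __name__ == "__main__":
    for n in (10, 50, 99, 100, 500):
        state = "ON" if group_behavior_active(n) else "off"
        print(f"population {n:4d}: bioluminescence {state}")
```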

Medicine: The realization that even the unicellular microbe (the atom of all life forms) is interactively communicating to explore its environment and to direct group behavior has suggested an entirely new approach to the pharmaceutical treatment of infectious diseases: one that seeks to block or confuse the quorum-sensing capabilities of pathogens. A similar set of findings has been revolutionizing our understanding of cancer. In line with the atomistic biases of the modern mindset, the prevailing model in the past for both the etiology of the disease and its treatment had been “tumor-centric.” The initial presumption had been that a single cell had gone awry and, further, that one or two damaged genes had caused it to multiply uncontrollably. Following that same logic, it was presumed at first that, with the decoding of the human genome, a point-to-point map linking most cancers to a particular mutation could be drawn, leading eventually to gene-based therapies.

All of these presumptions have proven to be overly simplistic, highlighting again the limits of the modern mindset’s atomistic approach to comprehending reality. At the genetic level, relatively few forms of cancer have demonstrated such a clear-cut causality, with a single mutation automatically initiating the disease. The danger of most genetic errors now appears to be more probabilistic than deterministic. And not only does an intracellular chemistry far more complex than mutation alone affect the cancerous expression of genetic misinformation; evidence now exists—in the words of Joseph H. Nadeau, director of scientific development at the Institute for Systems Biology—that “the function of one particular gene sometimes depends on the specific constellation of genetic variations surrounding it.” Malignancy itself may depend on particular varieties of pathological “togetherness.”

A like-minded shift in emphasis from atom to field has been occurring on the physiological level. Tiny tumors are proving to be more common than was formerly understood, which means that, in normal circumstances, the body must possess the biochemical means to contain them. These findings suggest that some cancers only spread when those as yet unknown protective processes fail, and that notion, in turn, implies a broader definition of the disease itself, one that includes not just mutating genes and a malfunctioning cell but also the overall health of the neighboring tissue. Or, to rephrase Nadeau’s observation: the mass multiplication of any particular tumor “sometimes depends on the specific constellation of physiological variations surrounding it.” That constellation may include physical injuries, infections, chronic inflammation, or the cumulative damage caused by aging, and because some of the most common current treatments, such as mastectomies and even biopsies, can generate injury and inflammation on their own, these findings challenge standard practices in troubling ways even as they suggest a whole new arena for therapeutic investigation.

Lamarckian Genetics: In all of the fields examined so far, a serious reorientation has been occurring, and this shift follows the logic of the post-modern turn. A prophetic observation made in 1927 by astrophysicist Sir Arthur Eddington—that “secondary physics” would prove to be “the study of and”—is now also proving true for the “secondary” phases of evolution, microbiology, and medicine as each of these sciences has enriched its understanding of the living world by turning its attention from text to context, from atomistic incident to interactive environment. But as with physics, this ongoing transformation has been fitful and often fiercely contested. The conflict between the two common senses, an established but declining modern logic and an insurgent but still evolving conscientious reasoning, has been reenacted in field after field.

In evolutionary genetics, the debate between the neo-Darwinian advocates of a competitive atomism (such as Richard Dawkins’s “selfish gene” theory) and the new proponents of togetherness, who tend to emphasize kin and group selection, has been passionate, their highly technical arguments at times even charged with political insinuations. In cancer research, the financial as well as psychological investment in atomistic reasoning led to a self-interested intransigence that has only recently abated. In 2008 Dr. Mina Bissell, an early advocate of the importance of a tumor’s physiological environment, won the prestigious Excellence in Science award, with the award committee praising her for creating a paradigm shift in our fundamental understanding of the disease. But back in 1984 when she handed one of her early papers to a leading researcher in the field, his immediate response was to drop it into a wastebasket.

But no idea had been more ridiculed, no stance had seemed more dead, than the claim by the early-nineteenth-century French biologist Jean-Baptiste Lamarck that the features acquired through experience by one generation could be passed on to the next—a claim that, during the Cold War, seemed permanently tainted by its association with Soviet science and its faith in the creation of a new “Soviet Man.” To even entertain the possibility that acquired characteristics might be heritable was a career-killer—the equivalent of professing an ongoing belief in the existence of phlogiston or spontaneous generation. According to the reigning orthodoxy, lasting changes in heritable features are solely the result of random mutations in a species’ genes as those mutations are proven to be conducive to survival over multiple generations. Evolutionary change, therefore, tends to be glacially slow and the life experiences of any single parent largely irrelevant to the physical heritage he or she passes on. Both simple and comprehensive, this theory does possess the hallmark elegance that modern science prefers. Unfortunately for its many adherents, who had tossed Lamarck’s competing thesis into history’s wastebasket, it now also appears to be wrong.

Epidemiological studies of a small town in Sweden, conducted by Marcus Pembrey and others, have demonstrated that the life expectancy of villagers has been affected in statistically significant ways by the life experiences of their parents and grandparents. Boys whose grandfathers had suffered food shortages between the ages of eight and twelve were themselves likely to die sooner. If a woman experienced a similar shortage when very young, then her sons’ daughters were also more likely to die at an earlier age. And in a finding with scary relevance to an America now afflicted with an epidemic of obesity, if a Swedish man overate as a child, his sons were four times more likely to develop diabetes and were more susceptible to heart disease as well.

Important lab work has uncovered a likely physiological source of these remarkably Lamarckian results. Genetic expression, it turns out, is profoundly affected by a biochemical system of on-off switches that determine which of our 25,000 protein-encoding genes are active and which are not. And because the pattern of these epigenetic switches can be both changed by life experiences and, it now seems clear, passed on through a parent’s egg or sperm, the behavior of a single generation, and the material circumstances of its bad or good luck, can indeed change the probable health of its offspring. Michael Skinner, a pioneer in the field, has shown that exposure to a commonly used fungicide causes epigenetic shifts that are passed on through at least four generations of rats to the detriment of their reproductive health. In a study that may explain the behavior-based heritability of diabetes, a group of Australian scientists discovered that the pancreases of female rats whose fathers had been deliberately overfed contained 642 epigenetic switches in the wrong position. Meanwhile, Nadeau’s Institute for Systems Biology has tracked over one hundred intracellular or behavioral traits that are affected by epigenetic change.

We have only begun to map the extent, pace, and durability of these non-mutational transformations in genetic expression, much less to consider their implications for public health policy. But their stunning ratification of a position long ridiculed by modern science provides one more example of the ongoing shift toward a more conscientious conception of reality. Following modernity’s atomistic biases, as modeled in the extreme by the ideas of Richard Dawkins, the neo-Darwinian orthodoxy has assumed that heritability was a process defined and determined by its smallest pieces. Just as modern cancer research had been tumor-centric, the orthodox conception of biological inheritance reductively focuses on the individual gene as a replication machine, on its “selfish” drive to persist in time, and on evolutionary change as the result of random errors in replication that just so happen to provide a survival advantage.

But even the gene, it now seems, to recall a famous line by John Donne, is less an independent “island” than an integral part of a larger and multidimensional “Main.” The boundaries of our genetic identities are proving to be far more porous than neo-Darwinian theory allows; the benefits and dangers of the macro-environment, as directed in part by our own behavior, do seep through to interact with intracellular processes, causing durable changes in biochemical, physiological, and behavioral traits. In the words of one researcher, these discoveries and others are now “vexing” the orthodox definition of a gene “with multiple layers of complexity.”

A medical researcher long ago supplied a metaphor that might apply to all these fields that are now breaking away from the modern model toward a more conscientious understanding of the biological order. Arguing that malignancy was not the result of a rogue cell but a disorder of cellular organization, Dr. D. W. Smithers wrote, “Cancer is no more a disease of cells than a traffic jam is a disease of cars. A lifetime of study of the internal combustion engine would not help anyone understand our traffic problems.” The year was 1962 and although Smithers’s article appeared in The Lancet, Britain’s most prestigious medical journal, it might as well have been thrown into a wastebasket. Thanks to a three-hundred-year-old predisposition to frame every problem in atomistic terms, “a lifetime of study” in the field of cancer research may have been tragically misdirected by focusing largely on the cellular “engine.”

Symbiosis: The “traffic” of interactivity and the multiple layers of coordination that distinguish biological systems from the linear causality of clockwork mechanisms have had a powerful impact on evolution itself, and cooperation between species as well as within them is proving to be an indispensable feature of life’s innate togetherness. As Bonnie Bassler and others have demonstrated, the lowly bacterium can not only converse with its kind; it is often bilingual, possessing one chemical language for its own species and a second, more universal one to communicate with many other microorganisms. This expanded capacity to signal chemically can result in complex forms of coordinated activity between multiple species to their mutual benefit, allowing, for example, some six hundred varieties of bacteria to organize themselves into dental plaque.

Other examples of evolutionary interdependency are evident in every ecological niche. Skyscraping trees exchange life-sustaining nutriments with subterranean fungi; ants protect aphids and are fed in turn by their sugary secretions; both termites and cows provide protective environments for microorganisms that then assist their hosts in absorbing food; nectar attracts the birds and bees whose anatomies have been exquisitely shaped not only to feed on but to facilitate the reproduction of flowering plants. There are at least ten times as many microbes in or on a human body as there are human cells, and many of those are vital to our survival, protecting us from ultraviolet rays, aiding in our digestion, producing the crucial vitamins K and B12.

Not all these relationships are beneficial to both partners. Symbiosis—from the Greek sumbiosis, “companionship”; from sumbioun, “to live together”—is a broad category that includes four kinds of co-evolving relationships: the predatory and the parasitic (where one species benefits while harming the other), the commensal (one benefiting while the other is unaffected), and the mutual (both benefiting, as in all the examples cited above). But whether beneficial or harmful, mutual or one-sided, symbiosis insists that evolution can only be fully understood by studying the interactive relationships between species: that to live at all is to live with, and that to live with is to be changed by one’s biological neighbors in meaningful ways.

Further “vexing” evolutionary science’s initial emphasis on one-to-one relationships “with multiple layers of complexity,” the field of ecology has expanded the concept of symbiosis to include the whole host of species living in any particular area along with key elements of their physical environment. And the Gaia hypothesis has completed this post-modern turn toward holistic thinking by redefining Earth itself as a single interactive system: a kind of über-organism that homeostatically maintains a range of conditions (temperature, salinity, solar radiation) hospitable to all the life forms that have evolved within its biosphere.

The Scientific Method: Finally, in a development with disturbing implications for many fields that returns us to the issue of intellectual fallibility: the reliability of mainstream research has now been called into question. These new concerns are not just limited to the corruption that occurs when corporations are the primary sponsors of research, as has long been the case with the testing of new pharmaceuticals. Even peer-reviewed studies untainted by corporate moneys and published in the most prestigious journals have been proving unreliable. All too frequently, the results of highly influential papers have either been weakened or contradicted by later studies. For those of us attempting to improve our health by following the latest research results, this herky-jerky rhythm of assertion and then reversal will seem all too familiar. Claims about the benefits of vitamin E, oat bran, bypass surgery, mammograms, daily aspirin doses, and hormonal supplements—to mention just a few of the more publicized examples—have been announced and applied only to be seriously challenged or officially reversed upon further study.

This anecdotal sense of the unreliability of medical research has recently received the imprimatur of science itself. The widely respected Stanford epidemiologist John Ioannidis has tracked research claims in the biomedical field with shocking results that are bluntly summed up in the title of his best-known paper: “Why Most Published Research Findings Are False.” In one analytical exercise, Ioannidis and his team examined the forty-nine most cited studies in the three major medical journals. Most were randomized controlled trials that met the highest methodological standards, yet of those initial claims that were subjected to further study, 41 percent were either directly contradicted or had the size of their correlations significantly reduced. And given that the majority of published studies are not randomized controlled trials, the numbers in general are far worse. Ioannidis has been quoted as believing that “as much as 90 percent of the published medical information that doctors rely on is flawed” in some way.
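
The statistical argument compressed into that title can be sketched with a back-of-the-envelope calculation; the numbers below are illustrative assumptions, not figures from Ioannidis’s papers. Suppose that only one in twenty of the hypotheses a field tests is actually true, that studies accept the conventional 5 percent false-positive rate, and that they have 80 percent power to detect real effects. Bayes’s rule then gives the share of “significant” findings that are actually true:

\[
\mathrm{PPV} \;=\; \frac{(1-\beta)\,\pi}{(1-\beta)\,\pi + \alpha\,(1-\pi)}
\;=\; \frac{0.80 \times 0.05}{0.80 \times 0.05 + 0.05 \times 0.95}
\;\approx\; 0.46
\]

where \(\pi\) is the prior probability that a tested hypothesis is true, \(\alpha\) the false-positive rate, and \(1-\beta\) the statistical power. Under these assumed conditions, more than half of the positive findings are false before publication bias or selective reporting has done any additional damage.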

This inability to replicate initially significant research findings has not been limited to the field of medicine; similar problems have arisen in biology, ecology, and psychology as well. The best available explanations so far involve a belated recognition of biases built into both the normative practices of researchers themselves and the system that has arisen to reward their results. Separate from the worrisome trend of undue corporate influence, scientific publications, for example, strongly prefer positive results. (One meta-study dating back to 1959 found that 97 percent of the articles published confirmed their initial hypotheses.) And because to succeed in their field researchers need to be published, there is significant pressure to cherry-pick from the available data in ways that ratify the proposition they have been testing.

To an outsider, such preferential thinking might seem an obvious instance of scientific fraud, but the data collected are often complex and ambiguous, requiring just the sort of subtle interpretation that is most susceptible to unconscious skewing. Called “selective reporting,” this subliminal inclination to cite the data most favorable to one’s original hypothesis is—according to the Canadian biologist Richard Palmer, who conducted a number of meta-studies on the subject—“everywhere in science,” a discovery that both stunned and depressed him. In a published review, Palmer summarized for his colleagues the profoundly unsettling implications for their field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a priori beliefs often repeated.”

That final phrase—“a collective illusion nurtured by strong a priori beliefs often repeated”—mirrors exactly the post-modern critique of modern science’s faith in the accessibility of fixed and final truths. Even devoted researchers who are true believers in the scientific method are now finding that the intellectual certainty they have idealized is far more elusive than they once presumed. In the hard sciences too, reason, it seems, routinely panders will, and merely fashionable ideas can hold sway for a time under the guise of universal truths.

This belated recognition of the subjectivity inherent in what was presumed to be the most objective results is not the most disturbing news emerging from today’s laboratories. Distinct from the issues of corporate corruption, publication bias, and selective reporting, other findings call into question the very ability of the scientific method to assess accurately certain aspects of reality on a routine basis. Even some researchers who, repeating their own experiments, have every incentive to confirm their results are finding that replication is inexplicably elusive. This phenomenon, known as the “decline effect,” is especially prevalent in studies of human and animal behavior. It perversely inverts the sequential progress that scientists expect: that is to say, the more often an initially successful experiment is repeated, the less significant its original correlations become. Rather than fixed and final, the truths discovered in these cases seem to be distressingly transient, suggesting that the initial correlations may have been more random than real and highlighting the possibility that, even when rigorously applied, the scientific method may be, in some cases, inadequate to the task of mapping reality with the accuracy we expect from it.
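
One statistical mechanism consistent with the suggestion that those initial correlations “may have been more random than real” is simple selection on noisy estimates: if findings attract attention in the first place because they happened to look large, later replications of the same effects will tend to shrink back toward their true size. The toy simulation below, with invented parameters, is offered only as an illustration of that mechanism, not as an account of any particular study.

```python
# Toy simulation of one proposed statistical contributor to the "decline
# effect": when noisy estimates are selected for looking unusually large,
# replications of those same effects tend to come out smaller.
# All parameters here are invented for illustration.
import random

random.seed(1)

TRUE_EFFECT = 0.1       # the real (modest) effect shared by every study
NOISE = 0.5             # sampling noise in any single study's estimate
N_STUDIES = 10_000
SELECTION_CUTOFF = 0.8  # only unusually large estimates get noticed at first

def noisy_estimate() -> float:
    """One study's measurement: the true effect plus sampling noise."""
    return random.gauss(TRUE_EFFECT, NOISE)

# First round: keep only the studies whose estimates cleared the cutoff.
published = [e for e in (noisy_estimate() for _ in range(N_STUDIES))
             if e > SELECTION_CUTOFF]

# Second round: "replicate" each selected finding with fresh noise.
replications = [noisy_estimate() for _ in published]

print(f"mean effect in selected studies: {sum(published) / len(published):.2f}")
print(f"mean effect on replication:      {sum(replications) / len(replications):.2f}")
# The replication mean falls back toward TRUE_EFFECT, well below the inflated
# first-round mean: an apparent "decline" with no change in the world itself.
```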

A caution here: immoderate attacks on the legitimacy of scientific knowledge present their own obvious dangers. Much of the material knowledge that we have gained in the last two centuries, along with the very health and wealth that we take for granted, is the tangible reward of science’s scrupulous investigations of the real. Yet no human practice, however useful its past, should be immune to critique. And because critics like Ioannidis and Palmer are not radical nihilists but devoted rationalists who have been using science’s own preferred methods to question the reliability of mainstream research, their doubts demand a serious response. The systemic fallibility their work has uncovered not only suggests the necessity of professional reforms such as easing the publication bias; it also poses profound questions about what we can and cannot learn through the standard techniques of modern science alone.

As currently conceived, the scientific method is inherently reductionist. It atomizes experience, presuming that through radically narrowing the number of variables present in any given design, its experiments can discover correlations (between, say, vitamin E intake and heart disease) that reliably reflect physical causation in everyday life. But as we have seen in this survey, such an approach has its limits, especially in biological systems where its models of causation are often “vexed” by “multiple layers of complexity”—that is, by the innate and interactive togetherness of the natural world. Clearly, the current version of the scientific method has allowed us to map material reality at a depth unimaginable prior to its evolution. But to cite Emerson, “under every deep another deep opens,” and to explore that deep may require a more conscientious methodology than we currently practice.

My focus on the hard sciences has been due in part to a recognition of their social standing. If one is striving to convince a wider readership, it is always helpful to engird one’s argument with the reassurance that, yes, “studies have been done” which support its claims. Yet the ironic but undeniable implication of the studies reviewed here is how deeply the revisionary logic of post-modern reasoning has penetrated into the very fields most associated with the modern mindset. The professional feuds that have accompanied that penetration may seem fierce to their participants, but they pale before the social and political conflicts now arising out of a parallel shift in the grammar and syntax of everyday experience. The post-modern rediscovery of the fallibility of human reasoning has, for example, spurred numerous revolts against the modern approach to truth-finding, and when paired with software that radically empowers digital collaboration, these rebellions have led to a new model of intellectual authority, one that values the self-correcting field of interactive consensus over the fixed atom of individual expertise: a shift exemplified by Wikipedia’s near-instant marginalization of the Encyclopaedia Britannica, the dominant reference in English for over two hundred years.

No society in history has more emphasized the social atom than ours. Yet the very authority we have invested in individualism is now being called into question by both the inner logic of our daily practices and the recent findings of our social sciences, which have been rediscovering the extent to which our decision making is nonrational (see Predictably Irrational by Dan Ariely) and unconsciously conformist (see Connected by Nicholas Christakis and James Fowler).

Such findings challenge the very core of our political economy’s self-conception. What, after all, do “self-reliance” and “enlightened self-interest” really mean if we are constantly being influenced on a subliminal level by the behavior of those around us? Can private property rights continue to seem right when an ecologically minded, post-modern science keeps discovering new ways in which our private acts transgress our deeded boundaries to harm or help our neighbors? Can our allegiance to the modern notions of ownership, authorship, and originality continue to make sense in an economy whose dominant technologies expose and enhance the collaborative nature of human creativity? And in an era of both idealized and vulgarized “transparency,” can privacy—the social buffer that cultivates whatever potential for a robust individualism we may actually possess—retain anything more than a nostalgic value? To be sure, even the Communist states failed to eradicate individualism in societies with no real democratic tradition, so some of its characteristic claims and cultural forms will surely persist in an America long invested in an ethos of self-reliance. But the specific nature of those surviving claims and cultural forms, and how they might mesh with the new inclination toward interdependence, is not at all clear yet.

These are not trivial issues, for they probe the painful divide between America’s foundational identity as the first modern society and the post-modern logic that has been relentlessly redirecting our daily lives. The conversion to digital tools will likely promote some sort of epochal change, just as the proliferation of a print-based literacy once did. The crucial questions are whether that revision will prove to be a civilizing one and, keeping in mind the wars that terrorized early modern Europe, whether we can minimize the social wreckage wrought during this transition.

That same historical record suggests that many of our current beliefs are unlikely to survive the radical change in cultural conditions we are now experiencing. But it also reminds us that the initial expression of beliefs more attuned to those conditions is likely to be both powerful and crude—and inherently destructive until those beliefs have been refined. Will the post-modern rediscovery of our fallibility restore a sense of intellectual humility, sparing us the utopian hubris of the modern mind that caused so much destruction? Or, dismissing all standards, will it lead to a cynical nihilism that only serves to justify the worst behaviors? And will the post-modern recovery of togetherness both ease the alienation that was a characteristic affliction of modernity and boost our intelligence through enhanced self-correction? Or will it degenerate into a kind of digital mob rule, sinking to the lowest common denominator of taste and practice? All of these questions, for rhetorical purposes, pose bipolar possibilities; but of course our future prospects may—in true post-modern form—be as much a matter of “both…and” as of “either…or.”

Now more than ever, to borrow from Yeats, it should be “our first business to paint, or describe, desirable people, places, states of mind” for this new environment that our own inventions have been generating. And, given the stakes, that business will need to be conducted—in the traditional as well as the revised meaning of the word—conscientiously.

 

***