
4 The Best We Had

 


 

The scene: a woodworking shop, and two fellows—we’ll call them Al and Frank—are happily chatting away while Al feeds a huge sheet of plywood into the jagged blade of a giant circular saw. Suddenly you notice that Al has not used the safety guard for that saw blade—and your heartbeat speeds up as you see his thumb is headed toward that nasty sharp-toothed circle of steel.

 

Al and Frank are lost in their chatting, both oblivious to the danger at hand, even as that thumb heads closer to the whirring blade. Your heart races and beads of sweat form on your brow. You have the urgent wish to warn Al—but he’s an actor in the film you’re watching.

 

It Didn’t Have to Happen, made by the Canadian Film Board to scare woodworkers into using their machine’s safety devices, depicts three shop accidents in its twelve short minutes. Like that thumb heading inexorably into the blade, each of them builds in suspense until the moment of impact: Al loses his thumb to the circular saw; another worker has his fingers lacerated; and a wooden plank flies into the midsection of a bystander.

 

The film had a life quite apart from its intended warning to woodworkers. Richard Lazarus, a psychologist at the University of California at Berkeley, deployed those depictions of gruesome accidents as a reliable emotional stressor in more than a decade of his landmark research.1 He generously gave Dan a copy of the film to use in the research at Harvard.

 

Dan showed the film to some sixty people, half of them volunteers (Harvard students taking psychology courses) who had no meditation experience, the other half meditation teachers with at least two years of practice. Half the people in each group meditated just before watching the film; he taught the Harvard novices to meditate there in the lab. Dan told those assigned to a control group picked at random to simply sit and relax.

 

As their heart rate and sweat response jumped and subsided with the shop accidents, Dan sat in the control room next door. Experienced meditators tended to recover from the stress of seeing those upsetting events more quickly than people who were new to the practice.2 Or so it seemed.

 

This research was sound enough to earn Dan a Harvard PhD and to be published in one of the top journals in his field. Even so, looking back with closer scrutiny, we see a plethora of problems. Those who review grants and journal articles have strict standards for what research designs are best—that is, have the most trustworthy results. From that viewpoint, Dan’s research—and the majority of studies of meditation even today—has flaws.

 

For instance, Dan was the person who taught the volunteers to meditate or told them to just relax. But Dan knew the desired outcome, that meditation should help more—and that could well have influenced how he spoke to the two groups, perhaps in a way that encouraged good results from meditation and poor ones from the control group, who just relaxed.

 

Another point: of the 313 journal articles that cited Dan’s findings, not one attempted to redo the study to see if they would get similar outcomes. These authors just assumed that the results were sturdy enough to use as grounds for their own conclusions.

 

Dan’s study is not alone; that attitude prevails still today. Replicability, as it’s known in the trade, stands as a strength of the scientific method; any other scientist should be able to reproduce a given experiment and yield the same findings—or reveal the failure to reproduce them. But very, very few ever even try.

 

This lack of replication looms as a pervasive problem in science, particularly when it comes to studies of human behavior. While psychologists have made proposals for making psychological studies more replicable, at present little is known about how many of even the most commonly cited studies would hold up, though possibly most would.3 And only a tiny fraction of studies in psychology are ever targets of replication; the field’s incentives favor original work, not duplication. Plus, psychology, like all sciences, has a strong inbuilt publication bias: scientists rarely try to publish studies when they get no significant results. And yet that null finding itself has significance.

 

Then there’s the crucial difference between “soft” and “hard” measures. If you ask people to report on their own behaviors, feelings, and the like—soft measures—psychological factors like a person’s mood of the moment and wanting to look good or please the investigator can influence enormously how they respond. On the other hand, such biases are less (or not at all) likely to influence physiological processes like heart rate or brain activity, which makes them hard metrics.

 

Take Dan’s research: he relied to some extent on soft measures where people evaluated their own reactions. He used a popular (among psychologists) anxiety assessment that had people rate themselves on items like “I feel worried,” from “not at all” to “very much so,” and from “almost never” to “almost always.”4 This method by and large showed them feeling less stressed after their first taste of meditation—a fairly common finding over the years since in meditation studies. But such self-reports are notoriously susceptible to “expectation demand,” the implicit signals to report a positive outcome.

 

Even beginners in meditation report they feel more relaxed and less stressed once they start. Such self-reports of better stress management show up much earlier in meditators’ data than do hard measures like brain activity. This could mean that the sense of lessened anxiety that meditators experience occurs before discernible shifts in the hard measures—or that the expectation of such effects biases what meditators report.

 

But the heart doesn’t lie. Dan’s study deployed physiological measures like heart rate and sweat response, which typically can’t be intentionally controlled, and so yield a more accurate portrait of a person’s true reactions—especially compared to those highly subjective, more easily biased self-report measures.

 

For his dissertation Dan’s main physiological measure was the galvanic skin response, or GSR, bursts of electrical activity that signify a dollop of sweat. The GSR signals the body’s stress arousal. As some speculation has it, in early evolution sweat release might have made the skin less brittle, protecting humans during hand-to-hand combat.5

 

Brain measures are even more trustworthy than “peripheral” physiological ones like heart rate. But we were too early for such methods, the least biased and most convincing of all. In the 1970s, brain imaging systems like the fMRI, SPECT, and fine-grained computerized analysis of EEG had not yet been invented.6 Measures of responses distant in the body from the brain—heart and breath rates, sweat—were the best Dan had.7 Because those physiological responses reflect a complex mix of forces, they are a bit messy to interpret.8

 

Another weakness of the study stems from the recording technology of the day, long before such data were digitized. Sweat rates were tracked by the sweep of a needle on a continuous spool of paper. The resulting scrawl was what Dan pored over for hours, converting ink blips into numbers for data analysis. This meant counting the smirches that signified a spurt of sweat before and after each shop accident.

 

The key question: Was there a meaningful difference between the four conditions—expert versus novice, told to meditate or just sit quietly—in their speed of recovery from the heights of arousal during the accidents? The results, as recorded by Dan, suggested that meditating sped up the recovery rate, and that seasoned meditators recovered quickest.9
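The recovery comparison at the heart of that question can be sketched in code. This is purely a hypothetical illustration: the function, the invented signal values, and the half-recovery threshold are our own assumptions, not the scoring Dan actually used on those paper spools.

```python
# Hypothetical sketch of measuring recovery speed from a GSR trace.
# Given samples around a stressful film moment, find the arousal peak,
# then count how many samples it takes to fall halfway back to baseline.

def recovery_time(trace, baseline, fraction=0.5):
    """Samples from the peak until the trace first drops to within
    `fraction` of the peak-to-baseline distance."""
    peak_i = max(range(len(trace)), key=lambda i: trace[i])
    threshold = baseline + fraction * (trace[peak_i] - baseline)
    for i in range(peak_i, len(trace)):
        if trace[i] <= threshold:
            return i - peak_i
    return None  # arousal never subsided within the recording window

# An invented trace that spikes at the accident, then subsides
gsr = [1.0, 1.1, 3.0, 2.8, 2.4, 1.9, 1.4, 1.1]
samples_to_recover = recovery_time(gsr, baseline=1.0)
```

A faster-recovering subject yields a smaller number here; comparing that number across the four groups is the gist of the analysis.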

 

That phrase as recorded by Dan speaks to another potential problem: it was Dan who did the scoring, and the whole endeavor was meant to support a hypothesis he endorsed. This situation favors experimenter bias, where the person designing a study and analyzing its data might skew the results toward a desired outcome.

 

 Dan’s dim (okay, very dim) recollection after nearly fifty years is that among the meditators, when there was an ambiguous GSR—one that might have been at the peak of reaction to the accident, or just afterward—he scored it as at the peak rather than at the beginning of the recovery slope. The net effect of such a bias would be to make meditators’ sweat response seem to react more to the accident, while recovering more quickly (however, as we shall see, this is precisely the pattern found in the most advanced meditators studied so far).

 

Research on bias has found two levels: our conscious predilections and, harder to counter, our unconscious ones. To this day Dan cannot swear that his scoring of those inkspots was unbiased. Along those lines, Dan shared the dilemma of most scientists who do research on meditation: they are themselves meditators, which can encourage such bias, even if unconscious.

 

UNBIASING SCIENCE

 

It could have been a scene straight out of a Bollywood version of the Godfather movies: a black Cadillac limo pulled up at an assigned time and place, the back door opened, and Dan got in. Seated next to him was the big boss—not Marlon Brando/Don Corleone, but rather a smallish, bearded yogi clad in a white dhoti.

 

Yogi Z had come from the East to America in the 1960s and quickly captured headlines by mingling with celebrities. He attracted a huge following, and recruited hundreds of young Americans to become teachers of his method. In 1971, just before his first trip to India, Dan attended a teacher training summer camp the yogi ran.

 

Yogi Z somehow heard that Dan was a Harvard grad student about to travel to India on a predoctoral fellowship. The yogi had a plan for this predoc. Handing Dan a list of names and addresses of his own followers in India, Yogi Z instructed him to look each one up, interview them, and then write a doctoral dissertation with the thesis and conclusion that this particular yogi’s method was the only way to become “enlightened” in this day and age.

 

For Dan the idea was abhorrent. Such outright hijacking of research to promote a particular brand of meditation typifies the hustle that, regrettably, has characterized a certain kind of “spiritual teacher” (remember Swami X). When such a teacher engages in the self-promotion typical of some commercial brand, it signals that someone hopes to use the appearance of inner progress in the service of marketing. And when researchers wed to a particular brand of meditation report positive findings, the same questionable bias arises, as well as another question: Were there negative results that went unreported?

 

For instance, the meditation teachers in Dan’s study taught Transcendental Meditation (TM). TM research has had a somewhat checkered history in part because most of it has been done by staff at Maharishi University of Management (formerly Maharishi International University), which is a part of the organization that promotes TM. This raises the concern of a conflict of interest, even when the research has been well done.

 

For this reason, Richie’s lab intentionally employs several scientists who are skeptical of meditation’s effects, and who raise a healthy number of issues and questions that “true believers” in the practice might overlook or sweep under the rug. One result: Richie’s lab has published several nonfindings, studies that test a specific hypothesis about the effect of meditation and fail to observe the expected effect. The lab also publishes failures to replicate—studies that do not get the same results when duplicating the method of previously published papers that found meditation has some beneficial effect. Such failures to replicate earlier findings call them into question.

 

Bringing in skeptics is but one of many ways to minimize experimenter bias. Another would be to study a group that is told about meditation practices and their benefits but gets no instruction. Better: an “active control,” where one group engages in an activity unlike meditation, one that they believe will benefit them, such as exercise.

 

A further dilemma in our Harvard research, also still pervasive in psychology, was that the undergrads available for study in our lab were not typical of humanity as a whole. Our experiments were done with subjects known in the field as WEIRD: Western, educated, industrialized, rich, and from democratic cultures.10 And using Harvard students, an outlier group even among the WEIRD, makes the data less valuable in searching for universals in human nature.

 

THE VARIETIES OF THE MEDITATIVE EXPERIENCE

 

Richie in his dissertation research was among the first neuroscientists to ask if we can identify a neural signature of attention skill. That basic question was, in those days, quite respectable.

 

But Richie’s PhD research was in the spirit of that concealed excursion into the mind in his undergraduate work. The agenda embedded, sub rosa, in the study: exploring if signs of skill in attention differed in meditators and nonmeditators. Did meditators get better at focusing? In those days, that was not a respectable question.

 

Richie measured the brain electrical signals from the scalp of meditators as they heard tones or saw flashing LED lights, while he instructed them to focus on the sounds and ignore the lights, or vice versa. Richie analyzed the electrical signals for “event-related potential” (ERP), indicated by specific blips in response to a light and/or tone. The ERP, embedded in a chorus of noise, is a signal so minuscule it is measured in microvolts—millionths of a volt. These tiny signals offer a window on how we allocate our attention.

 

Richie found that the size of these tiny signals was diminished in response to the tone when meditators focused on the light, while the signals triggered by the light were reduced in size when the meditators focused their attention on the tone. That finding alone would be ho-hum; we would expect that. But this pattern of blocking out the unwanted modality was much stronger in the meditators than in the controls—some of the first evidence that meditators were better at focusing their attention than nonmeditators.

 

Since selecting a target for focus and ignoring distractions marks a key attention skill, Richie concluded that brain electrical recordings—the EEG—could be used for this assessment (routine today, but a step in scientific progress back then). Still, the evidence that meditators were any better at this than the control group, who had never meditated, was rather weak.

 

In retrospect, we can see one reason why this evidence was in itself questionable: Richie had recruited a mix of meditators, who deployed various methods. Back in 1975 we were quite naive about how important these variations in technique were. Today we know there are many aspects of attention, and that different kinds of meditation train a variety of mental habits, and so, impact mental skills in varying ways.

 

For example, researchers at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany, had novices practice daily for a few months three different types of meditation: focusing on breathing; generating loving-kindness; and monitoring thoughts without getting swept away by them.11 Breath focus, they found, was calming— seeming to confirm a widespread assumption about meditation’s usefulness as a means to relax. But in contradiction to that stereotype, neither the loving-kindness practice nor monitoring thoughts made the body more relaxed, apparently because each demands mental effort: for example, while watching thoughts you continually get swept up in them—and then, when you notice this has happened, need to make a conscious effort to simply watch again. In addition, the loving-kindness practice, where you wish yourself and others well, understandably created a positive mood, while the other two methods did not.

 

So, differing types of meditation produce unique results—a fact that should make it a routine move to identify the specific type being studied. Yet confusion about the specifics remains all too common. One research group, for instance, has collected state-of-the-art data on brain anatomy in fifty meditators, an invaluable data set.12 Except that the names of the meditation practices being studied reveal a mixture of types—a hodgepodge. Had the specific mental training entailed by each meditation type been methodically recorded, that data set might well yield even more valuable findings. (Even so, kudos for disclosing this information, which too often goes unnoted.)

 

As we read through the now vast trove of research on meditation, we sometimes wince when we come across the confusion and naiveté of some scientists about the specifics. Too often they are simply mistaken, like the scientific article that said that in both Zen and Goenka-style vipassana, meditators have their eyes open (what’s wrong here: Goenka has people close their eyes).

 

A handful of studies have used an “antimeditation” method as an active control. In one version of this so-called antimeditation, volunteers were told to concentrate on as many positive thoughts as possible—which actually resembles some contemplative methods, such as the loving-kindness meditation we will review in chapter six. The fact that those experimenters thought this was unlike meditation speaks to their confusion about what exactly they were researching.

 

The rule of thumb—that what gets practiced gets improved—underscores the importance of matching a given mental strategy in meditation to its result. This is true equally for those who study meditation and those who meditate: one must be aware of the likely outcomes from a given meditation approach. They are not all the same, contrary to the misunderstanding among some researchers, and even practitioners.

 

In the realm of mind (as everywhere else), what you do determines what you get. In sum, “meditation” is not a single activity but a wide range of practices, all acting in their own particular ways in the mind and brain.

 

Lost in Wonderland, Alice asked the Cheshire Cat, “Which way should I go?”

 

He replied, “That depends on where you want to get to.”

 

The Cheshire Cat’s advice to Alice holds, too, for meditation.

 

COUNTING THE HOURS

 

Each of Dan’s “expert” meditators, all Transcendental Meditation teachers, had practiced TM for at least two years. But Dan had no way of knowing how many total hours they had put in over those years. Nor did he know what the actual quality of those hours might have been.

 

Few researchers, even today, have this crucial piece of data. But, as we will see in more detail in chapter thirteen, “Altering Traits,” our model of change tracks how many lifetime hours of practice a meditator has done and whether it was daily or on retreat. These total hours are then connected with shifts in qualities of being and the underlying differences in the brain that give rise to them.

 

Very often meditators are lumped into gross categories of experience, like “beginner” and “expert,” without any further specifics. One research group reported the daily time the people they studied put into meditation—ranging from ten minutes a few times a week to 240 minutes daily—but not how many months or years they had done so, which is essential in calculating lifetime hours of practice.

 

Yet this calculation goes missing in the vast majority of meditation studies. So that classic Zen study from the 1960s showing a failure to habituate to repeated sounds—one of the few existing then and one that had gotten us interested in the first place—actually gave sparse data on the Zen monks’ meditation experience. Was it an hour a day, ten minutes, zero on some days, or six hours every day? How many retreats (sesshins) of more intensive practice did they do, and how many hours of meditation did each involve? We have no idea.

 

To this day the list of studies that suffer from this uncertainty could go on and on. But getting detailed information about the total lifetime hours of a meditator’s practice has become standard operating procedure in Richie’s lab. Each meditator they study reports on what kind of meditation practice they do, how often and for how long they do it in a given week, and whether they go on retreats.

 

 If so, they note how many hours a day they practice on retreat, how long the retreat is, and how many such retreats they have done. Even further, the meditators carefully review each retreat and estimate the time spent doing different styles of meditation practice. This math allows the Davidson group to analyze their data in terms of total hours of practice and separate the time for different styles and for retreat versus home hours.
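The bookkeeping described above can be sketched in a few lines. Everything here is an illustrative assumption on our part: the function name, the weekly-practice arithmetic, and the sample figures are hypothetical, not the Davidson group’s actual formulas.

```python
# Sketch of tallying a meditator's lifetime practice hours, split into
# home practice versus retreat practice. All figures are hypothetical.

def lifetime_hours(daily_minutes, days_per_week, years, retreats):
    """retreats: list of (days, hours_per_day) pairs, one per retreat."""
    home = (daily_minutes / 60.0) * days_per_week * 52 * years
    retreat = sum(days * hours_per_day for days, hours_per_day in retreats)
    return {"home": home, "retreat": retreat, "total": home + retreat}

# e.g., 40 minutes a day, 6 days a week, for 10 years,
# plus three 10-day retreats at 8 hours of sitting per day
hours = lifetime_hours(40, 6, 10, retreats=[(10, 8)] * 3)
```

Keeping the home and retreat tallies separate is what lets the analysis ask whether intensive retreat hours matter differently from daily practice.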

 

As we will see, there sometimes is a dose-response relationship when it comes to the brain and behavioral benefits from meditation: the more you do it, the better the payoff. That means that when researchers fail to report the lifetime hours of the meditators they are studying, something important has gone missing. By the same token, too many meditation studies that include an “expert” group show wild variation in what that term means—and don’t use a precise metric for how many hours those “experts” have practiced.

 

If the people being studied are meditating for the first time—say, being trained in mindfulness—their number of practice hours is straightforward (the instruction hours plus however many they do at home on their own). Yet many of the more interesting studies look at seasoned meditators without calculating each person’s lifetime hours, which can vary greatly. One, for example, lumped together meditators who had from one year of experience to twenty-nine years!

 

Then there’s the matter of expertise among those giving meditation instruction. A handful of studies among the many we looked at thought to mention how many years of experience in meditation the teachers had, though none calculated their lifetime hours. In one study the upper number was about fifteen years; the lowest was zero.

 

BEYOND THE HAWTHORNE EFFECT

 

Way back in the 1920s, at the Hawthorne Works, a factory for electrical equipment near Chicago, experimenters simply improved lighting in that factory and slightly adjusted work schedules. But, with even those small changes for the better, people worked harder—at least for a while.

 

The take-home: any positive intervention (and, perhaps, simply having someone observe your behavior) will move people to say they feel better or improve in some other way. Such “Hawthorne effects,” though, do not mean there was any unique value-added factor from a given intervention; the same upward bump would occur from any change people regarded as positive.

 

Richie’s lab, sensitized to issues like the Hawthorne effect, has devoted considerable thought and effort to using proper comparison conditions in their studies of meditation. The instructor’s enthusiasm for a given method can infect those who learn it—and so the “control” method should be taught with the same level of positivity as is true for the meditation.

 

To tease out extraneous effects like these from the actual impacts of meditation, Richie and his colleagues developed a Health Enhancement Program (HEP) as a comparison condition for studies of mindfulness-based stress reduction. HEP consists of music therapy with relaxation; nutritional education; and movement exercises like posture improvement, balance, core strengthening, stretching, and walking or jogging.

 

In the lab’s studies, the instructors who taught HEP believed it would help just as much as did those who taught meditation. Such an “active control” can neutralize factors like enthusiasm, and so better identify the unique benefits of any intervention—in this case, meditation—to see what it adds over and above the Hawthorne edge.

 

Richie’s group randomly assigned volunteers to either HEP or mindfulness-based stress reduction (MBSR) and then before and after the training had them fill out questionnaires that in earlier research had reflected improvements from meditation. But in this study, both groups reported comparable improvement on these subjective measures of general distress, anxiety, and medical symptoms. This led Richie’s group to conclude that much of the stress relief improvements beginners credit to meditation do not seem to be that unique.13

 

Moreover, on a questionnaire that was specifically developed to measure mindfulness, absolutely no difference was found in the level of improvement from MBSR or HEP.14

 

This led Richie’s lab to conclude that for this variety of mindfulness, and likely for any other meditation, many of the reported benefits in the early stages of practice can be chalked up to expectation, social bonding in the group, instructor enthusiasm, or other “demand characteristics.” Rather than being from meditation per se, any reported benefits may simply be signs that people have positive hopes and expectations.

 

Such data are a warning to anyone looking for a meditation practice to be wary of exaggerated claims about its benefits. And also a wake-up call to the scientific community to be more rigorous in designing meditation studies. Just finding that people practicing one or another kind of meditation report improvements compared to those in a control group who do nothing does not mean such benefits are due to the meditation itself. Yet this is perhaps the most common paradigm still used in research on the benefits of meditation—and it clouds the picture of what the true advantages of the practice might be.

 

We might expect similar enthusiastic reports from someone who expects a boost in well-being by taking up Pilates, bowling, or the Paleo Diet.

 

WHAT EXACTLY IS “MINDFULNESS”?

 

Then there is the confusion about what we mean by mindfulness, perhaps the most popular method du jour among researchers. Some scientists use the term as a stand-in for any and all kinds of meditation. And in popular usage, mindfulness can refer to meditation in general, despite the fact that mindfulness is but one of a wide variety of methods.

 

To dig down a bit, mindfulness has become the most common English translation of the Pali language’s word sati. Scholars, however, translate sati in many other ways—“awareness,” “attention,” “retention,” even “discernment.”15 In short, there is not a single English equivalent for sati on which all experts agree.16

 

Some meditation traditions reserve “mindfulness” for noticing when the mind wanders. In this sense, mindfulness becomes part of a larger sequence which starts with a focus on one thing, then the mind wandering off to something else, and then the mindful moment: noticing the mind has wandered. The sequence ends with returning attention to the point of focus.

 

That sequence—familiar to any meditator—could also be called “concentration,” where mindfulness plays a supporting role in the effort to focus on one thing. In one-pointed focus on a mantra, for example, sometimes the instruction is, “Whenever you notice your mind wandering, gently start the mantra again.” In the mechanics of meditation, focusing on one thing only means also noticing when your mind wanders off so you can bring it back—and so concentration and mindfulness go hand in hand.

 

Another common meaning of mindfulness refers to a floating awareness that witnesses whatever happens in our experience without judging or otherwise reacting. Perhaps the most widely quoted definition comes from Jon Kabat-Zinn: “The awareness that emerges through paying attention on purpose, in the present moment, and nonjudgmentally to the unfolding of experience.”17

 

From the viewpoint of cognitive science, there’s another twist when it comes to the precise methods used: what’s called “mindfulness,” by scientists and practitioners alike, can refer to very different ways to deploy attention. For example, the way mindfulness gets defined in a Zen or Theravadan context looks little like the understanding of the term in some Tibetan traditions.

 

Each refers to differing (sometimes subtly so) attentional stances—and quite possibly to disparate brain correlates. So it becomes essential that researchers understand what kind of mindfulness they are actually studying—or if, indeed, a particular variety of meditation actually is mindfulness.

 

The meaning of the term mindfulness in scientific research has taken a strange turn. One of the most commonly used measures of mindfulness was not developed on the basis of what happens during actual mindfulness meditation but rather by testing hundreds of college undergraduates on a questionnaire that the researchers thought would capture different facets of mindfulness.18 For example, you are asked whether statements like these are true for you: “I watch my feelings without getting carried away by them” or “I find it difficult to stay focused on what’s happening in the present moment.”

 

The test includes qualities like not judging yourself—for example, when you have an inappropriate feeling. This all seems fine at first glance. Such a measure of mindfulness should and does correlate with people’s progress in training programs like MBSR, and the test scores correlated with the amount and quality of mindfulness practice itself.19 From a technical viewpoint that’s very good—it’s called “construct validity” in the testing trade.

 

But when Richie’s group put that measure to another technical test, they found problems in “discriminant validity,” the ability of a measure not just to correlate with what it should—like MBSR—but also not to correlate when it should not. In this case, that test should not reflect the changes among those in the HEP active control group, which was intentionally designed not to enhance mindfulness in any way.

 

But the results from the HEP folks were pretty much like those from MBSR—an uptick in mindfulness as assessed on the self-report test. More formally, there was zero evidence that this measure had discriminant validity. Oops.
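In code, the logic of that check looks something like the following sketch. The score changes are invented numbers, chosen only to mirror the pattern the lab found; the real analysis involved proper statistics, not a simple difference of means.

```python
# Hypothetical sketch of a discriminant-validity check. A valid
# mindfulness measure should rise after MBSR (which trains mindfulness)
# but not after HEP (which was designed not to). Scores are invented.

def mean_change(pre, post):
    """Average pre-to-post change across participants."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

mbsr_gain = mean_change([3.0, 3.2, 2.8], [3.6, 3.9, 3.4])
hep_gain = mean_change([3.1, 2.9, 3.0], [3.7, 3.5, 3.6])

# If the control group improves about as much as the meditation group,
# the measure fails to discriminate: the pattern described above.
fails_discriminant_validity = abs(mbsr_gain - hep_gain) < 0.2
```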

 

Another widely used self-report measure of mindfulness, in one study, showed a positive correlation between binge drinking and mindfulness—the more drinking, the greater the mindfulness. Seems like something is off base here!20 And a small study with twelve seasoned (average of 5,800 hours of practice) and twelve more expert meditators (average of 11,000 hours of practice) found they did not differ from a nonmeditating group on two very commonly used questionnaire measures of mindfulness, perhaps because they are more aware of the wanderings of their mind than most people.21

 

Any questionnaire that asks people to report on themselves can be susceptible to skews. One researcher put it more bluntly: “These can be gamed.” For that reason the Davidson group has come up with what they consider a more robust behavioral measure: your ability to maintain focus as you count your breaths, one by one.

 

This is not as simple as it may sound. In the test you press the down arrow on a keyboard on each outbreath. And to up the odds, on every ninth exhale you tap a different key, the right arrow. Then you start counting your breaths from one to nine again.22 The strength of this test: the difference between your count and the actual number of breaths you took renders an objective measure far less prone to psychological bias. When your mind wanders, your counting accuracy suffers. As expected, expert meditators perform significantly better than nonmeditators, and scores on this test improve with training in mindfulness.23
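A toy version of the scoring can make the idea concrete. This is our own simplified illustration; the actual task logs keystrokes against measured respiration, not just per-cycle totals.

```python
# Hypothetical, simplified scoring for the breath-counting task:
# each cycle should contain nine breaths; accuracy is the fraction of
# cycles where the participant's count matched the true count.

def count_accuracy(reported_counts, actual_counts):
    """Fraction of counting cycles the participant got exactly right."""
    correct = sum(1 for r, a in zip(reported_counts, actual_counts) if r == a)
    return correct / len(actual_counts)

# Four cycles of nine breaths; the participant lost count once
score = count_accuracy([9, 9, 8, 9], [9, 9, 9, 9])
```

A wandering mind shows up directly as a lower score, which is why the measure resists the gaming that plagues self-report questionnaires.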

 

All of this cautionary review—the troubles with our first attempts at meditation research, the advantages of an active control group, the need for more rigor and precision in measuring meditation impacts—seems a fitting prelude to our wading into the rising sea of research on meditation.

 

In summarizing these results we’ve tried to apply the strictest experimental standards, which lets us focus on the very strongest findings. This means setting aside the vast majority of research in meditation—including results scientists view as questionable, inconclusive, or otherwise marred.

 

As we’ve seen, our somewhat flawed research methods during our Harvard graduate school days reflected the general quality—or lack of it—during the first decades of meditation studies, the 1970s and 1980s. Today our initial research attempts would not meet our own standards to be included here. Indeed, a large proportion of meditation studies in one way or another fail to meet the gold standards for research methods that are essential for publication in the top, “A-level” scientific journals.

 

To be sure, over the years there has been a ratcheting upward of sophistication as the number of studies of meditation has exploded to more than one thousand per year. This tsunami of meditation research creates a foggy picture, with a confusing welter of results. Beyond our focus on the strongest findings, we try to highlight the meaningful patterns within that chaos.

 

We’ve broken down this mass of findings along the lines of trait changes described in the classic literature of many great spiritual traditions. We see such texts as offering working hypotheses from ancient times for today’s research.

 

We’ve also related these trait changes to the brain systems involved, wherever the data allow. The four main neural pathways meditation transforms are, first, those for reacting to disturbing events—stress and our recovery from it (which Dan tried not so successfully to document). As we will see, the second brain system, for compassion and empathy, turns out to be remarkably ready for an upgrade. The third, circuitry for attention, Richie’s early interest, also improves in several ways—no surprise, given that meditation at its core retrains our habits of focus. The fourth neural system, for our very sense of self, gets little press in modern talk about meditation, though it has traditionally been a major target for alteration.

 

When these strands of change are twined together, there are two major ways anyone benefits from contemplative effort: having a healthy body and a healthy mind. We devote chapters to the research on each of these.

 

In teasing out the main trait effects of meditation, we faced a gargantuan task—one that we’ve simplified by limiting our conclusions to the very best studies. This more rigorous look contrasts with the too-common practice of accepting findings—and touting them—simply because they are published in a “peer-reviewed” journal. For one, academic journals themselves vary in the standards by which peers review articles; we’ve favored A-level journals, those with the highest standards. For another, we’ve looked carefully at the methods used, rather than ignoring the many drawbacks and limitations to these published studies that are dutifully listed at the ends of such articles.

 

To start, Richie’s research group gathered an exhaustive collection for a given topic like compassion from all journal articles published on the effects of meditation. They then winnowed them to select those that met the highest standards of experimental design. So, for example, of the original 231 reports on cultivating loving-kindness or compassion, only 37 met top design standards. And when Richie looked through the lenses of design strength and of importance, eliminated overlap, and otherwise distilled them, this closer scrutiny shrank that number to 8 or so studies, whose findings we talk about in chapter six, “Primed for Love,” along with a few others that raise compelling issues.

 

Our scientific colleagues might expect a far more detailed—okay, obsessive—accounting of all the relevant studies, but that’s not our agenda here. That said, we should nod with great appreciation to the many research efforts we did not include whose findings agree with our account (or disagree, or add a twist), some excellent and some not so.

 

But let’s keep this simple.