Conservatives are crazy and racists are stupid, according to the latest research by college professors who could not possibly be biased. It’s scientastic!
Table of Contents
- Closed Minds
- Primal Fears
- Sick Puppies
- Scaredy Cats
- Dirty Girls
- Dumb Bigots
- Small Brains
- White Trash
- Blue Pills
- Head Cases
- Letters to the Editor
Saturn Devouring His Son by Francisco Goya (image)
Conservatives are stupid and closed-minded, but liberals are smart and curious, according to a groundbreaking 2007 study by a crack team of psychologists (including David Amodio and John Jost, whom we’ll meet again a little later).
Political scientists and psychologists have noted that, on average, conservatives show more structured and persistent cognitive styles, whereas liberals are more responsive to informational complexity, ambiguity and novelty. We tested the hypothesis that these profiles relate to differences in general neurocognitive functioning using event-related potentials, and found that greater liberalism was associated with stronger conflict-related anterior cingulate activity, suggesting greater neurocognitive sensitivity to cues for altering a habitual response pattern.
Conclusion: “a more conservative orientation is related to greater persistence in a habitual response pattern, despite signals that this response pattern should change.”
In the Telegraph, lead author David Amodio affirms that “liberals tended to be more sensitive and responsive to information that might conflict with their habitual way of thinking, compared with the conservatives.”
Yes indeed, “liberals are more likely than are conservatives to respond to cues signaling the need to change habitual responses,” according to New York University. “Previous studies have found that conservatives tend to be more persistent in their judgments and decision-making, while liberals are more likely to be open to new experiences.”
‘Brain type may dictate politics’ (The Guardian):
Scientists have found that the brains of people calling themselves liberals are more able to handle conflicting and unexpected information than the brains of their conservative counterparts.
The match-up was unmistakable: respondents who had described themselves as liberals showed “significantly greater conflict-related neural activity” when the hypothetical situation called for an unscheduled break in routine.
Conservatives, however, were less flexible, refusing to deviate from old habits “despite signals that this … should be changed.”
‘Left brain, right brain’ (Los Angeles Times):
A study reported in the journal Nature Neuroscience suggests that liberals are more adaptable than conservatives. “Duh,” you might say. […] But this study goes beyond the truism that conservatives like to conserve to suggest that liberals might be better judges of the facts than conservatives.
“Duh.” Stupid conservatives, am I right?
So here’s how the actual experiment went:
On each trial of the Go/No-Go task, either the letter “M” or “W” was presented in the center of a computer monitor screen, following Nieuwenhuis et al. Half of the participants were instructed to make a “Go” response when they saw “M” but to make no response when they saw “W”; the remaining participants completed a version in which “W” was the Go stimulus and “M” was the No-Go stimulus; assignment to either version of the task was random.
A single letter constitutes “informational complexity”? This is how we know that “liberals are more able to handle conflicting and unexpected information”?
Responses were registered on a computer keyboard placed in the participants’ laps. Each trial began with a fixation point, presented for 500 ms. The target then appeared for 100 ms, followed by a blank screen. Participants were instructed to respond within 500 ms of target onset. A “Too slow!” warning message appeared after responses that exceeded this deadline, and “Incorrect” feedback was given after erroneous responses.
A tenth of a second to look, four tenths of a second more to hit a button — this is how we study “judgments and decision-making”? Political judgments and decision-making?
The task consisted of 500 trials, of which 80% consisted of the Go stimulus and 20% consisted of the No-Go stimulus. As in past research, the high frequency of Go stimuli induced a prepotent “Go” response, enhancing the difficulty of successfully inhibiting a response on No-Go trials.
This is what it means that “liberals are more likely to be open to new experiences” and “might be better judges of the facts”?
Participants received a two-minute break halfway through the task, which took approximately 15 minutes to complete. Following completion of the task, participants were debriefed, thanked, paid or awarded credit, and dismissed.
Fifteen minutes of tapping keys — these are “habitual responses”? “Old habits”?
Wait, never mind, they said something about a p-value less than 0.001, and a standard deviation of 0.02, and I think I even saw something about a peak negative deflection between -50 and 150 milliseconds — so their interpretation must be right after all. It’s all very scientastic, you see. Stupid conservative! Lern to be more smarter!
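For perspective, the entire "world" being measured here fits comfortably in a few lines of code. Below is a minimal simulation of the quoted procedure (500 trials, 80% Go stimuli); the subject model and its slip rate are invented parameters for illustration, not anything taken from the study:

```python
import random

def run_go_nogo(n_trials=500, go_prob=0.8, slip_rate=0.25, seed=0):
    """Simulate the quoted Go/No-Go procedure: 500 trials, 80% Go stimuli.

    The simulated subject always taps on Go trials and, thanks to the
    prepotent 'Go' habit, fails to withhold the tap on a fraction of
    No-Go trials (slip_rate is a made-up parameter)."""
    rng = random.Random(seed)
    counts = {"hits": 0, "correct_rejections": 0, "false_alarms": 0}
    for _ in range(n_trials):
        if rng.random() < go_prob:             # Go trial: the "M" (or "W")
            counts["hits"] += 1                # habitual tap, correct here
        elif rng.random() < slip_rate:         # No-Go trial, habit wins anyway
            counts["false_alarms"] += 1        # the 'error' the EEG flags
        else:
            counts["correct_rejections"] += 1  # tap successfully withheld
    return counts

print(run_go_nogo())
```

That is the entire behavioral universe from which "judgments and decision-making" are inferred.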
The scientific core of the study is a hypothesized brain function called “conflict monitoring.” The reason why liberals scored better than conservatives, the authors argued, is that the brain area responsible for this function was, by electrical measurement, more active in them than in conservatives.
The authors described CM as “a general mechanism for detecting when one’s habitual response tendency is mismatched with responses required by the current situation.” NYU’s press release called it “a mechanism for detecting when a habitual response is not appropriate for a new situation.” Amodio told the press that CM was “the process of detecting conflict between an ongoing pattern of behavior and a signal that says that something’s wrong with that behavior and you need to change it.”
The indictment sounds scientific: CM spots errors; conservatives are less sensitive to CM; therefore, conservatives make more errors. But the original definition of CM, written six years ago by the researchers who hypothesized it, didn’t presume that the habitual response was wrong, inappropriate, or objectively mismatched with current requirements. It presumed only that a stimulus had challenged the habit. According to the original definition, CM is “a system that monitors for the occurrence of conflicts in information processing.” It “evaluates current levels of conflict, then passes this information on to centers responsible for control, triggering them to adjust the strength of their influence on processing.”
In experiments such as Amodio’s, the habit is objectively wrong: You tapped the button, and the researcher knows that what you saw was a W. But real life is seldom that simple. Maybe what you saw — what you think you saw — will turn out to require a different response from the one that has hitherto served you well. Maybe it won’t. Maybe, on average, extra sensitivity to such conflicting cues will lead to better decisions. Maybe it won’t. Extra CM sensitivity does make you more likely to depart from your habit. But that doesn’t prove it’s more adaptive.
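The distinction is easier to see in code. The original conflict-monitoring model (Botvinick et al., 2001) amounts to a simple feedback loop: measure how strongly two incompatible responses are co-active, then turn up top-down control in proportion. The sketch below is mine, not the authors'; the Hopfield-energy conflict measure is theirs, but the function names and numbers are illustrative:

```python
def conflict(act_habit, act_alternative, weight=1.0):
    """Hopfield-energy conflict measure: high when two incompatible
    responses are simultaneously active. Nothing here asks whether the
    habitual response is objectively wrong."""
    return weight * act_habit * act_alternative

def update_control(gain, act_habit, act_alternative, rate=0.5):
    """The monitoring loop: detected conflict is passed on to control
    centers, which strengthen their influence on processing."""
    return gain + rate * conflict(act_habit, act_alternative)

gain = 1.0
# habit barely challenged, then strongly challenged
for acts in [(0.9, 0.1), (0.9, 0.8)]:
    gain = update_control(gain, *acts)
print(gain)
```

Greater sensitivity in this loop makes departures from habit more likely; whether any given departure is an improvement is decided elsewhere, by the world.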
Frank Sulloway, a Berkeley professor who co-authored a damning psychological analysis of conservatism four years ago, illustrates the problem. Appearing in the Times as a researcher “not connected to the study” — despite having co-written his similar 2003 analysis with one of its authors [Jost] — Sulloway endorsed the study and pointed out, “There is ample data from the history of science showing that social and political liberals indeed do tend to support major revolutions in science.” That’s true: When new ideas turn out to be right, liberals are vindicated. But when new ideas turn out to be wrong, they cease to be “revolutions in science,” so it’s hard to keep score of liberalism’s net results. And that’s in science, where errors, being relatively factual, are easiest to prove and correct. In culture and politics, errors can be unrecoverable.
Errors like — well, revolutions (Issue 25).
And of course social and political reactionaries have never contributed anything to science — notwithstanding nobodies like Sir Isaac Newton, a fundamentalist Christian royalist who enjoyed having counterfeiters hanged, drawn and quartered in the name of His Majesty the King; and Carl Friedrich Gauss, who “supported the monarchy and opposed Napoleon, whom he saw as an outgrowth of revolution.”
On the other hand, as Professor Sulloway kindly pointed out, “there is ample data from the history of science showing that social and political liberals indeed do tend” — to shout a lot about evolution when they’re trying to feel superior to Christians; not so much when Thomas Huxley is pointing out that “no rational man, cognizant of the facts, believes that the average negro is the equal, still less the superior, of the white man.”
I wonder: since Professor Sulloway’s one and only example of a “conservative” is George Bush (and his “single-minded commitment to the Iraq war”), what would he make of David Hume, whose ideal form of government was a “civilized monarchy”?
But I digress. Saletan:
The conservative case against this study is easy to make. Sure, we’re fonder of old ways than you are. That’s in our definition. Some of our people are obtuse; so are some of yours. If you studied the rest of us in real life, you’d find that while we second-guess the status quo less than you do, we second-guess putative reforms more than you do, so in terms of complexity, ambiguity, and critical thinking, it’s probably a wash. Also, our standard of “information” is a bit tougher than the blips and fads you fall for. Sometimes, these inclinations lead us astray. But over the long run, they’ve served us and society pretty well. It’s just that you notice all the times we were wrong and ignore all the times we were right.
In fact, that’s exactly what you’ve done in this study: You’ve manufactured a tiny world of letters, half-seconds, and button-pushing, so you can catch us in clear errors and keep out the part of life where our tendencies correct yours. And now you feel great about yourselves. Congratulations. You haven’t told us much about our way of thinking. But you’ve told us a lot about yours.
Conservatives aren’t principled; they’re terrified, according to a 2008 political science breakthrough, as interpreted by the American Association for the Advancement of Science, in ‘The Politics of Fear.’
Why do people have the attitudes they do toward social issues such as welfare, abortion, immigration, gay rights, school prayer, and capital punishment? The conventional explanations have to do with their economic circumstances, families, friends, and educations. But new research suggests that people with radically different social attitudes also differ in certain automatic fear responses.
‘Spiders, Maggots, Politics’ (Newsweek):
In other words, on the level of physiological reactions in the conservative mind, illegal immigrants may = spiders = gay marriages = maggot-filled wounds = abortion rights = bloodied faces.
‘Born to be conservative?’ (Los Angeles Times):
People with strongly conservative views were three times more fearful than staunch liberals after the effects of gender, age, income and education were factored out.
“Three times more fearful” — sounds like a real finding to me!
Their research, published in the journal Science, indicates that people who are sensitive to fear or threat are likely to support a right wing agenda.
Asked whether the findings imply a fearmongering strategy for conservatives, New York University psychologist David Amodio responded, “Yes. And some people believe that they are actively using this strategy.”
[New York University psychologist John] Jost condemned such tactics. “From an ethical standpoint, conservative campaigns should not exploit feelings of fear in the general population,” he said.
Hello again, Dr. Jost, Dr. Amodio. Gosh, I don’t mean to nitpick, but that is not quite what the study showed, as Scientific American explains (way down at the bottom):
Having conservative leanings predicted stronger physiological reactions to the scary pictures, including a spider sitting on a person’s face, a bloodied face and a maggot-infested wound. People who leaned more politically left didn’t respond any differently to those images than they did to pictures of a bowl of fruit, a rabbit or a happy child.
I don’t have a turntable on me so you’ll have to supply your own record scratch here: progressives “didn’t respond any differently”? Sort of like… psychopaths?
- ‘Psychopathy, startle blink modulation, and electrodermal reactivity in twin men’ (Psychophysiology, 2005)
- ‘Startle modulation in non-incarcerated men and women with psychopathic traits’ (Personality and Individual Differences, 2007)
Well, that’s not going to stop Scientific American from titling their article ‘Are you more likely to be politically left or right if you scare easily?’ Scare easily, mind you. As opposed to showing no reaction to maggot-infested wounds and bleeding faces — as would any decent, tolerant, intelligent, open-minded, free-thinking progressive-type person who supports abortion up until the moment of birth and wants to flood her country with Guatemalan peasants and Congolese cannibals.
(“Liberals aren’t psychopaths!” You’ve missed the point entirely: look how the study has been universally reported. This is critical journalism, not psychopharmaphysiology.)
By the way, The Last Psychiatrist dug up one of those images on NPR:
That ol’ right-wing agenda: not putting giant spiders on girls’ faces.
You know, since we’re apparently coming at this “from an ethical standpoint,” I feel like now might be a good time to revisit John Jost and Frank Sulloway’s nifty 2003 paper ‘Political Conservatism as Motivated Social Cognition’ in Psychological Bulletin.
Analyzing political conservatism as motivated social cognition integrates theories of personality (authoritarianism, dogmatism-intolerance of ambiguity), epistemic and existential needs (for closure, regulatory focus, terror management), and ideological rationalization (social dominance, system justification). A meta-analysis (88 samples, 12 countries, 22,818 cases) confirms that several psychological variables predict political conservatism: death anxiety (weighted mean r = .50); system instability (.47); dogmatism-intolerance of ambiguity (.34); openness to experience (–.32); uncertainty tolerance (–.27); needs for order, structure, and closure (.26); integrative complexity (–.20); fear of threat and loss (.18); and self-esteem (–.09). The core ideology of conservatism stresses resistance to change and justification of inequality and is motivated by needs that vary situationally and dispositionally to manage uncertainty and threat.
Sounds like a real finding to me! “Authoritarianism,” you say?
In using the word “fascist” to discredit their enemies, leftists from the 1960s to the present follow the lead of two Marxist thinkers: Erich Fromm and Theodor Adorno, who were among the more vocal in making the false connection between conservatism and fascism. That connection took the form, most importantly, of the identification of something called “the authoritarian personality,” a concept which emerged and grew alongside Adolf Hitler’s rise to power in the 1930s. In fact, Fromm and Adorno, as well as Herbert Marcuse and many other leftist intellectuals, were members of The Frankfurt School, a Marxist think-tank driven from Germany during that decade by Hitler himself — the ultimate authoritarian personality, and the model for the Institute’s defining psychological paradigm.
Fromm labeled the personality disorder in his 1947 book, Man For Himself. In the section of the book entitled Humanistic vs. Authoritarian Ethics, Fromm writes, “In authoritarian ethics an authority states what is good for man and lays down the laws and norms of conduct; in humanistic ethics man himself is both the norm giver and the subject of the norms, their formal source or regulative agency and their subject matter.” Fromm further asserts that “[v]irtue is responsibility toward his [man’s] own existence. … [V]ice is irresponsibility toward himself.” In his attempts to discredit “authoritarianism,” Fromm lays out the solipsistic leftist principle that self-reference is the only way to discover and apply values and principles.
During the mid-1940s, Adorno and others conducted research that would be presented in a book titled The Authoritarian Personality. Their work typified the methodology of leftist thinkers: they identified a number of psychological characteristics of subjects they had observed to be most sympathetic to the Nazi message and then proposed that there must be a psychological type that conformed to these characteristics. After that, they interviewed for the study itself the very subjects they’d used to derive their “type.” This pretty much guaranteed that their theory would be borne out, since they knew in advance what the answers to their questions would be.
Lo and behold, they “discovered” that there indeed did exist among American citizens people who were of the “authoritarian personality” type. Furthermore, these people shared many of their personality traits with Nazi leaders and Nazi supporters. Predictably, their character traits were precisely opposite those of the “good guys,” that is, of people who tended to be pro-communist and pro-socialist. The book claims to describe nothing less than “the rise of an ‘anthropological’ species we call the authoritarian type of man.”
(“Anthropological species,” unlike races, are totally a real thing.)
I don’t want to get all conspiratorial, reader, but there seems to be an awfully fine line between what the universities assure me is legitimate social-scientific research, on the one hand, and straight-up cultural Marxism, on the other — the latter being basically a deliberate scheme to take over the social science departments of American universities by a gaggle of German Jews whose commitment to academic integrity was, shall we say, significantly weaker (p < 0.00001) than their devotion to Marxist ideology.
In 1923, the Frankfurt School — a Marxist think-tank — was founded in Weimar Germany. Among its founders were Georg Lukacs, Herbert Marcuse, and Theodor Adorno. The school was a multidisciplinary effort which included sociologists, sexologists, and psychologists.
The primary goal of the Frankfurt School was to translate Marxism from economic terms into cultural terms. It would provide the ideas on which to base a new political theory of revolution based on culture, harnessing new oppressed groups in place of the faithless proletariat. Smashing religion and morals, it would also build a constituency among academics, who could build careers studying and writing about the new oppression.
Toward this end, Marcuse — who favored polymorphous perversion — expanded the ranks of Gramsci’s new proletariat by including homosexuals, lesbians, and transsexuals. Into this was spliced Lukacs’ radical sex education and cultural terrorism tactics. Gramsci’s ‘long march’ was added to the mix, and then all of this was wedded to Freudian psychoanalysis and psychological conditioning techniques. The end product was Cultural Marxism, now known in the West as multiculturalism.
Additional intellectual firepower was required: a theory to pathologize what was to be destroyed. In 1950, the Frankfurt School augmented Cultural Marxism with Theodor Adorno’s idea of the ‘authoritarian personality.’ This concept is premised on the notion that Christianity, capitalism, and the traditional family create a character prone to racism and fascism. Thus, anyone who upholds America’s traditional moral values and institutions is both racist and fascist. Children raised by traditional values parents, we are told to believe, will almost certainly become racists and fascists. By extension, if fascism and racism are endemic to America’s traditional culture, then everyone raised in the traditions of God, family, patriotism, gun ownership, or free markets is in need of psychological help.
Probably just a coincidence.
You know the social “sciences” are never going to stop churning out these earth-shattering, mind-boggling studies, right? They get paid to inflict them on us. In 2009 (I’m going year by year, if you hadn’t noticed), we learned that conservatives are not only stupid, closed-minded and scared of the dark, but easily grossed out, as well:
[Cornell University psychology professor David] Pizarro explains that disgust is evolution’s way of protecting us from disease. Unfortunately, in his view, disgust is now used to make moral judgments.
Liberals and conservatives disagree about whether disgust has a valid place in making moral judgments, Pizarro argues. Some conservatives think there is inherent wisdom in repugnance, that feeling disgusted about something — gay sex between consenting adults, for example — is cause enough to judge it wrong or immoral, even lacking a concrete reason, Pizarro explains. Liberals tend to disagree, and are more likely to base judgments on whether an action or a thing causes actual harm, he said.
Studying the link between disgust and moral judgment could help explain the strong differences in people’s moral opinions, Pizarro figures. And it could offer strategies for persuading some to change their views.
Just some, mind you; specifically, the ones who make unfortunate “moral judgments.” Liberals, after all, base their judgments “on whether an action or a thing causes actual harm” — which admittedly raises the question of how liberals can still support, e.g., decolonization (Issue 12), anti-racism (Issue 18), or feminism (Issue 24)…
In any case, two years later, Pizarro returned — and he brought some friends:
“This is one more piece of evidence that we, quite literally, have gut feelings about politics,” said study researcher Kevin Smith, a political science professor at the University of Nebraska-Lincoln. “Our political attitudes and behaviors are reflected in our biology.”
“I think that one plausible explanation is sort of along the lines that one way to understand some of these attitudes about politics and morality is that they have a strong emotional component,” Pizarro told LiveScience in a telephone interview. Different emotions are linked with different kinds of judgments and behavior, he added. For instance, fear is linked to vigilance and preparedness, he said, while disgust is linked to steering clear of any sort of contamination, “foreign looking” things, or possibly even strange people.
As such, people who are more easily disgusted may be more likely to take on political views that help them avoid these “disgusting” situations.
The actual images considered disgusting by “squeamish” conservatives, not so much by liberal gay-porn enthusiasts: “a man in the process of eating a mouthful of writhing worms; a horribly emaciated but alive body; human excrement floating in a toilet; a bloody wound; and an open sore with maggots in it.” “Consenting adults”!
The 2011 article cites not only Pizarro’s study (above), but also…
A 2008 study by scientists at the University of Nebraska-Lincoln (UNL) found that people who are highly responsive to threatening images were likely to support defense spending, capital punishment, patriotism and the Iraq War.
You might recall that “highly responsive,” in this case, means not a psychopath.
(That sound you hear is the social science mainstream chanting “Rethuglicans be dum! LOL LOL” as they flush four hundred years of Anglo-American political history down the toilet to send a few more votes to the party with the right colour skin.)
Racists may simply be frightened out of their evil little minds, German psychiatrists revealed in 2010. White racists, of course — what other kind is there?
Children with a genetic condition that quells their fear of strangers don’t stereotype based on race, according to a new study. […]
To further explore the role of social fear in racial prejudice, a team led by Andreas Meyer-Lindenberg, a psychiatrist at the Central Institute of Mental Health in Mannheim, Germany, examined the attitudes of a group notably lacking it: children with a genetic disorder called Williams syndrome. Caused by the deletion of genes on chromosome 7, Williams syndrome often leads to mild to moderate mental and growth retardation as well as an intense sociability and lack of social fear, especially of strangers. […]
Meyer-Lindenberg and his colleagues, cognitive neuroscientists Andreia Santos and Christine Deruelle of the Mediterranean Institute for Cognitive Neuroscience in Marseille, France, gave a standard test of racial and gender attitudes to 10 boys and 10 girls with Williams syndrome. The children, aged 7 to 16 years, were all French Caucasians. In one typical question, children were shown drawings of two boys, one darker skinned and one lighter, and told that one of them had been naughty because he drew pictures on the walls of his house with crayons. They were then asked which of the two boys was naughty. […]
The control group demonstrated racial stereotyping by assigning negative qualities to dark-skinned individuals on an average of 83% of their responses, but the Williams children did so an average of only 64% of the time, the team found. […] The results suggest that interventions designed to reduce social fears might be the best approach to countering racism, says Meyer-Lindenberg.
Cognitively normal children correctly associate darker skin with higher aggression. Pathologically fearless children are less likely to do so. Therefore, the government should hire social scientists to train normal children to ignore their instincts and to love and trust all dark-skinned people — for science! Yes, that’s the ticket:
“If social fear was culturally reduced, racial stereotypes could also be reduced,” Meyer-Lindenberg said.
Brilliant! What could possibly go wrong?
The woman had just left the Babies R Us store when she noticed a man in a tattered military coat lurking in the parking lot, she told police. The woman told detectives she was worried because the man looked like a thug, but she didn’t want to seem racist.
She had just purchased a princess hat for her daughter’s first birthday pictures when the man stepped up behind her and pulled a gun from under his coat.
The woman offered to give the man several hundred dollars she had in her car, and drove him to the bank to withdraw another $500. The man then demanded she drive to Fruitvale Junior High, where he raped the woman at gunpoint in front of her daughter.
Rape experts with the Alliance Against Family Violence and Sexual Assault’s Outreach Center would not comment on this particular case, but said relying on instincts can often save a victim’s life.
“Pay attention to those gut feelings if something does not feel quite right to you. It’s OK to walk away from your car and go somewhere else,” said Alliance Outreach Center Supervisor Carrie Zerafa.
After the tour, a security guard called [Eugenia] Calle and said that [Shamal] Thompson was in the lobby.
“Would you like for me to escort him up?” the guard asked Calle, according to Meadows.
“No, it’ll be fine,” Calle responded. “I don’t want him to think that we don’t trust him.”
Calle’s body was discovered about nine hours later, around 11 p.m. Tuesday, by her fiance, an Atlanta tax attorney whose name has not been released.
In 2006, Thompson was arrested by DeKalb police on burglary charges. He was sentenced last April to 10 years in prison, with six months to serve. He was credited with about two months’ time served and the sentence was reduced to time served, court records show.
A renowned epidemiologist, Calle retired last month from her post as vice president of the epidemiology department at the Atlanta-based American Cancer Society. Her research in epidemiology, which is the study of disease, helped establish the link between cancer and obesity, and cancer and diet.
More importantly, she wasn’t a frightened little racist.
Getting back to the German study, note the classic trick in “race prejudice” research: fit the experiment to the desired conclusion by testing only white kids — because it’s hard to claim with a straight face that “prejudice stems from fear of people from different social groups” when black kids also perceive a higher threat in dark-skinned people:
Research presented by the fertility and development expert Robert Winston last year suggested that children as young as four hold racist views. An experiment for his BBC1 series Child of Our Time showed that young children identified black people as potential troublemakers and criminals. It also showed that children of all backgrounds prefer white people, associating them with success and trustworthiness.
A similar trick of social scientism gave America desegregation, followed shortly by forced integration, as Jared Taylor (American Renaissance) explains:
An immense amount has been written about the Brown decision, and its unsavory background has gradually come to light. It is clear that Earl Warren and several other justices wanted to end segregation in schools and were determined to do it with or without legal justification. […]
Even with doctored evidence there was no Constitutional basis for the Justices’ ambitions, so they skipped the Constitution entirely and based their ruling on social science (see “Brown v. Board: the Real Story,” AR, July 2004). This “science” was mainly black sociologist Kenneth Clark’s notorious doll studies.
Clark claimed that when black students in segregated schools were shown two dolls — one black, the other white — and asked which one they liked better, a substantial number chose the white doll. He claimed this meant segregation instilled feelings of inferiority. What he failed to say — but what was known to the lawyers arguing the case for segregation — was that his own research contradicted that claim. He had run the same experiment with children in integrated schools in Massachusetts and had found that even more of them preferred the white doll. If his research showed anything about feelings of inferiority, it was that integration made them worse. His evidence before the Supreme Court was deceitful.
The main lawyer arguing the case for segregation, John W. Davis, knew this. However, he believed the Supreme Court of the United States of America would never stoop so low as to base a ruling on whether a practice made people feel bad rather than on whether it was Constitutional. He did not even bother to reply to the doll study, noting, “I can only say that if that sort of ‘fluff’ can move any court, ‘God save the state.’”
“Fluff” prevailed. The justices struck down segregation because they found that it “generates a feeling of inferiority as to their [blacks’] status in the community that may affect their hearts and minds in a way unlikely ever to be undone.” Even the New York Times recognized the decision as a con job. Its sub-headline in the article announcing Brown read: “A Sociological Decision: Court Founded Its Segregation Ruling On Hearts and Minds Rather Than Laws.”
Wikipedia offers an interesting footnote to this digression:
Regarding Brown, this question of psychological and psychic harm fit into a very particular historical window that allowed it to have formal traction in the first place. It wasn’t until a few decades prior (with the coming of Boas and other cultural anthropologists) that cultural and/or social science research — and the questions that they invoked — would even be consulted by the courts and therefore able to influence decisions.
In other words, we owe it all to cultural Marxism.
Racists are actually crazy, Dutch social scientists explained in 2011.
A messy or chaotic environment causes people to stereotype others, probably out of a need to control and organize the situation around them, new research suggests.
Stereotyping is much simpler than reality, allowing us to place people into clear-cut categories. In this way, stereotyping is a way to cope with chaos, acting as a mental cleaning device in the face of disorder. And while ordering thoughts isn’t problematic, Lindenberg has shown that this thought process actually manifests itself in discriminatory behavior.
Unfortunately (for the glorious global struggle against white people feeling the wrong feelings and thinking the wrong thoughts), this study turned out to be flawed — in the sense that lead author Diederik Stapel literally made up all the data for this and pretty much every other study he ever wrote.
The experiments, however, never took place, the universities concluded. Stapel made up the data sets, which he then gave the student or collaborator for analysis, investigators allege. In other instances, the report says, he told colleagues that he had an old data set lying around that he hadn’t yet had a chance to analyze. When Stapel did conduct actual experiments, the committee found evidence that he manipulated the results.
There’s actually a larger issue here: that the social “sciences” in general are utterly corrupt pseudosciences and exist mainly to infect children with progressive doctrine, as the New York Times admitted in 2013.
At the end of November, the universities unveiled their final report at a joint news conference: Stapel had committed fraud in at least 55 of his papers, as well as in 10 Ph.D. dissertations written by his students. The students were not culpable, even though their work was now tarnished. The field of psychology was indicted, too, with a finding that Stapel’s fraud went undetected for so long because of “a general culture of careless, selective and uncritical handling of research and data.” If Stapel was solely to blame for making stuff up, the report stated, his peers, journal editors and reviewers of the field’s top journals were to blame for letting him get away with it. The committees identified several practices as “sloppy science” — misuse of statistics, ignoring of data that do not conform to a desired hypothesis and the pursuit of a compelling story no matter how scientifically unsupported it may be.
The adjective “sloppy” seems charitable. Several psychologists I spoke to admitted that each of these more common practices was as deliberate as any of Stapel’s wholesale fabrications. Each was a choice made by the scientist every time he or she came to a fork in the road of experimental research — one way pointing to the truth, however dull and unsatisfying, and the other beckoning the researcher toward a rosier and more notable result that could be patently false or only partly true. What may be most troubling about the research culture the committees describe in their report are the plentiful opportunities and incentives for fraud. “The cookie jar was on the table without a lid” is how Stapel put it to me once. Those who suspect a colleague of fraud may be inclined to keep mum because of the potential costs of whistle-blowing.
The key to why Stapel got away with his fabrications for so long lies in his keen understanding of the sociology of his field. “I didn’t do strange stuff, I never said let’s do an experiment to show that the earth is flat,” he said. “I always checked — this may be by a cunning manipulative mind — that the experiment was reasonable, that it followed from the research that had come before, that it was just this extra step that everybody was waiting for.” He always read the research literature extensively to generate his hypotheses. “So that it was believable and could be argued that this was the only logical thing you would find,” he said. “Everybody wants you to be novel and creative, but you also need to be truthful and likely. You need to be able to say that this is completely new and exciting, but it’s very likely given what we know so far.”
Fraud like Stapel’s — brazen and careless in hindsight — might represent a lesser threat to the integrity of science than the massaging of data and selective reporting of experiments. The young professor who backed the two student whistle-blowers told me that tweaking results — like stopping data collection once the results confirm a hypothesis — is a common practice. “I could certainly see that if you do it in more subtle ways, it’s more difficult to detect,” Ap Dijksterhuis, one of the Netherlands’ best known psychologists, told me. He added that the field was making a sustained effort to remedy the problems that have been brought to light by Stapel’s fraud.
Social “scientists” are big fans of statistics, right? Well, here are some neat statistics courtesy of science writer Ed Yong and the journal Nature (2012):
In a survey of more than 2,000 psychologists, Leslie John, a consumer psychologist from Harvard Business School in Boston, Massachusetts, showed that more than 50% had waited to decide whether to collect more data until they had checked the significance of their results, thereby allowing them to hold out until positive results materialize. More than 40% had selectively reported studies that “worked.” On average, most respondents felt that these practices were defensible. “Many people continue to use these approaches because that is how they were taught,” says Brent Roberts, a psychologist at the University of Illinois at Urbana-Champaign.
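That first practice, peeking at the data and only “deciding whether to collect more” when the result isn’t yet significant, is not a harmless quirk: it reliably inflates the false-positive rate well above the nominal 5%, even when there is no effect at all. Here is a minimal simulation sketch of the idea (all names and parameters are my own illustrative choices, not from any of the studies discussed):

```python
import math
import random

random.seed(0)

ALPHA = 0.05      # nominal significance threshold
BATCH = 10        # observations added per "peek"
MAX_BATCHES = 5   # up to 50 observations per study
N_STUDIES = 2000  # simulated studies with a true effect of exactly zero

def p_value(sample):
    # Two-sided one-sample z-test against mean 0 (sd = 1 is known,
    # since we generate the data ourselves from N(0, 1)).
    z = (sum(sample) / len(sample)) * math.sqrt(len(sample))
    return math.erfc(abs(z) / math.sqrt(2))

def run_study(peek):
    """Return True if the study ends up 'significant'."""
    data = []
    for _ in range(MAX_BATCHES):
        data += [random.gauss(0, 1) for _ in range(BATCH)]
        if peek and p_value(data) < ALPHA:
            return True  # stop early: the result "worked"
    return p_value(data) < ALPHA

fixed = sum(run_study(peek=False) for _ in range(N_STUDIES)) / N_STUDIES
peeked = sum(run_study(peek=True) for _ in range(N_STUDIES)) / N_STUDIES

print(f"fixed-n false-positive rate: {fixed:.3f}")   # close to the nominal 0.05
print(f"peek-and-stop rate:          {peeked:.3f}")  # substantially higher
```

With five equally spaced looks, the effective false-positive rate lands in the neighborhood of 14% rather than 5%, which is why sequential testing normally requires corrected thresholds rather than repeated checks at α = .05.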
Replication bias (image)
Professor Stapel is far from the only known fraud:
These practices can create an environment in which misconduct goes undetected. In November 2011, Diederik Stapel, a social psychologist from Tilburg University in the Netherlands and a rising star in the field, was investigated for, and eventually confessed to, scientific fraud on a massive scale. […]
Stapel’s story mirrors those of psychologists Karen Ruggiero and Marc Hauser from Harvard University in Cambridge, Massachusetts, who published high-profile results on discrimination and morality, respectively. Ruggiero was found guilty of research fraud in 2001 and Hauser was found guilty of misconduct in 2010. Like Stapel, they were exposed by internal whistle-blowers. “If the field was truly self-correcting, why didn’t we correct any single one of them?” asks Nosek.
“High-profile results on discrimination” — call me crazy, but I’m starting to see a pattern in the sort of misconduct that’s allowed to thrive in the social “sciences.” Economist John A. List gives us another example in an interview with the Federal Reserve Bank of Richmond:
Your paper with Roland Fryer and Steven Levitt came to a somewhat ambiguous conclusion about whether stereotype threat exists. But do you have a hunch regarding the answer to that question based on the results of your experiment?
I believe in priming. Psychologists have shown us the power of priming, and stereotype threat is an interesting type of priming. Claude Steele, a psychologist at Stanford, popularized the term stereotype threat. He had people taking a math exam, for example, jot down whether they were male or female on top of their exams, and he found that when you wrote down that you were female, you performed less well than if you did not write down that you were female. They call this the stereotype threat. My first instinct was that effect probably does happen, but you could use incentives to make it go away. And what I mean by that is, if the test is important enough or if you overlaid monetary incentives on that test, then the stereotype threat would largely disappear, or become economically irrelevant.
So we designed the experiment to test that, and we found that we could not even induce stereotype threat. We did everything we could to try to get it. We announced to them, “Women do not perform as well as men on this test and we want you now to put your gender on the top of the test.” And other social scientists would say, that’s crazy — if you do that, you will get stereotype threat every time. But we still didn’t get it.
What that led me to believe is that, while I think that priming works, I think that stereotype threat has a lot of important boundaries that severely limit its generalizability. I think what has happened is, a few people found this result early on and now there’s publication bias. But when you talk behind the scenes to people in the profession, they have a hard time finding it. So what do they do in that case? A lot of people just shelve that experiment; they say it must be wrong because there are 10 papers in the literature that find it. Well, if there have been 200 studies that try to find it, 10 should find it, right? This is a Type II error but people still believe in the theory of stereotype threat. I think that there are a lot of reasons why it does not occur. So while I believe in priming, I am not convinced that stereotype threat is important.
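List’s arithmetic is easy to verify: if 200 independent studies test a true-null hypothesis at the conventional α = .05 threshold, roughly 10 will come up “significant” by chance alone (strictly speaking, these are false positives, i.e., Type I errors). A quick sketch, with the repetition count chosen arbitrarily for illustration:

```python
import random

random.seed(1)

ALPHA = 0.05     # conventional significance threshold
STUDIES = 200    # independent tests of a true-null hypothesis
TRIALS = 10_000  # repeat the whole 200-study "literature" many times

# Under the null, each study is 'significant' with probability ALPHA,
# so a 200-study literature yields about 200 * 0.05 = 10 false positives.
hits = [sum(random.random() < ALPHA for _ in range(STUDIES))
        for _ in range(TRIALS)]
mean_hits = sum(hits) / TRIALS
print(f"average 'positive' studies out of {STUDIES}: {mean_hits:.1f}")
```

If only those ~10 reach print while the other ~190 null results sit in file drawers, the published record looks like solid support for an effect that, on the full evidence, does not exist.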
And yet when science journalist John Horgan is demanding that the government ban “research on race and intelligence” (Scientific American, 2013), he makes an exception for research on this apparently non-existent “stereotype threat.”
I’m torn over how to respond to research on race and intelligence. Part of me wants to scientifically rebut the IQ-related claims of Herrnstein, Murray, Watson and Richwine.
(Note that he does not do so.)
But another part of me wonders whether research on race and intelligence — given the persistence of racism in the U.S. and elsewhere — should simply be banned. I don’t say this lightly. For the most part, I am a hard-core defender of freedom of speech and science. But research on race and intelligence — no matter what its conclusions are — seems to me to have no redeeming value.
Far from it. The claims of researchers like Murray, Herrnstein and Richwine could easily become self-fulfilling, by bolstering the confirmation bias of racists and by convincing minority children, their parents and teachers that the children are innately, immutably inferior. (See Post-postscript below.)
A “Post-postscript” in which we learn that Mr. Horgan only wants to ban scientific experiments whose outcomes are politically inconvenient for progressives — an effective way to avoid “confirmation bias,” I’m sure we can all agree!
Post-Postscript: Scientific American has just published two excellent article [sic] on “stereotype threat,” which is a kind of reverse placebo — or “nocebo” — effect; victims of negative stereotypes may underperform because they believe the stereotype. […] Some clever critics of my post might accuse me of hypocrisy, because these articles present esearch [sic] on race and and [sic] should be subject to my proposed ban. Obviously I’m trying to eliminate research that reinforces rather than counteracting racism. I mean, Duh.
The truth is, ladies and gentlemen, racists like us are just plain stupid, according to a thoroughly scientastic 2012 paper by a pair of Canadian psychologists.
Despite their important implications for interpersonal behaviors and relations, cognitive abilities have been largely ignored as explanations of prejudice. We proposed and tested mediation models in which lower cognitive ability predicts greater prejudice, an effect mediated through the endorsement of right-wing ideologies (social conservatism, right-wing authoritarianism) and low levels of contact with out-groups. In an analysis of two large-scale, nationally representative United Kingdom data sets (N = 15,874), we found that lower general intelligence (g) in childhood predicts greater racism in adulthood, and this effect was largely mediated via conservative ideology. […] All analyses controlled for education and socioeconomic status. Our results suggest that cognitive abilities play a critical, albeit underappreciated, role in prejudice. Consequently, we recommend a heightened focus on cognitive ability in research on prejudice and a better integration of cognitive ability into prejudice models.
In other news, psychology professors (1) have high IQs, (2) surround themselves with high-IQ outliers from as many racial “out-groups” as possible, (3) have thus managed to convince themselves of various progressive fairy tales including racial equality, (4) would never report being racist anyway, and (5) are a bunch of communists; whereas, e.g., working-class whites (a) have normal IQs, (b) can’t afford to flee the rising tide of vibrant diversity, (c) harbor common sense (true) beliefs about racial inequality, (d) will admit them in a study, and (e) are not a bunch of commie filth.
Scientism refuted. You’re welcome.
And then of course there’s this, from the American Sociological Association:
Smart people are just as racist as their less intelligent peers — they’re just better at concealing their prejudice, according to a University of Michigan study.
“High-ability whites are less likely to report prejudiced attitudes and more likely to say they support racial integration in principle,” said Geoffrey Wodtke, a doctoral candidate in sociology. “But they are no more likely than lower-ability whites to support open housing laws and are less likely to support school busing and affirmative action programs.”
But never mind that: ‘Low IQ & Conservative Beliefs Linked to Prejudice’ (LiveScience).
There’s no gentle way to put it: People who give in to racism and prejudice may simply be dumb, according to a new study that is bound to stir public controversy.
Interesting bit of trivia: intelligence testing also shows — and this, at least, is a reliably replicable finding — that black people have much lower cognitive ability than white people, but believing that is generally considered to be a form of… well, you know.
Average IQ score for indigenous populations (image)
So what about blacks? Are they simply dumb? How dumb are they? The blacks, I mean. How dumb are the blacks? They sure seem dumb. Sorry, there’s no gentle way to put it!
As suspected, low intelligence in childhood corresponded with racism in adulthood.
Low intelligence corresponds even better with low income in adulthood. Hang on, why do we have affirmative action, again? What with blacks being so dumb and all.
Nonetheless, there is reason to believe that strict right-wing ideology might appeal to those who have trouble grasping the complexity of the world.
Then how come Haitians and Mexicans vote Democrat? They’re pretty dumb, after all.
Prejudice is of particular interest because understanding the roots of racism and bias could help eliminate them, Hodson said. For example, he said, many anti-prejudice programs encourage participants to see things from another group’s point of view. That mental exercise may be too taxing for people of low IQ.
I guess that explains why so many black people hate white people and want to hurt them (Issue 18). Empathy is just too darn taxing, as philosopher and educator Gedaliah Braun deduced from more than 30 years of living among Africans. From his essential essay Morality and Abstract Thinking (2009):
One of the pivotal ideas underpinning morality is the Golden Rule: do unto others as you would have them do unto you. “How would you feel if someone stole everything you owned? Well, that’s how he would feel if you robbed him.” The subjunctivity here is obvious. But if Africans may generally lack this concept, they will have difficulty in understanding the Golden Rule and, to that extent, in understanding morality.
If this is true we might also expect their capacity for human empathy to be diminished, and this is suggested in the examples cited above. After all, how do we empathize? When we hear about things like “necklacing” we instinctively — and unconsciously — think: “How would I feel if I were that person?” Of course I am not and cannot be that person, but to imagine being that person gives us valuable moral “information:” that we wouldn’t want this to happen to us and so we shouldn’t want it to happen to others. To the extent people are deficient in such abstract thinking, they will be deficient in moral understanding and hence in human empathy — which is what we tend to find in Africans.
And that calls to mind what William Hannibal Thomas, who was of African descent himself, wrote in The American Negro (1901, pp. 109–116):
In applying these mental laws to the freed people the fair inference is that the supreme difficulty of the negro mind is its inability to parallel internal with external environing phenomena, because there is no apparent intelligent apprehension and adjustment of its internal mental states to external facts. The negro has all the physical endowments of intellect, but he has a mind that never thinks in complex terms, and, as it is wholly engrossed with units of phenomena, the states of consciousness aroused by visual or textual impressions rarely suggest sequences. The consequence is that the freedman exhibits great mental density, and gives conclusive evidence that he has neither clear nor distinct perceptions of specific facts, inasmuch as in every attempt at primary reasoning he falls into confusion and error. There is also reason to believe that the negro neither associates correlated facts, nor deduces logical sequences from obvious cases. He is largely devoid of imagination in all that relates to purely intellectual exercises, though he has fairly vivid conceptions of such physical objects as appeal to the passions or appetite.
There is then no intelligent comprehension and adjustment of the freedman’s internal mental states to external facts, because the former is only engrossed with units of phenomena. That many negro people are incapable of associating numbers, beyond a few elementary numerals, verifies this conclusion. […] Of course, such mental conditions forbid any apprehension of simultaneous or successive events being acquired; and we shall infer that, so long as the negro is devoid of the power of introspection, he will neither have the ability to analyze his own defects nor determine his relation to external phenomena.
The negro not only lacks a fair degree of intuitive knowledge, but so dense is his understanding that he blindly follows weird fantasies and hideous phantoms. So great is his predilection in this direction, that he appears incapable of understanding the difference between evidence and assertion, proof and surmise. These facts warrant the conclusion that negro intelligence is both superficial and delusive, because, though such people excel in recollections of a concrete object, their retentive memories do not enable them to make any valuable deductions, either from the object itself, or from their familiar experience with it. The explication of such intellectual states is to be found in the fact that the chief mental anxiety of the freedman is for the immediate gratification of his physical senses. He lives wholly in his passions, and is never so happy as when enveloped in the glitter and gloss of shams. But while the frankness of these reflections is only equalled by their veracity, we must remember that the negro represents an illiterate race, in which ignorance, cowardice, folly, and idleness are rife, and one whose existence is dominated by emotional sensations. It is only from this standpoint, and with this knowledge of his being, that his characteristic qualities can be fairly ascertained.
To those who know the freedman the fact is obvious that the highest aspiration of negro ambition is, not to acquire the essential spirit of knowledge, but to imitate mechanically what he only succeeds in caricaturing. He certainly is not an intelligent observer of facts, for he can seldom accurately describe objects of daily contact, and one may easily discover that there is no such thing in his mind as capacity for original definition and analysis. We are also thoroughly convinced that the negro mind is rarely awakened to responsive intelligence by meditative reasoning. […]
To make our meaning clear we need only to state that when the negro acquires the terminology of things he vainly imagines himself endowed with a knowledge of the subject-matter represented thereby, and so gets credit from the unthinking class for a knowledge he does not possess. And his mental environment is not only complicated by the fact that in his quest for knowledge he never gets beyond unverified theoretical assertion and memorized verbal learning, but also because the opinions he asserts are echoings of second-hand utterances. […]
As American philanthropy has never instituted searching inquiry respecting the fundamental characteristics of negro nature, it is pertinent to say that, if the foregoing deductions are correct, it follows that our negro educational methods are both wrong and profitless. […]
The educated class of the race — the result of such teachings — rarely seeks for truth in statement, but rather for rhetorical flourish, skill in contradiction, and word juggling, in order to confuse, bewilder, and impose upon their hearers, and to make those among the less informed believe them to be wise and all-knowing. So long as this shameless mendacity or pretence of learning continues, it will prove a fundamental hindrance to race awakening.
Of course, books are full of hate facts, which is why only stupid racists read books.
Needless to say, decent people — meaning people who believe whatever early 21st century progressivism tells them to believe — don’t actually need science to tell them that “racists” (white, obviously) are mentally defective. They knew that already!
Indeed, a quick tour of the ’net reveals that “racists” are widely considered to be “dumb,” “stupid,” “crazy,” “evil,” “hateful,” “vile,” “deranged,” “disgusting,” “infuriating,” “cowardly” people, who “hate life” and make us “feel sick” on account of their “twisted hate,” and because they are “idiots,” “freaks,” “trolls,” “cranks,” “scum,” and “not people society wants,” as well as many other, unprintable things; who will be “racist forever” and “deserve to die” and should “go fucking die” and “do indeed deserve whatever harassment you’d like to throw at them” and “I hope you and your nasty kind suffer and die,” “die,” “just die,” “die alone,” “murder,” “die,” etc., etc.
lol dehumanization (image)
That may be why, when Towson University professor Ben Warner met Matthew Heimbach, a real live white nationalist, in 2013, Warner was shocked that someone who believes what most normal white people believed up until at least the mid 20th century — but has since been proven to be “racist” by Marxist academics — could do such a convincing impression of a human being with thoughts and feelings and stuff.
If I was once ashamed of my excitement at having him in class, that shame didn’t keep me from talking about him. Though it made me uncomfortable, he’d become the most interesting part of my teaching. I was primed for something to boil over, but I also found myself liking him. He arrived to class on time; he was prepared; he was respectful. He had a way of calling me professor in the middle of sentences that appealed to my ego. […]
One day, he came in a half-hour after class had started, and then stuck around to apologize.
“Sorry I was late,” he told me. “I was dealing with the police.”
“The police?” I said.
“I might not be in class on Thursday. I got my first death threat.” He lifted his computer to show me his Facebook wall.
“Usually, they just call me a motherfucker or tell me to suck a dick,” he said. “But this one says they’re going to kill me on Thursday.”
“That’s terrible,” I said. “What are the police doing?”
“They’re not doing anything.”
“You shouldn’t have to deal with that,” I said, while at the same time knowing that — had he not been in my class, had I not seen him furrowing his brow over Carver or O’Connor, asking serious questions about characterization and voice — a more reprehensible me might have muttered he deserved it.
On May 1 — May Day — there had been Workers of the World-type protests in D.C. The emails in my inbox had subject lines like, “At it again,” and “Is this what it’s like in class?” They linked to videos of the White Student Union standing in a line across a street, calmly leaning Confederate flags (more appropriately sized for poles) against their shoulders, as a wave of bearded and backpacked anarchists approached them, flanked them, and began to call them “fuckers” and “racist pigs.”
They chanted, “Nazi scum, your time will come,” told them to “get in the fucking ground where you belong,” and held extended middle fingers fractions of inches away from their faces. There was my student, unflinching, chewing a piece of gum, occasionally asking someone to stop grabbing at his flag. In other videos, he posed measured questions about affirmative action to his screaming counterparts. It was only after someone managed to rip his flag from his hands that he lunged forward, was caught up in a shoving match, and was lost in a swarm of police. Later, I read that bags of urine had been thrown at him.
I guess that for someone accustomed to an all-in brand of righteousness, my suggestion of just shutting up had lacked a certain credibility. What kind of “connection” had I really made? I’d told him to keep his opinions to himself. Was I comfortable with that advice, even when giving it to someone who had advocated for a whites-only state?
“I know,” he’d said to me earlier in the semester, “you probably don’t agree with my politics, because no professors do, but you’re one of the only ones who treats me like a human.”
I’d wanted to receive it as a compliment, but does receiving a compliment always mean letting down your guard?
In a way, I was getting my own education in human contradiction. As much as I railed against it in my students’ stories, I’d been acting no differently in my attempts to oversimplify — to fit him most easily into a prefabricated slot in my mind. I had not wanted to acknowledge his complexity, let alone adjust my teaching style to it. Was he the manipulative leader of a dangerous hate group? Or a college kid experimenting with the power of his voice? Of course, he could be both, and he could be many other things to which spending two and a half hours a week with him had granted me no access.
It was no wonder he’d become frustrated trying to exist in media sound bites. Who among us can keep a message on fire in 30-second bursts — for good or bad — before having to lie down in the straw, exhausted, demoralized or forgotten?
On one of the last days of the semester I saw him in the hall, hours before we were due to meet for class. He was playing with his phone.
“Hi,” I said, catching him off guard. He looked up and said hello as though he had no idea who I was. But after class, later that day, he stayed again. “Professor,” he said, “I just want to really thank you for saying hello to me today.”
“Yeah?” I said.
“This morning, a girl spit on me. When you said hello, it really turned my day around.”
“I’m sorry,” I said to him. “You shouldn’t have to go through that.”
What should he have to go through? I still had no idea.
The Internet’s ever-so-courageous swarm of “anti-racists” has a few ideas, though:
“I have pale skin, and people who are racist ENRAGE ME,” according to one. “Some horrible, disgusting white trash piece of filth just talked to me on the internet.”
“This really makes me MAD,” writes another social justice warrior, “that people can still openly exhibit disgusting racist habits in our lovely rainbow nation!”
“If I were in charge, I’d put all the racists in concentration camps,” explains a fearless Chinese “anti-racist,” since “racists don’t even qualify as human in my book.”
Bear in mind, knowing too much about human biodiversity and evolutionary history practically guarantees that you’re a “racist.” In fact, any decent college class on anthropology will necessarily make you into a “racist.” Anthropologist Gregory Cochran:
There are a lot of facts like this. It’s made even worse by charlatans spewing falsehoods — sometimes that’s all that the typical undergraduate is exposed to. For example, average brain size is not the same in all human populations. Average cranial capacity in Europeans is about 1362; 1380 in Asians, 1276 in Africans. It’s about 1270 in New Guinea. Generally there is a trend with latitude — brain volume is lowest near the equator. And no, despite Gould’s bushwa, there is nothing especially difficult about measuring brain volume. Direct measurement of a healthy brain is best; but that is now done, using magnetic resonance imagery, and the results are about the same — a mean black-white difference of about 1 standard deviation.
Graduate students in anthropology generally don’t know those facts about average brain volume in different populations. Some of those students stumbled onto claims about such differences and emailed a physical anthropologist I know, asking if those differences really exist. He tells them ‘yep’ — I’m not sure what happens next. Most likely they keep their mouths shut. Ain’t it great, living in a free country?
Average brain volume for indigenous populations (image)
In light of these hate facts, consider, if you please, one of the most astonishing and revealing documents of our time: a poster advertising a “Diversity Literacy” course at the University of Cape Town in South Africa.
The Diversity Literati explain their contribution to anthropological disinformation:
Why the brains?
For most of the 19th and early 20th centuries the “science” of ‘race’ purported that different races had different sized brains and by extension mental capacities. Predictably, Europeans were supposed to have the biggest brains and Africans the smallest. This claim has since then been thoroughly discredited — there are no differences in brain size between the ‘races’ — and it is now widely accepted that this view served the interests of European racism and the justification of colonial expansion.
The poster is a play on this idea, suggesting that if any one group of people were to have smaller brains and impaired intellectual capacity it would be those who think that different races have different sized brains!
Welcome to Diversity Literacy!
Note that in the real world (that is, outside the “Diversity Literacy” classroom),
- the races, again, do in fact have brains of different sizes (for what it’s worth, Europeans neither have nor “were supposed to have” the largest);
- all attempts to discredit this incontrovertible scientific fact, notably by the fraud Stephen Jay Gould, have merely “served the interests” of cultural Marxism and the “justification” of Third World colonization of the West (Issue 5); and
- “colonial expansion” (white rule) was quite simply the best thing that ever happened to (what is now) the Third World (Issue 12).
All of which is beside the point, which, if it wasn’t obvious, is that progressive, “anti-racist” academics, educators, activists and public servants — you know, our betters; the people who tell children what to believe, and either refuse to look up or pretend not to know about race differences in intelligence, race differences in brain size, and the known link between intelligence and brain size — will howl with delight that white people (ugh) who feel the wrong feelings and think the wrong thoughts — even if those thoughts are incontrovertible facts — are mentally defective, possibly subhuman.
Just look how tiny that “racist” brain is! Why, it’s even smaller than an African’s.
Not even a total ignorance of anthropology is enough to save you from succumbing to “racism” (a fate, I’m sure we can all agree, nine times worse than death at least) if you happen to be born to that one and only inferior race. You know which one I mean.
“Of course all white people are racist,” writes Joseph Harker (The Guardian). “I don’t mean they are all wilful bigots, of course.” Well, of course you didn’t mean that — but you did just announce to your readership that all white people are racist, and racism is generally considered to be something pretty bad, probably involving wilful bigotry. And possibly Hitler.
Ah, but Harker, you see, is using the increasingly popular subtle definition of racism as “a combination of prejudice and power” (not, say, hating people because of their race), and since, he hallucinates, non-whites have no power anywhere in the world, it follows that he himself, being a mulatto, “cannot be racist” — even if he were to, say, commit a racially motivated violent crime against a white person (Issue 18).
Thus we are left with the following equivalence:
- all white people are racist,
- no non-white person is racist, and therefore
- “racist” simply means white.
So are all “racists” still subhuman scum who deserve to die? I ask this because not everyone is equally willing and able to distinguish between the subtle definition of “racism” (“prejudice plus power”) and the not-so-subtle definition (KKK plus Hitler).
A bit dangerous, then, don’t you think, to teach this at an American university?
The University of Delaware recently made a decision to subject its students to mandatory “treatment” (‘treatment’ is a term used by the university) where they learn that “all whites are racist,” racism by the ‘people of color’ is impossible, and George Washington is merely a “famous Indian fighter, large landholder and slave owner.”
The university requires that the students adopt highly specific university-approved views on issues ranging from politics to race, sexuality, sociology, moral philosophy, and environmentalism. Students are forced to attend training sessions, floor meetings, and one-on-one meetings with their Resident Assistants (RAs). The RAs who facilitate these meetings have received their own intensive training from the university, including a “diversity facilitation training.”
Students are not allowed to express disapproval of the “treatment” or of the questions that deal with their political views, sexuality and other private matters. Expressing disapproval will result in a negative report of the progress they’ve made during “treatment,” which may result in punishment.
Students are forced to achieve “competency” as a goal of the “treatment.” ‘Competency’ requires that “Students will recognize that systemic oppression exists in our society,” “Students will recognize the benefits of dismantling systems of oppression,” and “Students will be able to utilize their knowledge of sustainability to change their daily habits and consumer mentality.”
Students are also forced to take actions that outwardly indicate their agreement with the university’s ideology, such as displaying specific, school-approved door decorations and taking action by advocating for a social group that is defined as “oppressed” by the University.
The following are quotes from the booklet that students must learn, which was compiled by the Foundation for Individual Rights in Education (FIRE).
“A RACIST: A racist is one who is both privileged and socialized on the basis of race by a white supremacist (racist) system. The term applies to all white people (i.e., people of European descent) living in the United States, regardless of class, gender, religion, culture or sexuality. By this definition, people of color cannot be racists, because as peoples within the system, they do not have the power to back up their prejudices, hostilities, or acts of discrimination. (This does not deny the existence of such prejudices, hostilities, acts of rage or discrimination.)” — Page 3
“REVERSE RACISM: A term created and used by white people to deny their white privilege. Those in denial use the term reverse racism to refer to hostile behavior by people of color toward whites, and to affirmative action policies, which allegedly give ‘preferential treatment’ to people of color over whites. In the U.S., there is no such thing as ‘reverse racism.’” — Page 3
“A NON-RACIST: A non-term. The term was created by whites to deny responsibility for systemic racism, to maintain an aura of innocence in the face of racial oppression, and to shift responsibility for that oppression from whites to people of color (called “blaming the victim”). Responsibility for perpetuating and legitimizing a racist system rests both on those who actively maintain it, and on those who refuse to challenge it. Silence is consent.” — Page 3
“Have you ever heard a well-meaning white person say, ‘I’m not a member of any race except the human race?’ What she usually means by this statement is that she doesn’t want to perpetuate racial categories by acknowledging that she is white. This is an evasion of responsibility for her participation in a system based on supremacy for white people.” — Page 8
I gather that all white people share “responsibility” and must not be allowed to “shift responsibility” or feign “innocence” for “systemic racism,” which involves “racial oppression” — but don’t worry, I’m sure there are suitably subtle ways to redefine all those terms so that technically this cannot be seen as inciting anyone to do anything to his racist oppressors, meaning all white people everywhere.
Hazardous as well to teach this at a Canadian elementary school:
While people in different contexts can experience prejudice or discrimination, racism, in a North American context, is based on an ideology of the superiority of the white race over other racial groups. Racism is evident in individual acts, such as racial slurs, jokes, etc., and institutionally, in terms of policies and practices at institutional levels of society. The result of institutional racism is that it maintains white privilege and power (such as racial profiling, hiring practices, history, and literature that centre on Western, European civilizations to the exclusion of other civilizations and communities). The social, systemic, and personal assumptions, practices, and behaviours that discriminate against persons according to their skin colour, hair texture, eye shape, and other superficial physical characteristics.
And I feel like it might be detrimental to race relations when a (non-white) social “scientist” at Duke University announces that white people are as evil as ever, and don’t ever let those evil white people tell you otherwise:
Duke University sociology professor Eduardo Bonilla-Silva says white people are as racist as they’ve ever been, it’s just modern, more insidious forms of racism have taken over the more overt forms of racism common in America’s past, The Dartmouth reports.
“We must fight white supremacy,” Bonilla-Silva said. “The only way to remove racism in America is to remove systemic racism.”
You know, if white people really are the source of this “systemic racism” thing you’ve convinced everyone is tremendously evil, and white people aren’t getting any less “racist” over time, then it seems like your plan to “fight white supremacy” and “remove systemic racism” actually reflects a desire to eliminate… well… hmm.
Finally — I’m not sure what to make of Tumblr dot TXT. These… people… appear to be sincere. Is it worth trying to understand them as a product of the modern world, or should we just laugh and laugh and sharpen our bayonets and laugh in renewed certainty that the modern world must be destroyed utterly? (Sic to everything.)
- “All white people are racist. Every. Single. One”
- “all white people are inherently racist. All of us”
- “a white woman… is, therefore, thus, ergo, a racist”
- “fuck all of europe you’re all nazis and white supremacists”
- “whiteness is antiblack trans-misogyny all on its own”
- “I feel like 99.9999% of the world’s problems would be solved if we took all the cishet white men in the world and pushed them somewhere else”
- “WHITE PEOPLE ARE NEVER THE FUCKING MINORITY AND THEY NEVER ARE THE LESS PRIVILEGED ONES NEVER IN ANY KIND OF SITUATION NOWHERE”
- “WARNING: THIS IS NOT A WHITE-FRIENDLY SPACE”
- “Forever laughing at the idea that white people have ‘culture’” (Issue 13)
- “Can we just ban white people from celebrating Halloween already?”
- “What ever happened to twerking? It used to be a revered art form now its been reduced to slutshaming. I feel like the answer is white people”
- “I’m sorry but feminism does not apply to white women and it SHOULD not”
- “Rules of logic,evidence,and basic fairness… pretty sure theyve ALWAYS been used to trick POC [coloured people]”
- “‘WHO THE FUCK DESIGNED THIS SYSTEM’ White ablebodied neurotypical upper class cis men. Just like everything bad”
- “I doubt any white person invented anything, except maybe perfecting the art of stealing and claiming shit”
- “i just assume poc made everything… white folks aint shit”
- “white men brought rape to the new world and still do it worldwide”
- “Rape is a product of privilege consuming our society. Who benefits the most from privilege?? White cis heterosexual males. Therefore they are the only true people who can carry out a legitimate rape”
All I know for sure is that I’ve never been prouder to be a racist white racist.
Even a bona fide (non-white) anti-racist social “scientist,” Rodolfo Mendoza-Denton, wants to address “the rampant and unchecked stereotype that whites are racist.”
What do I mean by “rampant and unchecked”? Although there are stereotypes about many groups, not all stereotypes are tabooed to the same degree. For example, there are very strong social norms against expressing negative stereotypes against African Americans and Latino/as. […]
Yet I’d venture that the perception that all whites are bigots is one of the stereotypes that elicits the least outcry from our society. In so many of our conversations about diversity and overcoming prejudice, whites are too often seen as the source of the problem — the ones who don’t understand, the ones whose attitudes need to be changed, the blue-eyed devils with hate and ignorance in their hearts. […]
The stereotype of whites as racist has real and negative consequences for its targets. This has been shown in a recent study by psychologist Phil Goff and colleagues.
I should point out that these “real and negative consequences” have also been shown, in rather more spectacular fashion, in a recent put down the fucking protractor and take a goddamn look at the actual world we actually live in by the Carlyle Club (Issue 18).
There ARE instances where people get madder at a white man because he is white. My suspicion is that when it comes specifically to saying something racist, people are likely to get angrier at and punish whites more than they do minorities. Sure, Juan Williams was fired from NPR — but he was promptly hired by Fox. Ask yourself — if a White journalist had said something similar, would quite so many people have been as willing to defend the journalist and blame the network?
Speaking of punishing whites, I guess it’s not enough to train the whole world to hate white people; you also have to make it illegal for white people to defend themselves from the obvious consequences of that same socially conditioned hatred:
Retired Brooklyn Supreme Court Judge Frank Barbaro wants a white man he convicted in 1999 of killing a black man to be freed — claiming Wednesday he based the verdict on his own reverse [sic] racism. The 86-year-old former jurist convicted Donald Kagan, now 39, of fatally shooting Wavell Wint, 22, during a struggle over Kagan’s chain outside an East New York movie theater in 1998.
But Barbaro told a court that, because of his viewpoint as a civil-rights activist, he didn’t consider a justification defense by Kagan in the nonjury trial.
“Mr. Kagan had no intent to kill that man… I believe now that I was seeing this young white fellow as a bigot, as someone who assassinated an African-American,” Barbaro, a former longshoreman who also served 23 years in the state Assembly, told Brooklyn Supreme Court Justice ShawnDya Simpson.
Barbaro said he contacted Kagan’s attorneys after some deep soul-searching led him to realize he had denied Kagan a fair trial.
“I was prejudiced during the trial. I realized I made a terrible mistake and there was a man in jail because of my mistake.”
Barbaro said his work during the civil-rights movement fed into his bias in the trial.
“The question of discrimination against African-American people became part of my fiber — my very fiber,” he told Simpson.
Moral progress — through socially conditioned racial hatred!
Now, I know what you’re thinking, reader: you are thinking that weasels are extremely cute and wriggly and they have tiny paws and soft little bellies. You are, of course, correct — but also way off topic. Concentrate, please, or we will never get through this.
Be not distracted by Blanket Weasel, tiny though his paws may be (image)
Or perhaps you were wondering where I’m going with this. Well, here’s a hint: in 2012, experimental psychologists, psychiatric neuroscientists, and even a pair of “practical ethicists” put their heads together and came up with an honest-to-God cure for racism.
Implicit negative attitudes towards other races are important in certain kinds of prejudicial social behaviour. Emotional mechanisms are thought to be involved in mediating implicit “outgroup” bias but there is little evidence concerning the underlying neurobiology. The aim of the present study was to examine the role of noradrenergic mechanisms in the generation of implicit racial attitudes.
Relative to placebo, propranolol significantly lowered heart rate and abolished implicit racial bias, without affecting the measure of explicit racial prejudice. Propranolol did not affect subjective mood.
Our results indicate that β-adrenoceptors play a role in the expression of implicit racial attitudes suggesting that noradrenaline-related emotional mechanisms may mediate negative racial bias. Our findings may also have practical importance given that propranolol is a widely used drug. However, further studies will be needed to examine whether a similar effect can be demonstrated in the course of clinical treatment.
“Practical importance” from Oxford’s “practical ethicists.”
If you’re thinking that maybe, just maybe, this study doesn’t actually show what it claims to show — you are, of course, correct, and light-years ahead of the official press. Cue the chorus of leftoid journo-drones:
“Propranolol may open hearts and minds,” according to Yahoo News, with its “unintended benefit of muting racist thoughts in those who take it.”
“A heart drug was found to have the unusual side-effect of reducing racist feeling in participants” is how the Daily Mail sees it.
The pill has been “found to reduce racist attitudes,” according to Gizmag. “Does this mean we could one day see a pill to counter racist tendencies?”
“Bigots who want to reform may soon be able to take an anti-prejudice pill,” says the New York Post. I like how they seem willing to give us a choice in the matter.
Jim Goad (TakiMag) digs through the comment sections:
This pill should be mandatory for all right-wing Republicans, tea bags, KKK, people who hate the immigrants and want them deported, etc. What a much better world this would be.
Let’s put Propanolol in the water supply and stamp out the ugliness of Republicanism forever.
There’s already a pill to cure racists, it contains cyanide :)
How nice. I’m sure they all read the original study very carefully — or at least the abstract, including the one section I didn’t quote before:
Healthy volunteers (n = 36) of white ethnic origin, received a single oral dose of the β-adrenoceptor antagonist, propranolol (40 mg), in a randomised, double-blind, parallel group, placebo-controlled, design. Participants completed an explicit measure of prejudice and the racial implicit association test (IAT), 1–2 h after propranolol administration.
So, 36 white guys took the IAT: the same test of “bias” that had already shown that more “biased” physicians discriminate less (2008) and that black people prefer to interact with “highly racially biased” white people (2005). Possibly that’s because the IAT is measuring “simple cognitive inertia — the difficulty in switching from one categorization rule to another — rather than unconscious preferences” (2010).
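The “cognitive inertia” critique is easier to appreciate once you see what an IAT score actually is: nothing more than a standardized difference in reaction times between two sorting conditions. Here is a minimal sketch of a simplified D-score computation (the conventional scoring is more elaborate; the function name and the sample latencies below are hypothetical, for illustration only):

```python
from statistics import mean, stdev

def iat_d_score(compatible_ms, incompatible_ms):
    """Simplified IAT D-score: the difference in mean response
    latency between the two sorting conditions, divided by the
    pooled standard deviation of all latencies. A positive score
    means the 'incompatible' pairing was sorted more slowly."""
    pooled_sd = stdev(compatible_ms + incompatible_ms)
    return (mean(incompatible_ms) - mean(compatible_ms)) / pooled_sd

# Hypothetical latencies (milliseconds) for one participant:
compatible = [620, 580, 640, 600, 610]    # e.g. white+good / black+bad block
incompatible = [700, 680, 720, 690, 710]  # e.g. white+bad / black+good block

print(round(iat_d_score(compatible, incompatible), 2))  # prints 1.77
```

Note that nothing in this number distinguishes “unconscious prejudice” from ordinary task-switching costs: anything that slows responses in one block relative to the other, including a drug that dampens arousal, will move the score.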
In short, the study shows or claims to show (because in light of what we’ve seen, we have absolutely no reason to trust the methodology, the data or the analysis, let alone the authors’ interpretation) that an anti-anxiety drug can affect reaction time — and nothing else (“without affecting the measure of explicit racial prejudice”).
“Propranolol may open hearts and minds.” Fuck you.
But let’s not let a little critical thinking stand in the way of Progress: ‘Blood pressure drug “reduces in-built racism”’ (The Telegraph).
Sylvia Terbeck, lead author of the study, published in the journal Psychopharmacology, said: “Our results offer new evidence about the processes in the brain that shape implicit racial bias.
“Implicit racial bias can occur even in people with a sincere belief in equality.
“Given the key role that such implicit attitudes appear to play in discrimination against other ethnic groups, and the widespread use of propranolol for medical purposes, our findings are also of considerable ethical interest.”
Professor Julian Savulescu, of the university’s Faculty of Philosophy, and a co-author of the study, said: “Such research raises the tantalising possibility that our unconscious racial attitudes could be modulated using drugs, a possibility that requires careful ethical analysis.
“Biological research aiming to make people morally better has a dark history. And propranolol is not a pill to cure racism. But given that many people are already using drugs like propranolol which have ‘moral’ side effects, we at least need to better understand what these effects are.”
Tantalizing indeed. Not to worry, though: this possibility “requires careful ethical analysis.” Careful ethical analysis by whom, you ask? Why, professional ethicists, of course! Certified experts on right and wrong.
The Hastings Center is a major American bioethics research institute. In a 2013 paper for the influential Hastings Center Report, Senior Research Scholar Erik Parens relates:
I recently visited terrific colleagues in the Netherlands, primarily to explore the nexus of the bioethical debates about disability and enhancement. But since I’ve been contemplating the question concerning assisted suicide in the context of Alzheimer’s disease, the visit was also a chance to learn a bit about how the Dutch are broaching that fearsome and bewildering question.
As in the United States, the Dutch conversation about assisted suicide emerged primarily in the context of cancer. At least in that context, before acceding to a request for assistance in dying, caregivers must be sure that the person has made a voluntary and carefully considered request, and that her suffering is unbearable and without prospect of improvement. The Dutch have recently been trying to use those criteria in the context of Alzheimer’s disease. Given the wave of Alzheimer’s cases poised to crash onto wealthy countries, along with emerging technology to detect the disease process before symptoms appear, we should be grateful to the Dutch for that attempt.
As a newcomer to this discussion, however, I was struck that those criteria may have a somewhat odd result. A patient with Alzheimer’s disease can easily meet the conditions in the early stage of the disease, when one usually has the mental capacity to request assistance in dying and to make the case that one’s existential suffering is unbearable. If one is in the late stage, though, it can be much harder to get such assistance because one does not have that capacity.
Okay, that seems to make sense as an ethical question (bioethical, whatever): sometimes people are suffering but can’t communicate it; how are we to help them?
If one holds what we might call “the disease view” of the person, however, the current situation in the Netherlands can seem odd. On this conception (which in a nuanced form is associated with Ronald Dworkin), there is one person, who, when she is healthy, conceives of her whole life and makes a decision we should respect: that she wants help in dying when she is so far into the disease that she can no longer engage in meaningful communication with those she loves. On this “disease view,” the current Dutch practice is odd insofar as it fails to respect the healthy person’s capacitated wish, and in so doing, subjects that person to what she said was, by her lights, unbearable suffering.
I notice that this argument doesn’t actually use either of the premises of “unbearable suffering” or an inability to “engage in meaningful communication.” It’s actually enough, for this argument to go through, that the patient (a) get a mental “disease,” however defined, and then (b) change her mind about wanting to die, now that she has it:
The predisease person may have thought she would have wanted assistance in dying if she were to develop the disease, but that person may be replaced by a person who actually has Alzheimer’s but is perfectly happy to live in a new, different way.
Okay, this no longer makes sense to me as an ethical question.
My guess is that it won’t work terribly well to use the cancer criteria in the context of Alzheimer’s disease. My further guess is that, to make headway, we will have to draw on both the “difference” and the “disease” views. How to do that is hardly clear, but that we need to try is.
“The outcome is predetermined,” remarks Wesley J. Smith (National Review), consultant for the International Task Force on Euthanasia and Assisted Suicide and for the Center for Bioethics and Culture, “and the goal is to find the best way to justify doing what we already want to do.” Practical ethics!
Don’t worry, though, I’m sure mental health professionals have a solid handle on what is and isn’t a mental “disease”… right?
Racism is a mental illness, according to Harvard psychiatry professor Alvin Poussaint (Western Journal of Medicine, 2002), who happens to be black.
The psychiatric profession’s primary index for diagnosing psychiatric symptoms, the Diagnostic and Statistical Manual of Mental Disorders (DSM), does not include racism, prejudice, or bigotry in its text or index. Therefore, there is currently no support for including extreme racism under any diagnostic category. This leads psychiatrists to think that it cannot and should not be treated in their patients.
To continue perceiving extreme racism as normative and not pathologic is to lend it legitimacy. Clearly, anyone who scapegoats a whole group of people and seeks to eliminate them to resolve his or her internal conflicts meets criteria for a delusional disorder, a major psychiatric illness.
Right, because that’s totally what the word “racism” means — for the purposes of classifying “racists” as mentally ill. I mean, once we’ve finished officially classifying “racists” as crazy, we can probably relax the definition ever so slightly…
It is time for the American Psychiatric Association to designate extreme racism as a mental health problem by recognizing it as a delusional psychotic symptom. Persons afflicted with such psychopathology represent an immediate danger to themselves and others. Clinicians need guidelines for recognizing delusional racism in all its forms so that they can provide appropriate treatment. Otherwise, extreme delusional racists will continue to fall through the cracks of the mental health system, and we can expect more of them to explode and act out their deadly delusions.
And the epidemic of Nazi attacks might finally cease. Hoo-ray.
In 2012, The Oxford Handbook of Personality Disorders added a chapter by Carl C. Bell (who happens to be black) and Edward Dunbar on “Racism and Pathological Bias as a Co-Occurring Problem in Diagnosis and Assessment.”
The purpose of this chapter is to explore the social and psychological etiology of racism, to explicate racism as a public health pathogen (i.e., how racism harms our society), and to examine the controversy of whether racism can be considered, in some instances, a personality disorder or other form of psychopathology.
The answer to that “controversy,” by the way, is: yes; yes it can (Bell, 2004).
Where does it end? How the hell should I know? I do know that in 2009, psychiatrist Allen J. Frances warned of diagnostic recklessness in the new edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM) by the American Psychiatric Association (APA).
I believe that the work on DSM-V has displayed the most unhappy combination of soaring ambition and weak methodology. […]
The DSM-V goal to effect a “paradigm shift” in psychiatric diagnosis is absurdly premature. Simply stated, descriptive psychiatric diagnosis does not now need and cannot support a paradigm shift. […]
There is also the serious, subtle, and ubiquitous problem of unintended consequences. As a rule of thumb, it is wise to assume that unintended consequences will come often and in very varied and surprising flavors. For instance, a seemingly small change can sometimes result in a different definition of caseness that may have a dramatic and totally unexpected impact on the reported rates of a disorder. Thus are false “epidemics” created. For example, although many other factors were certainly involved, the sudden increase in the diagnosis of autistic, attention-deficit/hyperactivity, and bipolar disorders may in part reflect changes made in the DSM-IV definitions. Note this.
Undoubtedly, the most reckless suggestion for DSM-V is that it include many new categories to capture the subthreshhold (eg, minor depression, mild cognitive disorder) or premorbid (eg, prepsychotic) versions of the existing official disorders. The beneficial intended purpose is to improve early case finding and promote preventive treatments. Unfortunately, however, the DSM-V Task Force has failed to adequately consider the potentially disastrous unintended consequence that DSM-V may flood the world with tens of millions of newly labeled false-positive “patients.” The reported rates of DSM-V mental disorders would skyrocket, especially because there are many more people at the boundary than those who present with the more severe and clearly “clinical” disorders. The result would be a wholesale imperial medicalization of normality that will trivialize mental disorder and lead to a deluge of unneeded medication treatments — a bonanza for the pharmaceutical industry but at a huge cost to the new false-positive patients caught in the excessively wide DSM-V net. They will pay a high price in adverse effects, dollars, and stigma, not to mention the unpredictable impact on insurability, disability, and forensics.
In my experience, experts on any given diagnosis always worry a great deal about missed cases but rarely consider the risks of creating a large pool of false positives — especially in primary care settings. The experts’ motives are pure, but their awareness of risks is often naive. Psychiatry should not be in the business of inadvertently manufacturing mental disorders. […]
Another DSM-V innovation would create a whole new series of so-called behavioral addictions to shopping, sex, food, videogames, the Internet, and so on. Each of these proposals has the potential for dangerous unintended consequences by inappropriately medicalizing behavioral problems; reducing individual responsibility; and complicating disability, insurance, and forensic evaluations. None of these suggestions is remotely ready for prime time as an officially recognized mental disorder.
Getting as much outside opinion as possible is crucial to smoking out and avoiding unforeseen problems. We believed that the more eyes and minds that were engaged at all stages of DSM-IV, the fewer the errors we would make. In contrast, DSM-V has had an inexplicably closed and secretive process. Communication to and from the field has been highly restricted. Indeed, even the very slight recent increase in openness about DSM-V was forced on to an unwilling leadership only after a series of embarrassing articles appeared in the public press. It is completely ludicrous that the DSM-V work group members had to sign confidentiality agreements that prevent the kind of free discussion that brings to light otherwise hidden problems. DSM-V has also chosen to have relatively few and highly select advisors. It appears that it will have no Options Book to allow wide scrutiny and contributions from the field.
The secretiveness of the DSM-V process is extremely puzzling. In my entire experience working on DSM-III, DSM-III-R, and DSM-IV, nothing ever came up that even remotely had to be hidden from anyone. There is everything to gain and absolutely nothing to lose from having a totally open process. Obviously, it is much better to discover problems before publication — and this can only be done with rigorous scrutiny and the welcoming of all possible criticisms.
This is the first time I have felt the need to make any comments on DSM-V. Even when the early steps in the DSM-V process seemed excessively ambitious, secretive, and disorganized, I hoped that I could avoid involvement and believed that my successors deserved a clear field. My unduly optimistic assumption was that the initial problems of secrecy and lack of explicitness would self-correct and that excessive ambitions would be moderated by experience. I have decided to write this commentary now only because time is running out and I fear that DSM-V is continuing to veer badly off course and with no prospect of spontaneous internal correction. It is my responsibility to make my worries known before it is too late to act on them.
What is needed now is a profound midterm correction toward greater openness, conservatism, and methodological rigor. I would thus suggest that the trustees of the American Psychiatric Association establish an external review committee to study the progress of the current work on DSM-V and make recommendations for its future direction.
Finally, Dr. Frances opened his commentary with the statement, “We should begin with full disclosure.” It is unfortunate that Dr. Frances failed to take this statement to heart when he did not disclose his continued financial interests in several publications based on DSM-IV. Only with this information could the reader make a full assessment of his critiques of a new and different DSM-V. Both Dr. Frances and Dr. Spitzer have more than a personal “pride of authorship” interest in preserving the DSM-IV and its related case book and study products. Both continue to receive royalties on DSM-IV associated products. The fact that Dr. Frances was informed at the APA Annual Meeting last month that subsequent editions of his DSM-IV associated products would cease when the new edition is finalized, should be considered when evaluating his critique and its timing.
Dr. Frances lost, of course.
DSM 5 is a done deal — the final proofs were just sent off for printing with a mid-May publication date.
I began writing this blog almost three years ago hoping to warn the people working on DSM 5 off their worst decisions.
For the most part I failed. About one third of my targets were dropped, but DSM 5 remains a reckless and poorly written document that will worsen diagnostic inflation, increase inappropriate treatment, create stigma, and cause confusion among clinicians and the public.
On the plus side, maybe some day those racist thugs who believe in anthropology will be able to get the help they so desperately need — whether they want it or not.
While it might be easy to dismiss this chapter of Soviet history as an interesting consequence of totalitarian and authoritarian politics, it also serves as a disturbing reminder of the normative nature of psychiatry and the assessment of psychiatric disorders. Mental health is a culturally sanctioned thing. Our definitions of mental health change over time depending on the values and morals of the society in question.
Today, our concerns are with those people who pose a threat to themselves and impose a burden on society, and in turn we’ve come to pathologize such things as gambling, depression, anxiety, and overeating. Looking to the future, it’s not ridiculous to think we might do the same for shyness, extreme religious beliefs, or racial bigotry. But given the diversity of human culture and individual experience, could we ever in all fairness agree upon and impose a singular vision of what’s mentally normal?
Thank you for reading.
“My nose is cold” says the deadly Snow Weasel (image)