In recent years, cultural studies and cultural history have become very popular approaches in the humanities. This is all to the good, but the notion of "culture" is used in several different ways, and they aren't compatible with one another. In this post, I want to look at two of these notions of culture and the problems they cause; I'll look at some others later.
The first meaning uses something like Matthew Arnold's definition: "the best that has been known and thought" [1]. There are two important characteristics of this usage. First, it is explicitly evaluative and normative. Second, it is, tacitly or otherwise, rooted in the local perspective of the observer. It is thus a definition used by critics, not scholars. The second meaning is one which grew out of anthropology in the early twentieth century: culture is the system of conventional practices or institutions which make up a society's way of living.
The new emphasis on cultural studies often conflates the two definitions in an important way; for example:
“Two commitments seem to us to be constitutive of cultural history. First, there is the concern with culture in the sense of that which gives meaning to people’s lives, a concern that covers both Matthew Arnold’s elitist ‘best that has been known and thought in the world’ [1] and William Morris’s culture ‘of the people, by the people and for the people’ [2]. Secondly, cultural history has to deal with culture in the sense of social habits, the totality of the skills, practices, strategies, and conventions by which people constitute and maintain their social existences.” [3]
These two definitions are fundamentally incompatible. Understanding culture as a system of conventions or practices means suspending evaluative judgments of the practices we're studying. Hence, mixing the attempt to understand with an evaluative stance must lead to some kind of failure, whether technical, evaluative, or both.
This point immediately raises a series of questions. How do we know when we've been successful in suspending evaluative judgment? Isn't it true that some sort of evaluation is intrinsic to any stance we take? How can we justify refusing to take an evaluative stand on some kinds of especially repulsive behavior (e.g., child abuse or mass murder)? I'll deal with these points only very briefly, because my concern is with the point that conflating the approaches implied by the two definitions leads to a failure.
How can we know we've suspended evaluative judgment successfully? Isn't there always some bias? Certainly. It is ultimately impossible to guarantee the avoidance of all bias, evaluative or otherwise. But biases can be identified, made explicit, restricted, and neutralized. All attempts to understand and evaluate are, in some sense, local; they are limited in space, time, assumption, and reference. The way to identify biases is through systematic comparative analysis: juxtaposing what is the case here and now, in these circumstances, with the situation at some other time, place, and circumstances. Sometimes we compare what is with what might be, or ought to be. But in any case, it is comparison that reveals differences, and it is the analysis of differences that reveals bias. The more comprehensive and detailed the differences we study, the subtler the biases we can identify. This proceeds without any intrinsic limit: we are always bound by local circumstance, but we can always find the limits of those bonds by looking at neighboring circumstances. All this is true whether we're looking at descriptive or evaluative differences.
Isn't there always some sort of evaluation built into any approach we take? Yes, but we get to choose it. And the relevant evaluative judgment here is that understanding is valuable and important. The social science conception of culture implies a certain stance, not only toward observed activities, but toward the observation process itself and its host culture. Social scientists are obliged to adopt an attitude of patient understanding toward what we observe, putting aside our own aesthetic and moral commitments in order to better grasp those we are taking as subject-matter. That is, achieving understanding itself has very high moral value. Understanding activities which are not our own is extremely important to us. It often overrides commitments to standards which are ordinary and usual in our own cultures. Sometimes, we find ourselves looking at things which are ugly or obnoxious, and we don't seek to change or remove them. This is not indifference, but studied neutrality. It is justified by the value of the knowledge we produce; it's worth it to forego judgment in one or more specific circumstances, if we gain insights which aid in solving a whole class of problems. This creates a serious obligation to gain such insights, to do the research well, lest the sacrifices made in the name of obtaining them be wasted.
But what happens when a social scientist becomes aware of an evil situation? Isn't there an obligation to do something about it? Perhaps. Incompatible moral obligations arise in every line of work; social scientists are not spared this. We sometimes find ourselves sacrificing an increase in understanding. Certainly, we recognize obligations of this sort in the conduct of research, such as protecting the privacy of respondents. Ethical constraints on our own research practices aside, these kinds of conflict are quite rare in practice. It's certainly true that there are some very difficult problems at the hypothetical extremes, but most research most of the time doesn't run into these problems. Few sociologists run into respondents who are planning violent crimes, for example.
Suspension of aesthetic and moral judgment means suspending applause for the worthy, admirable, and valuable no less than it means suspending opprobrium for the repugnant and the wicked. For example, it doesn't really matter for sociological purposes if a scientist's new theory is particularly elegant or effective at explaining a phenomenon which has long been puzzling. It is important to understand why other scientists admire the theory and make use of it, but it would be a betrayal of sociological commitments to simply accept their evaluation of the theory.
This is an attitude which practitioners often find hard to accept, and it's not difficult to see why. I clearly remember a young physician who latched on to this point: "You guys don't even care about the diagnosis!" he said to me, voice quivering with outrage. A patient in agony, ten years of his brutally difficult training on the line, and I didn't care. I was almost ashamed of myself.
But this reaction misses the point. Of course sociologists don't care about the diagnosis; caring about the diagnosis is what physicians and patients do. Why should a sociologist set up as an unqualified competitor on the physicians' own turf? We don't need more untrained diagnoses. Rather, we need a better understanding of what goes into the diagnostic process, which physicians can use to improve their own practice. And physicians in the clinic are ill-equipped to make systematic analytically-oriented comparisons of one another's diagnostic practices precisely because they care about the diagnosis. Because they care, they rush to save the patient rather than analyze patterns of medical work.
Similarly, we don't need more ill-trained scientists; we need a better understanding of the ways in which research is organized and conducted. And in order to gain this understanding, observers have to distance themselves from the success and failure of particular projects, scientists, or lines of work. So we often hear a complaint analogous to my physician friend's: "You guys don't even care whether the science is any good!"
So it's necessary to obtain some distance on the claims our respondents make if we are to do social science adequately. This means refusing to accept at face value all the claims which a group of people make about themselves. But it also means understanding those claims, and why they are made, patiently and sympathetically.
And here the two definitions of "culture" make serious trouble for the social scientist. The evaluative conception of culture, together with the implicit critic's stance which goes with it, is incompatible with the analytical stance. The critic's stance requires judging better and worse just when neutrality is needed in order to protect understanding. Whether the evaluative standpoint is that of the people being observed, or of some other audience altogether, is irrelevant. This is very easy to see when we're talking about studying scientists at their research. It would be foolish to accept scientists' own claims about the meaning and importance of their work at face value, because these claims are biased, as well they should be. Scientists understand this very well; they’ve built a very elaborate system of institutional checks (such as peer review) to deal with it. Of course, it would be equally foolish to accept others' evaluative claims at face value, because who's the expert?
Science is not unique in this way. Every institution, every organization, every culture, every society faces the same issues: the insider specialists know more than anyone else, and are best placed to make judgments. But they are also inevitably biased by their own positions. One practical way to get around the implicit conflict is to get an honest outsider who is utterly indifferent to the concerns of those involved. That's why we like to say, when we're trying to decide right and wrong rather than true and false, that justice should be blind. Another way is to get knowledgeable advocates on all sides of an issue, and let them argue. "Our truth," says biologist Richard Levins, "is the intersection of independent lies." [4]
This does not mean that evaluation is impossible, or that it must be suspended forever. It does mean that one cannot formulate an effective understanding of something and evaluate it at the same time. And so the critical and the analytical cannot be mixed, because the result will be either malformed understanding or mistaken evaluation. We can separate them in time, or we can separate them in the division of labor. But separate them we must if we are to be both honest and competent.
[1] Arnold, Matthew. 1869. Culture and Anarchy: An Essay in Political and Social Criticism. London: Smith, Elder & Co.
[2] Morris, William. 1882. Hopes and Fears for Art. Boston: Roberts Brothers.
[3] Jardine, N. and E. C. Spary. 1996. "The natures of cultural history." Pp. 3-16 in N. Jardine, J. A. Secord, and E. C. Spary (eds.), Cultures of Natural History. Cambridge: Cambridge University Press, at p. 8. I'm singling out Jardine and Spary here because, first, I have their book close by, and second, because they are particularly clear about the point.
[4] Levins, Richard. 1966. "The strategy of model-building in population biology." American Scientist 54: 421-431, at p. 426.