It isn't easy putting a yardstick to human behavior. How can something so complex and inscrutable be measured, quantified, and modeled? "Sociologists face extremely tough intellectual and practical tasks," writes Hubert M. Blalock, Jr., who was a member of the UW sociology department from 1971 until his death in 1991. Reality is sufficiently complex that sociologists would need "upwards of fifty variables" to disentangle the myriad factors at work, he notes. Besides, variables are intercorrelated. Group norms or role expectations are not precise quantities--they have fuzzy boundaries, making measurements difficult. And causal relationships are hard to prove.
"The impact of Hubert M. Blalock on contemporary sociology is deep and pervasive," says UW sociology chairman Charles Hirschman. Blalock's work spurred the widespread adoption of quantitative techniques of data analysis in sociology. His work on modeling causal relationships influenced the way problems are formulated, the way social measurement is conceived, and the way social data are analyzed.
Blalock's 1964 book Causal Inferences in Nonexperimental Research was a "classic," says Hirschman, that helped move contemporary social science, especially sociology and political science, away from the mere identification of observed associations and toward the actual testing of causal hypotheses.
Building on work by other researchers in biology and economics, Blalock published work in the 1960s and 1970s that laid the foundation for social science studies that cannot rely entirely on an experimental design (a plan in which variables are carefully controlled and interventions are randomly assigned so that their effects can be measured).
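The core idea behind this line of work can be illustrated with a small simulation. Under a hypothesized causal chain X → Y → Z, X and Z will be correlated in observational data, but their partial correlation controlling for Y should vanish; checking such implied vanishing partial correlations is one way to test a causal model without an experiment. The sketch below is a hypothetical illustration of that logic, not code from Blalock's own work; the variable names and effect sizes are invented for the example.

```python
import numpy as np

# Simulate a causal chain X -> Y -> Z (all coefficients are invented
# for illustration).
rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)   # Y depends on X
z = 0.8 * y + rng.normal(size=n)   # Z depends only on Y

def partial_corr(a, b, c):
    """Correlation of a and b, controlling for c."""
    r_ab = np.corrcoef(a, b)[0, 1]
    r_ac = np.corrcoef(a, c)[0, 1]
    r_bc = np.corrcoef(b, c)[0, 1]
    return (r_ab - r_ac * r_bc) / np.sqrt((1 - r_ac**2) * (1 - r_bc**2))

print(np.corrcoef(x, z)[0, 1])   # substantial raw correlation
print(partial_corr(x, z, y))     # near zero if the chain model holds
```

If the partial correlation were far from zero, the chain model X → Y → Z would be rejected in favor of some alternative structure, which is the sense in which observational data can test, rather than merely describe, a causal hypothesis.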
On another front, Raftery developed a new statistical test that has become the standard technique for the evaluation of models in much of quantitative sociology. "The aim of much social research is to describe the main features of social reality; such a description is often called a model, and is necessarily to some extent approximate," writes Raftery. When a model is a good one, any discrepancy with respect to the real data should be small.
But how to choose among possible models? Before Raftery's contribution, sociologists relied mainly on the classical means of testing the null hypothesis put forth by pioneering statistician R. A. Fisher. Fisher's technique was appropriate for the small samples used in such experiments as agricultural field tests, but it was inadequate for the very large data sets encountered in sociological studies. The large data files from sample surveys and censuses can make standard tests misleading. Raftery developed a test for evaluating how well a model fits the data in comparison to other candidate models, called the Bayesian Information Criterion (BIC), which is better suited to such large data sets.
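The idea can be sketched numerically. For a regression model with Gaussian errors, a standard form of the BIC is n·ln(RSS/n) + k·ln(n), where RSS is the residual sum of squares and k the number of parameters; the model with the lower BIC is preferred, and the ln(n) penalty on extra parameters grows with sample size, unlike a fixed p-value cutoff. The example below, with invented data, shows BIC correctly preferring a simpler true model over one with an irrelevant extra variable; this is a sketch of the standard criterion, not Raftery's original derivation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)            # irrelevant variable
y = 2.0 * x1 + rng.normal(size=n)  # true model uses x1 only

def bic(X, y):
    """BIC of an ordinary least-squares fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1]
    return n * np.log(rss / n) + k * np.log(n)

ones = np.ones(n)
bic_small = bic(np.column_stack([ones, x1]), y)      # y ~ x1
bic_large = bic(np.column_stack([ones, x1, x2]), y)  # y ~ x1 + x2

# With a large sample, the ln(n) penalty outweighs the tiny fit
# improvement from x2, so the simpler (true) model has the lower BIC.
print(bic_small < bic_large)
```

The same calculation with a classical significance test would, in a large enough sample, often flag the irrelevant variable as "significant," which is exactly the failure mode the BIC was designed to avoid.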