The dynamics of coordinated comparisons: How simulationists in astrophysics, oceanography and meteorology create standards for results

Mikaela Sundberg
Stockholm University

Corresponding author:
Mikaela Sundberg, Department of Sociology, Stockholm University, Universitetsvägen 10B, S-106 91 Stockholm, Sweden.
Email: mikaela.sundberg@sociology.su.se
Abstract
This paper explores the social dynamics of so-called intercomparison projects. An intercomparison
project is a type of collaborative project that takes place in a number of simulation-based research
areas such as astrophysics and climate modelling. Intercomparison projects can be seen as form
of metrological practice in which the participants compare the results of numerical simulations of
the same scientific problem in order to ensure their reliability and validity. The paper is based on
case studies of astrophysics, meteorology and oceanography, and the focus is on the organization
and coordination of intercomparison projects. I argue that such projects have the effect of defining
which scientists work on a particular problem and that they also serve as organizational vehicles
for creating and presenting a dominant view of and a standard result for that problem. These
types of projects are important for understanding numerical simulation-based research, because
they show that expectations about desirable results are generated within the group.
Keywords
comparison, coordination, expectation, metrology, simulation, standard, status
Within some areas of numerical simulation-based research, a common type of project
aims to compare the results of numerical simulations of the same type of case or scientific
problem.^1 Such projects supplement the more usual practice in which a simulationist or
group of simulationists tries to formulate innovative research problems and then performs
and interprets the numerical simulations they developed. Simulationists who do so often
compare their results with those of others, but this is not the major aim of their work and
such occasional comparisons are not discussed jointly in a more organized setting.
With deliberately comparative projects, about five to 20 different numerical codes,
each of which is run by a different researcher or group, are prepared to perform the
same numerical simulations.^2 Results are then compared and discussed, and finally
presented in one or several publications. Participants use several names to refer to these
kinds of project, including 'code or model comparison', 'code or model intercomparison',
and 'benchmark studies'. Yet they all share the basic features outlined above. In this paper,
I refer to this loosely coordinated and organized cooperation as an 'intercomparison
project'. This type of coordinated effort in simulation-based research has not previously
been scrutinized from an STS perspective.
The publicly best-known intercomparison project involves the comparisons
and collaborative evaluations of climate change scenarios developed from different
climate models.^3 These simulations and comparisons are now integral parts of, and a
fundamental source for, the findings presented by Working Group I of the Intergovernmental
Panel on Climate Change (IPCC) in their report The Physical Science Basis.^4 The climate
change question has of course become a major political and scientific issue and the
organization of the IPCC, its underlying groups and related programmes have attracted a
lot of interest within STS.^5 While the importance of intercomparisons for reviewing the
state of current climate science is certainly one reason why intercomparison projects are
interesting to look at more closely, this article focuses on such projects in general, rather
than on the details of the particular intercomparisons that underlie the IPCC process. On
the basis of case studies of numerical simulation practices in astrophysics, meteorology
and oceanography, this article contributes to STS by analyzing how intercomparison
projects are organized as a form of metrological practice.
Metrology has to do with the generation and maintenance of standard measurements
(see, for example, Latour, 1987; O'Connell, 1993; Schaffer, 1992). O'Connell (1993: 154)
illuminates some of the problems involved in creating standardizations by discussing how
metrologists believe that two qualified legitimate standards have equal 'natural' authority,
but when for some reason one is accorded higher accuracy than another, then purely
'social' authority takes over in deciding which one is authoritative. In this paper, I address
how this distinction applies to the comparison of results in simulationists' intercomparison
projects.
Mallard's (1998) analysis of the comparison and use of standards in metrological
operations presents participants in the intercomparison as a research collective, but it is
not obvious who belongs to a research collective and who does not, and thus, who is
allowed to participate in a metrological trial. From an economic sociology perspective,
Aspers (2005) offers one suggestion on how to address both the question of standards
and the question of who may participate in creating them. He proposes that actors (for
example, researchers interested in particular problems) orient towards each other
because they share common identities, and that they create partial value orders based on
standards or status. In a standards-based order, actors are evaluated on the basis of how
well they fit a particular standard. When there is no standard, status is all that is left for
ranking the actors. In Aspers' conception of status order, it is only the interaction of the
actors that determines what is good or bad: they are not ranked in relation to some
objective criterion. Accordingly, status and standard orders are mutually exclusive, but at least
when applied to science this view is difficult to maintain. For example, Collins' (1985)
analysis of controversies showed how there is never a definite, objective criterion for
evaluating scientific results. Social matters such as status always play a role.
According to Weber's (1978) classic definition, status refers to an individual's or
group's prestige or honour. Podolny (2005) suggests that the status of an actor affects
the willingness of others to be associated with that actor, implying that status is of inter-
est regarding who gets to participate in metrological operations such as intercompari-
sons. High status is attractive, whereas low status is a disadvantage for creating
relationships, such as different forms of cooperation. However, to view status in this
way neglects that it is a social attribute that is not cast in stone. The status of a scientist
may change, for example, during the process of a controversy (Collins, 1999). The
present paper does not suggest that intercomparison projects necessarily recast existing
status orders; instead it proposes that the status of simulationists affects the generation
of standards. In addition, it is not only the status of researchers that is of interest in relation
to intercomparisons, but also the status of codes. Depending on whether a simulationist
has been involved in developing the code as opposed to using it as a 'black box', the
status of researchers and the status of the codes they work with may be more or less
distinguishable (Lahsen, 2005; Sundberg, 2010). However, STS scholars generally
have come to view social attributes of scientists as being bound up with their technical
practices, and so I do not attempt entirely to separate the status of researchers from the
status of codes.
To summarize, the present article treats the coordination of intercomparison projects
as a type of metrological practice, and addresses the following questions. How does par-
ticipation in intercomparison projects relate to the status of participants? How are stand-
ards generated within these projects and how does this process depend on whether there
is a reference point to relate to or only social authority to rely upon? By answering these
questions, the paper argues that intercomparison projects define particular problems and
generate a standard result for these problems. This standard not only influences the
perceptions of participants, but also those of researchers working on similar problems who
nevertheless remain outside the project. These types of project are therefore important to
study for understanding how expectations about standards shape notions of desirable
simulation results.
The case studies and analysis
As noted earlier, this paper examines intercomparisons within three areas of scientific
research: meteorology, oceanography and astrophysics. These are all physics-based dis-
ciplines in which numerical simulations play an important role for knowledge produc-
tion, in part because of the impossibility of conducting traditional laboratory experiments
to study the phenomena of interest in the field. Numerical simulations therefore serve an
important function by enabling 'numerical experiments'.^6 By analysing material from a
number of projects, the paper aims to reveal discipline-specific as well as more general
features of intercomparison projects. The analysis focuses on joint aspects of intercom-
parison projects and the features they share with other research fields. However, I also
discuss differences between the disciplines in modes of participation, reference points
for generating standards, and authorship patterns.
The analysis is primarily based on 16 interviews with simulationists who have
participated in at least one intercomparison project.^7 This includes five astrophysicists,
five meteorologists and six oceanographers.^8 In the latter two groups, nine researchers
were involved in climate modelling. All interviews were recorded and transcribed. I also
conducted participant observation and took field notes during seminars, meetings and
conferences. Some specifically concerned intercomparison projects, which made up the
focus of my analysis. These gatherings included one workshop in astrophysics, one
project meeting in astrophysics, one workshop in meteorology, one project meeting in
meteorology, several presentations during two meteorology conferences (which also
included presentations by oceanographers) and one seminar in oceanography.^9 In addition
to the discussions I overheard, I also had many informal conversations with the researchers.
My focus is on seven intercomparison projects, in particular. Two atmospheric
research projects dealt with improving the understanding and representation of the
atmospheric boundary layer and the Arctic regional climate. One oceanographic project
aimed to improve the simulation of climate variability in the Arctic and to understand the
behaviour of different ocean models when applied to the Arctic, including identifying
systematic errors in the simulation codes. The associated climate modelling intercom-
parison compared different annual mean climatological outputs from a number of cli-
mate models. Three intercomparisons in astrophysics dealt with: solar dynamo models to
simulate the solar magnetic activity and cycle; hydrodynamic simulations of planets
embedded in gas discs; and the radiative transfer of photons. I interviewed at least two of
the participating researchers in six of these projects.
Additional material includes the publications resulting from those intercomparison
projects, as well as publications from other projects, information on projects on the web
and search results from open access archives.^10 Publications are part and parcel of the
public repertoire of science and an integral part of the working process. They illuminate
how authors wish to present the project and its results to external audiences, but they are
insufficient as sources of information on how comparisons take place in practice.
Therefore they are used only to confirm what I concluded from other sources.
STS work on standards and metrology and the economic sociology literature on status
provide the theoretical basis for this analysis. Methodologically, the analysis takes its
point of departure from the actors' perspectives.^11 My presentation follows the chronology
of the intercomparison projects. First, I discuss the initial formation of the project,
and observe that the status of potential participants affected who would be involved. I
then shift to how the simulation was realized in practice, focusing on how this special
form of metrological trial was organized in relation to the problem. In the third part, I
show how standards are created by means of natural or social authority, and then go
on to address how the outcomes of intercomparisons are presented publicly, discussing
how such outcomes are related to the way others view the reputation and status of the
project and the problem area. Finally, in the conclusion, I reflect upon what these projects
indicate about the dynamics of science.
Opening the floor
In general, intercomparison projects are initiated by a few researchers or groups working
together, but they also can be proposed by a single researcher or group. Sometimes inter-
comparisons are initiated within existing research networks. Within oceanography and
meteorology in general, and among researchers dealing with climate issues in particular,
there are several ongoing projects where new participants have joined over time. At first
sight, intercomparisons appear to be examples of what Weber (1978: 43) refers to as
'open relationships', where everyone who might be interested is welcome. The view that
is expressed by participants and on project websites is 'the more participants the better'.
Not surprisingly, however, there are different degrees of welcoming.
Whereas famous or relatively well-known researchers or groups in the field tend to
be invited via personal email, other researchers find out about the projects at meetings
and conferences, through email lists or through forwarded invitations. Apparently, it is
important for the legitimacy and visibility of the project to include researchers and
research groups with the highest status (Podolny, 2005). Those who are well-known have
published highly recognized results and work at prestigious universities. It also seems
clear that inviting such 'stars' promotes attention from researchers outside of the inter-
comparison project (Merton, 1968: 447; see also Cole, 1970; Collins, 1999). To give an
example, some researchers in astrophysics organized two similar intercomparison
projects in succession, without participation from a Harvard group who worked on simi-
lar problems. One of the participants, who worked closely with the coordinator, told me
how they were considering a new comparison case in which they would adapt to the
preferences of the Harvard group. It is likely that Harvard's reputation was involved in
their willingness to do so. If it is important from the point of view of a project to attract
high-status researchers within the field, this means that such researchers to some extent
can dictate the terms of their participation, since they can afford to choose whether to
participate or not.
However, status and participation make up a two-way affair. During an interview, the
same astrophysicist that later told me about the situation with the Harvard group said:
Once a project has got a certain status and you are active in this field you want to be in
the project, so that you're one of the persons working on this. So it is a way of advertising
that you are also working on this problem.
This astrophysicist claims here that intercomparison projects show (by 'advertising')
to outsiders who belongs to a research field and who does not. High-status actors may
care less about the status of a project because they are already recognized as active in
the field, but I suggest that 'advertising that you are also working on this problem' is
important for newcomers. By 'newcomer' I refer to a participant who has limited experi-
ence with working in the field, and has not yet been recognized as a contributor.
Participation in an intercomparison project thus becomes a means to show that they are
part of the field.
The Coupled Model Intercomparison Project (CMIP) has become the basis for the
IPCC evaluations of the current state of climate change. There are currently about 15
groups in the world who have submitted their results to the CMIP. An ocean simulationist,
who is a member of one of the groups, and whose group has only submitted results to the
most recent intercomparisons, said:
I don't think people are working on climate change problems if they don't participate [in
CMIP]. ... We did climate simulations before we participated in the IPCC runs because we were
not anywhere close to having a good enough model for the first, or the previous report. But then
we did climate runs just to test our model and compare the results to the existing models.^12

This scientist suggests that CMIP has achieved such a prominent status that participation
defines who is doing global climate modelling. By the same token the quote illustrates
that his way of defining the climate-modelling field is not based on the actual work being
done, because his group performed climate simulations before participating in CMIP.
The point is that this simulationist, who now considers members of his group to be insid-
ers due to their participation in CMIP, suggests that participation is required in order to
be recognized as working with global climate modelling. This may be regarded as a mat-
ter of rhetorical positioning to protect the exclusivity of CMIP, and the current material
does not address the extent to which this view of participation is shared by researchers
outside intercomparison projects. However, the account gives still another hint that par-
ticipation in intercomparison projects is considered important in order to be recognized
in the field, at least from the perspective of newcomers.
In addition, the quote suggests that non-participating researchers or groups compare
their own results with the results of the intercomparison project. Although researchers
talk about intercomparisons as open and welcoming, the quote informs us of one of the
reasons for why some researchers stay outside intercomparison projects. According to
the ocean simulationist, his groups initial model was not good enough. We can
interpret this as a form of self-censorship that the group used to exclude itself from
participation, but the question remains what good enough means. Astrophysical pub-
lications reporting intercomparisons often state that the codes are well-tested and
well-established. Regarding which codes can participate in CMIP, another ocean sim-
ulationist from the same group stated: It should be a model system that is somehow
accepted because otherwise everyone could send in data and there would be many
strange kinds of data. This simulationist contrasts an accepted system with strange
kinds of data. Bearing in mind that his group remained outside CMIP but compared
their results with the results from the intercomparison, good enough and accepted
seem to mean having generated results that agree with those generated through the
intercomparison.
Setting up an intercomparison as a metrological trial
Importantly, the possibility of participating also depends on whether the simulation code
at hand can be applied to particular problems. Applying a simulation code to a particular
problem may require that a specific process description is included in the simulation
code, such as equations describing radiation transfer (astrophysics) or a cloud scheme
(meteorology, climate modelling). This leads to the question of how research problems
are decided upon, which is intimately linked with the question of who participates.
The intercomparison project must formulate a 'doable' research problem that aligns the
interests of the participants, the capacities of their codes and the expectations from the
outside world (Fujimura, 1987). One astrophysicist said the following about how they
picked such a problem: 'This particular problem was chosen because it represents the
basic problem that all these groups are trying to do; it is sort of the common denominator.'
The astrophysicist suggests that at least in this specific intercomparison project, all
participants address a common problem, yet this does not exclude the possibility that
some researchers are more influential than others when formulating the more exact prob-
lem, as the previous discussion on status and participation has already indicated.
The initiators generally propose the exact formulation of how the problem should be
set up in terms of such matters as parameter values, and initial and boundary conditions
for the computational domain. These instructions are often in line with how the problem
has been approached in the past. One reason for repeating such approaches is to guaran-
tee that the setup is feasible, something which otherwise can take quite some time to test.
Running a numerical simulation is not a trivial undertaking. It may take a lot of time and
effort to get the simulation code to run without crashing and to generate reasonable
results. The adjustment of parameter values, constants and resolutions that are required
for the intercomparison may endanger the functionality of the code. On the one hand,
being 'well-established' and 'well-tested' not only implies that a code has generated
acceptable results; it also often means that it has been tuned and adjusted in order to
perform well for particular applications. If some of the values are changed, the code may
generate unreasonable values or crash.^13 On the other hand, newcomers to the research field
want to learn from more experienced members which results are acceptable and how to
achieve them for particular applications. This can mean that a newcomer is the most flex-
ible participant regarding how to set up the problem. One oceanographer suggested:
Some groups have come quite far; they have many, many years of development in their model.
Therefore, they are maybe a bit locked to their approach because they have good experience.
So they don't get why they should test something else if they have had good results. And we
who are newcomers are a bit more open.
This quote suggests that groups with established codes want to ensure that their codes do
not behave in unexpected ways, something they fear might happen if they set up their
codes in ways that diverge from their usual practice. Several informants spoke of projects
that could not continue due to disagreements over the problem setup. This supports the
idea that the more experienced simulationists are locked into their approach and that
they influence the formulation of the problem setup more than the newcomers.
Because of the potential conflicts involved in formulating a problem setup, pragmatic
choices are often made. This means that, even if the initial aim is to be as strict as pos-
sible concerning how different participants set up their simulation codes for the inter-
comparison runs, compromises are often made along the way. In the end, the setup for
each simulation code often varies more than initially anticipated because the participants
are not willing or able to adapt their codes to the requirements of the project, at least not
when they deviate from previous successful setups. These divergent setups are debated
both among participants in projects and among outside scientists. One example is the
reactions that occurred to a presentation of an intercomparison project during a workshop
that gathered researchers from several fields of astrophysics: 'planet people', 'star people'
and cosmologists. After the presentation, several researchers questioned the loosely
defined setup of the problems for the project. One of them asked: 'But if you are doing a
benchmark [study], should you not have decided on this?' The coordinator of the project
answered that they wanted to compare how codes are used for 'real problems', that is,
how the codes would be set up if they were to be used for simulations outside the scope
of the intercomparison (when participants would adapt the problem formulation to their
own codes). This is a typical argument in favour of, or defence of, less controlled set-
tings. The questioner expressed the opinion that code intercomparisons should start from
the same initial conditions, parameter settings, and so on. However, practical issues arise
along the way and the differences between simulation codes limit the extent to which
common setups are feasible.
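To make concrete what such a shared problem specification involves, the following sketch is a purely hypothetical illustration (the parameter names and values are invented and are not drawn from any of the projects studied): each participating group would translate a common specification of roughly this kind into the input format of its own simulation code.

# Hypothetical shared setup for an intercomparison run; all names and values are
# invented for illustration, not taken from the projects discussed in this article.
COMMON_SETUP = {
    "domain": {"x_size": 1.0, "y_size": 1.0, "grid_points": 256},       # computational domain
    "boundary_conditions": "periodic",                                   # same on all boundaries
    "initial_condition": "uniform density with small random perturbation",
    "parameters": {"viscosity": 1.0e-3, "diffusivity": 1.0e-4},          # fixed parameter values
    "output": {"fields": ["density", "velocity"], "times": [0.0, 0.5, 1.0]},
}

Each group would then report the requested output fields at the agreed times, which is what makes the subsequent comparison of results possible, and which is also where the practical compromises described above tend to enter.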
As the workshop exchange above illustrates, discussions about setting up problems and codes take
place during workshops, which are arranged to organize and manage the project. Participants
often organize an initial workshop to meet each other and to decide upon the setup. A
subsequent workshop functions as a deadline for submitting results and provides an
opportunity to discuss them. Deadlines seem to be extremely important: several research-
ers report that intercomparison projects move slowly, as participants take a long time to
submit their results. Meetings are sometimes organized simply to maintain momentum.
In line with this, the coordinator or coordinating group has a very important role in terms
of driving the project forward, but does not necessarily make other decisions. The coor-
dinator gathers people at meetings and workshops, reminds them about deadlines and
collects submitted results, but these administrative functions do not necessarily involve
scientific leadership (also see Shrum et al., 2007).
The numerical simulations submitted to the CMIP, which subsequently provided a
basis for IPCC work, were exceptional in the way participants engaged in intercompari-
son work and respected deadlines. Researchers at one climate centre told me that the
requirements for participation in this intercomparison, such as which process descrip-
tions needed to be included in the simulation codes that submit results, set the stage for
model development at the centre for years to come.^14 They discussed how participation
in IPCC (CMIP) is 'a huge undertaking'. This differs markedly from the little time and
effort that most researchers are willing to put into an intercomparison project. Below, I
offer several reasons for the more common attitude.
Technical problems often arise during intercomparisons, due to the adjustments that
must be made. New settings may create problems for running the code, as implied above.
In addition, submitted results often reveal that the instructions for how the code should
be set up were interpreted differently. This is often a matter of simple mistakes, but
sometimes the design of the simulation code leads to divergent ways of interpreting the
instructions, for example, on the type of geometry in the computational domain. Simulation
codes are not just applied and set up to solve different problems; they also shape prob-
lems by making researchers think about and interpret instructions in line with the design
of the codes (see also Turkle, 2004 [1984]: 48). One of the consequences is that partici-
pants sometimes end up having to run the problem several times because the coordinator
informs them that they have not followed the instructions.^15 The participants get tired of
the work that the project requires, long before the intercomparison of results has begun.
This is especially the case because the scientific problems addressed in intercomparison
projects are rarely new, and tend to be versions of problems that participants have
addressed before. Because they often repeat work that is similar to what they have done
before, the scientific news value of publishing intercomparison results is low.^16
Consequently, they prefer more innovative research projects.
In addition, there often is a lack of funding for intercomparison projects, and partici-
pants must fund their work in such projects themselves. This obviously influences how
much time and effort they are willing to invest in a project. An ocean simulationist
recounted that an intercomparison project is 'only partly financed from here but we have
a strong interest since it stimulates our own model development', even if it is a project
'which is dependent on money'. Intercomparisons can therefore be seen as joint efforts to
develop simulation codes and discuss how to improve them.^17
Finally, there is a risk involved in submitting results. What if they are considered to
be poor? According to what measure is output considered good or bad? One important
aspect of an intercomparison is whether there is some established, correct answer to the
problem, a reference point with natural authority, which can then be used to assess
output. Without such an authoritative answer, codes can only be compared with other
codes, and results become standard by virtue of social authority.
Comparison to create a standard: a point of reference or social authority?
In astrophysics, analytical solutions are the ideal reference points for individual code
assessments.^18 However, they only exist for very simple, highly idealized problems.
Because these types of problems are not considered as meaningful for organizing inter-
comparisons, analytical solutions are almost never used as reference points. In oceanog-
raphy and meteorology, analytical solutions are not even used for evaluating individual
codes, and consequently not for intercomparisons either, due to the character of the equa-
tions that serve as the basis for simulation codes in those fields.
Observations may also be used as reference points, and one significant difference
between oceanography and meteorology on the one hand, and astrophysics and simula-
tions of future climate on the other hand, is the availability and use of observations or
observation-based products with which to compare results. The detection of electromag-
netic radiation, including visible light, from different regions of the electromagnetic
spectrum provides observations for astrophysics, but such observations are relatively
sparse. Time and spatial scales for processes of interest also create problems for observa-
tional techniques. Some phenomena, such as supernovae, occur too quickly to capture
with current observational methods. On galactic scales, nothing happens on human
timescales, and there are basically no observations available at all from the 'Dark Ages',
when the Universe was between 380,000 and 400 million years old, because stars had
not yet formed. Although the number of observations increases all the time, I am not
aware of any astrophysics intercomparison project in which observations have been used as reference points.
Meteorologists benefit from a huge network of meteorological measurement sites.^19
Satellite measurements are also common, but permanent measurement sites or field cam-
paigns provide measurements with higher accuracy and more detail. In spite of relatively
good access to observations, it can be very hard to find measurements simulationists
deem useful. For example, this can be due to the awkward location of the particular site
of interest or unprecedented weather events during the time period that the numerical
simulations seek to represent. One meteorologist commented on the possibility of com-
paring output from intercomparison simulations with observations: 'It's not so easy to
find atmospheric cases which are as idealized as we would like when we plan to test
models.' This is illustrated by the fact that, after 6 years of daily observations at a perma-
nent measurement site, only 9 days of observations were potentially useful for the par-
ticular intercomparison in which this meteorologist participated. Measurements from
one single day were finally selected as reference data for the intercomparison. To enable
comparison between simulations of the global atmosphere, so-called re-analysis data
serve as common reference points in meteorology. Re-analysis is produced from weather
prediction models constrained by observations, in order to get as close as possible to the
'true' atmospheric conditions and present a partly model-based picture of the state of the
global atmosphere many years back in time.^20 Although meteorologists continually dis-
cuss the quality and availability of observations as well as the quality and biases of re-
analysis products, they consider intercomparisons without observations as reference
points to be controversial (except for simulations of future climate). For example, one
meteorologist told me how a submitted intercomparison paper was criticized because
the obvious comment was 'of course, you compare models with models'.^21
Despite the preference for 'objective' reference points furnished through observa-
tions, the very idea of an intercomparison project is to compare different simulation
codes with each other (O'Connell, 1993: 158). Preliminary results of simulations are
discussed during meetings and sometimes joint databases provide participants with infor-
mation on other participants' simulations. This makes it possible to adjust to others'
results (also see Mallard, 1998: 591). One meteorologist talked about the fact that no
participant wants to be the one with 'the worst result' (even if it is not apparent what this
means) and commented on the consequences of such an outcome.
It has often happened that after this workshop, a number of groups have withdrawn their results
and submitted new results, after they have gone home and done their homework and checked
and some have found bugs in their programs and submitted new results which normally have
been better.
This quote illustrates how participants care about how their own results stand relative to
those of the other participants. Intercomparison between the codes themselves is a cru-
cial feature in understanding how such projects work, whether or not there is a reference
point. But without a reference point, how is it possible to determine that some simulation
is better or worse than some other one? The situation is reminiscent of Mallard's (1998)
case of metrological problems, when scientists compare instruments without having any
authoritative standard with which to evaluate their results. The answer is that, in practice,
what 'a good result' means becomes a question of what most of the numerical simula-
tions generate. This is what is meant by social authority in this context. For example,
during a short meeting for an intercomparison project in astrophysics, the participants
compared output parameters from the different codes.^22 A parameter for one code
deviated a lot from the others. Rather than considering this particular code as a source of
correct results, in contrast to the other codes from which its output deviated, participants
assumed that it was the erroneous one. When these results were discussed, the simula-
tionist who had run this code made no objections to repeating his simulations to obtain
new results. One of the other participants concluded the whole discussion at the work-
shop by saying to the others: 'This is an interesting example of how we will go back and
make it [the result] more similar.' Lahsen (2005: 908) indicates that simulationists trust
their own simulations, but what happened at the meeting is an example of how they try
to generate results that agree with each other within the framework of intercomparison
projects, and do not seem hesitant to re-run simulations in order to fit their results with
the majority.
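As a purely schematic illustration of this logic (the code names and values below are invented, not data from the meeting described above), comparison without a reference point amounts to treating whatever most codes produce as the expected value and flagging the largest outlier as the likely error:

# Schematic sketch of intercomparison without a reference point: the 'expected' value
# is simply the ensemble median, and the code deviating most from it is the one
# suspected of being in error. All values below are invented for illustration.
from statistics import median

results = {"code_A": 0.98, "code_B": 1.02, "code_C": 1.01, "code_D": 1.63}  # same output parameter

ensemble_median = median(results.values())
deviations = {code: abs(value - ensemble_median) for code, value in results.items()}
suspect = max(deviations, key=deviations.get)

print(f"ensemble median: {ensemble_median:.2f}")
print(f"largest deviation: {suspect} ({deviations[suspect]:.2f} from the median), flagged for re-running")

In the episode above, it was the simulationist behind the deviating code, not the ensemble, who was expected to go back and produce new results.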
Interestingly, many accounts indicate that the deviating code in the meeting described
above itself had a bad reputation among astrophysicists. Reputation (good or bad) can be
seen as an indicator of status (high or low) (Stewart, 2005). In this case, however, did the
bad reputation of the deviating code affect the way its result was interpreted at the meet-
ing, as an erroneous result rather than a singularly correct one? The discussion about
how to gather (or exclude) participants suggested that high-status researchers shape prob-
lem formulation, but it is more difficult to determine how the reputation of codes influences
discussions of results.
Publication as closure and presentation of a standard
Intercomparisons have external audiences, consisting of a core group of researchers in
the discipline as well as others in a broader context (Collins, 1999). In relation to these
audiences, publications of results are among the most visible products of the intercom-
parison project. There are basically two different ways of organizing co-authorship of
such publications.
In astrophysics, everyone who has run simulations is listed as a co-author of the pub-
lication and the lead author is often the coordinator. Generally, this also is the case for
intercomparison projects with simulation codes that represent more idealized models
(often generating less data) or that include fewer parameters of interest. 'Comparison' or
'intercomparison' is often mentioned in the title of the article. In general, each project
results in one published article.
In several climate-related intercomparisons (either with ocean, atmospheric or cou-
pled models), different participating groups analyse different aspects of the numerical
simulations. There are huge amounts of data and multiple parameters to be compared and
analysed. This enables many different kinds of analyses to be performed and thus several
articles to be published. In publications, authors acknowledge the other project partici-
pants, but do not include them in the list of authors. It is also notable that several of the
climate-related intercomparisons are performed within loose networks of cooperation
that go on for years, tackling several different problems along the way. Sometimes they
repeat previous intercomparisons. The projects often produce web pages in which they
present the problem setup and list the participating codes and/or groups and the publica-
tions from the project (for examples, see http://www.pcmdi.llnl.gov/projects/cmip/index.
php and http://efdl.cims.nyu.edu/project_aomip/purpose.html). These lists show that the
comparative design is rarely mentioned in the article titles.
Thus, there are two different ways of organizing (co-)authorship: either all participants
are listed as authors, with the coordinator listed first, or the list of authors only includes
the members of the group who performed the analysis. There also are different ways of
presenting results. In principle, if not in practice, a reference point enables objective
ranking of the performance of each code. Of course this involves interpretation, but when
there is only social authority, the evaluation of code performances is potentially more
adaptable for presenting any particular code in a favourable light. However, this depends
on the way co-authorship is organized. When authorship is limited to a fraction of the
participants or a single participating group/researcher, there generally is more freedom to
present one's own simulations in a favourable light. Support for this interpretation is
found in the fact that some disagreements among participants concern which topics (such
as a particular region or process) different participants will be responsible for analysing and
publishing.^23 Despite respondents saying that they are generally 'polite' in papers about
codes' performances, one meteorologist mentioned that colleagues of his had negative
experiences with intercomparisons (referred to as 'these kinds of experiments').
When I first came to [a research unit] which had developed [a model] and we started talking
about that we should maybe take part in one of these kinds of experiments, then they were very
negative because there they had had a bad experience before. They had released model results
for a model comparison where they thought they were maltreated and misinterpreted. So they
didn't want to participate in these types of experiments because they thought that the others had
been unfair to them.
This account might seem to contradict the earlier statement that participation in inter-
comparisons is important to show the relevance of one's own research to a problem area,
but it agrees with the suggestion that some researchers need to participate when others
can afford to stay outside. This research laboratory works with a model used for
research, but it is primarily developed to make weather predictions for a national navy.
Therefore they have no need to participate in intercomparisons to gain recognition (but
may need to do so for other reasons).^24

As for status in relation to performance, intercomparisons may confirm previously
formed opinions on which codes are best, and unexpected performances do not necessar-
ily affect the status ladder among the participants. For example, during a break at a
workshop, a couple of astrophysicists discussed the results from one of the astrophysics
codes. This was a well-known and freely available code with a bad reputation, something
I became aware of through listening to several informal conversations. In line with this
reputation, one of the astrophysicists was surprised to hear that this code 'did quite well'.
In this instance, the bad reputation of the code persisted, and seemed unaffected by its
momentary good performance. Perhaps this illustrates how gossip concerns reputation
and, in particular, drifts between reputation and behaviour (Merry, 1984). Stewart (2005)
also suggests that social evaluations may become static so that changes in performance
are not necessarily reflected in changes to status.
The intention of authors to give their own codes preferential treatment in publications
should not be overemphasized. Publications tend to give vague formulations of differ-
ences between individual simulation codes. When several different parameters are com-
pared, it is easy to avoid stating that some codes are definitely better or worse than
others, even when there are reference points. The conclusions in publications state that
some codes are good for some problems or processes while others are better for others.
For example, one oceanographer working on climate models stated: 'Of course we have
some informal knowledge; people know that some models are better than others. But
still, when you are doing these publications, people are typically not saying that this
model is bad and this is worse [laughter]. People are mainly polite.' This statement does
not say how a researcher can know that some models are better than others, nor what
'better' means. While it is clear that codes have different reputations, it is unclear to what
extent such reputations are related to or affected by performance in intercomparisons.
The account also is yet another indication of how publications from intercomparisons
emphasize collective unity rather than individual performances and how they frame such
projects as joint efforts within a particular research field rather than as competitions
between individual players. To be considered successful, a project seems to require that
the simulation results produced within its framework must agree; otherwise they do
not generate any standard (Mallard, 1998: 588ff.). One astrophysicist expressed this
view of success when he mentioned public disagreements in his research field.
There have been these very ugly disputes between crews using different codes, writing articles
in journals, writing that 'we can't reproduce your results'. And then you get replies, another
paper saying 'that's okay, we can't reproduce yours either. We're right and you're wrong', and
things like that. This is of course very damaging. Not only for the groups who are involved but
for the whole field because then an outsider thinks, 'okay, these models seem to produce
whatever. We can't trust them.' So in that sense it is very important to have some kind of
common framework where you can validate your methods.
This account actually implies that intercomparison projects help researchers put a lid on
scientific controversies among different 'crews', but it is the only account that explicitly
connects such projects with disputes. This may of course indicate that disputes are sensi-
tive topics to discuss in an interview. At the same time, it is hard to understand the inter-
est in organizing intercomparisons if it is not about dealing with (potential) disagreements.
As my analysis has shown, and as the level of funding for them indicates, intercomparison
projects have low scientific value and, with the exception of newcomers, individual par-
ticipants gain little recognition from them. The astrophysicist spoke of the importance of
such projects for establishing common frameworks that can be used to unify the field
and protect its reputation. If disputes are regarded as potentially damaging for the legiti-
macy of the field (Shackley et al., 1999: 442ff.), presenting a unified front through inter-
comparisons appears to be a major way to maintain the field's legitimacy for external
audiences (also see Schaffer, 1992).
Relating to the standard from the outside
An important aspect of intercomparison projects is the messages they send to other
researchers interested in similar problems. As mentioned earlier, researchers may com-
pare outputs with the results of such projects without having participated in them. This is
possible because articles often state where precise statements of initial conditions can be
found. In addition, databases with results are sometimes created. Some publications from
intercomparison projects explicitly encourage readers to compare their own simulations
with those they report. In this way, such projects function as benchmarks and influence
expectations of results from other projects. One astrophysicist noted that:
In some other areas of astrophysics there have been these benchmark projects, where people fix
this certain set of models and then they state very precisely all the parameters that are used ...
[T]hen people with new codes can then look up this paper and do the same models and then they
can be fairly certain that their own model is also working.
By referring to benchmarks, this astrophysicist emphasizes that an intercomparison
project is meant to establish standards for others to use, including those who did not
submit results for the project. This account also suggests that the codes produced by such
a project are considered to be 'working' when the results are similar to those of others, a
first step towards becoming established.
While the research for this article did not aim to ascertain the extent to which intercom-
parisons work as benchmarks, their potential importance for the relevant research fields is
clear, especially in light of the importance of expectations in science (Collins, 1999: 188).
In addition, my findings support the suggestion that intercomparisons function as bench-
marks for outsiders (see also van der Sluijs et al., 1998: 310). The first part of the analysis
in this article indicates this, but there are further examples. During conferences, research-
ers compared their own simulations with results from intercomparison projects in which
they had not participated, and some publications present new code developments and use
comparisons with previous intercomparison results to strengthen their credibility. The
launch of a wiki webpage for astrophysics codes (http://www.astrosim.net/code/
doku.php?id=home:home) where tests (intercomparisons) of codes in astrophysics are
listed is another example of the tendency to encourage use of such results as standards.
Concluding remarks
My analysis can be summarized with two points. First, intercomparison projects shape
expectations about which results are acceptable for a given problem. They do so by con-
stituting a form of metrological project that generates standards, and my analysis has
outlined the various steps in this process. Second, intercomparisons define boundaries by
establishing which actors are important for the given research problem: they define who's
in and who's out. Participation becomes a way for researchers to show that they are part
of a particular research field and follow its standards. These standards are also established
for (current) outsiders: researchers working on similar problems who did not directly
participate in the project. Their status is established partly on the basis of that of the (cur-
rent) insiders. The way insiders view their audience also is important for understanding
how the projects are organized, because the way their unity is generated and maintained
is partly due to the audience. These types of metrological projects are not arenas for pro-
moting controversial results. Instead, by imposing standards they enhance the credibility
of the particular research area as a whole.
Analysing several cases for this paper revealed similarities and differences, but at the
same time the material was too limited to generalize and quantify. Some differences
between the disciplines are noteworthy, however. Astrophysics projects are organized
around particular problems: they create standards for scientists to understand what 'good'
results are and lead to joint publications that list all participants as authors. Meteorology
and oceanography intercomparison projects not only create standards for what counts as
'good' results, they also aim to establish how 'good' the results from the codes are by
comparing them to some reference point, usually observations. Another difference is that
the participants divide comparisons of different aspects among themselves, leading to
several publications authored by different participants.
Climate research has a much wider and diverse external audience than astrophysics.
Shackley et al. (1999: 200) note that climate model evaluations have become increasingly
formalized through intercomparison projects, and they ask if this is related to the fact that
climate modellers perceive requirements for validation in the context of uncertain policy-
related research, rather than that the climate modellers are interested in this form of valida-
tion themselves. In this article, I have shown how such formal evaluations are not only part
of the numerical simulations for understanding climatic aspects of regional atmospheric
processes and the dynamics of the oceans; they also produce cooperative effort among astro-
physicists. High political interest combined with uncertain science may increase the interest
of scientists in presenting formal evaluations to strengthen the credibility of their results, but
such interest does not explain why intercomparison projects are used in less politically
charged settings. An alternative suggestion is that intercomparisons are akin to numerical
simulation methods in the (natural) sciences. A tentative proposition is that by creating
standards for evaluation to be used by simulationists themselves within their respective
fields, they strengthen the standing of numerical simulation as an increasingly autonomous
method, to be used alongside observational and analytical (theoretical) methods.
Acknowledgements
First, I thank the scientists I interviewed and the organizers of the different types of scientific meet-
ings I attended. I also thank Göran Ahrne, Patrik Aspers, Marcus Carson, Michael Lynch and the
anonymous reviewers for suggestions and comments on earlier versions of this article. I especially
express my gratitude to Ingrid Schild, who gave exceptionally thorough and precise comments on
several earlier versions. A draft of this article was presented at the 4S/EASST conference in
Rotterdam, 20-23 August 2008. The author also acknowledges the generous financial support
from the Swedish Science Council under contract 2006-1296 and contract 2007-1627.
Notes
1. Numerical simulations are based on algorithmic formulations of mathematical models that
use numerical time-stepping procedures to obtain the models behaviour over time. Within
the physics-based sciences, these models are constraint-based equation models derived
from physical laws. Different computer codes can be based on the same mathematical
model. The scientists working with numerical simulations are generally not very strict in
their terminology when talking about models and codes. Astrophysicists generally speak
of 'codes', sometimes of 'models' when they refer to how they set up their scientific prob-
lem to perform numerical simulations. Meteorologists and oceanographers talk about 'mod-
els', even when they obviously refer to a version of a computer program. I do not analyse
this different terminology further and use 'model' and '(simulation) code' interchangeably,
even if reference to '(simulation) code' more adequately reflects how numerical simulations
rely on computers and how simulationists primarily work with computer code rather than
mathematical models.
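For readers unfamiliar with time-stepping, the following minimal sketch (my own illustration in Python; it is not one of the codes discussed in this study) shows the basic idea: equations derived from a physical law are advanced in small discrete time steps to obtain the model's behaviour over time.

# Minimal illustration of numerical time-stepping (hypothetical example, not a code
# from the fields studied here): a damped harmonic oscillator advanced with the
# explicit Euler method.
def simulate(k=1.0, c=0.1, x0=1.0, v0=0.0, dt=0.01, steps=1000):
    # Advance the model state (position x, velocity v) step by step and return the trajectory.
    x, v = x0, v0
    trajectory = []
    for step in range(steps):
        a = -k * x - c * v          # acceleration given by the model equations
        x = x + v * dt              # update position from the current velocity
        v = v + a * dt              # update velocity from the computed acceleration
        trajectory.append((step * dt, x))
    return trajectory

if __name__ == "__main__":
    for t, x in simulate()[::100]:  # sample every 100th step
        print(f"t = {t:6.2f}  x = {x:+.5f}")

Actual research codes implement far more elaborate schemes, but they share this basic structure of advancing a model state through discrete steps.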
2. I use the term 'participants' to refer both to groups and individual researchers. The defining
characteristic is that each participant uses a particular numerical code or version of one.
Participation in intercomparison projects refers to the process of discussing problem formu-
lations, submitting outputs and writing articles. Participants in these projects often come from
several different countries.
3. By 'climate model' I refer to global coupled atmosphere-ocean general circulation models
(AOGCMs). Other models are also used for climate studies, but AOGCMs have become the
most widely used tools for climate studies and a synonym for 'climate model' (see Shackley
et al., 1999).
4. Note that this is the title of the fourth report. The previous reports were entitled The Scientific
Basis.
5. See Agrawala (1998), Shackley et al. (1999), van der Sluijs et al. (1998), Miller and Edwards
(2001) and Skovdin (2000).
6. Simulationists often refer to simulations as 'numerical experiments' or simply 'experiments'.
I do not explore the meaning of 'experiment' in their usage.
7. The organization of code intercomparisons became a particular research topic within the
framework of a larger study, of which this paper forms only a part. The general aim of the
overall study is to reconstruct the perspectives that develop among simulationists and to
improve sociological understanding of numerical simulations as scientific practices. The
study covers a broad variety of research on the processes and forces (such as turbulence,
convection and magnetic fields) involved in planet formation, accretion discs, mass loss and
formation of stellar (star) content in galaxies, and modelling of different aspects of stars. It
also covers modelling in oceanography of small- and large-scale dynamics in the oceans and seas and of processes such as sea–ice dynamics, air–sea interaction and coupled physical–chemical–biological reactions. Finally, the study covers modelling in meteorology on the
dynamics in the lowest part of the atmosphere (the boundary layer), radiation transfer and
processes such as aerosol and cloud interactions. Due to the interest in climate change, a lot
of the research in oceanography and especially meteorology is linked to climate-related ques-
tions. For the present article, I have selected only the part of the material that is relevant for
analysing intercomparison projects.
8. The interviewees included one meteorologist who worked in Norway, one meteorologist in the Netherlands, four ocean simulationists in Norway and one astrophysicist in Denmark; the rest of the interviewees worked in Sweden. Eight interviews were conducted in English, the rest in Swedish. To protect the anonymity of the researchers, I do not inform the reader which quotes I have translated into English and which were originally in English.
9. Most of these gatherings were held at or (co-)organized by the Department of Meteorology or
the Department of Astronomy at Stockholm University, the Nordic Institute for Theoretical
Physics, or the Department of Physics and Astronomy at Uppsala University.
10. Most of the quotations in this paper are taken from the interviews.
11. Accounts contain different levels of reasoning and we can interpret them as descriptions of
social practice, as part and parcel of social practices (as perspectives linked to practices), or
as discursively constructed and therefore autonomous from (other) social practices (Gubrium
and Holstein, 1997). The analysis here primarily alternates between the first two analytical modes, but the possibility of considering them as rhetoric (autonomous from practice) is stated explicitly when necessary.
12. Note how participation in the CMIP is referred to as participation in the IPCC work, suggesting how closely CMIP is considered to be connected to the IPCC. They are, however, different entities.
13. This is particularly the case with weather prediction models. They are heavily tuned in order
to work and give a reasonable output (Sundberg, 2009).
14. The orientation towards the demands of CMIP is not unique to the group at this climate centre. For example, the list of notable improvements for one of the few AOGCMs that is also open for use by researchers outside the centre where it is being developed mentioned 'new, easy to use methods to run IPCC climate change experiments' (http://www.ccsm.ucar.edu/models/ccsm3.0/notable_improvements.html, accessed 15 August 2008; see also Kintish, 2008). See also note 12.
15. The ability to re-run simulations depends on how long it takes initially to run them. It can take
a few hours on a laptop or up to several months on supercomputers, as was the case with some
global climate scenario simulations.
16. The Global Energy and Water Cycle Experiment (GEWEX) is one of several international
sub-programmes under the World Climate Research Program in which many intercomparison
projects have been conducted. Interestingly, one report from the GEWEX Cloud System Study mentions that the participants have fallen into the 'intercomparison trap' and states that, while intercomparisons are valuable for establishing community benchmarks, they do not represent the achievement of scientific progress (Global Energy and Water Cycle Experiment, 2000: 25). This implies that only new, innovative scientific results, rather than common standards, constitute genuine scientific progress.
17. Notably, several scientists mentioned that even if there is an aim to make a more thorough
comparison for the sake of progress, there is rarely much more than a qualitative intercom-
parison taking place. See also van der Sluijs et al. (1998: 309).
18. If the relationships that compose the model are simple enough, it may be possible to use mathematical methods (using, for example, algebra or probability theory) to obtain an analytic solution through pen-and-paper derivations. Many models are too complex to allow analytic evaluation and must be studied by means of numerical simulation (a brief worked example follows these notes).
19. One reason behind the network is the use of measurements for data assimilation into opera-
tional weather forecasting. For an analysis of the meteorological network, see Edwards (2006).
20. For example, the most famous re-analysis set, the ERA-40 dataset from the European Centre for Medium-Range Weather Forecasts, represents the global state of the atmosphere during the period 1957–2001. Only three weather services in the world have teams working on creating re-analysis data.
21. See Sundberg (2006) on comparisons of model output and observation data, and the relation-
ship between modellers and experimentalists, in meteorology. See Oreskes et al. (1994) on
the validation of simulation models more generally.
22. For more detailed analyses of the interpretive activities of scientists in relation to results, see,
for example, Lynch and Woolgar (1990). See Mallard (1998: 589ff.) in relation to standards.
Compare also van der Sluijs et al. (1998) on retaining previously accepted estimates.
23. It is also possible that, if intercomparison-related work takes more and more time and becomes integral to research, which seems to be the case with climate modelling, these publications become a means to promote individual contributions to particular topics of analysis rather than to any common standard.
24. In meteorological intercomparisons, numerical weather prediction models are sometimes
included. On the relationship between meteorological research and weather prediction, see
Harper (2003) and Sundberg (2009). See also Collins' (1999: 189) discussion of science policy and the opportunities that different sources of funding provide.
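As a purely illustrative aside to note 1, the following minimal sketch shows what a numerical time-stepping procedure can look like. It assumes a toy exponential-decay model dx/dt = -kx; the model, parameter names and values are hypothetical examples introduced for illustration only, not drawn from any of the simulation codes discussed in this article.

def euler_step(x, k, dt):
    # Advance the state x by one time step of size dt using the forward Euler scheme.
    return x + dt * (-k * x)

def simulate(x0, k, dt, n_steps):
    # Apply the time step repeatedly to obtain the model's behaviour over time.
    trajectory = [x0]
    for _ in range(n_steps):
        trajectory.append(euler_step(trajectory[-1], k, dt))
    return trajectory

# Hypothetical example run: initial value 1.0, decay rate 0.5, step size 0.1, 100 steps.
result = simulate(x0=1.0, k=0.5, dt=0.1, n_steps=100)
print(result[-1])  # numerical approximation of x at t = 10

Different computer codes based on the same mathematical model would differ in, for example, the stepping scheme, resolution and implementation details, which is part of what intercomparison projects probe.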
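Similarly, as a worked illustration of the contrast drawn in note 18 (again using my own toy model, not one from the studied fields): for the simple decay model above, an analytic solution exists and can be derived by pen and paper,

\frac{dx}{dt} = -kx, \quad x(0) = x_0 \;\Longrightarrow\; x(t) = x_0 \, e^{-kt},

whereas models composed of many coupled, nonlinear relationships generally admit no such closed-form solution and must instead be stepped forward numerically, as in the sketch above.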
References
Agrawala S (1998) Context and early origins of the Intergovernmental Panel on Climate Change. Climatic Change 39: 605–620.
Aspers P (2005) Identitetsformation i gränssnitt (Identity formation in interfaces). Distinktion: Scandinavian Journal of Social Theory 11: 41–57.
Cole S (1970) Professional standing and the reception of scientific discoveries. American Journal of Sociology 76(2): 286–306.
Collins HM (1985) Changing Order: Replication and Induction in Scientific Practice. London:
SAGE Publications.
Collins HM (1999) Tantalus and the aliens: Publications, audiences and the search for gravitational waves. Social Studies of Science 29(2): 163–197.
Edwards PN (2006) Meteorology as infrastructural globalism. OSIRIS 21: 229–250.
Fujimura J (1987) Constructing 'do-able' problems in cancer research: Articulating alignment. Social Studies of Science 17(2): 257–293.
Global Energy and Water Cycle Experiment (2000) GEWEX Cloud System Study. Second Science
and Implementation Plan. IGPO Publication Series, no. 34.
Gubrium JF and Holstein JA (1997) The New Language of Qualitative Method. Oxford: Oxford
University Press.
Harper KC (2003) Research from the boundary-layer: Civilian leadership, military funding and the development of numerical weather prediction (1946–1955). Social Studies of Science 33(5): 667–696.
Kintish E (2008) Climate science: Turbulent times for climate model. Science 321(5892): 1032–1034.
Lahsen M (2005) Seductive simulations? Uncertainty distribution around climate models. Social Studies of Science 35(6): 895–922.
Latour B (1987) Science in Action: How to Follow Scientists and Engineers through Society.
Cambridge: Harvard University Press.
Lynch M and Woolgar S (eds) (1990) Representation in Scientific Practice. Cambridge, MA: MIT Press.
Mallard A (1998) Compare, standardize and settle agreement: On some usual metrological problems. Social Studies of Science 28(4): 571–601.
Merry SE (1984) Rethinking gossip and scandal. In: Black D (ed.) Toward a General Theory of Social Control, vol. II. New York: Academic Press, 271–302.
Merton RK (1968) The Matthew effect in science. Science 159(3810): 56–63.
Miller CA and Edwards PN (eds) (2001) Changing the Atmosphere: Expert Knowledge and
Environmental Governance. Cambridge, MA: MIT Press.
O'Connell J (1993) Metrology: The creation of universality by the circulation of particulars. Social Studies of Science 23(1): 129–173.
Oreskes N, Shrader-Frechette K and Belitz K (1994) Verification, validation, and confirmation of numerical models in the earth sciences. Science 263 (4 February): 641–646.
Podolny JM (2005) Status Signals: A Sociological Study of Market Competition. Princeton:
Princeton University Press.
Schaffer S (1992) Late Victorian metrology and its instrumentation: A manufactory of ohms. In: Bud R and Cozzens S (eds) Invisible Connections: Instruments, Institutions and Science. Bellingham, WA: Optical Engineering Press, 23–56.
Shackley S, Risbey J, Stone P and Wynne B (1999) Adjusting to policy expectations in climate change modelling: An interdisciplinary study of flux adjustments in coupled atmosphere-ocean GCMs. Climatic Change 43: 413–454.
Shrum W, Genuth J and Chompalov I (2007) Structures of Scientific Collaboration. Cambridge,
MA: MIT Press.
Skovdin T (2000) The Intergovernmental Panel on Climate Change. In: Andresen S, Skovdin T, Underdal A and Wettestad J (eds) Science and Politics in International Environmental Regimes. Manchester: Manchester University Press, 146–180.
Stewart D (2005) Social status in an open-source community. American Sociological Review 70: 823–842.
Sundberg M (2006) Credulous modellers and suspicious experimentalists? Comparison of model output and data in meteorological simulation modeling. Science Studies 19(1): 52–68.
Sundberg M (2009) The everyday world of simulation modeling: The development of parameterizations in meteorology. Science, Technology, & Human Values 34(2): 162–181.
Sundberg M (2010) Organizing simulation code collectives. Science Studies 23(1): 37–57.
Turkle S (2004 [1984]) The Second Self: Computers and the Human Spirit, 20th anniversary edn.
Cambridge, MA: MIT Press.
Van der Sluijs J, van Eijndhoven J, Shackley S and Wynne B (1998) Anchoring devices in science for policy: The case of consensus around climate sensitivity. Social Studies of Science 28(2): 291–323.
Weber M (1978) Economy and Society: An Outline of Interpretive Sociology. Berkeley and Los
Angeles: University of California Press.
Biographical note
Mikaela Sundberg is Associate Professor at the Department of Sociology, Stockholm
University. She has published several articles on meteorological research practices,
including field experimentation and numerical simulations, and the relationship between
them. Presently, she is conducting a comparative analysis of numerical simulation practices in astrophysics, oceanography and meteorology, reported in publications including 'Organizing Simulation Code Collectives' (Science Studies 23(1), 2010).