Emma Clinton's Portfolio

Collection of Open Source GIScience Projects



GIS as Reproducible Science

This week, we considered the question of how to categorize GIS (as in geographic information systems). Is it a tool, or is it itself a science? Wright et al. (1997) argue that GIS can be considered in three different ways: as a tool, as toolmaking, or as a subject of study or science.

The question of what it means to be “doing GIS” is one I have never deeply considered, but it is an important one. Based on my academic experiences, “doing GIS” as a means to “do geography” seems a reasonable stance to take on this issue. One comment in this article really stood out to me: “GIS is the application of spatial science to the study of earthbound objects” (Wright et al., 1997). This resonated with many of my experiences while learning GIS: I have worked mostly in GUI environments and have viewed the GIS element of a task as a means to an end (in other words, I have implicitly learned to treat GIS as a tool). Because we use GIS to represent and analyze data to test theories about the physical world, I have a hard time disentangling GIS from the overall study of geography and from related disciplines like remote sensing and cartography (essentially, treating it as a method to “do science”). This might be because I have no real experience with software development; I can see how that aspect of GIS could certainly count as a science.

Thinking of GIS as a tool amounts to seeing these “systems” as computer applications and associated techniques that only gain meaning when scientists apply scientific knowledge to them (Wright et al., 1997). In this context, GIS is used to solve problems and answer questions. The tool is a neutral, yet essential and informed, part of the process of investigation. Importantly, the use of GIS, which in this case is not a science, does not impart the assumed validity associated with “doing science.”

Perhaps this class and my remote sensing class last semester venture into the territory of GIS as toolmaking. In our most recent lab, we needed to develop a tool to answer the question at hand; thus, that development was inseparable from the process of solving the problem.

Thinking of GIS as a science entails using GIS to develop and test theories (Wright et al., 1997). The use of GIS as a tool is grounded in scientifically established concepts, and the development of GIS can itself be thought of as science. Breakthroughs in GIS design could certainly be counted as scientific, as they extend the capabilities of knowledge generation. The “GIS as engineering” argument also applies here: GIS and engineering are both methods of “problem-solving” (Wright et al., 1997), but those methods of development are grounded in scientific, well-tested principles (likewise, “the design PRINCIPLES of GIS are ‘scientific’” (Wright et al., 1997)).

To make any headway with this argument, it is important to try to pin down a definition of “science,” which is difficult in and of itself. “Rigorous collection and evaluation of data in the production of knowledge,” in the spirit of “positivism” and Karl Popper’s “critical rationalism” (1959, as cited in Wright et al., 1997), seems a fitting definition; however, as mentioned in class, historic ways of categorizing “science” vs. “not science” are often subject to gatekeeping and problematic biases. As Prof. Holler mentioned, science is always subject to rules, but those rules often differ by field of study. “Doing science” does not always involve following the structure of the scientific method. I therefore don’t feel qualified to define something as “science” when I am not entirely sure what exactly science is.

Regardless of how you categorize GIS, there is certainly a place for GIS in the scientific world. Open source GIS in particular has much to offer in addressing the reproducibility crisis currently facing the scientific community.

According to the National Academies of Sciences, Engineering, and Medicine (2019):

“Reproducibility is obtaining consistent results using the same input data; computational steps, methods, and code; and conditions of analysis. This definition is synonymous with “computational reproducibility,” and the terms are used interchangeably in this report. Replicability is obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data.”
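To make that distinction concrete, here is a minimal sketch (my own illustration, not from the report) of what computational reproducibility asks of analysis code: given the same input data, code, and conditions, anyone should get byte-identical results. The coordinates and the mean-center “analysis” below are hypothetical stand-ins for real study data and methods.

```python
# A minimal sketch of computational reproducibility: the same input data
# and code, run under the same conditions, should always yield the same
# result. The data and analysis here are hypothetical, for illustration.
import hashlib
import random

random.seed(42)  # fix the random seed so any stochastic step is repeatable

# hypothetical input: point coordinates standing in for shared study data
points = [(random.uniform(-73.3, -73.1), random.uniform(44.0, 44.1))
          for _ in range(100)]

# a simple, deterministic "analysis": the mean center of the points
mean_x = sum(x for x, _ in points) / len(points)
mean_y = sum(y for _, y in points) / len(points)
result = f"{mean_x:.6f},{mean_y:.6f}"

# publishing a checksum of the output lets anyone verify that they
# reproduced the result exactly
print("mean center:", result)
print("checksum:", hashlib.sha256(result.encode()).hexdigest()[:12])
```

Replicability, by contrast, would mean collecting a new set of points and asking whether the same conclusion still holds.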

Replicability is a hallmark of good science (National Academies of Sciences, Engineering, and Medicine, 2019). Since science is intended to be a communal effort to increase human understanding of the world and to solve pressing problems, reproducibility and replicability should always be emphasized when “doing science.” The example we considered in class, an ESRI tool designed to determine the best locations for COVID testing sites, illustrated how even highly useful technology can be limited by gatekeeping (financial or otherwise) and can therefore fail to be truly communal or open to evaluation and confirmation by the greater community.

Science is a process of building knowledge upon previous knowledge. Without clearly documented methods and access to the underlying data, important mistakes in an analysis might never be caught. Methods that are not clearly described or made available cannot be replicated with different data to determine the bounds within which a scientific theory applies (as we discussed in class yesterday). In addition, using open source software and openly available data reduces the threat of bias arising from current pressures to overstate the importance of results. There should be more emphasis on reproducibility and replicability in scientific review processes, and projects that provide the resources for reproducing or replicating results should be lauded for their efforts. Greater esteem should also be afforded to those who go to the effort of reproducing the results of a study to determine their validity.

Readings:

Wright, D. J., M. F. Goodchild, and J. D. Proctor. 1997. GIS: Tool or science? Demystifying the persistent ambiguity of GIS as “tool” versus “science.” Annals of the Association of American Geographers 87 (2):346–362. https://doi.org/10.1111/0004-5608.872057

National Academies of Sciences, Engineering, and Medicine. 2019. Reproducibility and Replicability in Science. Washington, D.C.: National Academies Press. https://doi.org/10.17226/25303
