Hollinger — Week 6

Zipcode: The zip code layer contains all the zip codes within Delaware County's limits. Where there were tax-exempt parcels or roads without zip codes, zip codes were manually assigned based on location in the layer.

Recorded Document: This data set contains locations of recorded documents (such as vacations, subdivisions, centerline surveys, surveys, annexations, and miscellaneous documents) from the Delaware County Recorder’s Plat Books, Cabinet/Slides, and Instrument Records. This layer was created to aid in locating documents within Delaware County.

School District: This layer/data contains all the school districts within Delaware County. 

Mapsheet: This dataset contains all the map sheets within Delaware County.

Farm Lot: This dataset contains all of the farm lots (as seen in the US Military and the Virginia Military Survey Districts) of Delaware County.

Township: This layer consists of the different townships in Delaware County. There are 19 total.

Street Centerline: The center of pavement for public and private roads is depicted in this layer. It was developed with data from field observation and addresses from building permits. 

Annexation: This layer depicts the annexations and conforming boundaries of Delaware after 1853. 

Condo: This layer contains polygons for all condominiums in Delaware County recorded with the Delaware County Recorder’s Office.

Subdivision: This data contains all subdivisions and condos recorded with the Delaware County Recorder’s Office.

Survey: This layer consists of points that represent all land surveys within Delaware County. The surveys come from the Recorder’s Office and the Map Department; older surveys are not included.

Dedicated ROW: This layer depicts all dedicated right-of-way lines in Delaware County. This data is derived from parcel data.

Tax District: This layer shows all of the tax districts within Delaware County as determined by the Delaware County Auditor’s Real Estate Office.

GPS: This layer depicts data points of all GPS monuments (est. 1991-1997) in Delaware County.

Original Township: This layer depicts the original boundaries of Delaware County’s townships prior to changes resulting from tax districts.

Hydrology: This layer depicts all major waterways in Delaware County. 

Precinct: This layer depicts all the voting precincts within Delaware County as defined by the Delaware County Board of Elections.

Parcel: This layer depicts all of the cadastral parcel lines in the form of Polygons in Delaware County. 

PLSS: This layer contains polygons that represent all the Public Land Survey System of the US Military and the Virginia Military Survey Districts of Delaware County. 

Address Point: This layer consists of points that represent all certified addresses in Delaware County as defined by the State of Ohio Location Based Response System (LBRS).

Building Outline: This layer depicts building outlines of structures in Delaware County.

Hollinger Week 5

Chapter 6:

  1. Problems:
    1. 6A #7: There was only one symbol (not separate ones for ingrowth, unknown, etc.), so I could only apply one type of symbology.
    2. 6A #9-13: Because of the previous problem I couldn’t make more than one type of tree, and there were no trees shown on the base map, so I just used my one symbol and placed the trees randomly.
    3. 6B #15: There was no option to save to the Esripress folder like the book wanted me to. This didn’t end up affecting any later steps.
    4. 6C #1: The ArcGIS Collector app is now called ArcGIS Field Maps.
    5. 6C #6: I couldn’t choose which type of tree again because of the first problem.
  2. Terms/Comments:
    1. It was interesting to see how all of the ArcGIS platforms could collaborate with each other. I honestly didn’t expect this to work considering some of the data connection issues I had last week. 
    2. It was kind of frustrating that I couldn’t symbolize multiple types of trees. You would have to click on them and look at their popup box or pull up the attribute table to see what kind they were on my map. It made me realize how different colors and symbolizations could be useful though! 
    3. I also thought the fact that you could create a web layer on one platform and publish it to view and use on different platforms was really useful. I think it increases the means for collaboration between individuals working on projects across different platforms.
  3. My maps:


Chapter 7:

  1. Problems:
    1. 7B #14: I couldn’t find the locate pane anywhere. I don’t think it ended up affecting the final result. 
    2. 7C #7: There was no mile option, so I just used US Survey Miles instead.
  2. Terms/Comments:
    1. Address locator: file that contains reference data and various geocoding rules and settings. 
    2. Having the street layer and reference data already set in the program makes it really easy to assign addresses. I feel like it would be very hard to find addresses without it because you would probably have to go search for each one individually.
    3. I could see the application of this chapter to be used to see which houses might be worth more money because of their proximity to certain locations and attributes like the accessibility ones shown at the end of the chapter. I think this could also be used by individuals to find houses that have certain criteria they desire. 
    4. Buffers: polygons that are created around a feature at specified distances (used around bike lanes and proposed bike lanes in this chapter – helped show the proximity to certain properties)
  3. My maps:

Chapter 8:

  1. Problems:
    1. 8A #10: There was no output location box. 
  2. Terms/Comments:
    1. Temporal data: data that has a time attribute
    2. Kernel density: calculates the density of features in an area
    3. Hot spot analysis: identifies statistically significant clusters (symbolized in red for hot spots and blue for cold spots)
    4. Space-time cube: helps visualize how data is distributed over an area and over time.
    5. Honestly, I thought everything was pretty straightforward until the space-time cube. I feel like the hot spot analysis already showed the distribution over a geographical area, and I didn’t really understand what the vertical portion represented. It also blocked the middle elements and made them hard to see.
    6. I also thought the controls were kind of hard for viewing the cube personally. I would always go to a weird angle or too close or too far. It was hard to get the position you wanted for the view.
    7. The animation portion of the chapter helped me understand the time portion of the 3D hotspot layer. I also thought the option to “step through” each month was useful for visualizing the change. 
  3. My maps:


Chapter 9:

  1. Problems:
    1. 9C #12: There was only a box for value and not a start and end value so I couldn’t fill out the table properly. I think I ended up just putting in start values and my map seemed to look the same as the book so I think it was okay.
  2. Terms/Comments:
    1. I liked this chapter. I think the difference between before and after clipping and seeing the shadows was really useful for understanding how a visual can change the way you see a feature. 
    2. I also liked how red and green were used to visualize good places to plant. This showed how effective narrowing down your classes can be because when there were more colors before we changed them, it was harder to visualize these areas.
    3.  I also thought outlining the plots was useful in visualization too. I could see an application to farming in that maybe you mark off places with better soil or some other features to determine where to plant which crops.
    4. Hillshade: a layer that depicts the shadows cast by an illumination source
    5. Azimuth: direction of the sun
    6. Altitude: the angle of the sun above the horizon
    7. Using the model builder also always helps me visualize how the steps I’m doing go together to make a complete outcome vs. when I am just going through them step by step in the book directions. 
  3. My images:


Chapter 10:

  1. Problems:
    1. 10A #7: The “Move Value Button” was actually called “Reorder” 
  2. Terms/Comments:
    1. Chapter 10 felt a lot longer than the other chapters, but I think it was because there were a lot of new elements, especially when making the layout toward the end. 
    2. Symbol layer drawing: helps you override and change the default settings and order of the layers. 
    3. Label class: used to specify details of how labels are positioned and symbolized. 
    4. I kept getting confused between the Legend and Legend Item panes toward the end of the chapter. I felt like the book kept switching between the two without clarifying you have to get to them in different ways and they do different things.
    5. Scale bars: show sizes and distances on a map
    6. Dynamic Text: can provide additional information about your map to viewers like Spatial References in this case.
  3. My image:

Week 4 – Hollinger

Chapter 1

Chapter 1 was pretty straightforward. I didn’t have any problems with it, but it did help me make connections to some of the topics we learned about in the Mitchell book. Some of the topics I remembered from the Mitchell book were vectors, rasters, and attributes.

It was neat seeing how the layers could be turned on and off so that not everything was visible at once when trying to work on the map. One thing I did run into: on step 12 of exploring the map, I did not see an Enable Outline button, but after looking for a while I just skipped past it, and it didn’t seem to make much of a difference. I also thought it was really neat how you could filter the data really easily so the map only showed certain types of incidents.

My favorite feature of this chapter was probably the popup windows. As a data major, I feel like we often just look at big data sets without much context. This however took parts of the data and gave them geographical context that you could visualize, which was super interesting to look at. This helped me understand how maps could be used to categorize crimes and how that data can be applied to high-crime areas to make them safer.  

Here is a map of my work for chapter one:

Chapter 2

Exercise 2A: Exercise 2A was again pretty easy and straightforward to me. It did take a second to find all of the buttons since I wasn’t familiar with the software at all. I did get a little confused finding the ESRI Press folder but it turns out I just missed the part where you had to download it at the front of the book. Importing the map and making the folder connection was pretty simple. It was also interesting to see the difference in how the popups looked in ArcPro vs. Online. Ultimately, this section didn’t have you change much, but really just look around and learn how to use the tools like explore and select. 

Exercise 2B: I liked how you got to play around more with the data and symbology in this section more than in Chapter 1. The option to edit and change the size of the symbols and the visibility range so you couldn’t see labels until a certain extent was really neat because it helped make the map less cluttered. I also recognized the use of graduated symbols from the Mitchell book. I could also see how measuring the distance between features could be really useful. I don’t remember base maps being mentioned in the Mitchell book, but I could have just missed it. I thought the different types of base maps could be beneficial. Streets might be more useful in city areas, while oceans are more useful when looking at a larger view of the world. 

Exercise 2C: Exercise 2C and I did not get along. There was a problem, which I think was with the folder connection and path: I couldn’t save my project in the 3D folder, and because of that I couldn’t access the files in that folder to do the activity. So, I ended up having to skip this activity.

Here is my picture from Chapter 2 A and B:

Chapter 3

Exercise 3A: A new function I learned about was Attribute Query: a request for features in a table that meet user-defined criteria. This was really useful for selecting and narrowing down your map to only a certain area (Illinois in this case) from a broader region. I could see this being used to focus on only specific counties in a state, or maybe even certain areas in a park or preserve. 

Exercise 3B: This section also applied concepts of symbology discussed in the Mitchell book, like graduated colors. I also thought it was interesting to see the natural breaks (Jenks) classification method from the Mitchell book on an active map and how drastically changing the class values can change the map. The import-layer-symbology process got a little repetitive after doing it six times, but it wasn’t a super complicated process. After that, I only ran into one problem here, which was that the Appearance tab is apparently called the Feature Layer tab, so it took me a while to find the Swipe tool.

Exercise 3C: After I created the new map and tried to add a new Perc_change field to the attribute table, I ran into a problem where it wouldn’t load, but it eventually did the next day, although the field wasn’t in the place the book said it would be. Calculating the field value was really easy. I expected it to be more complicated than filling in a few boxes. The analysis and summary statistics were the same way. I did like the infographic produced. It was a different but still visually appealing way to see the data, although you did lose some of the geographic context when looking just at the infographic.

Exercise 3D: I did not run into any problems with this section. It was short and to the point. I liked the visual application of food deserts. That was one of the applications I wrote about in my week 1 post, so it was interesting to get to work with it myself. The process for spatially joining data was again surprisingly simple, and I did not expect it to be just a few quick drop-down boxes. The way that this allows the number of food deserts to be shown in each county was really useful. I could see how it could be used in situations where there are too many features to be displayed in one area, so instead of overcrowding the map, the count could just be displayed. It makes it easier to see the number of features without having to count them.

Here’s a screenshot of my final Chapter 3 map:

Chapter 4 

Exercise 4A: I didn’t run into any problems with this exercise either. I thought the process of building a geodatabase wasn’t too complicated. I had a little trouble finding the tools at first, but once I figured out how to search for them it was simple. Changing the symbology was again pretty straightforward. I liked the use of different symbols to showcase the difference between wells, fire hydrants, etc. I could see how this would be applied to making different features distinct from each other in different contexts. I understood mapping the attribute values with x,y points and the attribute domains. Attribute domain: a set of valid values, or a numerical range, to which attributes in each field must be limited. I think this would be useful when you have to establish differences between similar attributes.

Exercise 4B: I had a problem finding the bookmark in Exercise 4B. When I looked on the bookmark tab it just said “No bookmarks.” Ultimately, I just ended up zooming in and finding the area that needed to be edited. Using the select tool to edit the pipe was pretty simple. Entering the attribute value was straightforward as well. This would be useful for describing a feature: even if you took the feature off the map, its location information would still be provided in the data set.

Exercise 4C: Merging the polygons and choosing the attributes to preserve were again simple popups through the edit tool, similar to selecting the pipeline in Exercise 4B. However, I ran into the bookmark issue again when it came to the Move bookmark. I ended up just finding the spot and selecting it again. I thought it was nice how you can move vertices instead of having to delete and replace the whole feature. This makes for faster editing, which I feel will be useful when working on larger projects. Drawing the polygon when adding map notes was similar, and it reminded me of adding the pipeline in 2B.

Here are my images for Chapter 4:

Chapter 5

Exercise 5A: This exercise was about building and executing tasks. Having the preset tasks made doing things like the definition queries really simple. I understood how this was useful to save time and prevent errors. However, what I did not understand was whether these tasks are made by the user to be executed by the same user or someone else. I feel like it would not make sense to make a task for yourself to do instead of just doing it.

Exercise 5B: This exercise brought back definition queries to limit the extent of the data to certain areas like countries, and again used graduated symbols. These two concepts are easy for me at this point. Then we had to use ModelBuilder. I understood how to build the model, but I didn’t really understand what it was for until the model ran. I feel like the book could have been clearer on that. It also talked about how you could set up certain processes to take parameters so you can change the values over and over again. I think this would be useful if you needed to do the same process with different values in the data.

Exercise 5C: Again, no issues on this exercise. I thought seeing the Python command was really interesting, as it is more similar to the code I am used to seeing in my computer science classes than doing processing through drop-downs and functions in GIS. I definitely prefer the drop-downs; they make the process a lot simpler. Seeing the code and how it is used in different or custom geoprocessing tools was very interesting, and I feel like it can make the software flexible enough to do things outside the limits of the buttons you find on the screen.

My chapter 5 pictures:

Hollinger Week 3

Chapter 5:

Chapter 5 built off a lot of what was learned in chapters 1-4. It reaffirms the importance of knowing whether your features are continuous or discrete when mapping. Mitchell notes that when dealing with discrete areas you can represent features with several different methods. These include drawing boundaries on top of each other, drawing them on top of a color-coded area, or shading and labeling the boundaries. The reading then details that for continuous features you should draw areas symbolized by category and quantity and then draw the boundary on top. I think the difference here is important, as continuous data must be represented differently (in this way, almost separately) for the map to accurately show features and help the viewer get a sense of the range of continuous values.

The chapter then goes on to talk about what kind of data you can get from maps, like lists and summary statistics, before it gets into what I thought was the most important part of the chapter. This was the portion discussing overlaying areas and features. It talked about two different methods of doing this, the vector method and the raster method, which reaffirmed the difference between vector and raster layers while providing a new mechanism for producing maps and representing features. Briefly, vector overlay splits category or class boundaries where they cross areas and creates a new dataset with the resulting areas. Vector is more precise, but it has one problem: slivers. As I understand it, slivers occur where borders are offset. Because these slivers are so small, it is important to merge them with the surrounding data. This brings us to the raster overlay. Raster overlay combines raster layers, counts the number of cells in each category within each area, and then calculates areal extent by multiplying the number of cells by the area of a cell. This can ultimately be less efficient depending on cell size, but it does prevent slivers.
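The raster-overlay bookkeeping described above (count the cells in each category inside an area, then multiply by the area of one cell) can be sketched in a few lines of Python. This is just a toy illustration, not how any GIS package implements it; the grid, mask, and function name are made up for the example.

```python
# Toy sketch of raster overlay areal extent: count the cells in each
# category that fall inside an area of interest, then multiply each
# count by the area of a single cell.
from collections import Counter

def areal_extent(category_grid, area_mask, cell_area):
    """category_grid: 2D list of category codes; area_mask: 2D list of
    booleans marking which cells fall inside the area; cell_area: the
    area covered by one raster cell."""
    counts = Counter(
        category_grid[r][c]
        for r in range(len(category_grid))
        for c in range(len(category_grid[r]))
        if area_mask[r][c]
    )
    return {cat: n * cell_area for cat, n in counts.items()}

grid = [["forest", "forest"],
        ["urban",  "forest"]]
mask = [[True, True],
        [True, False]]  # the bottom-right cell lies outside the area

# 30 m cells -> 900 square meters each
print(areal_extent(grid, mask, cell_area=900))  # {'forest': 1800, 'urban': 900}
```

Note how a smaller cell size means more cells to count for the same area, which is exactly the efficiency trade-off the chapter mentions.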


Chapter 6:

Chapter 6 was all about finding out what’s near and relevant to your feature(s). It talks about how travel is often measured by cost, which is time, money, or effort (referred to as travel costs), and by distance. The chapter then moved on to outline three different ways to find what’s nearby. The first and probably simplest of these is straight-line distance. Essentially, given a source feature and a distance, the GIS will find features within that distance. The next method is distance or cost over a network, in which the GIS finds segments within the distance or cost given source locations and a distance or travel cost along each linear feature. Finally, there is cost over a surface: you specify the location of the source feature and a travel cost, and the GIS creates a new layer showing the travel cost from each source feature.
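The straight-line-distance method is simple enough to sketch directly: keep every feature whose Euclidean distance from the source falls within the search distance. The feature names and coordinates below are invented for illustration.

```python
# Minimal sketch of the straight-line-distance method: given a source
# location and a search distance, return the features within range.
import math

def within_distance(source, features, max_dist):
    sx, sy = source
    return [name for name, (x, y) in features.items()
            if math.hypot(x - sx, y - sy) <= max_dist]

features = {"well A": (3, 4), "well B": (10, 0), "well C": (0, 1)}
print(within_distance((0, 0), features, max_dist=5.0))  # ['well A', 'well C']
```

The network and surface methods are more involved because they accumulate cost along edges or across cells rather than measuring "as the crow flies."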

This brings us to some new vocabulary from the chapter. First off, source locations are often referred to as centers. An impedance value is the cost to travel between the center and surrounding locations. Edges are lines, junctions are where edges meet, and turns are used to specify the cost to travel through a junction. You should check that these exist, are correct, and are in the right spot. These all help to define the network layer.

Another part of defining the network layer is cost. You can specify street direction or more than one center (rural vs. urban areas), as these details can change the cost by lengthening travel. The GIS also checks and tags the distance of each segment, keeping a cumulative total of cost or distance. One thing I did not understand about cost was the calculation. To find the monetary value the book gives the equation cents = length * (cost per mile / 5280), but I feel as though travel costs depend on many other factors like traffic, gas prices, etc. So, I am slightly confused about how the given cost is an accurate reflection without some way of factoring those in.
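As I read it, the book's formula assumes segment length is measured in feet, so dividing the per-mile cost by 5280 converts it to a cost per foot. A quick sketch (function name is mine, not the book's):

```python
# The book's segment cost formula: cents = length * (cost per mile / 5280).
# Dividing cents-per-mile by 5280 ft/mile gives cents-per-foot, so the
# length here is assumed to be in feet.
def segment_cost_cents(length_ft, cents_per_mile):
    return length_ft * (cents_per_mile / 5280)

# e.g. a half-mile segment (2640 ft) at 50 cents per mile costs about 25 cents
print(segment_cost_cents(2640, 50))
```

This also shows why the formula is only a baseline: traffic or fuel prices would have to be folded into the per-mile cost (or into per-segment impedance values) rather than appearing in the equation itself.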


Chapter 7:

Chapter 7 discussed mapping changes over time and how it can help predict future needs. It talked about mapping features previously discussed such as discrete features, data summarized by area, continuous categories, and continuous values. Specifically, it talked about how these features can change in character and magnitude. A change in character might be something like a physical movement of a feature, whereas a change in magnitude might be something like a hurricane or storm getting “worse” or “better”.

The chapter then moves on to talk about time. There are three ways to measure time: trends, before and after, and cycles. A trend is a change between two or more dates or times. This shows increases, decreases, and direction of movement. Before and after shows conditions preceding and following an event. This lets you see the event’s impact. Finally, a cycle shows change over a recurring period and can give insight into the behavior of the features you are mapping. There are two ways to represent these changes in time as well. The first is a snapshot, which shows the condition at any given moment and is used to map continuous phenomena. The second is a summary, where an event either is or isn’t occurring at a given time, and is used for mapping discrete events. For cycles, you can use a snapshot or summary; for discrete events, use a summary; and for continuous data, use a snapshot.

The final portion of the chapter discussed the three ways of mapping time. The first is a time series. This represents movement or change in character. It can use a trend, cycle, or before and after, and shows conditions at each date/time, but it can be hard for readers to compare visually. You should use this for snapshots when you have two or more times. The second is a tracking map, which is used for movement and can represent a trend, cycle, or before and after. It makes it easier to see subtle movement but can be difficult to read if there are many features. You should use this method when you have feature movement over two or more times. Finally, measuring change measures a change in character. This can represent a trend or before and after and shows the actual difference in amounts or values. However, it doesn’t show any actual conditions and only uses two times. The chapter then goes into thorough detail on the process of creating each of these maps. Overall, I thought this chapter was straightforward and I don’t have any questions about it.

Hollinger – Week 2

Chapter 1:

Chapter one was a good introduction and foundation for concepts the book explains more in-depth later in Chapters 2, 3, and 4. Some of the important terms were: discrete (the feature’s actual locations can be pinpointed), continuous (the features blanket the entire area you are mapping and aren’t pinpointed to one location), and summarized by area; categories (groups of similar things), ranks (features in order from high to low, where you only know where a feature falls in the order, not how much higher or lower it is), counts/amounts (counts are the actual number of features on the map; amounts are any measurable quantity associated with a feature), and ratios (which show the relationship between two quantities). Discrete and continuous are considered types of features, while categories, ranks, counts/amounts, and ratios are considered types of attribute values. Furthermore, categories and ranks are not continuous values; they are a set number of values in the data layer. Counts, amounts, and ratios are continuous values; each feature could have a potentially unique value in the range (highest to lowest). Thus, each of these types of attribute values can be classified as a certain feature type. Another key part of chapter one was the difference between calculating and summarizing in your data tables. Calculating allows you to assign new values to features in your table, and summarizing allows you to take the values of certain attributes and get statistics from them. An example of summarizing would be calculating the mean of an attribute’s values. This is important as it is the basis of how you get your data values to work with in the GIS.
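The calculate-vs-summarize distinction can be shown with a tiny Python sketch: calculating assigns a new value to every feature (row), while summarizing collapses all features to one statistic. The field names and values here are invented for illustration.

```python
# Toy illustration of calculating vs. summarizing in an attribute table.
from statistics import mean

# Each dict is one feature's row in the attribute table.
parcels = [{"acres": 2.0, "value": 100000},
           {"acres": 4.0, "value": 300000}]

# Calculate: assign a new per-feature field (a ratio, value per acre).
for p in parcels:
    p["value_per_acre"] = p["value"] / p["acres"]

# Summarize: derive one statistic across all features.
avg_value = mean(p["value"] for p in parcels)

print([p["value_per_acre"] for p in parcels])  # [50000.0, 75000.0]
print(avg_value)  # 200000
```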

These features seem to be the foundation of GIS and classifying data, because if you know how to determine which of these terms your data falls under, then you will know the best ways to represent it when it comes to mapping. Mitchell explains that understanding each of these terms and correctly classifying your data will ideally lead to maps that better represent and display the patterns you are trying to see.

Chapter 2:

Chapter two takes the terms from chapter one and dives a little deeper into the why, what, and how you should map. In terms of what you should map, Mitchell discusses three key things: knowing where your locations are, being appropriate for the audience, and being appropriate for the issue. This means things like not providing too much or too little detail and paying close attention to adding reference features, like roads your audience may know, to give them context. The next portion of the chapter discusses how to get your data ready to map (assign it coordinates and category values) and then the mapping itself. It goes over two types of mapping: mapping a single type and mapping by category. When mapping a single type, Mitchell recommended using the same symbol to represent all features, but you can also show a subset of features with category values. When mapping by category, a different symbol should be used to represent each category. Creating a separate map or subset for each category may make patterns easier to see as well. I think the most important thing I took away from this part of the chapter is that the way you choose to represent the features can alter the patterns. For example, you should use no more than seven categories because patterns become harder to distinguish. Additionally, using too small or too big an area relative to your features can obscure patterns as well. Mitchell concludes by talking about how symbol colors and size, as well as reference features, can change and affect the look of your map. Overall, I thought this discussion of aesthetics was important because looking at some of the “wrong” examples in the book, you could tell they didn’t display the data as well as the “right” examples.

Chapter 3:

Chapter three was a long chapter, but it was pretty straightforward. I found that in many spots it often just elaborated on the concepts and terms we had learned in previous chapters. It went back over discrete and continuous features, data summarized by area, counts/amounts, ratios, ranks, and classes. Then it goes on to talk about different schemes you can use to determine your distribution values. There are four types: Natural Breaks or Jenks (a natural grouping of data values, with breaks where there is a large jump in values), Quantile (each class contains an equal number of features), Equal Interval (the difference between high and low values is the same for every block), and Standard Deviation (based on how much values vary from the mean). Each of these distributions creates a very different map because certain data points fall differently into the categories depending on which distribution you use. Mitchell then continues to show how you can use a bar chart to visualize the distribution and determine which classification scheme is best. He then talks about outliers and how they can skew the data, so you should make sure they are not a mistake and then group them into their own category or in with the rest of the data. The chapter then moves into an in-depth discussion of ways to show quantities on a map. These are graduated symbols, graduated colors, charts, contours, and 3D perspective views. The book thoroughly discusses all of these and their appropriate uses, advantages, and downfalls. I personally didn’t like the way any of the chart maps looked. I felt like they displayed too much information and the charts were so small it was hard to read. The chapter was finalized with a discussion of what patterns to look for in maps. These included highest, lowest, clusters, scattered, and even distribution.
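Two of those classification schemes are simple enough to sketch and compare on the same data. This is a minimal illustration (function names are mine, and real GIS software handles ties and edge cases more carefully); note how a single outlier distorts the equal-interval breaks while barely affecting the quantile ones, which matches the chapter's point about outliers skewing the data.

```python
# Equal interval: split the value range into n equally wide classes.
def equal_interval_breaks(values, n_classes):
    lo, hi = min(values), max(values)
    step = (hi - lo) / n_classes
    return [lo + step * i for i in range(1, n_classes)]

# Quantile: put (roughly) an equal number of features in each class.
def quantile_breaks(values, n_classes):
    ordered = sorted(values)
    per_class = len(ordered) / n_classes
    return [ordered[int(per_class * i)] for i in range(1, n_classes)]

values = [1, 2, 3, 4, 5, 6, 7, 8, 100]  # one outlier: 100

print(equal_interval_breaks(values, 3))  # [34.0, 67.0]
print(quantile_breaks(values, 3))        # [4, 7]
```

With equal interval, the first class swallows almost every feature because the outlier stretches the range; quantile keeps three features in each class regardless.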

Chapter 4:

Chapter 4 was a lot shorter than chapter 3, but it went a lot more in-depth. Particularly, it focused on density mapping. Mitchell discusses two ways to map density. The first is by defined area. In this method, when using a dot map, each dot represents a feature and is distributed randomly in the given area. These dots DO NOT represent the exact locations of features. You can also graph the density value for each area. In this case, using too large or too small areas can skew your graph, making patterns hard to see. The other method of mapping uses density surfaces. This uses a raster layer, as discussed in chapter one, and each cell gets an individual value. This process is much more detailed but takes longer. The chapter then goes into depth on dot density maps. The most important takeaway I got from this section was that the more features each dot represents, the more spread out the dots will be, and dots should not be so big as to obscure patterns. After this discussion concludes, the chapter then moves into the specifics of creating a density surface. This discussion includes what cell size to use, how large the search radius should be, and two calculation methods. The simple method counts only features within the search radius of the cell, while the weighted method uses mathematical functions to give more importance to features toward the center of the cell. Ultimately, the weighted method results in a smoother surface that is easier to interpret. The chapter then moves into how you should display the data and brings back the distribution models discussed in chapter three (natural breaks, quantile, equal interval, and standard deviation). They are applied to density in a similar way as discussed in chapter 3. The chapter discusses contour lines and how adding them to your density surface can provide clear labels and show variance across a region. This helps make patterns and features clearer to the audience.
The chapter ends on an important note: you should map the features on which you based the density along with the density surface, or on a separate map. I believe this is important because it provides context for the viewer.
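The difference between the two density calculation methods can be shown in a few lines. This is a toy sketch with made-up points, not any GIS package's actual kernel: the simple method counts every feature inside the search radius equally, while the weighted method uses a function (here a simple linear falloff, as an assumption) so that features near the cell count more than features near the edge of the radius:

```python
# Toy sketch of the simple vs. weighted density calculations from
# Mitchell ch. 4, for one raster cell. Points are hypothetical.
import math

def simple_density(cell, features, radius):
    """Count features within the search radius of the cell, per unit area."""
    hits = sum(1 for f in features if math.dist(cell, f) <= radius)
    return hits / (math.pi * radius ** 2)

def weighted_density(cell, features, radius):
    """Linear falloff: a feature at the cell counts 1, at the edge ~0."""
    total = 0.0
    for f in features:
        d = math.dist(cell, f)
        if d <= radius:
            total += 1 - d / radius
    return total / (math.pi * radius ** 2)

features = [(0, 0), (1, 0), (0, 3), (5, 5)]   # (x, y) feature locations
print(simple_density((0, 0), features, 4))     # all 3 in-radius points count equally
print(weighted_density((0, 0), features, 4))   # nearby points dominate
```

Repeating this for every cell (and letting the radius overlap neighboring cells) is what smooths the weighted surface and makes it easier to interpret than the simple count.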

Hollinger Week 1

I’m Lauren and I’m from Canton, Ohio! I’m a freshman majoring in Data Analytics (and thinking about adding geography as my second major!). I love to ski, kayak, hike, camp, and play tennis in my free time.

I thought the Schuurman Ch. 1 reading was very interesting, especially because of all the applications of GIS it described. I especially thought the applications to water reservoirs and natural gas fuel lines were interesting. I was recently at ODNR, and I got to talk to them about how they use GIS and data for the Canalway and other water sources in Ohio, as we are a part of both the Lake Erie and Ohio River watersheds. So, that portion of the reading really helped me tie that experience and the course together. On top of that, the reading also references some applications I never would have considered at the beginning, like its use in Starbucks stores! I also thought the history portion of the chapter was engaging. Learning about the history of using tracing paper and a light table was insightful into the very beginnings of GIS. I also found the "black box" portion of the chapter interesting, about the original resistance to GISystems and how over time they eventually became accepted without a second thought.

I was also surprised by the distinction between GISystems and GIScience. I did not know that there were two terms, but I appreciate the explanation of how both are important and work together. In this regard, I thought the notion that GIScientists constantly question the efficiency of GISystems algorithms was super important. The algorithms we make can be so flawed, even if we don’t notice it at first. In high school, I wrote a report on racism in the medical field and found that organ donation algorithms favored white individuals over half the time. This always makes me think about how important it is to constantly keep an eye on and reevaluate the algorithms we make, because often they include our own underlying biases and we implement them anyway. Thus, I appreciate that the book explores this same notion and its truth for GIS algorithms.

The first GIS application I looked at was food deserts. Canton is a food desert, so I have always been interested in how to effectively view the impacted areas. Last year, a researcher from Ohio State came into one of my classes and showed us the maps he had compiled on Canton as a food desert, so after I read this chapter and thought back to that class, GIS came to mind. USC’s Spatial Sciences Institute has a whole website on how they use GIS to study food deserts. They have mapped everything from the availability of produce to the distance and difficulty of the path it takes to get to a store. They cross-apply this with maps of income levels and of people who own vehicles or other means of transportation. In one study done in Chicago, their researchers found that low-income neighborhoods had significantly less access to food, and poorer-quality food, compared to upper-class neighborhoods.

Another application I came across is for ski resorts. The application is called Snow Mappy, and it was created when the founder decided she wanted a better way to view ski resorts than just a paper map or a sign at the top of the hill. Snow Mappy uses GIS to map things like skier speed on certain trails by skill level, as well as the density and concentration of skiers on each trail. Skiers can use the application to track their location and find resources on the mountain, and the resort itself can use it to track staff and decide whether certain trails need to be added or changed based on where people ski. The map below displays skier density on trails by skill level: