Villanueva Henkle Week 2

Chapter 1: Introducing GIS Analysis

 

This chapter begins by briefly introducing the uses of GIS and defining GIS analysis: looking for patterns among geographic features and the relationships between them. This can be done by creating or using a map, or by overlaying multiple layers to see differences that may not be readily apparent.

The next section describes the first step in the process, which is framing a question. Knowing what information you want from an analysis is key to creating this question, and you need to determine the audience (yourself, your peers, a professor) to successfully set up your methods and frame the question more accurately. The book also describes the different kinds of features you will encounter while doing GIS analysis: discrete features, continuous phenomena, and features summarized by area. Each has its own specific use case: discrete features are individual points (or lines) on a map, continuous phenomena are variables present across the whole area but varying from place to place (such as temperature or elevation), and summarized features show counts within a boundary, such as population within county lines.

The book then shows us two different ways of representing data in ArcGIS: vector and raster models. Vector is good for showing discrete and summarized features, as they typically use one layer, and raster is good for showing continuous categories or numeric values; however, either model can be used to represent any type of feature. I found this section fairly interesting: for continuous data there was not much visible difference between the vector and raster models, but there were large differences when discrete and summarized data were shown in each. The next section, dealing with attribute values, seemed fairly easy to understand, especially after working in RStudio Cloud for the past few semesters, and the final section seems nearly identical to working in R.
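The vector/raster contrast can be sketched in plain Python; this is an illustration with made-up numbers, not ArcGIS code. Vector data stores each discrete feature's coordinates and attributes, while raster data stores one value per grid cell, so its precision depends on the cell size.

```python
# Vector model: discrete features as coordinates plus attributes
# (the wells and depths are invented for illustration).
wells = [
    {"x": 2.0, "y": 3.0, "depth_m": 40},
    {"x": 5.0, "y": 1.0, "depth_m": 55},
]

# Raster model: a continuous surface as a grid of cell values,
# e.g. elevation sampled at a fixed cell size.
cell_size = 10  # map units per cell (hypothetical)
elevation = [
    [120, 125, 131],
    [118, 122, 129],
    [115, 119, 126],
]

def raster_value(grid, x, y, cell):
    """Return the value of the cell containing map coordinate (x, y),
    assuming the grid origin is at (0, 0)."""
    row, col = int(y // cell), int(x // cell)
    return grid[row][col]

print(raster_value(elevation, 25, 15, cell_size))  # -> 129
```

Shrinking `cell_size` (with a correspondingly larger grid) is what makes a raster more precise, at the cost of storage.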

 

Chapter 2: Mapping Where Things Are

 

The main focus of this chapter, in my opinion, is making your maps as accurate and as easy to read as possible. The chapter starts with numerous examples of how and why you may need to show others your maps, emphasizing that you will need to know this information. I appreciated that each mapping strategy had its pros and cons described, as it showed that none of these methods is truly useless; each just has trade-offs.

Each of these strategies is a different way of creating layers to show information by subsetting the data. With discrete or continuous data, you can highlight a certain subset by giving it a strong, striking color and the other subsets muted background colors. If one map shows every data value, it can become cluttered and hard to read. Because of this, it can be very helpful to make multiple maps that each show a different subset, as well as one map that combines them all.

Another important guideline the book emphasizes is to use no more than seven categories at once, as more than that can be overwhelming. If you have more than seven categories, you can group together those with similar traits. Your use of symbols also matters when designing your map: colors are much more distinguishable than symbol shapes, so they should take priority. For linear features, however, you should vary line width rather than color, as width differences are easier to see. Text labels can also be used to label your different categories. The last section of the chapter discusses how other features can help you understand the feature you are looking at; for example, when examining patterns of growth over an area, elevation can be key to finding the origin of those patterns.
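The advice to group similar categories when you have more than seven amounts to a simple recode table; a minimal sketch in Python, with invented land-use codes:

```python
# Collapse detailed (hypothetical) land-use codes into broader groups
# so the map stays under the ~7-category guideline.
group_of = {
    "single_family": "residential",
    "multi_family": "residential",
    "retail": "commercial",
    "office": "commercial",
    "light_industry": "industrial",
    "heavy_industry": "industrial",
}

parcels = ["retail", "single_family", "heavy_industry", "office"]
grouped = [group_of[p] for p in parcels]
print(grouped)  # -> ['commercial', 'residential', 'industrial', 'commercial']
```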

 

Chapter 3: Mapping the Most and the Least

 

This chapter starts out by explaining why mapping minimum and maximum values is important: it can show weak points in current systems and where we might need to improve. There are multiple ways of recording these values: "counts and amounts," which are the numbers of features or the totals associated with them; "ratios," which show the relationship between two quantities; and "ranks," which order quantities from high to low (and assign each a value). These values can then be grouped into classes, which simplify the data and keep your map from getting too cluttered.

To create these classes, you can either set them manually or use a classification scheme. You only need to set them manually if you are trying to find features meeting specific criteria, such as a particular percentage or a threshold specific to your area of study. Otherwise, you can use one of the standard classification schemes. Natural breaks (Jenks) finds large jumps in the data values and groups the data between those breaks. Quantile divides the features so that each class contains an equal number of them (so classes covering sparse high values span a wide range, while classes covering dense low values span a narrow one). Equal interval makes the range of values the same for every class, regardless of how many features fall into each. Finally, standard deviation groups the data by its distance from the mean. When choosing between these schemes, you have to take into account the distribution of your data and whether you are trying to highlight differences or similarities. The book also discusses what to do with outliers if you find them, as some schemes let outliers heavily skew your results. Next, we are taught how to visualize our data on a map.
We have five options: graduated symbols, which are good for discrete data but can be hard to read when features are abundant; graduated colors, which are good for continuous and area data but do not always accurately represent the size of differences in the data; charts, which have essentially the same pros and cons as symbols; contours, which are good for continuous phenomena but do not show individual features well; and 3D perspective views, which share the pros and cons of contours. You need to know which classification scheme to use to make your maps statistically accurate, and how to use these map types to effectively display your data.
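The classification schemes above can be sketched in a few lines of Python. This is a simplified illustration with made-up values; real GIS software handles ties, rounding, and edge cases more carefully.

```python
import statistics

values = sorted([2, 4, 4, 5, 7, 9, 12, 15, 21, 30, 31, 48])

def equal_interval_breaks(vals, k):
    """Upper class limits with an equal value range per class."""
    lo, hi = min(vals), max(vals)
    width = (hi - lo) / k
    return [lo + width * i for i in range(1, k + 1)]

def quantile_breaks(vals, k):
    """Upper class limits with an equal number of features per class
    (vals must be sorted)."""
    n = len(vals)
    return [vals[(n * i) // k - 1] for i in range(1, k + 1)]

def stddev_breaks(vals):
    """Class limits one standard deviation apart, centred on the mean."""
    m, s = statistics.mean(vals), statistics.pstdev(vals)
    return [m - s, m, m + s]

print(equal_interval_breaks(values, 4))  # -> [13.5, 25.0, 36.5, 48.0]
print(quantile_breaks(values, 4))        # -> [4, 9, 21, 48]
```

Note how the skewed sample makes the two schemes disagree: equal interval leaves most features in the first class, while quantile spreads them evenly.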

Plunkett Week 2

Chapter 1:

  • GIS has been growing enormously and its use is increasing. It started as a database tool but now has many more applications. The first step of GIS analysis is examining geographic patterns and the relationships between features, which can be done by mapping those patterns. The next step is to formulate a question to better understand what information you need; the more specific, the better. You still may not have all the information you need after this, which is why choosing the correct method for your analysis is important. Then the GIS processes the data. Finally, the last step is to display the results as a table, map, graph, etc. Being able to see your processed data is important, as it allows patterns to be noticed more easily than looking at raw data. 
  • There are several different kinds of features in GIS that affect the analysis process. 
  • Discrete Features: Discrete locations or lines whose actual positions can be pinpointed. 
  • Continuous Phenomena: Temperature and precipitation are two examples. A continuous phenomenon has a value at any given location. 
  • Interpolation: A process in which GIS assigns values to the area between the points, using the data points. 
  • Summarized Data: Data representing the counts or density of individual features within area boundaries. 
  • Map Projections: Translate locations on the globe onto the flat surface of your map. Projections distort the features being displayed, which can be a concern if you are mapping larger areas. 
  • Categories: A process that lets you organize your data by grouping similar things. 
  • Ranks: This puts features in order from high to low and is used when direct measurements are difficult or there is a combination of factors. 
  • Counts and Amounts: Show the total number of features, or the total of some quantity associated with them, on a map. 
  • Ratios: Show you the relationship between two quantities. These are created by dividing one quantity by another for each feature. This is used to even out the difference between large and small areas. 
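The interpolation idea from the list above can be illustrated with inverse distance weighting (IDW), one common interpolation method in GIS packages; the sample points here are invented:

```python
# Known (location, value) samples, e.g. temperature readings (made up).
samples = [((0, 0), 10.0), ((10, 0), 20.0), ((0, 10), 30.0)]

def idw(x, y, pts, power=2):
    """Estimate a value at (x, y): nearer samples get more weight."""
    num = den = 0.0
    for (px, py), v in pts:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0:
            return v  # exactly on a sample point
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

# (5, 5) is equidistant from all three samples, so the estimate is
# their plain average.
print(idw(5, 5, samples))  # -> 20.0
```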

 

Chapter 2:

  • This chapter is set up similarly to the first, explaining step by step how to figure out what to map and how to use it. It focuses on what goes on the map and how it is presented. To properly use a map, one must figure out which map is appropriate for the issue being addressed. You have to think about it from the perspective of someone who knows nothing about the data: what would they need to see on the map to interpret it properly? Just like making categories in the last chapter, categorized features need identification codes; codes can indicate the major type and subtype of each feature. 
  • Originally I had no idea how to start making these maps, but I now understand a bit better that each process happens step by step, not all at once. For example, in making a single-type map, you add features by drawing symbols on the map, and mapping by category can reveal patterns in that specific data. 
  • There seem to be a lot of different ways to present data on a map, such as mapping by category as stated before. Displaying features by type lets you use different categories to reveal different patterns. However, with any feature, you do not want to display too much on one map, as it can make patterns difficult to follow. To fix this problem you can always group the categories. 
  • I kept reading about symbols and wasn't sure it was as straightforward as it seemed, but it is: choosing a symbol is as simple as picking one, yet the choice can also help show the pattern in the data. Symbols usually use a combination of shape and color. 

Chapter 3:

  • The start of the chapter seems to be a small refresher of the last chapter about what you need to map. Once again it is important to remember who is going to be seeing the map, as you may be able to present the data differently depending on the audience. In the past chapters there was a lot of discussion about mapping categories, but mapping individual values is just as important; while it may take more effort, it does create a more accurate representation of the data. 
  • Classes: Group features with similar values by assigning them the same symbol, letting you see which features are alike. How you define classes changes how the map looks. 
  • Natural Breaks: Classes are based on natural groupings in the data values, ordered from highest to lowest. Values within a class are likely to be similar and values in different classes different, because natural breaks finds the groupings and patterns inherent in the data. 
  • Quantile: Each class contains an equal number of features, so block groups with similar values can be forced into adjacent classes, and the block groups at the high end are put into one class. 
  • Equal Interval: The range between high and low values is the same for every class. In the book's example, this allows the blocks with the highest median income to be identified. 
  • Standard Deviation: Classes are based on how much values vary from the mean.
  • There are multiple formats for a map, such as graduated symbols, graduated colors, charts, contours, and 3D perspective views. Understanding which kind of features you are using is important to making the map. If I had discrete locations or lines, I would use graduated symbols to show value ranges, charts to show both categories and quantities, or a 3D view to show relative magnitude. The chart starting on page 154 will probably be useful later in the course. 
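One detail worth knowing about graduated (proportional) symbols: map readers judge circles by area, so a common design choice is to scale symbol area, not radius, with the data value. A hypothetical sketch:

```python
import math

# Invented city populations mapped to circle radii so that symbol AREA
# is proportional to the value.
populations = {"Alton": 10_000, "Berg": 40_000, "Cove": 90_000}

def radius(value, max_value, max_radius=20.0):
    """Radius (in points) such that area scales linearly with value."""
    return max_radius * math.sqrt(value / max_value)

biggest = max(populations.values())
for city, pop in populations.items():
    print(city, round(radius(pop, biggest), 1))
```

Scaling the radius directly would make Cove look nine times wider than Alton instead of nine times the area.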

Pratt Week 2

Mitchell

Ch. 1

Creating a map might not initially seem like a deep analytical task, but it involves several layers of analysis. Mitchell categorizes data into different types to better understand and represent geographic information. Discrete features refer to specific locations that can be precisely pinpointed, such as linear paths or individual spots. In contrast, continuous phenomena can be measured anywhere within a given space, and interpolation is used to estimate values for areas between measurement points. Although parcels provide a broad area of data, when their boundaries are not legally defined there is some margin for error.

Boundaries help group data into similar types or categories and are usually legally defined, creating a structured way to summarize data by area, like demographic or business information. When features are tagged with codes that assign them to specific areas, statistical analysis on the data table is required to prepare it for mapping. GIS technology allows for overlaying features on areas without predefined codes to determine what belongs where.

Geographic features can be represented in two main ways: vector and raster. Vector representation involves defining features by specific x,y coordinates and tables, which requires precise location data. Analysis with vectors typically involves summarizing attributes in a data table, though sometimes raster data is used for combining layers. Raster data represents features as a matrix of cells in a continuous space, with each layer representing a different attribute. The accuracy of raster data depends on cell size—the smaller the cell, the more precise the information.

Map projections and coordinate systems are crucial when mapping large areas, as they account for the Earth’s curvature. Attribute values can be categorized into several types, including categories (groups of similar things), ranks (ordering features by relative importance), amounts and counts (total numbers showing magnitude), and ratios (relationships between quantities to better reflect feature distribution).
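The ratio type mentioned above is just a per-feature division that evens out differences in area or size; a minimal sketch with invented county figures:

```python
# Turning counts into a density ratio so large and small areas can be
# compared fairly (all figures made up).
counties = [
    {"name": "North", "population": 50_000, "area_km2": 1_000},
    {"name": "South", "population": 30_000, "area_km2": 200},
]

for c in counties:
    c["density"] = c["population"] / c["area_km2"]  # people per km^2

# South has fewer people but is three times as dense, which a raw
# count map would hide.
print([(c["name"], c["density"]) for c in counties])
```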

Ch. 2

This chapter highlights the crucial role of statistics and mapping in Geographic Information Systems (GIS) for interpreting spatial data and identifying patterns. A solid understanding of statistics is vital for analyzing spatial data, with spatial statistics specifically designed to quantify and analyze spatial patterns. The chapter covers essential statistical concepts such as descriptive statistics, including mean (average), median (middle value), and standard deviation (variation from the mean). These tools are important for comparing outliers and understanding data distribution.

Effective map creation involves balancing detail with clarity. Users must assign geographic coordinates and category values to each location and decide how to present this information. Too many categories can clutter a map, while too few may obscure important details. The choice of map type and design should be aligned with the intended audience and purpose. Complex maps may suit experts, while simpler versions are better for the general public.

GIS mapping focuses on visualizing the distribution of features rather than individual data points, aiding in the identification of geographic patterns. Users should select the appropriate map type based on the issue and audience. For instance, a crime map can reveal high-crime areas, while a zoning map is useful in a committee setting. When mapping, it’s important to limit categories to around seven to avoid confusion, using different symbols and colors to distinguish them. Including recognizable landmarks can improve map readability.

The chapter stresses that understanding what to map, how to display it, and tailoring the map to its audience are critical for effective spatial data representation. Proper data preparation and thoughtful symbol selection are essential to creating maps that clearly communicate patterns and insights.

Ch. 3

This chapter discusses methods for analyzing spatial patterns through mapping, emphasizing how different techniques reveal patterns in various types of data. Mapping the most and least of certain features helps identify patterns and characteristics within data, such as in real estate. Data can be categorized into discrete features, continuous phenomena, or data summarized by area. Discrete features are often represented by graduated symbols, while continuous phenomena are displayed using graduated colors or 3D perspectives. Data summarized by area is typically shown with shading to indicate quantities.

When creating maps, it is crucial to consider the intended audience and purpose. For presentations, clear explanations of data points are necessary, while exploratory maps should offer a solid baseline for identifying patterns. Numerical considerations like amounts, counts, ratios, or rankings help determine the best representation method, such as gradients or varying shapes.

To effectively represent data, users must classify values into categories. If mapping individual values, detailed data patterns can be observed. Grouping values into classes involves assigning the same symbol to similar values, using standard classification schemes to simplify patterns. Common schemes include natural breaks (based on natural data groupings), quantile (equal number of features per class), equal interval (uniform value range across classes), and standard deviation (class based on variance from the mean). Choosing the appropriate scheme and visualization method is essential for creating clear and informative maps.
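True natural breaks (Jenks) chooses classes by minimizing within-class variance, which requires an optimization pass; a simplified stand-in that places breaks at the largest gaps in the sorted values captures the same intuition. This sketch is illustrative only, not the actual Jenks algorithm:

```python
def gap_breaks(vals, k):
    """Return k - 1 break values placed at the k - 1 largest gaps
    between consecutive sorted values."""
    vals = sorted(vals)
    gaps = [(vals[i + 1] - vals[i], i) for i in range(len(vals) - 1)]
    cut_idx = sorted(i for _, i in sorted(gaps, reverse=True)[: k - 1])
    return [vals[i] for i in cut_idx]  # upper limit of each lower class

# Three obvious clusters -> two breaks at the big jumps.
print(gap_breaks([1, 2, 2, 3, 10, 11, 12, 30, 31], 3))  # -> [3, 12]
```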

Understanding and selecting the right mapping techniques and classification schemes are crucial for accurately analyzing and presenting spatial data. Proper visualization and statistical analysis help reveal significant patterns and insights, making it easier to interpret and act upon the data.