Azizi Week 3

Chapter 4: Mapping Density

This chapter mostly focused on what a density surface is and how GIS takes point or line data, like businesses, roads, or population centroids, and turns it into a smooth surface that shows where things are more concentrated. The chapter explains that cell size really matters: smaller cells show more detail but take longer to process, while larger cells make the patterns more general and can hide smaller variations. I also learned about the search radius, which is basically how far the GIS looks around each cell when calculating density. A smaller search radius shows more local differences, but if it is too small, broader patterns might not show up. A larger search radius smooths everything out and shows bigger trends, but it can also blur details that might matter. Another important idea is that GIS can calculate density using simple or weighted methods; the weighted method gives more importance to features closer to the center of the search area and usually creates smoother, easier-to-read maps. The chapter also talks about choosing the right units for density, like per square mile or per acre, and how using very large units can make density values seem misleading even if the overall pattern stays the same.
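The difference between the simple and weighted methods can be sketched in a few lines of Python. This is only a minimal illustration, not Esri's actual implementation: the linear falloff kernel and the per-search-area normalization are assumptions chosen to show the idea.

```python
import math

def cell_density(cell_center, points, radius, weighted=False):
    """Density of point features around one grid cell center.

    Simple method: every point inside the search radius counts equally.
    Weighted method: nearer points count more; the linear falloff
    (1 - d/radius) below is an illustrative kernel, not Esri's exact one.
    """
    cx, cy = cell_center
    total = 0.0
    for px, py in points:
        d = math.hypot(px - cx, py - cy)
        if d <= radius:
            total += (1 - d / radius) if weighted else 1.0
    area = math.pi * radius ** 2  # density = features per unit of search area
    return total / area

# One point at the cell center, one near the edge of the search radius:
pts = [(0, 0), (9, 0)]
simple = cell_density((0, 0), pts, radius=10, weighted=False)
kernel = cell_density((0, 0), pts, radius=10, weighted=True)
# The weighted value comes out lower because the edge point barely counts.
```

Run over every cell in a grid, this is what produces the smooth surface the chapter describes, and the code makes it obvious why the radius and cell size control how the surface looks.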
It was also very interesting to learn how much control you actually have over the patterns you end up seeing. Just changing the cell size or the search radius can completely change how the map looks, even when the data itself doesn't change at all. The examples showing how patterns become too blocky with large cells, or too smoothed out with a big search radius, make it clear that there is not really one "correct" setting; it depends on what kind of pattern you are trying to understand. Another thing I noticed was that the highest-density area on a map does not always mean something is actually located there, since density is calculated based on nearby features. That made me realize that density maps are more about showing general patterns than exact locations.

Chapter 5: Finding What’s Inside

Some of the key things I picked up from this chapter were how GIS is used to figure out what falls inside certain areas and how that helps compare places in a more meaningful way. The chapter explains that you can do this in a few ways: sometimes you can just draw the boundary on top of features to visually see what is inside, sometimes you select the features inside an area to get a list or count, and other times you actually overlay layers to measure what is inside each area. This makes it possible to answer questions like how much forest is inside each watershed, which parcels fall at least partly inside a floodplain, or how many roads run through a protected area. It also talks about vector and raster overlay, where vector overlay is more precise but slower and can create small and messy pieces called slivers, while raster overlay is usually faster and avoids slivers but depends a lot on cell size for accuracy.
Another thing that I found important was how the type of data changes what kind of summary you can get at the end. When working with categories, like land cover types, you can summarize how much of each category is inside an area and even convert it into percentages to compare areas fairly. When working with continuous data, like elevation or precipitation, GIS calculates statistics such as the mean, minimum, maximum, range, or standard deviation for each area. The chapter also shows how results end up in tables that can be joined back to maps, which makes it easier to compare areas visually instead of just guessing from the map.
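The two kinds of summaries can be illustrated with plain Python. The land cover classes and elevation values below are made-up sample data; the computed values mirror what a per-area (zonal) summary reports.

```python
from collections import Counter
from statistics import mean

# Hypothetical cell values falling inside one watershed (made-up data).
landcover = ["forest", "forest", "urban", "water", "forest", "urban"]
elevation = [212.0, 230.5, 198.2, 205.0, 241.3, 219.9]

# Categorical data: tally each class, then convert to percentages so
# areas of different sizes can be compared fairly.
counts = Counter(landcover)
pct = {cls: 100 * n / len(landcover) for cls, n in counts.items()}

# Continuous data: per-area statistics like those a zonal summary reports.
stats = {
    "mean": mean(elevation),
    "min": min(elevation),
    "max": max(elevation),
    "range": max(elevation) - min(elevation),
}
```

The resulting dictionaries play the same role as the summary tables the chapter shows being joined back to the map.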
It also made me think about how often people use this kind of analysis without realizing it, like when cities decide where to put new services or when environmental groups compare protected areas. It makes me curious about what kinds of “what’s inside” questions are most common in real GIS jobs.

Chapter 6: Finding What’s Nearby

This chapter helped me understand what "nearby" actually means in GIS and how GIS can define it in different ways depending on what you actually mean by near. Sometimes it is just straight-line distance, and sometimes it is about travel range, like what is within a 3-minute drive of a fire station. The chapter explains that "near" can be measured by distance, but it can also be measured by cost, especially time. It also introduces three main approaches: using straight-line distance, measuring distance or cost over a network (like streets), and calculating cost over a surface for overland travel. I also learned about details that can change results, like planar vs. geodesic distance (flat vs. curved Earth) and the difference between inclusive rings and distinct bands when you need multiple distance ranges.
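The planar vs. geodesic distinction comes down to two different formulas. A minimal sketch: planar distance on projected coordinates, and the haversine formula on a spherical Earth with an assumed 6,371 km radius (real geodesic tools use an ellipsoid, so this is an approximation).

```python
import math

def planar_distance(a, b):
    # Straight-line distance on a flat plane (projected x/y coordinates).
    return math.hypot(b[0] - a[0], b[1] - a[1])

def geodesic_distance(a, b, radius_km=6371.0):
    # Great-circle (haversine) distance on a spherical Earth,
    # with inputs as (latitude, longitude) pairs in degrees.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(h))
```

Over a city the two give nearly identical answers, but over a county or state the curvature term starts to matter, which is why the chapter bothers to make the distinction.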
As always, I found it very important to see how much the method you choose can change the story the map tells, even if the starting point is the same. For example, a circle around a store might be fine for a rough estimate, but it is not the same as a real 15-minute drive, because streets, turns, traffic, and one-way roads shape how people actually move. This chapter makes that really clear with the network examples, especially when it talks about assigning "impedance" to street segments using distance, time, or money. I also liked the idea that you can build more realistic travel times by adding turn and stop costs using a turntable, because that is the kind of small detail that matters a lot for something like emergency response. And I didn't realize there were so many output options, like buffers, selections, point-to-point distances, spider diagrams, distance surfaces, and service area boundaries (compact vs. general), depending on what you are trying to show.
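Measuring "near" as cost over a network boils down to a shortest-path search with an impedance on each edge. A toy sketch, assuming a made-up four-node street network with travel-time impedances in minutes (real network analysis also models turns, one-way streets, and stops):

```python
import heapq

# Tiny hypothetical street network: each edge's impedance is travel
# time in minutes, so "nearest" means fastest, not shortest in feet.
graph = {
    "station": [("A", 1.5), ("B", 4.0)],
    "A": [("B", 1.0), ("C", 3.0)],
    "B": [("C", 0.5)],
    "C": [],
}

def travel_times(graph, source):
    """Dijkstra over impedance-weighted edges: minutes from source to each node."""
    best = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        t, node = heapq.heappop(pq)
        if t > best.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in graph[node]:
            nt = t + cost
            if nt < best.get(nbr, float("inf")):
                best[nbr] = nt
                heapq.heappush(pq, (nt, nbr))
    return best

times = travel_times(graph, "station")
within_3min = {n for n, t in times.items() if t <= 3.0}  # a crude "service area"
```

Note that node B is 4.0 minutes away directly but only 2.5 via A, which is exactly the kind of result a straight-line buffer around the station could never show.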
If someone uses straight-line distance for something that really depends on travel time, the results can be misleading, especially in places with rivers, highways, or weird street layouts. That made me wonder how GIS people deal with real projects when the data isn’t always perfect. Like, if you don’t have exact speed limits, turn delays, or updated road closures, how do you decide what’s “good enough” without making the map seem more accurate than it actually is?

Ogrodowski Week 7

Data Inventory:

Zip Code: Contains all of the zip codes that fall (either completely or partially) within Delaware County. These parcels were created in 2005 according to property addresses, likely to ensure that properties were not split across zip codes.

Street Centerline: This data depicts the center of the pavement of all public and private roads in Delaware County to give a fair approximation of street routes throughout the county. This street system is called the Ohio Location-Based Response System (LBRS) and is heavily used by ODOT and emergency services. Street segments are measured from vertex to vertex.

MSAG: The Master Street Address Guide (MSAG) delineates townships and municipalities in Delaware County. Most townships are simple geometric rectangles, but the municipalities are irregularly shaped. Some municipalities are also their own townships, and they are located inside of other townships, as is the case with Sunbury Township located inside of Berkshire Township.

Recorded Document: These are records that do not match up with the subdivisions that currently exist on the Delaware County map. They include records of vacations, cemeteries, road centerline surveys, and utilities easements.

Survey: This dataset is a collection of the locations of all recorded land surveys in Delaware County more recent than Old Survey Volumes 1-11. There is a pretty high density of land surveys all throughout the county, except over bodies of water and in parks like Alum Creek State Park, Delaware State Park, and the Dover Recreation Area.

GPS: This dataset displays the shapefile of GPS monuments, or metal disks in the ground that mark latitude and longitude and serve as reference points. These monuments were established between 1991 and 1997. 

Parcel: This dataset is incredibly detailed, showing land parcels in Delaware by ownership. Contains extensive information on each property, such as the address, current owner, sale history, and number of rooms.

Subdivision: This dataset contains subdivisions and condos in Delaware County. (These types of housing are typically higher-density residential areas.) Most subdivisions appear to be concentrated around the town of Delaware or the southern part of the county.

School District: All the school districts in Delaware County are displayed in this data set. Similar to the Zip Code data set, some small portions of school districts that mostly fall within adjacent counties are included.

Tax District: The tax district dataset appears to line up similarly to the MSAG data set but includes a few more divisions. Most of the tax districts around municipalities are shaped irregularly and are even sometimes nested shapes within the more geometric townships.

Annexation: This dataset shows annexations in Delaware County. They are concentrated around towns like Delaware, Sunbury, Powell, and Westerville.

Township: Shows all of the townships in Delaware County. Very similar to the MSAG dataset.

Address Point: This dataset uses LBRS to show all registered addresses in a shapefile. The point on the map is located in the centroid of the building.

Municipality: This dataset contains the municipality parcels that are noticeable in the MSAG and Township datasets.

Condo: Condo polygons are shown in this dataset. They are pretty small and well-dispersed, which, when compared to the Subdivision dataset, leads me to believe that Delaware County has lots more houses in subdivisions than condos.

Precincts: Delaware County voting precincts line up pretty well with township and municipality parcels but are subdivided into much smaller areas.

PLSS: This dataset contains Public Land Survey System (PLSS) polygons, most of which are near perfect squares. However, the west side of Delaware County comprises more irregular PLSS polygons.

Delaware County E911 Data: This dataset uses an LBRS system of Address Points and is used in particular by 911 Emergency Services. Other uses include appraisal mapping, geocoding, reporting accidents, and managing disasters. This is measured in terms of US Military and Virginia Military Survey Districts.

Farm Lot: Contains all farm lots (as measured by military districts). Many are different shapes: square, long and thin, uniform rectangular, or irregular (as in the western and central parts of the county).

Building Outline (2021, 2023, 2024): Contains all building outlines in Delaware County. Very reminiscent of a Google Maps view. Each of the three databases was updated in its respective year.

Dedicated ROW: ROW stands for Right-of-Way, which is a type of easement, so it shows accessible street routes in the form of line data. It appears that streets that are not included as ROW routes are in private subdivisions or similar areas.

Railroads: The dataset highlights railroads running through Delaware County, and it appears that most of them run north-south.

Original Township: Displays boundaries of Delaware County townships prior to division by tax districts. Consists of 18 original townships. The eastern portion of the county has rectangular parcels, and the western portion’s parcels are more irregularly shaped, which is consistent with other similar datasets.

Map Sheet: A map sheet is just a map that is part of a larger map series. This dataset appears to show features at the sub-municipality or sub-township level. The smallest parcels are clustered around the cities of Delaware and Sunbury, and in the southern portion of Delaware County.

Hydrology: Contains the portions of all *major* waterways in Delaware County. Many small ponds and lakes on the map do not appear to be counted in this dataset.

ROW: Just like the Dedicated ROW dataset, this contains all line data of street rights-of-way in Delaware County.

Delaware County Contours: Contains two-foot contours showing the topography of Delaware County. This data was updated in 2018. It is in the form of a downloadable geodatabase.

Map:

Figure 1: Delaware County Parcels (yellow), Street Centerline (green), and Hydrology (blue) layers.

Once I remembered I had to use the Add Folder button to add my files into the Catalog pane, it was smooth sailing making this map!

Koob Week 6

Chapter 7

I enjoyed completing chapter 7, especially the first few tutorials, where I got to just move around and adjust the outlines of buildings and correct them. It was fun to do, and it also made a lot of sense. I liked that there were only four tutorials in this one; I was able to take in each one really fast and go back through to make sure I fully understood. I really enjoyed doing these because I have gotten much more accustomed to all the controls, like adding features, symbols, and bookmarks, and to the repetitive tabs.

Chapter 8

It was really easy to do these ones, considering there were only two tutorials, but I still feel like I gained a lot from them and got valuable knowledge on geocoding and analyzing locations, plus learning about things with zip codes. Most of the information was stuff I had begun to feel confident with; creating the locator, for example, went smoothly. When rematching attendees by zip code in 8-2, I got a bit lost at the part where I had to click the match button for the zip codes; for some reason it wouldn't load for me. Also, the Create Locator section kept resetting on me and deleting my data.

Chapter 9

This one technically had the most tutorials, but it still didn't take long to get through. Buffers were something I had been confused about before, so I liked being reintroduced to them in this chapter. I know you use them to find what's near the features being buffered, but seeing them in action helped a lot. Tutorials 9-1 and 9-2, on the swimming pools in Pittsburgh and estimating the number of youths, were really quick and easy. I also thought it was cool how it highlighted the pools within half a mile. The dissolve option was really neat too; merging overlapping buffer rings makes the result much easier to look at.

Tutorial 9-3 estimates gravity models of geography, where you can see the amount of attraction between two features. I had trouble figuring this part out; it was probably the hardest for me out of all the units. It wasn't that bad, but I kept getting stuck at the parts where it asked me to click the service area layer, which genuinely would not give me what I needed. It took me a second to figure out how to get through these tutorials at the end, but it was still cool. I think analyzing the optimal solutions and doing the Your Turn was difficult, but it was also pretty informative.
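The gravity model idea can be sketched in a couple of lines. The size-over-distance-squared form and the sample numbers below are illustrative assumptions, not the tutorial's exact formula:

```python
def gravity_attraction(size, distance, exponent=2.0):
    """Toy gravity model: attraction grows with a facility's "size"
    (e.g., pool capacity) and falls off with distance raised to a power.
    The inverse-square exponent is an assumption; real models calibrate
    it from observed travel behavior."""
    return size / distance ** exponent

# A larger pool twice as far away can still "attract" more than a small
# one nearby (all numbers are made up for illustration).
near_small = gravity_attraction(size=100, distance=1.0)
far_big = gravity_attraction(size=500, distance=2.0)
```

This is the intuition behind "attraction between two features": bigger destinations pull from farther away, but distance decay eventually wins.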

Gregory Week 6

Chapter 7

Chapter 7 introduced the practical processes involved in editing and creating spatial data. It emphasizes that GIS is not simply about viewing maps but also about actively maintaining them. Through moving and reshaping polygon features of the campus, it became clear to me that spatial data must continuously evolve. The ability to edit vertices and create new feature classes, such as parking lots and bus stops, seemed quite interesting to me. I found applying the Smooth Polygon tool to be easy and the end result of it aesthetically pleasing.

Chapter 8

Chapter 8 explored the process of geocoding, which connects tabular data such as addresses and zip codes to geographic locations on a map. The only part I found interesting in this reading was the process of reviewing matched and unmatched records; it was almost like a puzzle. It demonstrated that GIS analysis depends not only on automated tools but also on critical evaluation. This chapter emphasized that spatial accuracy directly influences the validity of conclusions drawn from mapped data, and having incorrect mapped data could affect public safety or even service delivery. Moving on, the buffer analysis tutorial around public swimming pools in Pittsburgh taught me how proximity plays a role in accessibility. This idea particularly applies to youths living within a half-mile radius of the recreational facilities, in this case the pools.

Chapter 9

This last chapter focused mainly on spatial analysis, specifically showing how GIS can move beyond simple mapping. Using buffer tools around public swimming pools in Pittsburgh, I calculated how many youths live within a half-mile of a pool. Afterwards, I calculated what percentage of the city's youth population that represents (I am thankful I don't have to do the actual calculations by hand). This made the concept of accessibility much more concrete, especially when thinking about how distance affects whether someone will realistically use a public facility. With this in mind, I can see how important it is to look at maps like this in order to find the most efficient spot for a public facility. It also became clear that straight-line buffers do not always reflect real travel conditions; they are not accurate when terrain and bridges are not accounted for. My takeaway from this chapter is an understanding of how GIS can support planning decisions, such as which public restrooms to shut down because of low funds. It is advantageous because it can turn spatial data into measurable results that you can apply to real-world problems.
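The percentage step is simple arithmetic over the buffer results. The counts below are made up purely for illustration; in the tutorial the numbers come from census data summarized inside the buffers.

```python
# Hypothetical counts standing in for the tutorial's census-based numbers.
youths_in_buffers = 18_000   # youths living within a half mile of any pool
total_youths = 60_000        # youth population of the whole city

pct_served = 100 * youths_in_buffers / total_youths
pct_unserved = 100 - pct_served
```

Framing the buffer result as a percentage of the whole city is what turns a map overlay into a statement about accessibility.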

Moore Week 6

7:

     In chapter seven, we learned how to edit polygon features as well as how to use specific tools to directly depict these features. I really enjoyed chapter 7 as it reminded me of working with vectors in Adobe Illustrator, where you can adjust shapes using various tools and vertex points. When creating polygon features, I found the trace tool very satisfying to use. Learning how to smooth out the edges of the polygon was also very satisfying. I also found it interesting that when the chapter was teaching us how to transform features, it used an AutoCAD drawing as a base to transform. Until now, I had no clue that AutoCAD could be utilized by ArcGIS, as I am semi-familiar with AutoCAD software. In chapter 7, it also says we can click and hold the wheel button on our mouse in order to pan around the map when in editing mode. This was helpful information that I wish the book had told us when making feature edits earlier. 

8:

Chapter eight was more confusing than the previous chapter for me. It has to do with geocoding, which is the ability to convert various geographic descriptions/addresses into geographic coordinates that can be displayed on a map. Specific to chapter 8, it taught us to geocode using zip codes and street centerlines. These geocoding actions are done mostly through specific tools found in the Geoprocessing pane. Throughout the later tutorials, sometimes they don't actually say "run tool" when that step needs to be conducted. I guess it can be assumed that the tool needs to be run, but I prefer tutorials that are very detailed and explicit in their instructions. This was a small detail, but it made the workflow feel slightly less clear and more frustrating at times. It also doesn't help that you don't get a visual on what you're inputting until the very end of the input.

9:

I ended up understanding the concepts from chapter nine much better than those in chapter eight. The concept was how to work with and display spatial data, which is (to my understanding) the area surrounding a data point that can be used to answer specific questions. For example, we created service areas to help compare how often a pool is used by youths depending on the travel time to that pool. The travel time is visualized by the service areas as ring boundaries around the pool data point. As expected, the use rate of the pool declined as travel time went up. I also may have found this chapter easier because, unlike the previous chapter, you could visually see the map changing as you performed each little step.

Whitfield Week 6

Chapter 7:

In this tutorial, I learned more about the tools used for manual digitization by tracing, and gained a more in-depth understanding of the use of basemaps. Basemaps will help me edit and create vector map features, and I also learned how to use other existing layers, like streets, as spatial guides for digitizing features. I learned a little about lidar as well and how it is used as a reference for heads-up digitizing. ArcGIS Field Maps or GPS receivers collect longitude and latitude data to create vector features, and building information modeling data can then be imported into GIS maps as feature classes. I had some issues when trying to shape and form a building that fit to scale on the map while doing this chapter. This tutorial was definitely one of the easier ones, but it was pretty hard to place vector points when I was confused about the correlation between the directions and my own screen. It took me a while, but I eventually got it; I then promptly closed my computer and didn't reopen ArcGIS until the next day. I think that in general, this chapter had fairly easy-to-follow directions, but it also took me a longer amount of time to accomplish. It made me realize that I do better when dealing with point locations on maps, and sometimes have more trouble when working with CAD drawings and using cartography on zoomed maps. In this chapter, the outlines of buildings in my practice GIS sites were a little askew and not as lined up as in the directions. The same can be said about how the maps looked and how they differed between my directions and my computer. This caused me to be confused in tutorials like the second one, where I had to simply map bus stop locations as points.
I was confused because part of the map that I had differed from the pictures and images that were in the directions, causing me to guess at some parts about where points were supposed to go (not to say that mapping these points was a high-stakes operation or anything like that).

Chapter 8:

In this chapter, I learned about geocoding and how it matches location fields in tabular data to corresponding fields in existing feature classes in order to map the tabular data. Through these location fields, we are able to geocode locations using zip code polygon or street feature classes. GIS has to use "fuzzy matching" and make matches that are approximate as opposed to one hundred percent accurate. Fuzzy matches are made through rule-based expert system software, and ArcGIS is one such system. The geocoding expert system can be seen as attempting to mimic a mail carrier, using expert knowledge to get mail that was addressed incorrectly or written differently to the correct address. This shows up in GIS through source tables, reference data, the locator, and the geocoding tool. This tutorial only had two sections, so I feel as though I don't have as much to talk about or say that I had difficulties with (though I definitely did have trouble figuring things out throughout this section and chapter). I do feel like it was easier to run through the two tutorials, especially in comparison to the other chapters and tutorials that I had to push and force myself to finish. This feeling of understanding can be associated with the fact that I paid attention and was learning how to do things and locate different tools and functions, instead of struggling, feeling like I wasn't learning anything at all, and genuinely just following the directions blindly. Although I have great disdain for the "Your Turn" sections of the reading, they really do check your knowledge, comprehension, and problem-solving skills while recalling everything that you have previously learned and read about. If I were to go back and compare how I handled these tutorials to when I first started this course, confused about where the Contents pane was and stressing out forgetting how to symbolize circles with different traits, there is a big difference.
I would like to think that I am learning and improving, though that's not to say I'm ready to tackle the final/midterm, which has a fast-approaching due date. 🙁
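The score-based fuzzy matching described above can be loosely imitated with generic string similarity. This is only an analogy: `difflib.SequenceMatcher` is plain sequence comparison, whereas a real locator applies address-specific parsing rules, and the threshold of 85 is an assumed minimum match score.

```python
from difflib import SequenceMatcher

def match_score(candidate, reference):
    """Rough 0-100 similarity between an input address and a reference
    address. This only imitates the idea of a geocoder's match score;
    real locators use address-specific rules, not generic similarity."""
    a = candidate.strip().upper()
    b = reference.strip().upper()
    return round(100 * SequenceMatcher(None, a, b).ratio())

score = match_score("123 Main Stret", "123 Main Street")  # note the typo
accepted = score >= 85  # 85 is an assumed minimum-match-score threshold
```

Even with the misspelled "Stret", the score stays high, which is the "mail carrier" behavior the chapter describes: close enough still gets delivered.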

I again forgot to take pictures of my work in chapters 7 and 8, so I will be leaving this message here in case I forget to go back and collect images for some of my work.

Chapter 9: 

In this chapter I had some issues, but they were mostly related to the amount of work I had to do compared with the two previous chapters. In this chapter, I visualized spatial data so that I and other people can get answers or solutions to problems. I learned about four different proximity analysis methods: buffers, service areas, facility location models, and clustering. I also learned about and used another spatial data type, the network dataset, which we use to estimate travel distance or time on a street network. We apparently used the free service version in GIS, not that I would honestly have been able to tell the difference between the two in the first place. In the first tutorial, I learned about using buffers for proximity analysis, with a buffer being a polygon surrounding the map features of a feature class. I thought that this first tutorial looked cool and was fairly easy to do. I will say that I had issues trying to understand what they meant when they said to find the "number and percentage of youths" within a distance after having created a one-mile buffer. I was confused because I didn't really understand how we were supposed to be getting these calculations, or how we were supposed to be comparing them. I also believe that I might have done my map wrong, because when I compared what I had created to the picture, theirs had darker shading in the circles than mine did. In the third tutorial I also had fun making the map and adding the colors, but my computer kept freezing while I was trying to swap a color out per the instructions, so I was getting fairly frustrated.

Bulger Week 6

Chapter 7

In chapter seven, we learned how to create polygon and point features. We also used a CAD drawing and learned how to match overlays with the basemap features. I really enjoyed this chapter. It was much more intuitive than the previous ones, and I absorbed the material quickly. We began by matching the outline to the buildings on the campus, and learned how to create intersections to split buildings. We then learned how to use the cartography tools to smooth areas on the map. The transformation with the links on the CAD drawing confused me at first, but the drawing helped me realize what it was asking us to do.

Chapter 8

Chapter eight was short but had us do a lot. We learned a lot about geocoding and how to use zip codes. Most of the chapter wasn’t a tutorial, but an introduction to geocoding. The first tutorial taught us how to apply zip codes. I thought it was interesting that it does it by matching rates with percentages. I never knew that was how it worked. I got an error when doing the collect events section in the first tutorial, but it looked the same as the picture in the textbook. The second tutorial went over how to use geocoding with addresses. It was a little similar in theory to the first tutorial, but it went more in-depth about connecting the address to a zip code and assigning them based on a match.

Chapter 9

Chapter nine was also very interesting. It taught us about buffers and service areas. I feel like buffers are a very common thing to have on a map, so I am glad we went over how to apply these in ArcGIS. The first two tutorials were pretty self-explanatory. I was a bit confused with the rest of the chapter in some parts, because of the wording. It was also hard to tell what exactly we were doing. I really enjoyed learning everything in the fourth tutorial. I feel like learning how to find the most attended pools is something that is very important to use in real life.

Payne Week 6

Chapter 7: 

Chapter 7 overall was easier for me because it was all very intuitive. Formatting buildings as polygons and using the tools to relocate their outlines was pretty simple. I had to try a few times on splitting the buildings in two because I kept getting error messages, but I figured it out in the end. I also find it interesting that a lot of the tutorials involve Pittsburgh; maybe the person writing them is from there.

Chapter 8: 

My major problem with this chapter was finding where the tools were and making sure I was accessing the right ones. This continues to be my main issue, as I don't always remember where every tool we have used is. I found it helpful to ask Google's AI for guidance on where to find the tools in ArcGIS, and that worked out fairly well. This chapter dealt with zip code data and how to change its symbology, along with a few other things.

Chapter 9: 

I forgot to take a picture for this chapter, but it felt smoother than the rest. It was interesting seeing all the different visuals from the different tools and how they layered with each other to make the full picture. I did struggle some and had to redo a few steps, but overall it wasn't too bad.

Pichardo – Week 6

Chapter 7: 

Chapter 7 was one of the most hands-on chapters so far, and I honestly liked that it felt practical instead of just procedural. Learning how to create, edit, and adjust polygon features made GIS feel more interactive. Moving vertices and reshaping buildings took some patience at first, especially when I accidentally selected entire features instead of individual points. Once I got comfortable with snapping and adjusting boundaries, it became much easier and actually kind of satisfying.

Working with CAD drawings and spatial adjustments showed me how GIS isn’t just about viewing data — it’s about improving and updating it. I could see how these tools would be useful in real-world campus planning or city development projects. If a building changes shape or a parking lot is added, these skills would make it possible to update the map accurately.

One thing I noticed, similar to earlier chapters, is that sometimes the wording in the book didn’t perfectly match what I saw in ArcGIS Pro. That caused a little confusion, but I’ve gotten more confident using the search tool to find what I need. Overall, this chapter helped me feel more independent in the software rather than just following instructions step-by-step.

Chapter 8:

Chapter 8 focused on geocoding, which at first seemed straightforward but actually required more attention than I expected. Learning how ArcGIS matches addresses or zip codes to spatial locations helped me understand how tools like Google Maps might work behind the scenes. The idea that there’s a scoring system for matched and unmatched addresses was really interesting.

Creating the locator and working through matched versus unmatched addresses was probably the most challenging part. At times, I had to go back and double-check fields because one small mismatch would cause errors. However, once I understood what the software was looking for, the process made much more sense.

I also liked seeing how the same data could look different depending on the basemap used. It made me think more critically about presentation and how the background layer affects interpretation. While I’m not sure how heavily I’ll use geocoding in my final project, I do think understanding this process is important because it connects tabular data to real-world spatial patterns.

This chapter definitely required careful reading, but it helped me feel more comfortable working with attribute tables and troubleshooting errors.

Chapter 9:

Chapter 9 was probably my favorite of the three. The buffer tools were really cool to visualize. Being able to create proximity zones and adjust the radius made the concept of spatial analysis feel very clear. Seeing the blue buffer circles expand or shrink depending on distance helped me understand how GIS can model real-world impact zones.

Using the Pairwise Buffer tool and creating multiple-ring buffers showed how planners or policymakers might analyze service areas. I started thinking about how this could apply to environmental science, like mapping wildlife hotspots or pollution impact zones. The Network Analyst tools were also interesting because they move beyond simple distance and consider travel routes and accessibility.

One thing I noticed was that changing units (like switching to U.S. Survey Miles) affected the output in ways I didn’t initially expect. That made me realize how important measurement units are in GIS analysis.

Overall, this chapter felt like it tied everything together. Instead of just editing data or matching addresses, we were actually analyzing patterns and relationships. It made GIS feel more powerful and applicable to real-world problem solving.

Evans Week 6

Altering the polygons in chapter 7 was simple because it was pretty intuitive. I'm curious about making CAD drawings. I wouldn't think that a map of that size would want the interior usage of the buildings displayed. I assumed it would be a scenario where you would use separate maps: one for the full land area and individual floor plans for each building, since it's so much information to have on a single map.

Chapter 8 was also easy, since it was a short chapter. It is interesting that setting the needed score lower can make so many mistakes, even if there is a perfectly matching address already. I would think that it would run through perfect matches first, then move on to anything else, but it looks like it runs them all at once.

In 9.3, I restarted at one point because the usage percentages didn’t match with what the book said they should be. They were over 100% for each section. I got the same numbers the second time though, and when I tried to mess with the equation a little, I was still unable to get it to match. The book says that the 11.3 in the equation is to calculate for having a partial sample of the population, but I think that number might be what is causing the difference in percentage usage.

In “Select by Attributes” you can either hit “apply” or “okay.” Is the difference just that “apply” doesn’t close the pop-up, but “okay” does? It seems to do the same thing.