Thursday, December 8, 2016

Lab 9 - Corridor Analysis & Feature Extraction

GOAL AND BACKGROUND

The goal of this lab was to learn to project and navigate corridor LiDAR data and to explore feature extraction.


METHODS

First, the data's metadata was used to determine its projection. The LAS tools in ArcMap were then used to define the projection for the data. The projected data was then opened in LP360, where measurements were taken, such as road widths and the locations of power lines at risk of vegetation encroachment.
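The encroachment measurements themselves were made interactively in LP360, but the underlying check can be sketched as a brute-force 3D distance test. The coordinates, the 10 ft clearance threshold, and the function name below are illustrative assumptions, not values from the lab:

```python
import math

# Hypothetical clearance check: flag vegetation points that fall within a
# minimum 3D distance of any power-line point. Points are (x, y, z) in feet.
CLEARANCE_FT = 10.0  # assumed utility clearance threshold

def encroaching(veg_points, wire_points, clearance=CLEARANCE_FT):
    """Return vegetation points closer than `clearance` to any wire point."""
    flagged = []
    for vx, vy, vz in veg_points:
        for wx, wy, wz in wire_points:
            if math.dist((vx, vy, vz), (wx, wy, wz)) < clearance:
                flagged.append((vx, vy, vz))
                break  # one nearby wire point is enough to flag this point
    return flagged

wires = [(0.0, 0.0, 40.0), (10.0, 0.0, 40.0)]
veg = [(0.0, 3.0, 38.0), (50.0, 50.0, 20.0)]
print(encroaching(veg, wires))  # only the first point is within 10 ft
```

A production tool would use a spatial index rather than this quadratic loop, but the flagged output corresponds to the encroachment locations marked in the results below.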

Next, building extraction was explored using the Lake County data classified in previous labs. First, building footprints were extracted, and two shapefiles were created from them. One was an exact outline of the building points. The other squared the outlines with straight lines, removing the noisy, jagged appearance of the footprints. Lastly, the building heights were analyzed to determine whether they qualified for FEMA's LOMA applications.
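The final LOMA screening step amounts to an elevation filter: buildings at or above the cutoff qualify, so lowering the cutoff from 810 ft to 800 ft newly qualifies the buildings in between. A minimal sketch, with invented building IDs and elevations:

```python
# Illustrative LOMA screening: a building qualifies when its elevation
# (ft ASL) is at or above the cutoff. Lowering the cutoff newly qualifies
# buildings between the new and old values. Data below is made up.
OLD_CUTOFF_FT = 810.0
NEW_CUTOFF_FT = 800.0

def affected_by_change(buildings, old=OLD_CUTOFF_FT, new=NEW_CUTOFF_FT):
    """Buildings that qualify under the new cutoff but not the old one."""
    return {bid for bid, elev in buildings.items() if new <= elev < old}

buildings = {"A": 805.0, "B": 795.0, "C": 820.0}
print(affected_by_change(buildings))  # {'A'}
```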



RESULTS

Shown below are two instances of vegetation encroachment on power lines found in the corridor data.





Shown below is an example of the difference between the non-squared footprints and the squared footprints, respectively. Notice the jagged lines in the non-squared footprints compared to the straight lines in the squared footprints.




Shown below is a map of Lake County showing buildings in the region that qualify for FEMA's LOMA (Letter of Map Amendment) applications. FEMA recently changed the cutoff from 810 ft ASL to 800 ft ASL, and the map shows the buildings that the change affects.







SOURCES

Terrestrial LAS for Algoma, WI, project boundary KMZ, and metadata are from Ayres Associates.

Thursday, December 1, 2016

Lab 8 - Vegetation Metrics Modeling

GOAL AND BACKGROUND

The goal of this lab was to learn to calculate vegetation statistics using LiDAR data.


METHODS

First, LP360 was used to create surface models for the canopy height of the study area. This was done by creating raster surfaces for both the canopy, using first-return vegetation points, and the ground, using last-return ground points. The raster calculator was then used to subtract the ground surface from the canopy surface to create a vegetation canopy height raster. Some negative values, caused by error, remained in the data; these were selected and removed before further calculations.
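A minimal sketch of the raster-calculator step (canopy surface minus ground surface, dropping negative error cells), modeling each raster as a row-major list of lists with None as NODATA:

```python
# Canopy height model: canopy surface minus ground surface, with negative
# cells (errors) removed. Values below are invented for illustration.
def canopy_height(canopy, ground):
    chm = []
    for c_row, g_row in zip(canopy, ground):
        row = []
        for c, g in zip(c_row, g_row):
            h = c - g
            row.append(h if h >= 0 else None)  # drop negative error values
        chm.append(row)
    return chm

canopy = [[110.0, 95.0], [102.0, 90.0]]
ground = [[100.0, 96.0], [100.0, 90.0]]
print(canopy_height(canopy, ground))  # [[10.0, None], [2.0, 0.0]]
```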

Next, above-ground biomass (AGB) was calculated using Model Builder in ArcMap. Vegetation in the study area was separated by species, because each species has different parameters for the calculation.

This data was then used to break down the AGB for each species into stem, branch, and foliage mass. Species-specific percentages for each component were multiplied by the previously calculated AGB.
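The component breakdown is a simple per-species multiplication. The fractions and species names below are placeholders, not the values used in the lab:

```python
# Sketch of the component breakdown: total AGB per species multiplied by
# species-specific stem/branch/foliage fractions. Fractions are made up.
FRACTIONS = {
    "aspen": {"stem": 0.5, "branch": 0.25, "foliage": 0.25},
    "pine":  {"stem": 0.75, "branch": 0.125, "foliage": 0.125},
}

def component_mass(species, agb_kg):
    """Split total above-ground biomass into stem, branch, and foliage mass."""
    return {part: agb_kg * frac for part, frac in FRACTIONS[species].items()}

print(component_mass("aspen", 1000.0))
# {'stem': 500.0, 'branch': 250.0, 'foliage': 250.0}
```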



RESULTS

A graph of distributions in vegetation height is shown below.



The canopy height raster is shown below. Negative values are indicated by black symbology.


The AGB map for the entire area is shown below. The species are marked with different colors; black areas are non-vegetated.


Maps for the three components (stem, branch, and foliage mass) are shown below.


The model created to find the AGB is shown below, followed by a section of the model showing the steps for a single species.





SOURCES


Data obtained from Cyril Wilson for use in 358 LiDAR course.

Tuesday, November 22, 2016

Lab 7 - Flood Inundation Modeling

GOAL AND BACKGROUND

The goal of this lab was to learn to use ArcScene to create a flood inundation model for the Eau Claire area.


METHODS

First, breaklines were created and enforced for water bodies in ArcMap, and a DTM with flattened water was created. These were brought into ArcScene, where an animation was created by raising the water level by 5 feet per frame to simulate the rise in water level due to flooding. The model had to be adjusted using breaklines to ensure that the flooding begins from the river.
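The per-frame flooding logic can be sketched as a threshold on the DTM: each frame, every cell at or below the current water level is marked flooded. The elevations and frame count below are illustrative; note that this simple threshold ignores connectivity back to the river, which is what the breakline adjustment handled:

```python
# Sketch of the per-frame flood mask: starting from a base water elevation,
# each frame raises the level by 5 ft and floods DTM cells at or below it.
# Elevations are in feet and invented for illustration.
def flood_frames(dtm, base_level, rise_per_frame=5.0, n_frames=3):
    """Return a boolean flood mask for each frame of the animation."""
    frames = []
    for i in range(n_frames):
        level = base_level + i * rise_per_frame
        frames.append([[cell <= level for cell in row] for row in dtm])
    return frames

dtm = [[780.0, 792.0], [800.0, 810.0]]
masks = flood_frames(dtm, base_level=785.0)
# frame 0 (785 ft): only the 780 ft cell floods
# frame 2 (795 ft): the 780 ft and 792 ft cells flood
```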


RESULTS

The final animation that was created is shown below.




SOURCES




Data obtained from Cyril Wilson for use in 358 LiDAR course.

Thursday, November 17, 2016

Lab 6 - Topo-bathy Applications

GOAL AND BACKGROUND

The goal of this lab was to gain experience working with topo-bathymetric (topo-bathy) LiDAR data.


METHODS

First, the data underwent some light QA/QC, as some ground points were not well classified. After this was corrected, a shoreline breakline was created in ArcMap and then conflated. Next, a breakline was made for the unsubmerged topographic area.

Lastly, a DTM image was exported using the LAS data and the breaklines that were just created.


RESULTS

The image below shows the shoreline breakline that was created and conflated.



The image below shows the breakline that was created for the unsubmerged area.



The next images show the DTM image, followed by the hillshade image that was created.





SOURCES



Data obtained from Cyril Wilson for use in 358 LiDAR course.

Monday, November 7, 2016

Lab 5 - Breakline Creation, Conflation, and Enforcement

GOAL AND BACKGROUND

The goal of this lab was to learn how to create breaklines within ArcMap using LAS data, to use conflation to constrain downward slopes in rivers, and to enforce breaklines for hydro-flattening of water bodies.


METHODS

The LP360 extension in ArcMap was used for breakline creation, with LiDAR data for Eau Claire used for this section. First, the LAS data was added to ArcMap and displayed as a TIN surface. A ground filter was used to display only ground points; this was necessary so the edges of the river bank were shown instead of vegetation. The outline of the river was then created, followed by a centerline of the river used to ensure a downward slope.

These shapefiles were brought into LP360. The breakline for the outline of the river was enforced. Then, the river-flattening tool was run, using the river bank shapefile and the river centerline shapefile.
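The downstream constraint applied during conflation can be sketched as a running minimum over the centerline's Z values: walking from upstream to downstream, each vertex is clamped so the elevation never increases. This is a simplified, illustrative version (LP360's actual tool is more involved); the adjustment amount mirrors the M value in the results table:

```python
# Simplified downstream enforcement: clamp centerline Z values so the
# river never flows uphill. Returns (adjusted_z, m) where m records how
# far each vertex was pulled down. Elevations are invented for illustration.
def enforce_downstream(z_values):
    adjusted, m_values = [], []
    current_min = float("inf")
    for z in z_values:
        current_min = min(current_min, z)
        adjusted.append(current_min)
        m_values.append(z - current_min)  # adjustment applied to this vertex
    return adjusted, m_values

zs = [850.0, 849.5, 850.0, 848.0]
print(enforce_downstream(zs))
# ([850.0, 849.5, 849.5, 848.0], [0.0, 0.0, 0.5, 0.0])
```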

Hydro-flattening was also performed on the Lake County data that the previous labs have used. There was no need to create breaklines for this as they were given, and no rivers in the data meant simpler hydro-flattening could be used. The breaklines were enforced and DTM and contour images were created.



RESULTS

Shown below is the final product in the creation of the centerline and riverbank shapefiles for the Eau Claire data.


Shown below is part of the table showing the constrained Z values along the centerline. The M value records how much the Z value was adjusted to keep the river flowing continuously downstream.


Shown below is the final result of the hydro-flattening in the Eau Claire data. Changes in elevation along the river are continuously downward as it flows downstream.




Next, the result from hydro-flattening in the Lake County data is shown. The simple ponds were flattened to be a single elevation.



A contour map of a section of the Lake County data was also created, shown below. It is overlaid on the NAIP imagery of the area.





SOURCES



Data obtained from Cyril Wilson for use in 358 LiDAR course.

Thursday, October 27, 2016

Lab 4 - QA/QC

GOAL AND BACKGROUND

The goal of this lab was to learn how to perform relative and absolute QA/QC, along with doing manual QA/QC on classification.


METHODS

LP360 and the LP360 extension in ArcMap were used for this data processing. First, point cloud density was checked by creating a map that showed low-density areas. According to this map, point density was acceptable throughout the study area.
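The density check can be sketched as binning point XY coordinates into grid cells and flagging cells below a minimum count. The cell size and threshold below are illustrative, not the project specification (and a real check would also enumerate empty cells):

```python
from collections import Counter

# Hypothetical density map: count points per grid cell and flag occupied
# cells that fall below a minimum point count. Values are made up.
def low_density_cells(points_xy, cell_size=10.0, min_points=2):
    counts = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in points_xy
    )
    return {cell for cell, n in counts.items() if n < min_points}

pts = [(1.0, 1.0), (2.0, 3.0), (15.0, 1.0)]
print(low_density_cells(pts))  # {(1, 0)} -- the cell holding a single point
```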

Next, relative accuracy was assessed. A map displaying differences in elevation was created and used for swath-to-swath analysis. For this analysis, seamlines were created along flat, unobstructed areas within the flight-line overlap.

Next, absolute accuracy was assessed for both vertical and horizontal accuracy. Checkpoint locations for each were imported from a spreadsheet. Vertical checkpoints compared surveyed elevations against values from the LiDAR data. Horizontal checkpoints were displayed on the map and referenced against the LiDAR data.
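The vertical checkpoint comparison boils down to an RMSE between surveyed elevations and the elevations read from the LiDAR surface. A small sketch with invented checkpoint values:

```python
import math

# Vertical accuracy sketch: RMSE between surveyed checkpoint elevations
# and the corresponding LiDAR surface elevations. Values are invented.
def vertical_rmse(surveyed, lidar):
    errors = [s - l for s, l in zip(surveyed, lidar)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

surveyed = [100.00, 101.50, 99.75]
lidar = [100.05, 101.45, 99.75]
print(round(vertical_rmse(surveyed, lidar), 3))
```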

Manual QA/QC of classification errors involved looking through the study area and flagging classification errors. Once this was done, these errors were rectified using manual classification tools such as classifying in the profile window.


RESULTS

The map created in relative accuracy assessment showing differences in elevation is shown below. The horizontal lines are overlap of flight lines. The results from swath-to-swath analysis showed few areas above the maximum height value.




The results from the absolute accuracy assessment showed the data was good. Vertical checkpoints were all within 0.10 cm of error. Horizontal checkpoint analysis determined the data was in class 0 for both X and Y accuracy.

Manual QA/QC was done through much of the data; an example is shown below. The error was first flagged, as shown in the first image, then rectified, as shown in the second. The flag was removed after fixing the error.





SOURCES

Data obtained from Cyril Wilson for use in 358 LiDAR course.

Thursday, October 13, 2016

Lab 3 - Vegetation Classification

GOAL AND BACKGROUND

The goal of this lab was to learn how to classify vegetation in LiDAR data. The vegetation would be classified as low, medium, or high vegetation, and high noise would also be classified in this process. QA/QC would be necessary to minimize error.


METHODS

LP360 was used for this data processing. A height filter was used to classify points as low, medium, or high vegetation; points above the high-vegetation threshold were classified as high noise. Only unclassified points were processed, since vegetation is the last class to be assigned.
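The height filter's logic can be sketched as a cascade of thresholds on height above ground. The threshold values below are illustrative assumptions, not the LP360 settings used in the lab:

```python
# Sketch of the vegetation height filter: classify points by height above
# ground, with anything above plausible vegetation treated as high noise.
# Thresholds (in feet) are assumed for illustration.
LOW_MAX, MED_MAX, HIGH_MAX = 2.0, 15.0, 120.0

def classify_vegetation(height_above_ground):
    if height_above_ground <= LOW_MAX:
        return "low vegetation"
    if height_above_ground <= MED_MAX:
        return "medium vegetation"
    if height_above_ground <= HIGH_MAX:
        return "high vegetation"
    return "high noise"  # above any plausible vegetation height

print([classify_vegetation(h) for h in (1.0, 10.0, 60.0, 400.0)])
```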


RESULTS

The study area after classification is shown below. The newly classified vegetation is shown as green.



Below are a few examples of closer views of classification within the study area. First is a building that was shown in the previous posts, now with classified vegetation.



Below is a view of some residential housing in the study area. The algorithms had some trouble differentiating between houses and vegetation at some points, especially when it involved overlap. Manual cleanup was used after the algorithm.




SOURCES

Data obtained from Cyril Wilson for use in 358 LiDAR course.

Thursday, October 6, 2016

Lab 2 - Building Classification

GOAL AND BACKGROUND

The goal of this lab was to learn how to classify buildings in LiDAR data. This was done with data in which ground had already been classified during the first lab. Some basic QA/QC would be necessary to minimize error.


METHODS

LP360 was used for this lab. A planar point filter was used to classify buildings. Basic parameters were set, such as a height filter. The purpose of the height filter was to exclude planar surfaces too low to be buildings, such as cars, and to ignore high noise points. Other parameters, such as minimum edge plane, grow window area, and N fit, were also adjusted to best filter the data.

The results are shown and discussed below.


RESULTS

The entire study area after building classification is shown below. Buildings are shown in red, ground in orange, and water in blue; gray areas are unclassified points, which at this point are vegetation.




Below are a few examples of closer views of buildings within the study area. First is a building that was shown as previously unclassified in the lab 1 post, now classified accurately.



The next image shows residential housing that was classified. Note that the algorithm had trouble classifying residential housing, so a lot of manual cleanup was necessary.




SOURCES

Data obtained from Cyril Wilson for use in 358 LiDAR course.

Thursday, September 29, 2016

Lab 1 - Ground and Water Classification

GOAL AND BACKGROUND

The goal of this lab was to learn how to classify ground in LiDAR data. This would be done by learning the tools necessary to filter out noise points and classify ground. Some basic QA/QC would be necessary to ensure accuracy.


METHODS



LP360 was used for this entire lab. First, the data was loaded into the program. Statistics were then taken through the Point Cloud Statistics Extractor, whose results are shown below.


The next task was to remove low noise points. These points were clearly visible in the profile view, which displayed isolated points far below the ground. First, an automated point cloud task was run to filter out these noise points. After this was completed, the profile view was used to check for points that the automatic filter might have missed. After the entire study area had been checked, I could move on to classifying ground points.

First, seed points were needed to determine average ground levels. The point cloud task used for this works with windows of a given size, determining a single ground point in each window. The window size must be larger than the largest building in the study area so the task does not choose a point on a roof; the largest building was determined to be 500 feet across. The point task was then executed, creating a grid of seed ground points.
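The seed-point step can be sketched as picking the lowest point within each grid window, on the assumption that the lowest return in a building-sized window is ground. The 500 ft window matches the size determined above; the point coordinates are invented:

```python
# Sketch of seed-point selection: within each window of the grid, keep the
# lowest point as a candidate ground (seed) point. Points are (x, y, z)
# tuples in feet; the sample coordinates are made up.
def seed_points(points, window_ft=500.0):
    lowest = {}
    for x, y, z in points:
        cell = (int(x // window_ft), int(y // window_ft))
        if cell not in lowest or z < lowest[cell][2]:
            lowest[cell] = (x, y, z)  # keep the lowest point per window
    return list(lowest.values())

pts = [(10.0, 10.0, 805.0), (20.0, 30.0, 790.0), (600.0, 40.0, 795.0)]
print(seed_points(pts))
```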

The seed points needed to be checked, since not all of them landed on the actual ground. This was easily done by having LP360 display only the seed points using a TIN model. Any noise points chosen as seed points appeared as sharp changes in elevation and could be reclassified. Once the seed points were determined to be accurate, the TIN model looked fairly flat, as shown below.


With accurate seed points chosen, ground points could be classified. The same point cloud task was run as before, this time classifying all ground points. After running and checking the results twice, they were determined good enough to move on to QA/QC.

To do QA/QC, I slowly scanned across the study area and fixed incorrect classifications from the algorithm. Most of these were points classified as ground when they were not, such as points on buildings.

After this was done, water was classified using a water break-lines shapefile.

The results are shown and discussed below.

RESULTS

The results of the data after the algorithms and manual cleanup are shown below. This is a view of the total study area after the beginning classification work was completed.


Below are a couple of examples of closer views within the study area displaying how buildings are separated from ground. Orange is ground classification and grey is unclassified. The buildings will be classified in subsequent labs.




Note how the blue areas, water, have patches missing. This is due to the high absorptivity of water, which leads to low reflectivity and little signal return.



SOURCES

Data obtained from Cyril Wilson for use in 358 LiDAR course.