Sunday, February 7, 2016

Constructing Maps with UAS Data

Introduction 

Why are proper cartographic skills essential in working with UAS data? 

Proper cartographic skills are essential when displaying data you want people to be able to understand and decipher.  Data captured with UAS platforms can cover a wide variety of area sizes and terrain types.  While the operator of the UAS knows exactly what he or she is looking at, someone who was not present when the data was collected has no prior knowledge of the image they are viewing.

What are the fundamentals of turning either a drawing or an aerial image into a map?

The fundamentals of turning an image or drawing into a map require the cartographer to insert or add items for proper interpretation.  I feel there are four essential items which allow for proper interpretation of a map.  First, you should have a title at the top of the image to give the viewer a basic preview of what they are looking at.  Second, you should have a north arrow within the image to give the viewer a sense of direction.  A scale of some form should be the third item provided with your image so the viewer can gain a sense of distance, dimension, or the area being displayed.  The fourth and final item should be a legend of some form so the reader can properly interpret figures on the map.  These are not the only items you could add to make it a map, but I feel these are the essential ones.


What can spatial patterns of data tell the reader about UAS data? Provide several examples.

Spatial patterns can tell the reader all sorts of information about an image if they know to view it in terms of aspects such as shape, texture, size, and color (if applicable).  In many instances you will need to incorporate more than one of these aspects to determine exactly what you are looking at.  One example: if you were looking at a pile of what appears to be some form of rock, the texture and size of the pile should give you a sense of rock size, whether the pieces are large (boulders) or small (sand or gravel).  Another example of spatial patterns: if you are looking at a forest, you can easily tell whether it is naturally occurring or was planted by humans.  If the trees are all neatly in rows and appear to be the same texture (type of tree), it is reasonable to assume humans planted them.  If the trees follow no formal pattern and appear to be different textures (types of trees), it is reasonable to conclude the forest has grown naturally over time.


What are the objectives of the lab?

The objectives of this lab are to develop skills in taking previously collected UAS data and transferring it into Geographic Information System software for further analysis and proper map creation.  Additionally, we will be focusing on how to properly describe spatial patterns and other information displayed within the UAS data/imagery.

Methods

Flash Flight Logs: (Questions in Italics)


(Fig. 1) Image of KMZ file displayed in Google Earth.

What components are missing that make this into a map?

The components missing from (Fig. 1) which would make it a map are a title, scale, north arrow, and legend.  A title would help the reader locate the place the imagery was taken, such as the city and possibly the school in the view.  A scale should be inserted so the reader can interpret the actual distance being flown in the image.  A north arrow should be inserted because otherwise you have no sense of direction in the image.  A legend should be inserted to label the blue flight plan so the reader knows what the lines are actually displaying.

What are the advantages and disadvantages of viewing this data in Google Earth?

The speed and simplicity of viewing this data in Google Earth is one of the largest advantages.  Without Google Earth or a similar free program you would need ArcMap or comparable software, which is very expensive to purchase.  In addition to the purchase price, you would need some form of training to properly operate ArcMap.  With Google Earth it is as simple as opening a file to view the flight path.

The inability to create a proper map within Google Earth is the largest disadvantage.  Returning to what I said in the previous question, without a title, scale, north arrow, and a legend it can be really tough to properly interpret what you are looking at in the image.

The lack of information about the platform and sensor tied to the data is a disadvantage with both systems.  Though you can recall this information from the flight planning program, it would be advantageous to have it tied to the KML and KMZ files.

How do you save the Flight Path Auto as a KML?

To save the Flight Path Auto as a KML, right-click on the flight path KMZ file under Temporary Places within the menu on the left-hand side of the Google Earth window.  In the drop-down menu that appears, click on Save Place As...  When the Save As window appears, switch the Save as type: to Kml (*.kml).  Then rename the file as you deem necessary and click Save.
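For anyone who prefers a scripted route, a KMZ is simply a zipped KML, so the embedded KML can also be pulled out with a few lines of Python. This is only a sketch; the file names are placeholders, and it assumes the KMZ follows the usual convention of containing a .kml document inside the archive.

```python
# Hedged alternative to the Google Earth export: extract the KML that
# lives inside the KMZ archive.  File names below are placeholders.
import zipfile

kmz_path = "flight_path_auto.kmz"   # placeholder path to the KMZ
kml_path = "flight_path_auto.kml"   # where to write the extracted KML

with zipfile.ZipFile(kmz_path) as kmz:
    # Most KMZ files store the main document as doc.kml; grab the first
    # .kml entry in case the archive names it differently.
    names = [n for n in kmz.namelist() if n.lower().endswith(".kml")]
    with open(kml_path, "wb") as out:
        out.write(kmz.read(names[0]))
```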

How do you bring in imagery base data, and import the KML into ArcMap?

In order to view a KML file in ArcMap you have to convert the file using the KML To Layer tool.  To locate this tool, use the search feature and type in KML; you will see KML To Layer (Conversion) (Tool) in the list.  Once the tool has finished running, the flight path layer should open automatically.

To bring in base data, click on the Add Data button at the top of the screen (the little tan square with a plus symbol on it) and select Add Basemap from the drop-down menu.  Then select your desired basemap from the gallery that appears.  When dealing with basemap data, be patient, as it can dramatically slow down functions within ArcMap.
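A rough scripted equivalent of the KML To Layer step, for anyone running it from Python rather than the search window, might look like the sketch below; the paths and the output name are placeholders rather than the actual lab files.

```python
# Minimal arcpy sketch of the KML To Layer conversion described above.
import arcpy

kml_file = r"C:\UAS\flight_path_auto.kml"   # KML exported from Google Earth
out_folder = r"C:\UAS\converted"            # folder for the output geodatabase

# KML To Layer writes a file geodatabase plus a layer file that can be
# added to ArcMap; the layer keeps the flight-path line features.
arcpy.KMLToLayer_conversion(kml_file, out_folder, "flight_path_auto")
```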

Tlogs (Questions in Italics)

How do you use Mission Planner to convert a Tlog into a KMZ?

The first step to convert a Tlog to a KMZ using Mission Planner is to click on the Telemetry Logs sub-tab while on the Flight Data tab.  Once you have selected Telemetry Logs, click on Tlog > Kml or Graph, which opens a separate window.  From this window click Create KML + GPX, select the file you wish to create the KMZ from, and click Open.  After the tool runs you will find a KMZ file in the same folder the Tlog originated in.

GEMs Geotiffs (Questions in Italics)

What does calculating statistics of a GEMS .tiff file allow you to do?

Calculating statistics on a .tiff file produces statistical information from the image.  The result gives you the minimum, maximum, mean, and standard deviation of the values for each band of the image.
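A minimal arcpy sketch of that step might look like the following; the GeoTIFF path is a placeholder standing in for the GEMS image used in the lab.

```python
# Hedged sketch: build per-band statistics for a GeoTIFF with arcpy.
import arcpy

gems_tiff = r"C:\UAS\gems_rgb.tif"   # placeholder path to the GEMS GeoTIFF

# Calculates the min, max, mean, and standard deviation for each band so
# ArcMap can stretch the display; the values then appear under
# Layer Properties > Source in ArcMap.
arcpy.CalculateStatistics_management(gems_tiff)
```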

Pix4D Data Products (Questions in Italics)

What is the difference between the DSM and the Orthomosaic?

The first obvious difference is that the DSM is displayed in black and white while the orthomosaic is displayed in color.  The DSM (Digital Surface Model) is simply a representation of the elevation of the surface of the earth, and it has a tough time representing the vegetation in the image.  The orthomosaic is a color image of the area stitched together from the pictures captured by the camera and corrected for distortion; it does a much better job of displaying and representing the vegetation in the image.

What are the statistics for the DSM images? Why use them?

The statistics give you the minimum, maximum, mean, and standard deviation of the elevation values in the image.  This information gives you a basic understanding of the elevation range across the image.  From these numbers you should be able to analyze whether the data is accurate or not.

How did you hillshade the DSM images?

Open the Hillshade tool under Spatial Analyst Tools > Surface within ArcToolbox in ArcMap, select the DSM as the input, and run the tool.
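As a rough sketch, the same step scripted with the Spatial Analyst extension might look like this; the DSM and output paths are placeholders.

```python
# Hedged arcpy sketch of the hillshade step using Spatial Analyst.
import arcpy
from arcpy.sa import Hillshade

arcpy.CheckOutExtension("Spatial")          # Spatial Analyst license required

dsm = r"C:\UAS\flight2_dsm.tif"             # placeholder DSM path

# Default sun position: azimuth 315 degrees, altitude 45 degrees.
hillshade = Hillshade(dsm, 315, 45)
hillshade.save(r"C:\UAS\flight2_dsm_hillshade.tif")
```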

Delineate regions of the DSM, thinking of each region in terms of topography, relating that to the vegetation.

To delineate the regions of the DSM I created a numbered map with descriptions.  See the results section of my blog for the final product.

(Steps before question: Open ArcScene and bring in the DSM and orthomosaic for flight 2 of the Sivertson mine. Then set the base heights to the DSM. Display the ortho in 3D, along with the DSM.)
Explain what information the DSM needs to become a map. How might one do that? That is, list out the criteria needed for this to become cartographically correct.

The 3D display of the data needs the same components as all the previous maps.
  • North Arrow
  • Title
  • Legend 
  • Scale or Distance Reference
To achieve this I utilized a method thought of by a classmate of mine, Peter Sawall, sketched in code below.  The first step is to run the Create Fishnet tool within ArcMap on the DSM image.  The Fishnet tool creates a grid of set dimensions over the surface, which lets you use the grid spacing to judge distances across the image.  You can then import the created fishnet into ArcScene along with the DSM and the orthomosaic.  After creating the 3D image and setting the base heights for the fishnet, you can export the view as a 2D image file.  Next, open the exported 2D image in ArcMap and create a display map as you normally would.  The only exception is that there are no dimensions associated with the image, so you will have to create your own legend to explain the grid/fishnet distances.
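A hedged arcpy sketch of that fishnet step might look like the following; the paths and the 10 m cell size are assumptions used only for illustration, not the settings actually used for (Fig. 6).

```python
# Sketch of the Create Fishnet step over the DSM extent (placeholder
# paths, assumed 10 m cells).
import arcpy

dsm = r"C:\UAS\flight2_dsm.tif"
fishnet = r"C:\UAS\fishnet_10m.shp"

ext = arcpy.Describe(dsm).extent
origin = "{} {}".format(ext.XMin, ext.YMin)
y_axis = "{} {}".format(ext.XMin, ext.YMin + 10)   # sets the grid orientation

# 10 m by 10 m cells covering the DSM extent; zero rows/columns lets the
# tool derive the counts from the cell size and the opposite corner.
arcpy.CreateFishnet_management(fishnet, origin, y_axis, 10, 10, 0, 0,
                               "{} {}".format(ext.XMax, ext.YMax),
                               "NO_LABELS", dsm, "POLYLINE")
```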

Results

Flight Logs


(Fig. 2) Final map displaying UAS flight path.
Examine the flight log data in Google Earth and in ArcMap. Describe the overall pattern.

The flight log data displays a path of evenly spaced, continuous "U"-shaped passes.  Once all of the area for the flight has been covered, the UAS returns to the point from which it was launched.

Does this flight log appear to be from a multirotor or from a fixed wing? What clues led you to your decision?

The flight log appears to be from a multirotor platform.  The short turning radius (or lack of one) when aligning for the next flight line makes me believe it was a multirotor platform.

What is the spacing between flight lines? Why might this vary according to the sensor and the altitude flown?

The spacing between the flight lines for (Fig. 2) is 20 meters.  This can vary depending on the focal length of the sensor and the altitude the UAS was flown.  The higher the altitude, the wider the spacing can be; however, too high an altitude will result in poor-quality images.  The focal length determines the sensor's field of view.  If the field of view is narrow, you will need to reduce the spacing to obtain the required overlap.
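As a back-of-the-envelope illustration of that relationship, the small function below estimates line spacing from altitude, focal length, sensor width, and the desired sidelap; the numbers in the example are made up and are not the actual platform or sensor specifications.

```python
# Rough illustration: line spacing is the ground footprint width reduced
# by the desired side overlap between neighboring flight lines.
def line_spacing(altitude_m, focal_length_mm, sensor_width_mm, sidelap):
    """Return the flight-line spacing in meters."""
    footprint_m = altitude_m * sensor_width_mm / focal_length_mm
    return footprint_m * (1.0 - sidelap)

# e.g. 50 m altitude, 5 mm focal length, 6 mm sensor width, 70% sidelap
print(line_spacing(50, 5.0, 6.0, 0.70))   # -> 18 m between flight lines
```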

(Fig. 3) Final map of Tlog I converted to a KMZ file.


Geotiff


(Fig. 4) Layout of imagery variations which result from the GEMs sensor.


How does the RGB image differ from the base map imagery? What is the difference in zoom levels? How does this relate to GSD?

The RGB image is more up to date than the base map imagery.  You can see the community gardens in the RGB image, but the base map image in ArcMap does not display even the start of a garden.  The imagery from ArcMap is obviously older and not displaying present-day conditions.

The zoom level/resolution is much higher for the RGB image.  The high resolution is closely tied to the GSD (Ground Sample Distance).  The lower the GSD, the higher the resolution of the image can be, provided the sensor has high-resolution capabilities.
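To make that relationship concrete, a simple GSD calculation might look like the sketch below; the pixel pitch and focal length are invented example values, not the real sensor specifications.

```python
# Illustrative GSD calculation for a nadir-pointing camera.
def ground_sample_distance(altitude_m, pixel_pitch_um, focal_length_mm):
    """Return the GSD in centimeters of ground covered per image pixel."""
    return (pixel_pitch_um * altitude_m) / (focal_length_mm * 10.0)

# e.g. 3.75 micron pixels, a 5 mm lens, flown at 40 m altitude
print(ground_sample_distance(40, 3.75, 5.0))   # -> 3.0 cm per pixel
```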

What discrepancies do you see in the mosaic?

The majority of the discrepancies seem to be located around the edge of the flight path.  The shadows from the trees in the image also add to interpretation issues.

Do the images match seamlessly?
To say the images match seamlessly might be a bit of a stretch, but the RGB lines up very well with the basemap imagery.

Are the colors 'true'? Where do you see the most distortion?

The colors are mostly true.  I feel there is distortion with the light-colored objects along the garden outline; it is tough to determine whether they are tan or white.

Compare the RGB image to the NDVI mosaics. Explain the color schemes for each NDVI mosaic by relating them to the RGB image. Discuss the patterns on the image. Explain what an NDVI is and how this relates.

The RGB image displays the visual spectrum which we as humans can see with our own eyes.  The NDVI (Normalized Difference Vegetation Index) displays vegetation in colors that relate to the health of the plants.  NDVI is calculated from the reflectance of the near-infrared and red bands of the sensor and serves as an indicator of vegetation vigor.  NDVI FC1 displays healthy vegetation in orange and red colors.  This is very tough to interpret for people not used to looking at this kind of data, since red and orange are usually associated with bad or poor health.  NDVI FC2 addresses this by displaying healthy vegetation in greens.  Comparing the RGB image to the NDVI, you can see the darker green vegetation in the RGB image seems to correlate with the healthier vegetation in the NDVI images.  The NDVI Mono and the NDVI Mono Fine display reflectance levels for the surface.  There is very little difference between the values for the Mono; the Mono Fine has greater variation, which makes it far more detailed.
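For reference, the index itself is simply (NIR − Red) / (NIR + Red); a minimal sketch of that calculation with placeholder band values might look like this.

```python
# Minimal NDVI sketch with numpy; the arrays are placeholders standing in
# for the red and near-infrared bands of the imagery.
import numpy as np

red = np.array([[0.10, 0.08], [0.30, 0.25]])   # red reflectance (example)
nir = np.array([[0.45, 0.50], [0.32, 0.28]])   # near-infrared reflectance

# NDVI = (NIR - Red) / (NIR + Red): values near +1 indicate dense, healthy
# vegetation, while values near 0 or below indicate bare ground or water.
ndvi = (nir - red) / (nir + red)
print(ndvi)
```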



Orthomosaic/DSM


(Fig. 5) Delineated Litchfield mine from UAS flight.

(Fig. 6) Map display of 3D image from ArcScene with grid for distances reference.


What is the difference between an orthomosaic and a georeferenced mosaic?

An orthomosaic has calculated z-values (elevation) which went into the point cloud used to stitch together and create the mosaic.  A georeferenced mosaic does not have any z-values associated with it, which can lead to distortion in distances and other measurements.

What types of patterns do you notice on the orthomosaic and DSM? Describe the regions you created by combining differences in topography and vegetation.

The hillshaded DSM appears to be very useful for determining elevation and slope characteristics of the imagery.  Combining the elevation image with the orthomosaic color information, you can extract a vast amount of information from the images.  I created regions by first separating mine-related areas from non-mine-related areas.  The body of water and the trees/vegetation were two areas which were not mine related.  The actual mining operation I broke down into three different areas: the ridge which divides the site from the body of water, the flat ground, and lastly the aggregate piles themselves.

Conclusion

Summarize what makes UAS data useful as a tool to the cartographer and GIS user.

The data which can be collected with a UAS is very useful as it is often the most recent and up-to-date aerial imagery for an area.  Depending on the platform and the sensors attached to it, you can obtain far more data than from a regular satellite image.  As we saw above, you can gain vegetation health information and create three-dimensional models from which you can obtain a host of accurate measurements.  The possibilities for UAS imagery are just starting to be explored, and we have not yet seen all the imagery has to offer the cartographer or the GIS user.

What limitations does the data have? What should the user know about the data when working with it?

The largest limitation a UAS has is the size of area for which it can obtain data.  The short flight times do not yet allow both accurate and large areas (miles) of data to be collected.  Additionally, the data is only as accurate as the person who set up the flight plans and their documentation of the parameters they set for the flight.  Without the specification of the flight altitude, spacing, speed, etc., the data is relatively useless to a GIS user.

Speculate what other forms of data this data could be combined with to make it even more useful.

I speculate there is a possibility to combine new UAS data with existing highly accurate Lidar data to come up with some of the most accurate DSMs and DEMs out there.  I believe if you could combine the information from the two, you could help eliminate anomalies from both and end up with a highly accurate product.
