Sunday, February 28, 2016

Adding GCPs to Pix4D Software

Introduction

This week's lab assignment is an extension of last week's Pix4D introduction lab, and ground control points (GCPs) are its focus.  GCPs are used to align the imagery with the surface of the earth so your results are spatially accurate.  The objective in utilizing GCPs is to produce a true orthorectified image.  I will be comparing the accuracy of results from the same set of images processed twice: the first time I will utilize GCPs to correct the imagery, and the second time I will use GeoSnap to add the geolocation information to the images.

Ground Control Points (GCPs)

GCPs can be:
  • Measured in the field using topographic methods such as survey-grade equipment.
  • Obtained from existing geospatial data.
  • Obtained from a Web Map Service (WMS).


There are three ways to add/apply GCPs outlined in the Pix4D manual/help section:

Method A


This method is utilized when the image geolocation and the GCPs have known coordinate systems which can be selected from the Pix4D database.  The coordinate systems do not have to match, as Pix4D can complete a conversion between the two systems.  This is the most common method for adding GCPs to a dataset, and it allows the user to mark the GCPs on the images with minimal manual input (Fig. 1).  However, because that input comes partway through the workflow, this method is not well suited to overnight processing.

(Fig 1.) Outlined workflow for geolocation in a known coordinate system.


Method B

Method B can be utilized in a few different scenarios:

  • The initial images were collected without any geolocation information.
  • The initial images were collected in an arbitrary coordinate system which is not found in the Pix4D database.
  • The GCPs were collected in an arbitrary coordinate system.
This method requires more manual intervention than Method A.  Instead of a single step to mark the GCPs in the images, there is an additional step, and the steps come in a different order than in Method A.  This method is also not well suited to overnight processing.

(Fig. 2) Outlined workflow for geolocation Method B.
Method C

Method C works in any situation, no matter what coordinate system the GCPs or the images are in.  Method C requires the most manual user input to mark the GCPs on the images.  However, this method does allow overnight processing of the imagery.

(Fig. 3) Outlined workflow for geolocation Method C.
GeoSnap

GeoSnap is a product produced by Field of View.  The GeoSnap Pro is a GPS device which attaches to the sensor (camera) on a UAS platform and produces a log of the position and attitude of the camera when the images are captured.  Additionally, the GeoSnap Pro can help manage the triggering of the camera during the flight.

In the following section I will be using Method A and outlining the process of applying the GCP locations to my images.

Methods

During this week's lab I will be processing 312 images of a local mining facility collected with a Sony ILCE-6000.

Many of the following steps are the same as last week's lab.  If you have questions concerning the basic processing of images, please consult my blog post for processing images with Pix4D.

Creating a new project is the first step required to begin processing images with GCPs in Pix4D.  After loading the images into the New Project window, you do not have to attach any geolocation information to the images.  Make sure the correct sensor is loaded in the Selected Camera Model window before proceeding to the next screen.


(Fig. 4) New Project window with images loaded without geolocation information attached and proper sensor identified.
Proceed through the next few windows of the New Project creator by selecting 3D Maps for the Processing Options Template, and inspect to make sure the Output Coordinate System is correct before creating the new project.

Once the project is created and the flight plan is loaded in the viewer the next step is to add the GCPs.
To add the GCPs, select Project to open the GCP/Manual Tie Point Manager window.  Import your GCPs with the Import GCPs button.  Preview your GCP file so you select the correct coordinate order.  With all of the GCPs loaded in the window, check that the Datum is set correctly and select OK.

(Fig. 5) GCP/Manual Tie Point Manager with GCPs imported with correct X,Y,Z order and correct Datum.

After selecting OK, the GCPs will be automatically loaded into the Map View and displayed as X's (Fig. 6).  If the X's do not appear in your flight area, double-check the X,Y,Z order of your GCP file to ensure you have the correct order.
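When the X's land somewhere unexpected, the usual culprit is a swapped X/Y (easting/northing) column in the GCP file.  Below is a minimal Python sketch of the idea; the file layout and field names are hypothetical, not Pix4D's format:

```python
import csv

def load_gcps(lines, order=("label", "x", "y", "z")):
    """Read GCP rows and return dicts with x, y, z as floats.

    `order` names the columns as they actually appear in the file, so a
    file stored as label,Y,X,Z loads with order=("label", "y", "x", "z").
    """
    gcps = []
    for row in csv.reader(lines):
        if not row:
            continue  # skip blank lines
        rec = dict(zip(order, row))
        gcps.append({
            "label": rec["label"],
            "x": float(rec["x"]),
            "y": float(rec["y"]),
            "z": float(rec["z"]),
        })
    return gcps

# Hypothetical file written northing-first (label, Y, X, Z)
pts = load_gcps(["GCP1,5010000.0,350000.0,245.0"],
                order=("label", "y", "x", "z"))
```

Printing the first point against the extent of your flight area quickly shows whether easting and northing ended up in the right fields.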

(Fig. 6) GCPs displayed as X's in the Map View window of Pix4D.

Before processing the images there is one final step required.  Under the Local Processing menu you will have to deselect 2. Point Cloud and Mesh and 3. DSM, Orthomosaic and Index before starting the image processing (Fig. 7).


(Fig. 7) Local Processing menu with only Initial Processing selected.

Once the initial processing is complete, you will need to reopen the GCP/Manual Tie Point Manager and select rayCloud Editor.  The rayCloud Editor will open the GCP display properties on the left-hand side and the adjustment window on the right side of the screen (Fig. 8).

(Fig. 8) GCPs display properties (Left) and adjustment window (Right) in Pix4D.
Selecting one of the GCP points from the Display Properties menu on the left will open the images which contain the GCP marker flag in the right-hand Properties menu (Fig. 9).  The blue circle with the blue dot in the middle is where Pix4D believes the center of the GCP point is.  To correct the location, you need to select the center of the marker flag.  After selecting the center of the flag in two images, select Apply to correct the location based on your selections.  This initial correction will help bring the marker flag into view in all the images (Fig. 10).

The more images you apply corrections to, the closer the blue circle and dot will come to lining up with the center of the marker flag.  Proceed to select the center of the flag in all the images which contain the marker flag.  Do not select any point in images which do not contain the marker flag.  Complete the same process for all of the GCPs in the Display Properties menu, which will display the number of images you have corrected in brackets after each GCP number (Fig. 11).


(Fig. 9) GCP marker flags displayed in the Properties window of the rayCloud editor.  The blue circle with the blue dot in the middle is the believed center of the GCP point.  I manually selected the location marked with the  green X and the yellow circle w/plus symbol.

(Fig 10) GCP marker flags after applying the first corrections to bring the other marker flags in to view of the remaining images.
(Fig. 11) Display Properties menu with the corrected image numbers in brackets after the GCP number.
After correcting all of the GCP point locations you can finish the processing by selecting the 2. Point Cloud and Mesh, and 3. DSM, Orthomosaic and Index boxes from the Local Processing menu (Fig. 13).

(Fig. 13) Select 2. Point Cloud and Mesh and 3. DSM, Orthomosaic and Index before starting the processing of the images. 

Results

The error between the two mosaic images is minor when observed in full view.  Had the error been more drastic, I could have created a map with both mosaics displaying the variance between the two.  However, when I brought both of the images into ArcMap, you could not tell the difference at the extent needed to display the entire mine area.  The most noticeable (though very minor) error in the mosaic is where the road in the mosaic connects to the road in the basemap (Map 2).

(Map 1) Display of the orthomosaic created with GCPs data.
(Map 2) Display of orthomosaic image created with Geosnap data.
To display the mosaic from a different point of view I created a 3 dimensional (3D) image in ArcScene.
(Map 3) 2D display of a 3D image created in ArcScene of the Litchfield Mine.

Discussion

I first compared the Quality Reports of the two projects I ran in Pix4D to see if I could identify the differences.  The majority of the values were the same, except when I compared the Geolocation Details.  The report for the project utilizing GeoSnap displays an RMS error of 0.36-0.44 for the various axes (Fig. 12).  The report for the project using GCPs showed an RMS error between 1.01-1.97 for the various axes (Fig. 13).
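For reference, an RMS value summarizes the per-image residuals on one axis.  Here is a minimal Python sketch of the calculation (the residual values are hypothetical, not taken from my reports):

```python
import math

def rms(residuals):
    """Root-mean-square of a list of residuals (same units as the input)."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical per-image residuals on one axis, in meters
print(round(rms([0.3, -0.5, 0.4, -0.2]), 3))  # about 0.367
```

Because the residuals are squared, a single badly marked point can pull the whole axis value up noticeably.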

(Fig. 12) Absolute Geolocation Variance chart from Pix4D Quality Report of the mosaic created without GCP points.
(Fig. 13) Absolute Geolocation Variance chart from Pix4D Quality Report of the mosaic created with GCP points.

Does this mean the GeoSnap is more accurate than using GCPs?

Based on the RMS error, one would believe the GeoSnap is providing more accurate results.  To make a comparison, I exported the GCP coordinates to a feature class in Esri ArcMap.  Next, I brought in both of the created mosaics for comparison.

(Fig. 14) GCP location (green triangle) and actual GCP location (orange and white triangle) from Geosnap mosaic.

(Fig. 15) GCP location (green triangle) and actual GCP location (orange and white triangle) from GCP mosaic.

After comparing both mosaics, it was easy to see the mosaic created with the GCPs was more accurate than the mosaic created with GeoSnap.  I utilized the Georeference tool in ArcMap to compare the RMS error between the two created mosaics.  The results from ArcMap show the RMS error of the GCP image is lower than that of the GeoSnap image.

(Fig. 16) RMS Error from the Geosnap mosaic in ArcMap.


(Fig. 17) RMS Error from the GCP mosaic in ArcMap.


Pix4D believes the GeoSnap image is more accurate based on the information provided.  However, the GPS on the camera does not have the accuracy of the GPS unit which collected the GCP locations.  The GCP locations were collected with a Topcon Hiper and Tesla unit in the same manner as I collected points for my topographic survey in my Geospatial Field Methods class.  The Topcon has an accuracy of about 3-5 mm depending on the axis.  The GeoSnap Pro model has an accuracy of approximately 1.5 m.

I feel the GeoSnap still has applications in the field.  There are many instances where it may not be feasible to lay out and collect the information required to produce GCP coordinates.  In very rugged terrain I believe the GeoSnap would really shine, obtaining a high enough level of accuracy for the task at hand.

The above discussion shows why the use of GCPs is more accurate, and why GCPs are required when performing flights where highly accurate data is necessary.  In the following labs we will be exploring additional uses within Pix4D and utilizing the high level of accuracy which GCPs provide.




Sunday, February 21, 2016

Processing Pix4D Imagery

Introduction

The purpose of the lab is to introduce me to the software package Pix4D Pro.  Pix4D software has the ability to generate orthomosaic and georeferenced 2D and 3D maps and models from images collected by various methods including UAS platforms.

During this lab I will be creating orthomosaic maps from data captured with two different sensors.  The first dataset I will be processing was collected with a Canon SX260 digital camera.  The second dataset was collected with a GEMs sensor.  For more information on the GEMs sensor, see my previous blog post.  In the following sections I will discuss some of the specifics of Pix4D as answers to questions asked by my professor in the lab assignment.  The following section will give you (the reader) a good understanding of the steps required to process data in Pix4D.

Get familiar with the product (questions in italics)

Look at Step 1 in the software manual (before starting a project). What is the overlap needed for Pix4D to process imagery?

Step 1 in the software manual highlights the proper planning needed to achieve the highest quality results.  Collecting all of the data properly in the field will allow for a streamlined process and produce quality mosaic images.  Step 1 in the manual highlights the minimum requirements your data must meet to create mosaic images.

The recommended overlap for most situations is 75% frontal overlap and a minimum of 60% sidelap (Fig. 1).

(Fig. 1) Ideal Images Acquisition Plan for General Cases.

What is the overlap needed if the user is flying over sand/snow, or uniform fields?

Because snow and sand have large uniform areas, it is recommended to use a higher overlap than for general landscapes.  The manual recommends a minimum of 85% frontal overlap and a minimum of 70% sidelap.
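Overlap percentages translate directly into photo spacing in a flight plan.  The sketch below shows the relationship; the footprint dimensions are hypothetical, not from any particular sensor:

```python
def capture_spacing(footprint_along_m, footprint_across_m,
                    frontal_overlap, side_overlap):
    """Distance between exposures and between flight lines, in meters.

    The footprints are the ground coverage of a single image along and
    across the flight direction; overlaps are fractions (0.75 = 75%).
    """
    trigger_distance = footprint_along_m * (1.0 - frontal_overlap)
    line_spacing = footprint_across_m * (1.0 - side_overlap)
    return trigger_distance, line_spacing

# 75% frontal / 60% side overlap on a hypothetical 100 m x 150 m footprint
d, s = capture_spacing(100.0, 150.0, 0.75, 0.60)  # 25.0 m between photos, 60.0 m between lines
```

Raising the overlaps to the 85%/70% recommended for snow or sand shrinks both spacings, which is why those flights take longer for the same area.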

What is Rapid Check?

Rapid Check is a feature in Pix4D which allows you to check whether the parameters set in the flight plan were adequate to produce a mosaic image.  Rapid Check reduces the resolution of the captured images to 1 megapixel (MP) to allow for faster processing.  This reduction in resolution leads to lower positional accuracy and can lead to incomplete results.  The manual states, "If Rapid Check succeeds then it is safe to assume that the results of Full Processing will be of high quality."  The manual then states that if the Rapid Check fails, adjustments to the overlap may be needed.  Another option would be to fly the mission again and combine both sets of images to try to achieve a successful Rapid Check.  The manual also states it is possible to run Full Processing on images which fail the Rapid Check, but the results may be of lower quality and could contain erroneous results.

Can Pix4D process multiple flights? What does the pilot need to maintain if so?

Pix4D can process images collected from multiple flights.  There are three guidelines you should follow when collecting data with multiple flights.

  1. Make sure each flight plan captures the images with enough overlap for the situation.
  2. Make sure there is enough overlap between the two flights' image blocks for proper correlation (Fig. 2).
  3. Make sure the flights are flown under the same conditions (sun angle, weather) with no new features on the surface.
(Fig. 2) Proper and improper overlap between 2 flights display. (Pix4D Manual)

Can Pix4D process oblique images? What type of data do you need if so?

Yes, Pix4D can process oblique images.  The example given in the manual is based around constructing a 3-dimensional image of a building.  The instructions state the first flight around the building should be at a 45-degree camera angle (Fig. 3).  The following flights should increase in height and decrease the camera angle.  The recommendation is to decrease the angle by 5-10 degrees per flight to ensure the images have enough overlap to generate the mosaic image.  However, due to variation in spatial resolution, you do not want to more than double the height between flights.  Processing oblique images results in a very good 3-dimensional image but does not produce an orthomosaic image.

(Fig. 3) Proper collection of oblique imagery. (Though they don't follow their own instructions) (Pix4D Manual)

Are GCPs necessary for Pix4D? When are they highly recommended?

Ground Control Points (GCPs) are not necessary when processing images in Pix4D, but their use improves the global accuracy of the project.  GCPs are highly recommended when processing images which do not have geolocation information.  While you can still process such images without GCPs, your final mosaic output will not have scale, orientation, or absolute position information.  Without this information you will be unable to take measurements, perform overlays, or compare against previous results.

What is the quality report?

The quality report is like a report card for the images you processed.  The report contains almost every bit of information about the image processing results you could think of.  The first section is an overview of the project, where you will see a Summary, Quality Check, and a Preview (Fig. 4).  The Summary has basic information about the processing, such as the name of the project, the date and length of time to process, and the area dimensions of the processed image.  The Quality Check displays specific information about the calibration results between images.  The Preview displays 2-dimensional images of an Orthomosaic and a Digital Surface Model (DSM) created from the processed images.

(Fig. 4) First section of the quality report containing the Summary, Quality Check, and Preview.
The second section of the quality report is Calibration Details and contains information about the flight and the images collected during the flight.  The first image in this section shows the flight path and the locations where the images were collected.  The next image displays the Computed Image/GCPs/Manual Tie Points Positions.  The next part displays the image overlap, which to me as an analyst is the most important detail of this section (Fig. 5).  The higher the overlap, the more accurate and the better the mosaic will be.


(Fig. 5) Number of overlapping images reported from the quality report from Pix4D.
The final sections, which include Bundle Block Adjustment Details, Geolocation Details, Point Cloud Densification Details, and DSM, Orthomosaic and Index Details, are in-depth results of the math which went into configuring the compiled image, the location accuracy, and the processing options with specifications.

Methods

The following steps are completed after the flight has been flown and the images downloaded to your computer.  After opening Pix4D, select Project and New Project from the menu bar (top left) to open the New Project window (Fig. 6).  From this window you will fill in the name of your project and select the location where Pix4D will save the files created during the project process.  Under Project Type select New Project and then select Next from the bottom right-hand menu bar.

(Fig. 6) New project window in Pix4D.
The next screen you are brought to is the Select Images section of creating a new project (Fig. 7).  From this window you will select the images from the flight which were previously downloaded.  Select Add Images, locate all of the images you want included in the processing, then click Next.  After selecting Next, it will take a brief moment to process the images and proceed to the next screen.


(Fig. 7) Select Images section of creating a new project.

After the images load you will be brought to the Image Properties window (Fig. 8).  The first line in the window shows the coordinate system in which the images are displayed.  The next line displays how many of the images are geolocated.  The third line shows the camera/sensor which was used to capture the images.  Not all sensors are loaded into the Pix4D software.  In this case the GEMs sensor was not loaded in, and I had to locate the sensor specifications and add them to the image properties using the Edit... button.  I also had to process images collected by a Canon SX260.  When I loaded the images from the SX260, all of the geolocation information was already attached to the files and loaded automatically.
(Fig. 8) Image properties window of creating a new project without geolocation information.
My example in (Fig. 8) shows none of my images are geolocated.  The images in this window were collected with the GEMs sensor, which does not automatically attach the geolocation to the images.  This must be done manually by selecting the From File... button, which will bring you to the Select Geolocation File window (Fig. 9).  From this window you can select the file which contains the geolocation information for the corresponding images.


(Fig. 9) Select geolocation file window.

After adding the geolocation information, the Latitude, Longitude, and Altitude fields should no longer contain zeros (Fig. 10).  Make sure to check that all of your images have been geolocated and filled in.  In the last three projects I have run in Pix4D, I have had one image in each which was not properly located.  When you find an image which does not have any location information attached, simply uncheck its Enabled check box to exclude it from the process.

(Fig. 10) Image properties window with geolocation information.
With all the information set in the Image Properties window, select Next, which will bring you to the Processing Options Template (Fig. 11).  From this window you can select what type of project you would like the software to develop.  In my situation I will be creating a 3D Map from the images.

(Fig. 11) Processing options template window.


With 3D Maps selected I clicked next which brought me to the Select Output Coordinate System window (Fig. 12).  I did not change anything in this window and clicked finish to create the project.

(Fig. 12) Select output coordinate system window.

Once the Finish button has been selected, the flight and the locations of the recorded images will be loaded on an aerial imagery basemap with labels in the processing window (Fig. 13).  To process the images from this window, you have to select the Start button from the Local Processing menu at the bottom of the screen.  The three check boxes which are green in (Fig. 13) will be displayed as red in an unprocessed image.  Once the processing has completed for each section, they turn from red to green.  Processing time depends on the number of pictures you are utilizing to create the mosaic and the processing power of your computer.  The minimum system requirements recommended by the manufacturer for medium projects (100-500 images @ 14 megapixels) are 8 GB of RAM and 20 GB of free HDD space.

(Fig. 13) Image processing window with loaded project in Pix4D software.
When completed, you should see your processed image with the tie points and camera images displayed along with the Quality Report (Fig. 14).  Examining the Quality Report will tell you the number of images which were calibrated.  For the report displayed in (Fig. 14), 105 out of 108 images (97%) were calibrated.


(Fig. 14) Post processing display in Pix4D.
Examining the overlap display in the Quality Report (Fig. 5) shows a number of areas where the overlap could have been improved.  The sidelap and the frontal overlap could both have been increased for better results.  Though the spacing of the images was not quite correct, the processing still produced a quality orthomosaic.

Collecting Measurement from the Results

After creating the orthomosaic, the tools with some of the most practical applications are the measurement tools.  The measurement tools allow you to measure straight-line distance, area, or volume on a 3-dimensional surface (Fig. 15).



(Fig. 15) Measure tool bar. Straight line distance (Left), Area (Center), Volume (right).

To measure an area after selecting the area measure tool, simply draw a polygon around the feature to measure using as many vertices as necessary (Fig. 16).  Once you have completed the polygon simply right mouse click to end the drawing which will display the measurement in the upper right hand corner of the screen.  The measurement window displays Terrain 3D Length, Projected 2D Length, Enclosed 3D Area, and Projected 2D area.  Below the measurement window is a display of the vertical view of the images with vertices (tie points) in them.
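The Projected 2D Area reported for a polygon is the kind of value you can sanity-check yourself.  Here is a minimal Python sketch of a planar polygon area via the shoelace formula (the vertices are hypothetical; this is not Pix4D's code):

```python
def projected_2d_area(vertices):
    """Area of a simple planar polygon via the shoelace formula.

    vertices: list of (x, y) tuples in meters, in drawing order.
    """
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# A hypothetical 30 m x 20 m rectangular feature
print(projected_2d_area([(0, 0), (30, 0), (30, 20), (0, 20)]))  # 600.0
```

The Enclosed 3D Area will always come out at least as large, since it follows the terrain surface rather than the flat projection.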


(Fig. 16) Measurement of an island area with in Pix4D.

To measure a straight-line distance after selecting the proper tool, simply select the starting point, then place the cursor on the point you want to measure to and use the right mouse button to create the end point.  To test the accuracy of the tool, I measured from the start line to the finish line of the 100 m dash at the school track, which was in a portion of my image (Fig. 17).  The measurement window displayed a Terrain 3D Length of 100.84 m and a Projected 2D Length of 100.78 m.  When zoomed all the way in, I could see my tie point was past the line, which explains the variation in the measurement (Fig. 18).
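The difference between the Terrain 3D Length and the Projected 2D Length comes from elevation change along the line.  A minimal Python sketch with hypothetical endpoints shows the two values side by side:

```python
import math

def lengths(p1, p2):
    """Return (terrain_3d, projected_2d) distance between two (x, y, z) points."""
    dx, dy, dz = (p2[i] - p1[i] for i in range(3))
    projected_2d = math.hypot(dx, dy)                       # ignores elevation
    terrain_3d = math.sqrt(dx * dx + dy * dy + dz * dz)     # includes elevation
    return terrain_3d, projected_2d

# A 100 m horizontal run with 3 m of elevation change (hypothetical numbers)
d3, d2 = lengths((0.0, 0.0, 0.0), (100.0, 0.0, 3.0))
```

Even a few meters of relief only adds centimeters to the 3D length over 100 m, so most of my 0.06 m gap is down to where I clicked, not the terrain.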


(Fig. 17) Measuring the 100 m dash start to finish line using the straight line distance measuring tool in Pix4D.



(Fig. 18) Overshoot of the tie point for the line measurement.

Measuring volume follows the same steps as the previous measurements.  The only difference is that when you right-click to create the last point, it does not automatically update the volume measurement window; you must select the Update Measurement button within the Measurements window.  There are various measurements displayed in the window, including but not limited to Cut, Fill, and Total Volume.


(Fig. 19)  Volume measurement of a surface in Pix4D.

The last feature in Pix4D I explored was creating a "fly-through" animation video of the mosaic image.  Following the help instructions, I created waypoints for the video and the software produced the following video (Fig. 20).  To create a fly-through animation, right-click on Objects in the rayCloud menu and select New Video Animation Trajectory.  From this menu you can create your own trajectory or use the flight path records to create the animation.  You have the ability to adjust the speed and duration of the video.  Once finished, you have to Render the video before you can export it.

(Fig. 20) Fly through video created in Pix4D.


Results

(Fig. 21) GEMs imagery map mosaic of a pond at South Middle School in Eau Claire, Wisconsin. I exported the measurements made in Pix4D as shapefiles and displayed them on the map. 




(Fig. 22) Canon SX260 map mosaic of a portion of South Middle School in Eau Claire, Wisconsin.

Discussion

Pix4D is a very user-friendly program to operate.  The methods and tools described above are just the basic operations of the program.  One of the most useful functions in Pix4D was the ability to export measurements as shapefiles.  Displaying measurements on maps is a great way to give a sense of distance to the reader.  The help feature and software manual are very useful and easy to understand.  When you run into an issue, a simple search leads you to a link with the answers.

The only down side to Pix4D is the amount of processing power and time required to fully process images into a mosaic.  The program will crash conventional computers which have low RAM capacities.

The fly-through animation was an interesting feature, but like any video files they take up a lot of space.  I created a longer animation video than the one displayed above but could not embed it in Blogger due to size restrictions.

Overall, I am impressed with the Pix4D software.  I look forward to exploring additional tools and uses within Pix4D throughout the semester.



Sunday, February 14, 2016

Use of GEMs Processing Software

Introduction

Get familiar with the product (Questions in italics)

What does GEMs stand for?

GEMs is an acronym for Geo-localization and Mosaicing System.

Name what the GSD and pixel resolution are for the sensor. Why is that important for engaging in geospatial analysis? How does this compare to other sensors?


(Fig. 1)  GEMs sensor from Sentek Systems.


The ground sampling distance (GSD) for the GEMs is 5.1 cm at 400 ft and 2.5 cm at 200 ft.  The pixel resolution for the sensor is 1.3 megapixels (MP) for both the RGB and the Mono cameras.  Knowing the GSD and pixel resolution helps you determine the quality of the data, which in turn helps you select the correct sensor for the task.  The pixel resolution is very low by today's standards; the majority of cameras, including the one in your cell phone, have a pixel resolution of 10 MP or higher.
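For a fixed nadir-pointing camera, GSD scales roughly linearly with flying height, which is why the two quoted figures are nearly a factor of two apart.  A minimal Python sketch of that relationship:

```python
def gsd_at_altitude(gsd_ref_cm, alt_ref, alt):
    """GSD scales linearly with flying height for a fixed nadir camera.

    gsd_ref_cm: known GSD at the reference altitude alt_ref; alt_ref and
    alt must share the same unit (feet here).
    """
    return gsd_ref_cm * (alt / alt_ref)

# From the 2.5 cm @ 200 ft spec, predict the GSD at 400 ft
print(gsd_at_altitude(2.5, 200.0, 400.0))  # 5.0 cm, close to the quoted 5.1 cm
```

The small gap between the predicted 5.0 cm and the quoted 5.1 cm is within rounding of the published specs.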

How does the GEMs store its data?

The data collected by the GEMs is stored on a USB jump drive which is mounted on the sensor during the flight.

What should the user be concerned with when mounting the GEMs on the UAS?

The following is a list of concerns when mounting the GEMs sensor on a UAS platform.

  • The GEMs sensor is designed for the cameras to point downward toward the ground.
  • The GEMs sensor should be attached to the flattest portion of the underside of the UAS platform.
  • You should not place any magnetic material within 4" of the GEMs sensor.
  • Minimize vibrations to the sensor as much as possible.
  • Reduce electromagnetic interference (EMI) to prevent jamming of the GPS unit.
Examine Figures 17-19 in the hardware manual and relate that to mission planning. Why is this of concern in planning out missions?
  • Figure 17 displays that as the elevation of the sensor rises, the GSD also rises.
  • Figure 18 displays the correlation between the elevation of the sensor, the speed of the platform, and the quality of imagery the sensor can capture.
  • Figure 19 displays the correlation between the elevation of the sensor, the row spacing of the flight path, and the quality of imagery the sensor can capture.
These figures are of great concern when planning UAS missions.  You have to set parameters including the altitude, speed, and row spacing when programming the mission plan.  You want to set the parameters to collect the highest quality data you can, but you must also be concerned with the flight time of the UAS.  You would also take into consideration the goal of the project to determine the quality of data you need.  Setting these parameters incorrectly will give you erroneous results, and you will have to adjust the parameters and refly the mission.
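The trade-off between row spacing and flight time can be roughed out before programming the mission.  Below is a minimal Python sketch with hypothetical site dimensions and cruise speed (turns and climb are ignored):

```python
import math

def mission_estimate(area_width_m, area_length_m, row_spacing_m, speed_mps):
    """Rough line count and flight time for a lawnmower pattern.

    Returns (number_of_lines, flight_minutes); turn time is ignored.
    """
    lines = math.ceil(area_width_m / row_spacing_m) + 1  # lines to span the width
    distance = lines * area_length_m                     # total along-track distance
    return lines, distance / speed_mps / 60.0

# Hypothetical 400 m x 600 m site, 60 m row spacing, 12 m/s cruise
lines, minutes = mission_estimate(400.0, 600.0, 60.0, 12.0)
```

Halving the row spacing for better imagery roughly doubles the line count and the flight time, which is exactly the tension the manual's figures describe.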

Write down the parameters for flight planning software (page 25 of hardware manual). Compare those with other sensors such as the Canon SX260, Canon S110, Nex 7, DJI Phantom sensor, and GoPro.

When comparing the GEMs with the listed sensors, you can see the GEMs sensor has the lowest megapixels of any sensor on the list (Fig. 3).  I was unable to obtain all the information for all the sensors, but with the information I was able to obtain you can see the GEMs sensor falls behind in every category.

Read the 1.1 Overview section. Then do a bit of online research and answer what the difference between orthomosaic and mosaic for imagery (orthorectified imagery vs. georeferenced imagery). Is Sentek making a false claim? Why or why not?

The orthomosaic process is a combination of orthorectification and mosaicking.  Orthorectification is the term to focus on here.  "Ortho-rectification is the process of correcting imagery for distortion using elevation data..." (ImStrat)  The GEMs software does not utilize any elevation data in its calculations when creating the mosaic images.  It is with this information that I believe Sentek is making a false claim of creating "Orthomosaiced RGB, NIR, and NDVI imagery." (Sentek)

What forms of data are generated by the software?

The software generates the following data:
  • RGB (Red, Green, Blue additive color model)
  • NIR (Near-Infrared)
  • NDVI (Normalized Difference Vegetation Index)
  • GPS Coordinates for previous images
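NDVI, the vegetation index in that list, is computed per pixel from the NIR and red bands.  A minimal Python sketch (the reflectance values are hypothetical):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel: (NIR - R) / (NIR + R)."""
    if nir + red == 0:
        return 0.0  # avoid division by zero on empty pixels
    return (nir - red) / (nir + red)

# Healthy vegetation reflects strongly in NIR and absorbs red
print(round(ndvi(0.50, 0.08), 2))  # high positive value, about 0.72
```

Values run from -1 to 1, with healthy vegetation toward the high end, which is what the FC1/FC2 color scales discussed later are displaying.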
How is data structured and labeled following a GEMs flight? What is the label structure, and what do the different numbers represent?

The images are stored in the folder by type and in numerical order.  The only images in the folder after the flight are Mono0 images and RGB0 images.  There are also a number of text files/folders along with the .bin file located in the folder.

The labeling scheme for flight date and time on the folder is GPS time.  Week=# where that # corresponds to a specific date, and TOW=H-M-S (Hours-Minutes-Seconds).  The GPS time is automatically generated by the GPS unit, and the script within the GEMs unit automatically attaches it to the folder for the flight.


What is the file extension of the file the user is looking to run in the folder?

The user is looking for a .bin file to open in the Sentek software program.

Methods

What is the basis of this naming scheme? Why do you suppose it is done this way? Is this a good method? Provide a critique.

The labeling scheme for flight date and time on the folder is GPS time.  Week=# where # corresponds to a specific date, and TOW=H-M-S (Hours-Minutes-Seconds).  The GPS time is automatically generated by the GPS unit, and the script within the GEMs unit automatically attaches it to the folder for the flight.  To convert the time to conventional dates and times you must find a conversion calculator.

The GPS time is a good theory but leaves a bit to be desired when the operator doesn't understand it and cannot quickly convert it to conventional time.  The conversion calculator I linked above took a bit of searching to locate.  Additionally, this conversion calculator was not created by the company; it was created by an independent third party.  How do I know the conversion calculation they are using is correct?  I feel it would be best if Sentek incorporated an automatic conversion into the script or a converter into the software which comes with the GEMs unit.

Explain how the vegetation relates to the FC1 colors and to the FC2 colors. Which makes more sense to you? Now look at the Mono and compare that to the vegetation.


(Fig. 1) Display bar scales for FC1 (Left) and FC2 (Right).

The display scales for FC1 and FC2 both represent vegetation health.  With FC1, red is displayed as the healthiest color.  With FC2, green is displayed as the healthiest color.

I feel FC2 makes more sense for the color scheme.  When I think of healthy vegetation, I envision bright green leaves and not red.  All my life I have been taught to associate red with danger: stop lights, firetruck lights, lava, etc.
(Fig. 2) Display bar for both Mono images.
The display for the Mono image has the unhealthy vegetation displayed in black and the healthy vegetation displayed as shades of gray, with white being the healthiest.  This scheme follows a conventional way of thinking, and the average reader would interpret the information properly.
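The mono rendering can be thought of as a linear stretch of the NDVI range onto 8-bit gray values.  This is my own sketch of the idea, not Sentek's actual display code:

```python
def ndvi_to_gray(ndvi):
    """Map an NDVI value in [-1, 1] to an 8-bit gray level.

    -1 (no/unhealthy vegetation) -> 0   (black)
     1 (healthiest vegetation)   -> 255 (white)
    """
    ndvi = max(-1.0, min(1.0, ndvi))  # clamp out-of-range values
    return round((ndvi + 1.0) / 2.0 * 255)

print(ndvi_to_gray(-1.0))  # -> 0   (black)
print(ndvi_to_gray(1.0))   # -> 255 (white)
print(ndvi_to_gray(0.0))   # -> 128 (mid gray)
```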

Now go to section 4.5.5 of the software manual and list what the two types of mosaics are. Do these produce orthorectified images? Why or why not?


Two types of Mosaics
  1. Fast Mosaic
  2. Fine Mosaic
No, neither of these mosaics produces an orthorectified image.  As stated before, to be "orthorectified" an image must be corrected using elevation data, and the definition used in the manual for the fine mosaic only says it uses "techniques to finely align the imagery".

Generate Mosaics.  Describe the quality of the mosaic. Where are there problems. Compare the speed with the quality and think of how this could be used.

The quality of the mosaics is decent, and the time to generate them was short compared to more complicated systems.  This could be very beneficial when working in the field and you want to make sure your flight successfully captured your images.  If the flight was not successful you would know immediately and would have the opportunity to make the necessary adjustments and refly the mission.  Additionally, if you could analyze the results right in the field, you could also use that information to locate and physically examine points of interest before leaving the site.

There are areas in the images which do not line up perfectly.  Most noticeably, on the east portion of the images there is an alignment issue (Fig. 7-11).

Navigate to the Export to Pix4D section. What does it mean to export to Pix4D? Run this operation and look at the file. What are the numbers in the file used for? (Hint: you will use this later when we use Pix4D)

The GEMs software generates a file which can be opened and utilized in the program Pix4D.  Pix4D is photogrammetry software which CAN produce orthomosaiced 3D images if the proper elevation values are captured with the imagery.  The software produces Excel files which contain the latitude, longitude, Omega, Phi, and Kappa for each image in the respective modes (NDVI FC1, NDVI FC2, NIR, RGB).

What is a geotif, and how can it be used? 

Geotiff refers to a TIFF file which has geographic data embedded within the image file.  The geographic data contained within the image can be utilized to position the image in the correct geographic location in software programs such as ArcMap.  The GeoTIFF format is completely open, which makes the file type interoperable.  The file behaves exactly like a regular TIFF but carries the additional geographic data.
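The embedded georeferencing boils down to a transform from pixel (column, row) indices to map coordinates.  Here is a minimal sketch of the common six-parameter affine geotransform (GDAL-style parameter order; the origin and pixel-size numbers are made up):

```python
def pixel_to_geo(gt, col, row):
    """Apply a six-parameter affine geotransform (GDAL-style order:
    origin_x, pixel_width, row_rotation, origin_y, col_rotation,
    pixel_height) to convert pixel indices to map coordinates."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Hypothetical north-up GeoTIFF: origin at (500000, 4970000) on a UTM
# grid, 0.05 m pixels (pixel_height is negative: rows increase southward).
gt = (500000.0, 0.05, 0.0, 4970000.0, 0.0, -0.05)
print(pixel_to_geo(gt, 0, 0))      # -> (500000.0, 4970000.0)
print(pixel_to_geo(gt, 100, 200))  # -> (500005.0, 4969990.0)
```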

Go into the Tiles folder and examine the imagery. How are the geotifs different than the jpegs?

After analyzing the images I cannot see any visual difference when viewing them in the Windows Explorer viewer.

Now open Microsoft Image Composite Editor (ICE) software and generate a mosaic for each set of images. What is the quality of the product compared to GEMs. Does this produce a Geotif? Where might Microsoft Image Composite Editor be useful in examining UAS data?

The resolution quality seems to be the same as the GEMs software output, though the color schemes are different.  The black background makes the image stand out better.  However, the program lacks the ability to fully "stitch" or mosaic the images correctly, and it does not produce a GeoTIFF.  All of the stitched images I created were "upside down" (south was at the top of the image).  The ICE program could be very useful in the field for analyzing captured images.  The program runs fairly fast and would allow you to "stitch" your images together to see the results and make sure you didn't have any gaps in your captured data.  All of the terminology throughout the program refers to panoramic images, which I feel led to some of the errors I encountered.

Results



(Fig. 3) Comparison of various sensors typically attached to UAS platforms.


Microsoft Image Composite Editor

The first set of images I ran through the ICE software was the NDVI FC1 images.  Inspecting the results and comparing them to the output of the GEMs software shows the ICE software stitched the image together incorrectly (Fig. 4).  The large yellow clump of vegetation on the left side of the image is incorrectly placed.
(Fig. 4) NDVI FC1 image "stitched" in Microsoft ICE.


The second set of images I ran through the ICE program was the NDVI FC2 images (Fig. 5).  The results were better than the FC1 images, and the majority of the image is assembled correctly in comparison to the first one.  However, there are still a number of errors in the stitched image.  The square/rectangle sitting by itself at the bottom of the image is the most noticeable error.

(Fig. 5) NDVI FC2 image "stitched" in Microsoft ICE.

The final set of images I ran through the ICE program was the NDVI mono images (Fig. 6).  The created image has the fewest errors of the three images I ran through the program.
(Fig. 6) NDVI Mono image "stitched" in Microsoft ICE.
Additional research will be needed to work the bugs out of the ICE system to render it useful in the UAS world.


GEMS Software

The fine mono display gives the best image clarity of the two mono displays (Fig. 7).  The image more precisely displays which locations are healthy and which are not.
(Fig. 7) Mosaic image of the Fine Mono results from the GEMs sensor.

The fine NDVI mono displays lower contrast between healthy and unhealthy vegetation.  The brightest (healthy) areas of (Fig. 8) seem to run together and are more generalized.  According to the software manual, the NDVI mono is beneficial when monitoring the emergence of new vegetation.
(Fig. 8) Mosaic image of the Fine NDVI Mono results from the GEMs sensor.

The NDVI FC1 shows the same results as the NDVI mono but displayed with a different color ramp (Fig. 9).  This adjustment to the color ramp makes it easier to read and see variations in vegetation health.  However, displaying the "healthy" vegetation as orange/red gives the reader the wrong idea of the plant's health, in my opinion.


(Fig. 9) Mosaic image of the Fine NDVI FC1 results from the GEMs sensor.

The NDVI FC2 image is the same display as the NDVI FC1, but the color ramp has been changed to fix the issue of readers misunderstanding the displayed results (Fig. 10).  Red displays the unhealthy vegetation and green displays the healthy vegetation.  After comparing FC1 and FC2, I believe FC1 shows better contrast between healthy and unhealthy vegetation but misleads the reader.
(Fig. 10) Mosaic image of the Fine NDVI FC2 results from the GEMs sensor.

The fine RGB mosaic displays a great overhead view of the area (Fig. 11).  Comparing this image to the above results, it is easy to see that the darker green vegetation in the image is the healthiest.  This image is very beneficial when trying to locate exact areas and types of vegetation in the study area.
(Fig. 11) Mosaic image of the Fine RGB results from the GEMs sensor.


Conclusion

Relate the GEMs sensor to the software.

The GEMs sensor and the accompanying software integrate well together and produce results in a timely manner.  The software is super easy to use, and the instructions walk you through the proper steps to achieve the desired results.  The operator of the program need not be a remote sensing expert to understand the basics.  The only complaint I have is that the file/date labeling system makes it complicated to decipher which day the flight was actually flown.

Relate the GEMs to UAS applications.

The GEMs sensor and software have the ability to be a valuable tool in various UAS applications.  The weight and size of the sensor make it capable of being attached to virtually any platform in the UAS industry.

One of the limiting factors with the GEMs sensor is its narrow field of view, which requires long flight times to cover a small area.  Most multirotor UAS platforms only have a maximum flight time of 30 minutes, so flying an actual farm field of any size would require multiple flights.  If the company were to increase the field of view, you could dramatically decrease flight times and obtain the same quality of data with the proper end lap and side lap set in the parameters.

Overall provide your impression of the GEMs imagery and the sensor.

The GEMs sensor in general is a good tool to be used with UAS platforms, and with a few minimal adjustments it could be great.  For what this sensor costs I would have expected a higher resolution camera; my cell phone, which costs considerably less, has a better camera.  I feel the technology could be borrowed from cell phones to increase the megapixels of the GEMs sensor.  The low megapixel count reduces the sensor's ability to focus in on individual plants, which would be useful in orchards or other tree-type applications.  As stated above, the narrow field of view restricts its use for real farming applications.  With all the sensor options on the market today, I cannot say this sensor would be at the top of my list until improvements are made to the camera's field of view and pixel resolution.

Sources

ImStrat Corporation PDF document http://www.imstrat.ca/uploads/files/brochures/orthomosaic.pdf
Sentek Software User Manual