Insights from industry

How Coregistration is Used for Aerial Imaging

In this interview, Corey Fellows talks to AZoOptics about how coregistration is used in aerial imaging.

What is coregistration?

Coregistration is the process of registering two or more images to a common image plane. It can also be used to combine images from different sensors or images of different sizes. In practice, this means that if two images are captured with two separate cameras, or with the same camera in two different positions, both images can be “flattened” into one larger image. Some artefacts might appear in the flattened image if only two source images are used, but if enough images with sufficient overlap are used, the surface can be captured in a homogeneous fashion.

Where is coregistration used?

Coregistration has existed for a number of years and is widespread throughout many industries. For example, the panorama function found on many smartphones, and even on older point-and-shoot cameras, uses some form of coregistration. As the camera pans across a landscape, the software stitches together one large image from a series of smaller images as common elements from each image are found.

A panorama made by stitching multiple images together through coregistration to achieve a very wide field of view

Another common example of coregistration is online map services with “street view.” As the street view car drives up and down city streets, multiple camera angles are stitched together to create one flattened image that can be panned through to virtually look around the environment.

How is coregistration used for aerial imaging?

Aerial imaging can benefit from image coregistration since the ground, from high above, forms one large image plane. For multiple-camera systems, such as a precision agriculture system calculating the Normalized Difference Vegetation Index (NDVI), the images from one camera must be coregistered to the second so that the data from both cameras can be combined into a single dataset. Some image processing is required to map each pixel from camera one to the corresponding pixel in camera two so that both align with the same point on the ground. Once this is done, the images can be processed to calculate the area’s NDVI.
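As background, NDVI is computed per pixel from the red and near-infrared bands as (NIR − Red) / (NIR + Red). A minimal NumPy sketch, assuming the two bands have already been coregistered into same-shaped arrays (the sample values below are invented for illustration):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    # Guard against division by zero where both bands are dark.
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Healthy vegetation reflects strongly in NIR relative to red.
nir_band = np.array([[200.0, 50.0]])
red_band = np.array([[40.0, 50.0]])
print(ndvi(nir_band, red_band))  # approx [[0.667, 0.0]]
```

Equal NIR and red reflectance gives an NDVI of zero, while vegetation-covered pixels trend toward +1.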

However, coregistration can also be accomplished algorithmically using positional data taken directly from something like an inertial navigation system on board the aircraft. By synchronizing the imaging solution, such as a Teledyne Lumenera camera, with the autopilot’s GPIO, the images can be accurately aligned.

False-Color NDVI Approximation

Once images are coregistered to the same image plane, they are stitched together to form an orthorectified image. No matter which part of the image is being observed, the camera appears to be looking straight down at the ground below, known as a “nadir” orientation.


Images stitched together (with overlap between images visualized) from a nadir orientation

Is proprietary software required to perform coregistration?

Not at all; these techniques are readily available in open source software such as OpenCV and are quite simple to use. The functions findHomography() and warpPerspective() are used to calculate the difference between the images and to apply the coregistration, respectively. This technique works with ground control points (GCPs), where known points in each image are mapped to corresponding points in the other image. If GCPs are not used, the function findTransformECC() can be used in place of findHomography() to iteratively determine the difference between the images.

OpenCV is an open source programming library for computer vision applications

How are coregistration measurements processed?

Going into the technical details behind these functions: findHomography() calculates a transformation matrix that describes the relationship between the image from camera one and the image from camera two. The function requires a minimum of four common points from each image to calculate the transformation matrix.

Using findTransformECC(), more parameters need to be specified up front, such as the number of iterations and the level of accuracy required, but the function does not require the identification of common points in both images. This method can therefore require more processing time, depending on the specified tolerances.

Both methods will create a transformation matrix that can be used to coregister the images from two cameras. The beauty of this approach is that once the transformation matrix is computed, regardless of the method, it can be re-used for all image pairs in the data set, provided the position of the cameras in relation to each other and the aircraft’s height above the ground remain the same.

If more than two cameras are involved in the imaging system, each camera will need to be coregistered with one primary camera before further processing takes place. This ensures all the pixels from different images correspond to the same data point on the ground.

What benefits does Teledyne Lumenera bring to the coregistration process?

For accurate coregistration among two or more cameras, the images must be captured at very nearly the same moment. Teledyne Lumenera cameras are known for being highly reliable, so missing or corrupted image data is extremely rare. When it comes to triggering, the camera can utilize either software or hardware triggering mechanisms, although the latter will ensure each image is captured within microseconds of the trigger, allowing for accurate post-processing.

Furthermore, all Teledyne Lumenera cameras can be controlled through the same API, regardless of the camera model. This means that commands to set parameters such as gain and exposure time can be sent simultaneously to all the cameras in the system to ensure consistent image output. This will increase the effectiveness of the findTransformECC() function, allowing it to run faster and more reliably.

About Corey Fellows

As the Director of Sales at Lumenera, Corey Fellows is responsible for strategic account management and OEM business development across the Americas.

Corey has extensive experience in the global digital imaging industry and has helped to create custom and OEM imaging solutions for a wide variety of industrial markets, including transportation, factory automation, biometrics, and unmanned systems.


Disclaimer: The views expressed here are those of the interviewee and do not necessarily represent the views of AZoM.com Limited (T/A) AZoNetwork, the owner and operator of this website. This disclaimer forms part of the Terms and Conditions of use of this website.

