Wednesday, October 3, 2012

Journalism Drone Development: aerial photo mosaics, and what's the spatial resolution on this drone, anyway?


Above is an aerial mosaic -- a series of 11 photos taken from a small unmanned aerial vehicle (colloquially known as a drone) that have been stitched together in a mosaicking program.

That program, Microsoft Image Composite Editor, is normally used to stitch together a series of sweeping photos taken from the ground to make a single panoramic image. However, the algorithm used to find and match the edges of a series of sweeping photos of, say, the Grand Canyon, is the same algorithm needed to fit photos together to make a map or similar map-esque image from aerial photos.
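If you'd rather script that stitching step than run it through a GUI, OpenCV's Stitcher class performs a similar kind of feature matching and blending. Here's a minimal sketch, assuming OpenCV 4.x and a folder of overlapping frames; the file names are placeholders, and this is an alternative to, not the tool behind, the mosaic above.

```python
# Minimal mosaicking sketch with OpenCV's Stitcher (an alternative to Microsoft ICE).
# File names are placeholders for the 11 overlapping aerial frames.
import cv2

frames = [cv2.imread(f"frame_{i:02d}.jpg") for i in range(1, 12)]
frames = [f for f in frames if f is not None]  # drop any frames that failed to load

stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # SCANS mode suits flat, map-like scenes
status, mosaic = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("mosaic.jpg", mosaic)
else:
    print(f"Stitching failed with status code {status}")
```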

So, what kind of drone journalism could you do with this kind of image? Aerial photographers have been able to capture a breathtaking, panoramic view of Moscow protests from drones. These drones offer a perspective that is especially helpful for documenting the scope or extent of protests, political rallies, construction projects, landmarks, geographic features, and natural and man-made disasters.

But what kind of data journalism can you do with these drones? That is to say, what kind of hard data can you obtain from these images to launch investigations? How about proving the existence or extent of something, such as oil spills, wildfires, droughts, or lax construction codes following a disaster, with actual metrics?



To answer those questions and more, it's helpful to first learn a very important term: photogrammetry. What is photogrammetry, exactly? Here's a definition from the American Society for Photogrammetry and Remote Sensing:


Photogrammetry is “the art, science and technology of obtaining reliable information about physical objects and the environment through the process of recording, measuring and interpreting photographic images and patterns of electromagnetic radiant imagery and other phenomena.”

Simply put, we have to be able to make reliable measurements from our drone-based cameras before we can derive any sort of reliable conclusions about the content of our photos.

When we talk about measuring things from aerial photos, spatial resolution is key. That's the "minimum distance between two adjacent features or the minimum size of a feature, that can be detected by a remote sensing system," according to Geocomm.com. It's the smallest size object your camera sensor can pick up.

When we're working with metrics, however, it's more common to talk in terms of Ground Sample Distance, or GSD. This is expressed in units of distance per pixel, usually centimeters per pixel. In other words, if I have an image that I know has a GSD of 10cm/px, and I count an object on that image as having a width of 10 pixels, its real-life width is going to be 100cm, or 1m.
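That pixels-to-centimeters conversion is simple enough to put into a couple of lines of code. Here's a quick sketch of the arithmetic from the example above, in Python, just to make the units explicit:

```python
# Convert a pixel measurement into a real-world size using the ground sample distance (GSD).
gsd_cm_per_px = 10.0     # this image's GSD: 10 cm per pixel
object_width_px = 10     # the object spans 10 pixels in the image

object_width_cm = object_width_px * gsd_cm_per_px
print(f"{object_width_cm:.0f} cm, or {object_width_cm / 100:.0f} m")   # 100 cm, or 1 m
```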

Let's say we're working on a journalism project where money is no object. What kind of spatial resolution could we expect from high-end systems? Leica advertises its ADS80 sensor, a top-of-the-line system mounted in manned aircraft, as delivering a 5cm GSD while the aircraft flies at 140 knots, or 161 mph.

For further comparison, the GeoEye-1 satellite, launched in September 2008, has a GSD of 41cm. The WorldView 3 satellite, due to launch in 2014, will have a resolution of 31cm.

How does the system that created the image at the top of the post compare?

This is the drone used to obtain the aerial photos - a balsa-wood RC airplane equipped with an Arduino-based autopilot microcontroller.



As it turns out, the drone already had a space that was well-suited to house the GoPro Hero2 camera I used for this aerial photography mission. Only minor surgery was required, and the camera was eventually secured using foam blocks, hook-and-loop straps, and electrical tape.

The camera was set to take 8MP images (3200px by 2400px) at 0.5s intervals, and the plane was then flown manually above the testing field. Of the 11 photos, I've picked the one with perhaps the most appropriate position and angle:


Zoom in on this photo, and you'll find me operating the plane (I'm wearing a white shirt and an olive-green hat). There are also some items resting on a large, white surface. That surface is normally reserved for preparing aircraft for flight.


Zoom in even closer on the table, and you'll find two black objects. The object on the left is my trusty laptop (used to program the autopilot in the field, and also to compose this very blog post). The object on the right is the leather satchel that carries the laptop.


Seeing as I'm sitting in front of that laptop at this moment, I know the dimensions of my Lenovo R61 very well. It's 33.5cm long by 23.7cm wide. Let's take the longer of those two dimensions and approximate the spatial resolution.

According to Photoshop, the laptop measures 10.83px long. Dividing 33.5cm by 10.83px, we get a GSD of 3.09cm/px.

And just to double-check that calculation, Photoshop measures the width of the laptop as approximately 7.9px. Multiply 7.9px by 3.09cm/px, and we get 24.4cm -- a 7mm difference from the laptop's actual width. That makes the margin of error of this oh-so-precise (sarcasm here) method about 2.9 percent.
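For anyone who wants to reproduce the math, here's the same back-of-the-envelope calculation as a short Python sketch, using the laptop measurements above (any small differences from the figures in the text are just rounding):

```python
# Estimate the GSD from a known-size object (the laptop) in the aerial photo,
# then cross-check it against the laptop's other dimension.
laptop_length_cm, laptop_width_cm = 33.5, 23.7   # real-world dimensions of the Lenovo R61
laptop_length_px, laptop_width_px = 10.83, 7.9   # as measured in Photoshop

gsd = laptop_length_cm / laptop_length_px        # ~3.09 cm/px
estimated_width_cm = laptop_width_px * gsd       # ~24.4 cm

error_mm = (estimated_width_cm - laptop_width_cm) * 10                      # ~7 mm
error_pct = 100 * (estimated_width_cm - laptop_width_cm) / laptop_width_cm  # ~3 percent

print(f"GSD: {gsd:.2f} cm/px")
print(f"Cross-checked width: {estimated_width_cm:.1f} cm "
      f"(off by {error_mm:.0f} mm, {error_pct:.1f}%)")
```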

That GSD is about 40 percent smaller than the professional aerial rig's, and roughly a tenth of the best commercial imaging satellite's. But here's another thing to think about: the camera used here has a maximum resolution of 11MP, which could work out to a GSD that's just slightly more than 2cm/px.

Is this $1,000 drone better than a $100,000+ professional sensor? No way. The professional rig has much better spectral fidelity and range. That means the photons of light the sensor picks up are recorded more precisely, and the sensor can record photons from more extreme ends of the spectrum, especially in the non-visible spectrum (i.e., infrared). It's also assisted by a suite of hardware and software to make those images map-accurate.

That's to say nothing of the fact that the camera is attached to a highly mobile, long-range aircraft, enabling the sensor to canvass whole cities in less than a day. I'm not sure how long it would take my drone to canvass a city, but given that its current configuration permits a maximum flight time of 8 minutes, I'm willing to bet the professional rigs have my drone beat in that respect.

Having said that, for the price point and the resolution, a similar drone could be a powerful tool in a data journalist's arsenal.

Exactly how much area can a journalist or researcher cover with this drone? And how do you make those images map-accurate? That's for another post.