Sunday, March 13, 2016
Back in 2012 when I first started flying drones to make high-resolution photomaps (e.g., strapping a first-generation GoPro to the bottom of a balsa-wood DIY drone and hoping for the best), there were few options for processing the photos.
Basically, if you didn't have access to $2,000 software, you only had Microsoft Image Composite Editor (ICE) to stitch together the photos into mosaics. Fortunately, much has changed since then.
Within a window of just two years, a number of software solutions became available. VisualSFM brought free, open-source photogrammetry to tech-savvy hobbyists and researchers. Autodesk's 123D Catch could be used with drone imagery in a pinch. Pix4D, founded in 2011, later gained a huge share of the professional UAS market. I won't get into all the options, but there's a fairly comprehensive comparison table on Wikipedia that you might wish to look at.
The solution I use most often today is Agisoft PhotoScan. The feature set of the standard version is somewhat limited compared to solutions designed specifically for UAS use, but it's also easy to use, the software license is comparatively cheap, and it runs on ordinary desktop machines.
Many photogrammetry services are run in the cloud (123D Catch, Pix4D, DroneMapper), which has its benefits. You don't have to upgrade your machine to run complex models. You don't have to tie up a computer for hours while it processes 500-1,000 photos. You can start a job in another country, send your images to the cloud instance, and the job can be finished by the time you arrive back home.
But processing in the cloud can mean paying fees by the month or by the job. If you like paying a one-time fee for a license, the cloud may not be the most attractive solution.
Thankfully, PhotoScan can be run in the cloud. While it does mean incurring hourly fees for computer time and cloud storage, it can also help in a pinch when you're working on an especially large project.
Wednesday, October 3, 2012
Journalism Drone Development: aerial photo mosaics, and what's the spatial resolution on this drone, anyway?
Above is an aerial mosaic -- a series of 11 photos taken from a small unmanned aerial vehicle (colloquially known as a drone) that have been stitched together in a mosaicking program.
That program, Microsoft Image Composite Editor, is normally used to stitch together a series of sweeping photos taken from the ground to make a single panoramic image. However, the algorithm used to find and match the edges of a series of sweeping photos of, say, the Grand Canyon, is the same algorithm needed to fit aerial photos together into a map or similar map-esque image.
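That panorama-versus-map equivalence comes down to a common geometric core: detect features in overlapping photos, match them, and estimate a homography that warps one photo onto its neighbor. As a rough illustration of the estimation step only, here is a minimal Direct Linear Transform (DLT) sketch in Python with NumPy; the point coordinates and the sample homography are invented for the demonstration, and a real stitcher like ICE also handles feature detection, outlier rejection, and blending:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography H mapping
    src points to dst points (needs at least 4 correspondences)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A: the right singular
    # vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize scale so H[2,2] == 1

# Made-up "ground truth": a mild projective warp between two overlapping frames
H_true = np.array([[1.02,  0.01, 30.0],
                   [-0.01, 0.98,  5.0],
                   [1e-5,  2e-5,  1.0]])

# Five sample pixel positions in the first frame, projected into the second
src = np.array([[0.0, 0.0], [640.0, 0.0], [640.0, 480.0],
                [0.0, 480.0], [320.0, 240.0]])
proj = np.c_[src, np.ones(len(src))] @ H_true.T
dst = proj[:, :2] / proj[:, 2:]

H_est = estimate_homography(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))  # True
```

With noise-free correspondences the warp is recovered almost exactly; with real matched features, stitchers wrap this estimate in a robust scheme such as RANSAC to throw out bad matches.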
So, what kind of drone journalism could you do with this kind of image? Aerial photographers have been able to capture a breathtaking, panoramic view of Moscow protests from drones. These drones offer a perspective that is especially helpful for documenting the scope or extent of protests, political rallies, construction projects, landmarks, geographic features, and natural and man-made disasters.
But what kind of data journalism can you do with these drones? That's to say, what kind of hard data can you obtain from these images to launch investigations? How about proving the existence or extent of something, such as oil spills, wildfires, droughts, or lax construction codes following a disaster, with actual metrics?
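One concrete metric hiding in these images is the ground sample distance (GSD): the real-world width of a single pixel, which sets the smallest feature a photomap can measure. A back-of-the-envelope sketch in Python, using made-up camera numbers loosely in the range of GoPro-class optics (none of these figures come from the drone above):

```python
def ground_sample_distance(sensor_width_mm, focal_length_mm,
                           altitude_m, image_width_px):
    """Ground sample distance in cm per pixel for a nadir
    (straight-down) photo over flat ground."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Hypothetical values, roughly GoPro-class optics flown at 100 m:
gsd = ground_sample_distance(sensor_width_mm=6.17, focal_length_mm=2.77,
                             altitude_m=100.0, image_width_px=4000)
print(f"{gsd:.2f} cm/px")  # about 5.57 cm per pixel
```

Halving the altitude halves the GSD, so flying lower (or using a longer lens) is the usual way to sharpen a survey enough to measure, say, the edge of an oil sheen or a damaged roofline.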