One of GBDX’s most promising capabilities is its ability to identify and quantify a vast assortment of different objects visible in high-resolution satellite imagery. But GBDX’s strongest attribute—its ability to provide a very specific data set—sometimes creates an interesting dilemma: what’s the fastest, most cost-effective way to create a desired GBDX outcome?
We have seen the power and flexibility of artificial intelligence algorithms in the past, as when we successfully used a neural network architecture to identify properties with pools in Australia and remote villages in Nigeria. But training and deploying an effective model is expensive and slow, and while cloud-based computation is relatively cheap, the costs of feature detection do start to add up when you move from a relatively compact AOI to a regional or global scale. What are the alternatives?
Protogen (short for PROTOcol GENerator) is a geospatial image analysis and processing software suite developed within DigitalGlobe and available to GBDX subscribers. It uses state-of-the-art hierarchical image representation structures (called ‘trees’) to efficiently access, retrieve and organize image information content.
Here’s a real-world example of Protogen’s potential. Estimating oil reserves through analysis of high-resolution satellite imagery has become fashionable in geospatial analytics. Oil is typically stored in tanks with floating roofs. As the oil level (and therefore the lid) sinks, the shadow that’s cast on the inside of the tank (and is visible in Earth imagery) provides a good estimate of the fill level. A pretty neat idea.
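The shadow-to-fill-level idea is simple trigonometry. As a minimal sketch (not part of Protogen — the function name, and the simplifying assumption of a vertical-walled tank and a known sun elevation from image metadata, are ours):

```python
import math

def fill_fraction(shadow_len_m, tank_height_m, sun_elev_deg):
    """Estimate a floating-roof tank's fill level from its interior shadow.

    With the sun at elevation angle e, an interior shadow of length L
    (measured along the sun azimuth) implies the roof sits
    d = L * tan(e) metres below the rim, so fill = 1 - d / height.
    Assumes a vertical-walled tank and sun elevation taken from metadata.
    """
    roof_depth_m = shadow_len_m * math.tan(math.radians(sun_elev_deg))
    return max(0.0, 1.0 - roof_depth_m / tank_height_m)

# A 20 m tall tank with a 5 m interior shadow under a 45-degree sun:
print(round(fill_fraction(5.0, 20.0, 45.0), 2))  # 0.75
```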
But how are these oil tanks (regardless of fill level) detected in the first place? With sufficient training data, a neural network can probably learn to identify them—but as we’ve already established, this might not be the most efficient path. What are the other possibilities?
Oil tanks are distinctive. They’re round, they’re relatively big, and they look like bright disks when filled. Using the Protogen max-tree, we can extract oil tanks by simply selecting the max-tree nodes which satisfy certain size and compactness requirements. Here’s an example:
We’ve filtered a WorldView-3 panchromatic image chip from Houston, TX, to extract features with size between 100 m² and 3,500 m², and compactness greater than 0.97 (1.00 being a perfect disk). For an image of this size, this filtering operation is instantaneous.
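Protogen itself is proprietary, but the idea of an attribute filter — keep only connected bright components whose size and roundness match a tank — can be sketched with numpy and scipy. Everything here is illustrative, not Protogen's API: the brightness threshold, the assumed 0.31 m WorldView-3 panchromatic pixel size, and our compactness measure (component area over the area of its enclosing circle, so a perfect disk scores near 1.0):

```python
import numpy as np
from scipy import ndimage

GSD_M = 0.31  # assumed WorldView-3 panchromatic resolution, metres/pixel

def bright_disks(pan, min_m2=100, max_m2=3500, min_compactness=0.97):
    """Keep bright connected components whose size and roundness match an
    oil tank. Illustrative stand-in for Protogen's max-tree attribute
    filter; compactness = area / area of the enclosing circle about the
    centroid, ~1.0 for a perfect disk."""
    mask = pan > pan.mean() + 2 * pan.std()   # crude brightness threshold
    labels, n = ndimage.label(mask)
    keep = np.zeros_like(mask)
    for i in range(1, n + 1):
        rows, cols = np.nonzero(labels == i)
        area_m2 = rows.size * GSD_M ** 2
        r_max = np.hypot(rows - rows.mean(), cols - cols.mean()).max() + 0.5
        compactness = rows.size / (np.pi * r_max ** 2)
        if min_m2 <= area_m2 <= max_m2 and compactness >= min_compactness:
            keep[rows, cols] = True
    return keep
```

On a synthetic chip, a bright disk passes this filter while an equally bright square of similar area is rejected on compactness alone — which is exactly the discrimination the max-tree filter exploits.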
If we want to increase recall, we can decrease the minimum compactness. Here is another example if we set this value to 0.8:
We’ve picked up most of the tanks and a bit of noise. Not really a problem: we can use our crowd to weed out the false positives, as we have successfully done in the past. You can imagine this workflow at scale: Protogen detects oil tank candidates on an entire strip, then the crowd cleans up the results. Much faster than having the crowd scan the entire strip, and much more accurate than doing it strictly with Protogen.
Protogen also includes a vectorization module that produces a geojson file with the bounding boxes of the detected oil tanks:
Having vectors makes it easy to count. According to Protogen, there are 133 oil tanks (give or take!) in this image segment.
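Since the vectorizer writes a standard GeoJSON FeatureCollection (one bounding-box polygon per detection), counting is a few lines of Python — the file name and helper below are our own, not part of Protogen:

```python
import json

def count_features(geojson_path):
    """Count detections in a GeoJSON FeatureCollection, such as the one
    Protogen's vectorization module produces (one bounding-box feature
    per candidate oil tank)."""
    with open(geojson_path) as f:
        return len(json.load(f)["features"])

# e.g. count_features("oil_tanks.geojson") -> number of detected tanks
```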
GBDX makes it easy to run Protogen at scale. You can explore a full-resolution slippy map of oil tanks in Houston here. How about a different location? Want to find all the oil tanks in Cushing, OK?
The orderly spots to the north and south of the image center correspond to oil tanks, while the randomly scattered spots are noise. Check out this close-up:
You can find the full story here. We are currently working on improving the accuracy of our oil tank detector by using Protogen’s Land Use Land Cover classification method on the multispectral image to filter out false detections on soil and water, and by combining Protogen with machine learning. Stay tuned for updates!
There are large regions of the planet which (although inhabited) remain unmapped to this day. DigitalGlobe has launched crowdsourcing campaigns to detect remote population centers in Ethiopia, Sudan and Swaziland in support of NGO vaccination and aid distribution initiatives. This is one of several ongoing efforts to fill in the gaps in the global map so first responders can provide relief to vulnerable, yet inaccessible, people.
Crowdsourcing the detection of villages is accurate but slow. Human eyes can easily detect buildings, but it takes them a while to cover large swaths of land. In the past, we have combined crowdsourcing with deep learning on GBDX to detect and classify objects at scale. This is the approach: collect training samples from the crowd, train a neural network to identify the object of interest, then deploy the trained model on large areas.
In the context of a recent large-scale population mapping campaign, we were faced with the usual question. Find buildings with the crowd, or train a machine to do it? This led to another question: can the convolutional neural network (CNN) that we trained to find swimming pools in Adelaide be trained to detect buildings in Nigeria?
To answer this question, we chose an area of interest in northeastern Nigeria, on the border with Niger and Cameroon. DigitalGlobe’s image library furnished the required content: nine WorldView-2 and two GeoEye-1 image strips collected between January 2015 and May 2016.
We selected four WorldView-2 strips, divided them into square chips of 115 m per side (250 pixels at sensor resolution) and asked our crowd to label them as ‘Buildings’ or ‘No Buildings’. In this manner, we obtained labeled data to train the neural network.
The trained model was then deployed on the remainder of the strips. This involved dividing each image into chips of the same size as those that we trained on, then having the model classify each individual chip as ‘Buildings’ or ‘No Buildings’.
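The chipping step can be sketched in a few lines. This is a simplified illustration, not our production pipeline: the function name is ours, partial edge tiles are simply skipped here, and the model call is only indicated in a comment:

```python
import numpy as np

CHIP_PX = 250  # 115 m per side at sensor resolution

def iter_chips(strip):
    """Tile an image strip (H x W x bands array) into the same 250-pixel
    chips the model was trained on, yielding (row, col, chip).
    Partial edge tiles are skipped in this sketch; a deployed pipeline
    would pad them instead."""
    h, w = strip.shape[:2]
    for r in range(0, h - CHIP_PX + 1, CHIP_PX):
        for c in range(0, w - CHIP_PX + 1, CHIP_PX):
            yield r, c, strip[r:r + CHIP_PX, c:c + CHIP_PX]

# The trained model then scores each chip, e.g. (hypothetical call):
# results = [(r, c, model.predict(chip[None]))
#            for r, c, chip in iter_chips(strip)]
```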
The result: a file which contains all the chips classified as ‘Buildings’ or ‘No Buildings’, along with a confidence score on each classification.
Here are sample classifications of the model:
The intensity of green is proportional to the confidence of the model in the presence of a building. It is apparent that confidence increases with building density. The model is doing its job!
What is the neural network actually learning? Below are examples of hidden layer outputs produced during classification of a chip that contains buildings. Note that as the chip is processed by successive layers, the locations of buildings become more and more illuminated, leading to a high confidence decision that the chip contains buildings.
Here is a bigger sample of the results. A quick check on Google Maps shows that most of these villages are not on the map.
So to answer our original question: yes, the same neural network architecture used successfully to detect swimming pools in a suburban environment in Australia can be used to detect buildings in the Nigerian desert. The trained model can classify approximately 200,000 chips (a little over 3,000 km²) on a GPU-equipped Amazon instance. GBDX allows the parallel deployment of the model over an arbitrary number of strips — making continental-scale mapping of population centers a reality.
Driven by tides, powerful sea currents and overall climate change, coastal change threatens shore communities and local economies. Accurate detection and measurement of coastal change can inform scientific investigations and facilitate flooding disaster preparedness and mitigation.
What do we mean by coastal change detection and measurement? We want to find where water has replaced land and vice versa, as well as the extent of these phenomena. So we developed an end-to-end GBDX workflow for coastal change detection and measurement at the native resolution (<2 m) of our 8-band multispectral imagery.
Let’s consider a sample area of interest that you may be familiar with: Cape Cod, a region well known for extreme changes in the coastal landscape. The image boundaries and their intersection are shown in the following figure.
The workflow takes two roughly collocated images of Cape Cod, captured in 2010 and 2016 by WorldView-2 and WorldView-3, and computes coastal change over the entire images (roughly 1,500 km²) in less than 30 minutes. Change is detected by aligning the two images, computing a water mask in each one, and then overlaying the two masks to compute the difference.
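The mask-and-difference step can be sketched with numpy. This is a minimal illustration, not our production workflow: it assumes the two images are already aligned, uses the standard Normalized Difference Water Index (NDWI) on the green and NIR bands as the water mask, and the 0.3 threshold is an illustrative choice rather than a tuned value:

```python
import numpy as np

def water_mask(green, nir, threshold=0.3):
    """Water is bright in green and dark in NIR, so
    NDWI = (G - NIR) / (G + NIR) is high over water.
    The threshold here is illustrative, not a tuned value."""
    ndwi = (green - nir) / np.maximum(green + nir, 1e-6)
    return ndwi > threshold

def coastal_change(green_t0, nir_t0, green_t1, nir_t1):
    """Overlay the two aligned water masks: +1 where land became water
    (water gain), -1 where water became land (water retreat), 0 unchanged."""
    w0 = water_mask(green_t0, nir_t0)
    w1 = water_mask(green_t1, nir_t1)
    return w1.astype(np.int8) - w0.astype(np.int8)
```

Coloring the -1 and +1 pixels (red for loss, green for gain, as in the snapshots below) and measuring the distance between the old and new waterlines yields the change heat map.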
This is a close-up of an area where water has retreated, most likely due to extreme tidal effects.
And here is the change heat map:
The colors represent the degree of water retreat. Note that in some areas the water has retreated by 1 km!
Here is a snapshot of the Chatham area. Red indicates water loss and green indicates water gain. Note that water loss is due to tidal effects, while water gain is most likely due to shifting sand bars.
And here’s a snapshot of the Marconi transatlantic wireless station area. The red blob on the left indicates the presence of a tidal marsh.
Have these dramatic results and images caught your attention? You can find the full story at gbdxstories.digitalglobe.com/coastal-change, complete with Python code and a full resolution coastal change map!