Earth observation and AI
Traditionally, the various environments on the land were interpreted and classified by human photo interpreters, an approach that produces excellent results. However, it requires significant analysis time and is not feasible when the volume of images to analyze is extremely high.
For over 20 years, machine learning classification algorithms, such as support vector machines (SVM) and Random Forest, have made it possible to automate image analysis. However, prediction accuracy can vary greatly and often does not match the work of an experienced photo interpreter.
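To make this concrete, here is a minimal sketch of pixel-wise classification with a Random Forest using scikit-learn. The spectral values, band layout, and class labels are entirely synthetic and illustrative; they do not come from the project described in this article.

```python
# Minimal sketch: classify pixels by their spectral signature with a
# Random Forest. All data below is synthetic and illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Fake "spectral signatures": 4 bands (e.g. blue, green, red, NIR) per
# pixel, two classes with different mean reflectance.
n_per_class = 500
water = rng.normal(loc=[0.05, 0.06, 0.04, 0.02], scale=0.01, size=(n_per_class, 4))
vegetation = rng.normal(loc=[0.03, 0.08, 0.05, 0.40], scale=0.02, size=(n_per_class, 4))

X = np.vstack([water, vegetation])
y = np.array([0] * n_per_class + [1] * n_per_class)  # 0 = water, 1 = vegetation

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

Note that each pixel is classified independently from its spectral values alone; no spatial neighborhood is considered, which is exactly the limitation discussed next.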
One reason for this gap is that the environments to be classified can be highly complex. These algorithms have difficulty capturing the patterns found in such environments because they do not necessarily consider the context surrounding the object. Still, this approach gives a useful approximation of land-use classes and can serve as a tool for everyday projects.
In recent years, AI has become a must in the field of Earth observation. One reason for this growing popularity is the ability of convolutional neural networks (CNNs) to take into account the context in which the object to be classified is located, which is crucial for correctly identifying objects on the ground. The flexibility of these architectures also makes them well suited to classifying satellite and aerial images. Finally, it has been reported that, in some cases, the accuracy of AI predictions came close to that obtained by photo interpreters.
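The idea that a convolution takes context into account can be shown with a few lines of plain NumPy: each output value is a weighted sum over a pixel's neighborhood, not a function of a single pixel. The image and kernel values below are arbitrary, chosen only to illustrate the mechanism.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation): each output value is a
    weighted sum over a neighborhood of the input, not a single pixel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 5x5 image with a single bright pixel in the centre; an averaging
# kernel spreads that pixel's influence over all its neighbours.
image = np.zeros((5, 5))
image[2, 2] = 1.0
kernel = np.ones((3, 3)) / 9.0
response = conv2d(image, kernel)  # 3x3 output, every window sees the bright pixel
```

In a CNN, the kernel weights are not fixed like this averaging filter; they are learned during training, so the network discovers which neighborhood patterns matter for each class.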
Wetland detection using multispectral imagery and deep learning
For just over a year, there has been an interest in identifying wetlands using multispectral satellite imagery. To do this, an approach is being developed around a deep learning CNN model. The model is trained on many annotated patches representing the various classes of interest, extracted from optical images captured by the Sentinel-2A and Sentinel-2B satellites. By feeding the model these patches, it learns to extract features that distinguish the various types of wetland.
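A sketch of what extracting annotated patches from a multispectral scene might look like, in plain NumPy. The array shapes, band count, and annotation format here are assumptions for illustration, not the project's actual data pipeline (real Sentinel-2 products have more bands than the four used here).

```python
import numpy as np

def extract_patch(image, row, col, size):
    """Cut a size x size patch across all bands, centred on (row, col).
    `image` has shape (bands, height, width); returns (bands, size, size)."""
    half = size // 2
    return image[:, row - half: row + half + 1, col - half: col + half + 1]

# Fake scene: 4 bands, 100x100 pixels (illustrative only).
rng = np.random.default_rng(0)
scene = rng.random((4, 100, 100)).astype(np.float32)

# Hypothetical annotations: (row, col, class_id) triples, e.g. points
# validated in the field and mapped to pixel coordinates.
annotations = [(30, 40, 1), (55, 70, 0), (80, 20, 2)]

patches = np.stack([extract_patch(scene, r, c, size=9) for r, c, _ in annotations])
labels = np.array([cls for _, _, cls in annotations])
```

Pairs like `(patches, labels)` are what a CNN training loop consumes: the network repeatedly sees labeled neighborhoods and adjusts its filters to separate the classes.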
These neural networks behave a bit like a brain while learning: they develop the ability to recognize an object after seeing it repeatedly. Eventually, the trained model can be given a satellite image covering a region of interest it has never “seen,” and it will predict a class for each pixel. The result is a map detailing the various wetlands, if any are present. Such a tool does not replace the work of biologists in the field, but it provides real support for planning that work.
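The inference step that turns a scene into a class map can be sketched as a sliding window: classify the neighborhood of every pixel and write the predicted class into an output grid. The `predict_pixel` function below is a trivial hand-written stand-in for the trained CNN, used only so the example is self-contained; its threshold rule is made up.

```python
import numpy as np

def predict_pixel(patch):
    """Stand-in for a trained model: a trivial rule on the mean of
    band 3 (a fake 'NIR'). A real CNN prediction would replace this."""
    return 1 if patch[3].mean() > 0.5 else 0

def classify_scene(scene, patch_size=9):
    """Slide over every interior pixel, classify its neighbourhood, and
    assemble a class map. Border pixels are left as -1 for simplicity."""
    half = patch_size // 2
    _, h, w = scene.shape
    class_map = np.full((h, w), -1, dtype=np.int8)
    for r in range(half, h - half):
        for c in range(half, w - half):
            patch = scene[:, r - half: r + half + 1, c - half: c + half + 1]
            class_map[r, c] = predict_pixel(patch)
    return class_map

rng = np.random.default_rng(1)
scene = rng.random((4, 30, 30)).astype(np.float32)  # unseen fake scene
class_map = classify_scene(scene)
```

In practice, fully convolutional architectures predict all pixels in one pass rather than looping window by window, but the input/output contract is the same: a multi-band scene in, a per-pixel class map out.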