A cloud-detection algorithm was designed as an adjunct to a companion edge-detection algorithm. The cloud detection integrates two distinct algorithms: one based on multi-image processing, the other on single-image analysis. The multi-image portion of the cloud-detection algorithm operates on a time sequence of sea surface temperature (SST) images. It is designed to detect clouds associated with regions of apparently lower temperatures than the underlying SST field. A pixel in the current image is initially considered to be corrupted by clouds if it is significantly cooler than the corresponding pixel in a neighbor image. To refine the initial classification, the algorithm checks the current image and the neighbor image for the presence of water masses whose displacement could explain the change in temperature. The single-image cloud-detection algorithm is designed to detect clouds associated with regions of the SST image where gradient vectors have a large magnitude. These regions are flagged in the map of potential clouds. Multi-image processing is integrated with the single-image algorithm by adding pixels classified as cloudy at the multi-image level to the map of potential clouds. Further analysis of the gradient vector field and of the shapes of potentially cloudy areas determines whether these regions correspond to clouds or to SST fronts. A previous study has shown that the clouds identified by the single-image algorithm were in close agreement with those detected by a human expert. To validate the additional multi-image processing, the effect of the integrated cloud detection on the performance of the companion edge-detection algorithm is examined. These results, together with a direct comparison against cloud masks produced by a human expert, indicate that, compared with the single-image algorithm, the multi-image algorithm successfully identifies additional cloud-corrupted regions while keeping the rate of false cloud detections low.
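The core of the two-stage classification described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the thresholds `dt_thresh` and `grad_thresh`, and the use of a single neighbor image are assumptions, and the refinement steps (the water-mass displacement check and the shape analysis of potentially cloudy regions) are omitted.

```python
import numpy as np

def potential_cloud_map(sst_now, sst_prev, dt_thresh=2.0, grad_thresh=0.5):
    """Combine a multi-image temperature-drop test with a single-image
    gradient-magnitude test into one map of potential clouds.

    sst_now, sst_prev : 2-D arrays of SST (same grid), current and
                        neighbor images of the time sequence.
    Thresholds are illustrative, in degrees and degrees per pixel.
    """
    # Multi-image test: a pixel is initially considered cloud-corrupted
    # if it is significantly cooler than the corresponding pixel in the
    # neighbor image.
    multi_image = (sst_prev - sst_now) > dt_thresh

    # Single-image test: flag regions where the SST gradient vectors
    # have a large magnitude.
    gy, gx = np.gradient(sst_now)
    single_image = np.hypot(gx, gy) > grad_thresh

    # Integration: pixels classified as cloudy at the multi-image level
    # are added to the map of potential clouds.
    return multi_image | single_image
```

In the full algorithm, the resulting map is only a set of *candidates*; further analysis of the gradient field and region shapes decides whether each flagged area is a cloud or an SST front.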