Detecting anomalies in image data, especially hyperspectral data, is not a trivial task. Without a priori labels or known detection targets, the problem becomes even more complex. Numerous methods exist for detecting spectral anomalies, but detecting spatial anomalies is a far more complicated affair. This thesis proposes a new method for detecting both spatial and spectral anomalies simultaneously.
The method is designed with hyperspectral data in mind, but should also work for conventional images. It works by using 3-D convolutional autoencoders to learn commonly occurring features, both spatial and spectral, across the test data. Running the test data through this network transforms it into a feature space, in which the images can be analyzed for anomalies using standard anomaly detection algorithms. A simple real-world use case with unmodified images is presented, followed by a second validation run on data containing synthetic anomalies.
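To illustrate the final stage of such a pipeline, the sketch below scores feature vectors with a Mahalanobis-distance detector (the basis of the classic RX algorithm), one example of a "standard anomaly detection algorithm"; the thesis does not name a specific one. The autoencoder itself is omitted here: the two-dimensional vectors are hypothetical stand-ins for the latent features the network would produce.

```python
# Hypothetical sketch: anomaly scoring in a learned feature space.
# In the proposed method the feature vectors would come from a 3-D
# convolutional autoencoder; here they are hand-made 2-D stand-ins.
from statistics import mean

def mahalanobis_scores(features):
    """Mahalanobis-distance anomaly score for each 2-D feature vector."""
    xs = [f[0] for f in features]
    ys = [f[1] for f in features]
    mx, my = mean(xs), mean(ys)
    n = len(features)
    # Sample covariance of the whole scene (anomalies included, as in RX).
    cxx = sum((x - mx) ** 2 for x in xs) / n
    cyy = sum((y - my) ** 2 for y in ys) / n
    cxy = sum((x - mx) * (y - my) for x, y in features) / n
    det = cxx * cyy - cxy * cxy
    # Closed-form inverse of the 2x2 covariance matrix.
    ixx, iyy, ixy = cyy / det, cxx / det, -cxy / det
    scores = []
    for x, y in features:
        dx, dy = x - mx, y - my
        scores.append(dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy)
    return scores

# Eight similar "background" feature vectors plus one synthetic outlier.
feats = [(1.0, 1.0), (1.2, 0.8), (0.8, 1.2), (1.1, 1.1),
         (0.9, 0.9), (1.0, 1.2), (1.2, 1.0), (0.8, 0.8),
         (5.0, -3.0)]
scores = mahalanobis_scores(feats)
print(scores.index(max(scores)))  # index 8: the synthetic outlier
```

In the full method the same idea applies pixel-wise: each pixel's latent feature vector is scored against the scene-wide feature statistics, so both spatial and spectral deviations learned by the autoencoder surface as high anomaly scores.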