ADAPTING SMARTPHONE FOR MULTI-SPECTRAL IMAGING (MSI)
By Chuong Tran, Phong Tran, James Le, Eigen Imaging Inc., April 2013
Our paper shows that recent smartphones offer a promising platform for developing challenging mobile multispectral image analysis applications. Results from our Android app, implemented on a Google Galaxy Nexus smartphone, show that multispectral or Color Infrared (CIR) imagery analysis can extract useful information specific to each application, which could open the door to interesting mobile CIR imaging and detection applications. Many concepts known from traditional desktop architectures carry over to the Android platform: the Java Native Interface allows us to execute time-critical algorithms using native instructions, algorithms can be developed device-independently in C or C++, and the compilers provide strategies for automatic code optimization. However, interactions with special devices such as the camera, display, or user interface still require special care and must be optimized manually. Many devices now have Single Instruction, Multiple Data (SIMD) co-processors (e.g., ARM Cortex with NEON cores) that can provide a drastic speed-up for certain math functions, enabling real-time performance on supported devices; the NDK allows support for multiple architectures within an app, so supported devices use the hardware-accelerated functions while unsupported devices fall back to the software-only algorithm. With the right training and tools, multispectral imagery can be leveraged to help the naked eye see things that were not visible before.
Multispectral imaging (MSI) has been used most widely for remote sensing in natural-resources applications because of its ability to "see" the health of vegetation. The NIR wavelengths of light are reflected most abundantly by healthy green vegetation. As vegetation health decreases, MSI can detect those changes long before a human eye would be able to see them. Different types of vegetation also reflect different amounts of NIR energy, which lets image analysts identify the different types of vegetation in an image. Natural resources are not the only area where MSI is useful. Water and asphalt, in particular, absorb almost all of the NIR wavelengths of light. This allows a user to easily identify man-made surfaces such as parking lots, sidewalks, and driveways: because most of the NIR energy is absorbed by these features, they appear in the imagery as black or shades of gray, making them easily identifiable. Figure 1 shows typical spectral signatures of land covers.
Figure 1. Example of land-cover spectral responses
VIS to NIR Sensing
Visible-band optical imagers are designed to emulate human vision by detecting light in the visible spectrum (VIS, 400-700 nm), resulting in a "true color" image. However, additional information is present in the near infrared (NIR, 700-1500 nm), which cannot be seen by the eye. Different materials reflect and absorb differently depending on the incident light; by combining visible and near-infrared sensing, their spectral reflectance signatures can greatly enhance image visual interpretation. For example, oak trees reflect a different signature in the infrared spectrum than evergreens do, so they appear in the image as two different colors, and an image analyst can identify different types of trees just by using multispectral image analysis tools.
MSI imagery appears unusual in color because, in many cases, the colors in which features appear are the exact opposite of what we expect to see. Interpreting a multispectral color composite image requires knowledge of the spectral reflectance signatures of the targets in the scene. Many image processing and analysis techniques have been developed to aid the interpretation of remote sensing images and to extract as much information as possible from them.
MSI for remote sensing is not a new technology; it existed in black-and-white form before color film. Today, digital imaging uses either a CCD or CMOS sensor, which has a wavelength response from about 350 nm to 1,000 nm. These sensors use a Bayer-pattern array of filters to obtain the red, green, and blue bands of a digital image. Certain cameras have Bayer-pattern filters that transmit significant amounts of NIR radiation through the blue, green, and red channels, as shown in Figure 2; thus most high-quality digital cameras have an internal NIR cut-off filter that passes only the VIS spectrum in order to produce accurate color images. This NIR filter can be replaced, for example, with a blue-band cut-off filter, allowing the raw digital camera image to be post-processed into a CIR image consisting of green, red, and NIR bands.
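As a hypothetical sketch of that post-processing step (the exact channel mixing depends on the sensor and filter, so the channel assignments below are assumptions, not the authors' exact pipeline): with the blue-band cut-off filter installed, the blue channel records mostly NIR, while the red and green channels record red+NIR and green+NIR. Subtracting the NIR estimate and remapping the bands into the usual CIR display order gives a false-color composite:

```c
#include <stdint.h>

/* Assumed channel model after filter modification:
 *   in red   channel ~ red + NIR
 *   in green channel ~ green + NIR
 *   in blue  channel ~ NIR
 * CIR display convention: NIR -> red, red -> green, green -> blue. */
static uint8_t clamp_u8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v); }

/* in/out are interleaved RGB buffers of npix pixels (3 bytes per pixel). */
void raw_to_cir(const uint8_t *in, uint8_t *out, int npix)
{
    for (int i = 0; i < npix; i++) {
        int nir   = in[3 * i + 2];                 /* blue channel ~ NIR  */
        int red   = clamp_u8(in[3 * i + 0] - nir); /* recover red band    */
        int green = clamp_u8(in[3 * i + 1] - nir); /* recover green band  */
        out[3 * i + 0] = (uint8_t)nir;             /* CIR red   <- NIR    */
        out[3 * i + 1] = (uint8_t)red;             /* CIR green <- red    */
        out[3 * i + 2] = (uint8_t)green;           /* CIR blue  <- green  */
    }
}
```

In such a composite, NIR-bright vegetation lands in the red display channel, which is why vegetation appears red in CIR imagery.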
Figure 2. Example of CMOS sensor spectral response and NIR cut-off filter
MULTI-SPECTRAL SMARTPHONE PROTOTYPE SYSTEM
To exploit MSI's advantages, it is important to think of MSI as an interrelated system involving scene illumination, the sensor, and data processing. The scene is the most complex part for most applications, since one cannot easily control natural environmental illumination. Sensing and data processing can be designed to closely capture the data needed for extracting useful information specific to the application.
In this paper, we will examine some commonly used algorithms for analyzing and interpreting remote sensing images. Our entire prototype system consists of the following:
1. Local San Diego scenery.
2. Sensor - Image acquisition is done using a Google Galaxy Nexus smartphone with a blue bandpass filter and a modification to its rear-facing camera module, and
3. Data Processing - Captured imagery is processed on the smartphone by our Android app, eigenCAM, which transforms the captured scene into enhanced multispectral imagery for "signature" discrimination purposes. eigenCAM also supports video with near-real-time capability.
Android Smartphone Basics
Android is an open software platform that allows us to build applications in Java or C/C++ using the Android Software Development Kit. For performance-critical algorithms like the decorrelation stretch and k-means clustering, we use the Android Native Development Kit (NDK) to implement routines in native C/C++, resulting in a significant speedup compared to Java implementations. The NDK also supplies a set of commonly used system headers for native APIs such as the math library. Native code is incorporated into the application via the provided build system, which lists the native source files and integrates the resulting shared libraries into the application project. These can then be accessed by invoking the Java Native Interface (JNI). Finally, the NDK allows optimizations for the underlying hardware by targeting specific instruction sets for the ARM platform.
Prototypes of the image processing algorithms were developed in MATLAB and then ported to C, which provides a simpler path for moving the algorithms into a mobile app. The Android SDK is used to implement the graphical user interface, while the native C code is called via JNI to process the image data. To conserve system resources, we process as small an image as possible while retaining acceptable image quality. If the displayed image cannot be zoomed, we can resize a large image down to the device's display dimensions to improve performance.
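The resizing step can be sketched as follows; nearest-neighbor sampling is assumed here purely for illustration, and the actual app may use a different interpolation:

```c
#include <stdint.h>

/* Minimal nearest-neighbor resize sketch for a single 8-bit channel.
 * A full-resolution capture is shrunk to the device's display dimensions
 * so the enhancement algorithms run on fewer pixels. */
void resize_nearest(const uint8_t *src, int sw, int sh,
                    uint8_t *dst, int dw, int dh)
{
    for (int y = 0; y < dh; y++) {
        int sy = y * sh / dh;          /* map destination row to source row */
        for (int x = 0; x < dw; x++) {
            int sx = x * sw / dw;      /* map destination col to source col */
            dst[y * dw + x] = src[sy * sw + sx];
        }
    }
}
```

For a 1600x1200 capture shown on a 1280x720 display, this cuts the pixel count roughly in half before any per-pixel enhancement work begins.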
Many concepts known from traditional desktop architectures carry over to the Android platform, where the Java Native Interface allows us to execute time-critical algorithms using native instructions. Algorithms can be developed device-independently in C or C++, while the compilers provide strategies for automatic code optimization. However, interactions with special devices such as the camera, display, or user interface still require special care and must be optimized manually. Architecture-specific optimizations can further improve performance. Many devices now have Single Instruction, Multiple Data (SIMD) co-processors (e.g., ARM Cortex with NEON cores) that can provide a drastic speed-up for certain math functions, enabling real-time performance on supported devices. The NDK allows support for multiple architectures within an app, so supported devices use the hardware-accelerated functions, while unsupported devices fall back to the software-only algorithm.
The speed of processing depends on the size of the image and on the algorithm. For example, a 1600x1200 image processed by the decorrelation stretch, an algorithm which enhances the color separation of an image with significant band-to-band correlation, takes approximately 1.6 seconds, while a VGA-resolution (640x480) image takes 0.3 seconds on the Google Galaxy Nexus phone, which has a dual-core 1.2 GHz CPU. We expect that further optimizing the code to utilize the NEON core on the device would improve performance by an order of magnitude.
Image enhancement techniques are application specific: the goal is a result more suitable for visual interpretation than the original image. Over time, researchers have developed many algorithms showing that certain predictions can be made from the acquired imagery alone. The choice of algorithms depends on the goals of each individual project. The selected algorithms sampled in this paper can be found in the reference section and thus are not discussed here in detail.
Figure 3 shows a montage of the results of the image enhancement algorithms as computed by the Galaxy Nexus smartphone. A brief explanation of each sub-image is given below:
a. Normal Color
Figure 3a shows a normal color image of the San Diego harbor area as a baseline for discussion.
b. CIR (Color IR, False-Color)
Shows that much of this area has been developed with housing and commercial buildings, so little vegetation land cover remains. Vegetation is traditionally highlighted in red.
• A useful tool for indicating where to crop scout (this reduces man hours in the field).
c. Color NDVI (Normalized Difference Vegetation Index)
Explores the significant differences in reflectance and absorption of vegetation. This analysis shows an enhanced color of healthy vegetation (saturated red) against non-vegetation areas (green). Various color schemes are used depending on application.
• NDVI is a good indicator of the relative health of a plant. The chlorophyll response usually reveals how well the plant is doing and whether it is under stress.
• Producing application maps for nitrates (especially helpful in irrigated corn and wheat).
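The underlying index can be sketched as follows; this is the standard NDVI formula, (NIR - red) / (NIR + red), computed per pixel:

```c
/* NDVI = (NIR - red) / (NIR + red), in the range [-1, 1].
 * Healthy vegetation reflects NIR strongly, giving values near +1;
 * water and asphalt absorb NIR, giving values near or below zero. */
float ndvi(float nir, float red)
{
    float sum = nir + red;
    if (sum == 0.0f)
        return 0.0f;   /* avoid division by zero on fully dark pixels */
    return (nir - red) / sum;
}
```

For display, the app would map this range onto a color ramp (e.g., saturated red for high values, green for low), which is the color scheme described above.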
d. Decorrelation Stretch
Figure 3d enhances the color separation of an image with significant band-to-band correlation. The exaggerated colors provide good cues for visual interpretation and make feature discrimination easier.
• Remove the high correlation commonly found in multispectral datasets.
• Separate the choroidal vascular patterns from the retinal blood vessels in clinical research
• Very useful in remote sensing to reveal camouflage and faded features in cave paintings.
e. K-means Clustering
Determines the natural spectral groupings present in a data set by automatically grouping the pixels in the image into separate clusters according to their spectral features. K-means is iterative and computationally intensive.
Figure 3e shows the cluster index image.
Figure 3f shows the spectral data associated mostly with man-made structures such as roads, buildings, and mapped routes.
Figure 3g shows spectral data associated with healthy vegetation such as grass, trees, and parks with open vegetation land covers.
Figure 3h shows mostly spectral data for water bodies.
• In unsupervised classification and image segmentation, it provides high discriminative power for the clusters present in the image. Each cluster is then assigned a land cover type by the image analyst.
• Forensic use to isolate blood stains, detect counterfeit paintings and art collectibles.
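The assign/update loop at the heart of k-means can be sketched as follows; this toy version clusters scalar pixel values for brevity, whereas the real analysis clusters per-pixel vectors of band values:

```c
#include <math.h>

/* Tiny k-means sketch: k clusters over n scalar pixel values.
 * Each iteration assigns every pixel to its nearest center, then moves
 * each center to the mean of its assigned pixels.  This loop over the
 * whole image is what makes k-means computationally intensive. */
void kmeans1d(const double *px, int n, double *centers, int *labels,
              int k, int iters)
{
    for (int it = 0; it < iters; it++) {
        /* assignment step: label each pixel with its nearest center */
        for (int i = 0; i < n; i++) {
            int best = 0;
            for (int c = 1; c < k; c++)
                if (fabs(px[i] - centers[c]) < fabs(px[i] - centers[best]))
                    best = c;
            labels[i] = best;
        }
        /* update step: move each center to the mean of its members */
        for (int c = 0; c < k; c++) {
            double sum = 0.0;
            int cnt = 0;
            for (int i = 0; i < n; i++)
                if (labels[i] == c) { sum += px[i]; cnt++; }
            if (cnt > 0) centers[c] = sum / cnt;
        }
    }
}
```

The resulting label array is the cluster index image of Figure 3e; the analyst then assigns each cluster a land cover type (vegetation, man-made, water).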
The demonstrated results show that recent smartphones offer a promising platform for the development of challenging mobile CIR image analysis applications. Taking CIR imagery requires a special camera that sees in the visible and NIR wavelengths, but it does not have to be prohibitively expensive, since smartphones or consumer-level cameras can be modified depending on the objectives of the project.
The most significant limitation we discovered was the inability of any single algorithm to enhance all the spectral features we would like to detect in one image. For example, NDVI is good for spotting healthy vegetation, but it is not as effective as the decorrelation stretch at revealing subtle anomalies. A potential way to overcome this problem is to pair each algorithm with an optical filter designed for the type of spectral content to be discriminated. Combining the features detected by each separate algorithm into one image is another of many options.
Even though CIR imagery has been around for a while, most people are probably not very familiar with it, and viewing these images may seem a bit strange the first few times. But with the right training and the convenience of a modified smartphone, CIR imagery can be leveraged to help the naked eye see things that were not visible before, opening the door to interesting mobile multispectral applications.