A popular algorithm for hyperspectral image interpretation is the automatic target generation process (ATGP). ATGP extracts a set of targets from image data in an unsupervised fashion, without prior knowledge. It can be used to search for specific targets in unknown scenes, even when a target is smaller than a single pixel, and its applications have been demonstrated in many fields, including geology, agriculture, and intelligence. However, the algorithm is time-consuming due to the massive volume of hyperspectral data. To expedite processing, graphics processing units (GPUs) are an attractive alternative to traditional CPU architectures. We propose a GPU-based massively parallel version of ATGP that, for the first time in the literature, achieves real-time performance. The HYDICE image data (307 × 307 pixels, 210 spectral bands) were used as a benchmark. With our optimizations, the GPU-based ATGP running on one NVIDIA Tesla K20 GPU achieves a speedup of 362×, including I/O transfer, over its single-threaded CPU counterpart. The algorithm was also tested on the Airborne Visible/InfraRed Imaging Spectrometer (AVIRIS) WTC dataset (512 × 614 pixels, 224 bands) and the Cuprite dataset (35 × 350 pixels, 188 bands), yielding speedups of 416× and 320×, respectively, when the target number was 15.
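The core of ATGP is an iterative orthogonal subspace projection: the first target is the pixel with the largest spectral norm, and each subsequent target is the pixel whose spectrum has the largest residual after projecting out the subspace spanned by the targets already found. A minimal NumPy sketch of this sequential logic (not the paper's CUDA implementation; function and variable names are our own) is:

```python
import numpy as np

def atgp(image, num_targets):
    """Automatic target generation process (sequential sketch).

    image: (num_pixels, num_bands) array of pixel spectra.
    Returns the indices of the generated target pixels.
    """
    # First target: the pixel with the largest spectral norm.
    norms = np.einsum('ij,ij->i', image, image)
    targets = [int(np.argmax(norms))]
    for _ in range(num_targets - 1):
        U = image[targets].T  # (bands, k) matrix of targets found so far
        # Orthogonal projector onto the complement of span(U):
        # P = I - U (U^T U)^{-1} U^T, written here with the pseudoinverse.
        P = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)
        proj = image @ P  # P is symmetric, so this projects every pixel
        # Next target: the pixel with the largest projected residual norm.
        norms = np.einsum('ij,ij->i', proj, proj)
        targets.append(int(np.argmax(norms)))
    return targets
```

In the GPU version, the per-pixel projections and norm computations are the massively parallel part: every pixel's residual can be evaluated independently, with only the argmax reduction and the small projector update remaining sequential between iterations.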
The research results were published in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing and in Computers & Geosciences.
This research was carried out jointly by Assistant Researcher Xiaojie Li and UWMD, USA, and was supported by the National Natural Science Foundation of China.
Paper information:
[1] Xiaojie Li, Bormin Huang, and Kai Zhao, “Massively Parallel GPU Design of Automatic Target Generation Process in Hyperspectral Imagery,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 6, June 2015.
[2] Xiaojie Li, Changhe Song, Sebastian López, Yunsong Li, and José F. López, “Fast computation of bare soil surface roughness on a Fermi GPU,” Computers & Geosciences, vol. 82, 2015, pp. 38–44.
[3] Melin Huang, Bormin Huang, Xiaojie Li, Allen H.-L. Huang, Mitchell D. Goldberg, and Ajay Mehta, “Massive Parallelization of the WRF GCE Model Toward a GPU-Based End-to-End Satellite Data Simulator Unit,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 5, May 2015.
Links to the publications:
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6891117&pageNumber%3D131227
http://www.sciencedirect.com/science/article/pii/S009830041500117X
http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=7110544&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel7%2F4609443%2F4609444%2F07110544.pdf%3Farnumber%3D7110544