Calendar Event Details

613 Seminar: Hironobu Iwabuchi

Affiliation: Graduate School of Science, Tohoku University
Event Date: Monday, December 17, 2018

Location: Building 33, Room H114
Time: 1:30 PM

Deep-learning cloud retrieval based on three-dimensional radiative transfer

**NOTE DIFFERENT DAY**   MONDAY, DECEMBER 17, 2018


Hironobu Iwabuchi
Graduate School of Science, Tohoku University
Sendai, Miyagi, Japan


Three-dimensional (3-D) radiative transfer effects are a major source of retrieval errors in optical remote sensing of clouds. Radiative interactions operate across the spatial elements of a cloudy atmosphere over a wide range of spatial scales. The radiance measured at each pixel of an image taken by a satellite- or aircraft-based imager or a ground-based camera is influenced not only by the cloud density and microphysical properties along the line of sight, but also by the spatial distribution of cloud in a wide domain surrounding the line of sight. Retrieval of cloud properties is usually based on the independent pixel approximation, which assumes a plane-parallel, homogeneous cloud for each image pixel. However, 3-D radiative interactions make it difficult to retrieve cloud properties at the pixel level from single-pixel radiances alone. The use of multiple pixels therefore looks attractive, offering great potential for accurately retrieving cloud properties from an image. Variational and Kalman filter approaches have been proposed for cloud remote sensing based on 3-D radiative transfer, although they require iterative 3-D radiative transfer calculations that are very computationally intensive. Even these high-cost methods do not necessarily guarantee accurate retrievals.
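For reference, the sketch below illustrates the conventional independent-pixel retrieval that these 3-D effects undermine: each pixel's reflectance is inverted against a plane-parallel (1-D) lookup table with no regard for neighboring pixels. The toy lookup relation and all names here are illustrative assumptions, not the speaker's actual implementation.

```python
# Minimal sketch of an independent-pixel-approximation (IPA) retrieval:
# each pixel's radiance is inverted against a 1-D (plane-parallel)
# lookup table, ignoring radiative coupling with neighboring pixels.
# The lookup relation below is a toy stand-in for real 1-D theory.
import numpy as np

# Hypothetical lookup table: reflectance as a monotonic function of
# cloud optical thickness for a fixed sun/view geometry.
tau_grid = np.linspace(0.1, 100.0, 500)
reflectance_grid = tau_grid / (tau_grid + 7.7)   # toy two-stream-like relation

def ipa_retrieve(image_reflectance):
    """Invert reflectance to optical thickness pixel by pixel.

    Each pixel is treated independently, so 3-D effects such as
    shadowing and side illumination bias the result.
    """
    flat = image_reflectance.ravel()
    tau = np.interp(flat, reflectance_grid, tau_grid)
    return tau.reshape(image_reflectance.shape)

# Example: a 4x4 patch of observed reflectances.
obs = np.array([[0.35, 0.40, 0.55, 0.60],
                [0.30, 0.45, 0.58, 0.62],
                [0.25, 0.42, 0.50, 0.57],
                [0.20, 0.38, 0.48, 0.55]])
print(ipa_retrieve(obs))
```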

A different approach is a statistical one, using regression or neural-network models. Deep learning with many-layered neural networks has recently been developing rapidly across a wide range of industrial applications and scientific research. Deep neural networks generally offer strong automatic feature extraction from multi-variate, multi-modal data and good approximation of nonlinear functions, and many studies report significantly higher accuracy than existing methods. We have demonstrated the feasibility of applying deep learning to multi-spectral, multi-pixel retrieval of cloud properties from satellite imagery. Convolutional neural networks (CNNs) may naturally trace the 3-D radiative effects that appear across image pixels, which traditional single-pixel approaches cannot capture. Training radiances are generated with a Monte Carlo 3-D radiative transfer model applied to cloud fields produced by a high-resolution large eddy simulation model. The deep learning model is thus trained to learn cloud spatial structure in addition to the nonlinear relationship between cloud properties and radiances. Deep CNNs show high retrieval accuracy for cloud optical thickness and effective droplet radius from multi-spectral images in visible, near-infrared, and shortwave infrared channels. The CNN model efficiently derives the spatial distribution of cloud properties at multiple pixels all at once from multi-pixel radiances, recovering information lost to 3-D radiative transfer by using multi-scale features in the images. Deep CNN models have also been developed to retrieve the spatial distribution of cloud optical thickness from ground-based camera imagery and show promising results, with significantly better accuracy than 1-D radiative transfer approaches.
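The following is a minimal sketch, in the spirit of the approach described above, of a fully convolutional network that maps a multi-spectral radiance image to per-pixel maps of optical thickness and effective radius. The architecture, channel counts, and training setup are illustrative assumptions (here in PyTorch), not the speaker's published model; random tensors stand in for the Monte Carlo simulation data.

```python
# Sketch of a fully convolutional network for multi-spectral,
# multi-pixel cloud retrieval. All hyperparameters are assumptions.
import torch
import torch.nn as nn

class CloudRetrievalCNN(nn.Module):
    """Maps a 3-channel radiance image (visible, near-IR, shortwave-IR)
    to 2 output maps (cloud optical thickness, effective droplet radius).
    Stacked convolutions give each output pixel a wide receptive field,
    so the network can use the multi-pixel context over which 3-D
    radiative transfer spreads information."""
    def __init__(self, n_bands=3, n_targets=2, width=64):
        super().__init__()
        layers = []
        ch = n_bands
        for _ in range(4):  # four 3x3 conv blocks -> 9x9 receptive field
            layers += [nn.Conv2d(ch, width, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            ch = width
        layers.append(nn.Conv2d(ch, n_targets, kernel_size=1))  # per-pixel regression head
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Training-step sketch: in the real setting, inputs come from Monte
# Carlo 3-D radiative transfer on LES cloud fields, and targets are
# the known cloud-property maps of those fields.
model = CloudRetrievalCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

radiances = torch.rand(8, 3, 64, 64)   # batch of simulated multi-spectral images
targets = torch.rand(8, 2, 64, 64)     # corresponding cloud-property maps
loss = loss_fn(model(radiances), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```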

Seminar Series Coordinators
Yuekui.Yang@nasa.gov
Lauren.M.Zamora@nasa.gov


 

Posted or updated: Wednesday, December 5, 2018
