Abstracts

All the posters are available here.


 

Nazanin Asadi (University of Waterloo): "Improving the accuracy of high resolution sea ice analysis"


Accurate estimates of sea ice concentration and thickness are critical for weather forecasting and navigation in ice-covered regions. According to recent records of sea ice concentration and thickness, over the past two decades there has been a significant decline in Arctic ice extent and a thinning of the ice cover, which would enable easier marine navigation over a longer period of the year. However, the safety of this navigation depends on the availability of accurate estimates of ice pressure, which is closely related to the thickness. Moreover, recent work has shown that ice-ocean models have difficulty estimating the ice thickness distribution. While data assimilation can help ice-ocean models produce more accurate results, estimating sea ice thickness is a challenging problem due to the multiscale nature of the ice cover. At large scales, sea ice behaves as a non-rigid continuum, while at small scales it includes important sharp features or discontinuities such as leads and ridges. Due to the scarcity of observations and the presence of observation errors, data assimilation is generally an ill-conditioned inverse problem. The solutions of ill-posed problems can be improved by adding regularization terms to the cost function being minimized in order to enforce additional constraints. Following previous studies on l1-norm regularization and its edge-preserving ability in image restoration problems, Freitag et al. recently suggested adding an l1-norm regularization term on the analysis gradient to the conventional l2-norm regularization framework to improve estimates of sharp atmospheric fronts. Ebtehaj et al. also showed that l1-norm regularization produces more accurate results when the state of interest can be well approximated by the family of Laplace densities in a transformed (wavelet or derivative) domain. Given the need for high-resolution ice information at small scales and the presence of sharp features at those scales, an ice thickness assimilation experiment has been carried out to evaluate the impact of using an l1-norm regularization in a variational data assimilation framework. The impact of the l1-norm regularization is examined under different background and observation error correlation length scales (Fig 2). To the best of our knowledge, this is the first application of the l1-l2 regularization to a problem with real data. The high-resolution ice thickness data were obtained using an airborne electromagnetic (AEM) sensor during an aircraft survey that took place in April 2015 over an area of the Beaufort Sea where the ice is highly heterogeneous, with many ridges and leads.
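
As a rough illustration of the mixed l1-l2 idea described above, the sketch below minimizes a 1-D variational cost with an added l1 penalty on the gradient of the analysis. All quantities (error covariances, penalty weight `lam`, the synthetic step-like truth) are illustrative assumptions, not the study's actual configuration.

```python
# Minimal 1-D sketch of a variational analysis with an extra l1 penalty on the
# gradient of the analysis (mixed l1-l2 regularization). All quantities here
# are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

n = 100
truth = np.where(np.arange(n) < n // 2, 1.0, 2.0)        # sharp, ridge-like step
background = np.full(n, 1.5)                             # smooth background state
obs_idx = np.arange(0, n, 5)                             # sparse observation locations
rng = np.random.default_rng(0)
obs = truth[obs_idx] + rng.normal(0.0, 0.1, obs_idx.size)

B_inv = np.eye(n) / 0.25                                 # inverse background error covariance
R_inv = np.eye(obs_idx.size) / 0.01                      # inverse observation error covariance
lam = 0.5                                                # weight of the l1 gradient penalty

def cost(x):
    db = x - background
    do = x[obs_idx] - obs
    j_l2 = 0.5 * db @ B_inv @ db + 0.5 * do @ R_inv @ do      # classical 3D-Var terms
    j_l1 = lam * np.sum(np.sqrt(np.diff(x) ** 2 + 1e-8))      # smooth approximation of ||grad x||_1
    return j_l2 + j_l1

analysis = minimize(cost, background, method="L-BFGS-B").x
print("background RMSE:", np.sqrt(np.mean((background - truth) ** 2)))
print("analysis RMSE:  ", np.sqrt(np.mean((analysis - truth) ** 2)))
```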

 


 

Alex Ayet (Ecole Normale Supérieure): "Predicting solar irradiance using a large satellite dataset and Analog Data Assimilation"


A key element for microgrid Energy Management Systems (EMS) with storage batteries and photovoltaic (PV) panels is the forecast of the solar Global Horizontal Irradiance (GHI), which is directly related to PV production. In this work, we use a novel data-driven method named Analog Data Assimilation (AnDA, Lguensat et al., 2017) for the nowcasting of GHI. We use 5 years of EUMETSAT geostationary satellite data, with a resolution of 1 h and 5 km. The physical model is emulated using a nearest-neighbors algorithm on the satellite database, which contains both the images to be compared with the current meteorological conditions and their successors one hour later, from which the forecasts are produced. These analog forecasts are then combined with a Kalman filter in order to assimilate in-situ observations. This method requires no post-processing, unlike Lagrangian methods, and is highly flexible, since the physics of the system is contained in the analog database and in the metric chosen for the nearest-neighbors algorithm (Atencia and Zawadzki 2015). We apply this methodology to a region in northern Africa.
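
The sketch below illustrates the general recipe of analog forecasting plus a Kalman-type update, not the authors' implementation: the catalog, features, number of neighbors and error variances are all made-up placeholders.

```python
# Illustrative sketch of analog forecasting plus a scalar Kalman update, in the
# spirit of AnDA; the synthetic catalog and all parameters are invented.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
catalog = rng.normal(size=(5000, 8))          # past satellite "images" (feature vectors)
successors = catalog.mean(axis=1, keepdims=True) + rng.normal(0, 0.1, (5000, 1))  # GHI one hour later

knn = NearestNeighbors(n_neighbors=20).fit(catalog)

def analog_forecast(state):
    """Forecast GHI one hour ahead from the k nearest analogs of `state`."""
    dist, idx = knn.kneighbors(state[None, :])
    weights = np.exp(-dist[0] / (dist[0].mean() + 1e-9))
    weights /= weights.sum()
    mean = float(weights @ successors[idx[0], 0])
    var = float(weights @ (successors[idx[0], 0] - mean) ** 2)
    return mean, var

state_now = rng.normal(size=8)
ghi_f, var_f = analog_forecast(state_now)      # analog (prior) forecast

# assimilate one in-situ GHI observation with a scalar Kalman update
obs, obs_var = 0.3, 0.05 ** 2
gain = var_f / (var_f + obs_var)
ghi_a = ghi_f + gain * (obs - ghi_f)
print("forecast:", round(ghi_f, 3), "analysis:", round(ghi_a, 3))
```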

 


 

Suzanna Bonnet (University of Rio de Janeiro): "An evaluation of temperature forecasting performance using a recurrent neural network model"


Many studies have applied machine learning techniques to the forecasting of meteorological variables. The Recurrent Neural Network (RNN) is the most widely used model for this purpose, since it can recognize complex temporal patterns with better performance than standard neural networks. However, the predictability of nonlinear time series is still a problem in machine learning models and in dynamic equation models, especially for long-term forecasts. In order to evaluate the performance of an RNN in providing hourly and monthly air-temperature predictions, two time series with different lengths and time discretizations (5-year hourly data and 30-year monthly data) were tested for the Tom Jobim International Airport, Rio de Janeiro, Brazil. For that purpose, a pre-processing step composed of scale analyses was applied to the time series before they were used in the model. Additionally, the Long Short-Term Memory (LSTM) network, a type of RNN that prevents the vanishing and exploding gradient problems, is compared with the previous results. Preliminary results show low memory and poor correlation between the target and auxiliary series (sea level pressure and dew-point temperature), leading to a deficient prediction model. This study is an initial effort in the structuring of an RNN model for medium-term forecasts of meteorological variables. The study also aims to improve the understanding of relationships that lead to different performances between time series of distinct frequencies.
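
A minimal sketch of the kind of LSTM regressor discussed above, assuming a 24-hour input window, a single hidden layer and a synthetic sinusoidal temperature series; layer sizes and training settings are placeholders, not the study's configuration.

```python
# Minimal LSTM sketch for next-hour air-temperature prediction on synthetic data.
import numpy as np
import tensorflow as tf

window = 24                     # use the past 24 hours to predict the next hour
series = np.sin(np.arange(2000) * 2 * np.pi / 24) + np.random.normal(0, 0.1, 2000)

# build (samples, timesteps, features) windows and next-step targets
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)

print("1-step-ahead prediction:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```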

 


 

Dorian Cazau (ENSTA Bretagne): "Environmental monitoring of the Indian Ocean based on underwater ambient sound from hydrophone bio-logged elephant seals"


The underwater ambient sound field contains quantifiable information about the physical and biological marine environment. In particular, measurements of underwater acoustic spectra at various periods and locations can be used to draw acoustic maps of ocean ambient noise, and also to infer the above-surface meteorological conditions (e.g., wind speed, rainfall rate, glacier melting). Since 2011, we have been annually collecting underwater data over the migratory routes of bio-logged Southern Elephant Seals (SES), between the months of October and December. Every October, two post-breeding females of similar body condition are captured and equipped with data loggers on Kerguelen Island (49°20′S, 70°20′E). These data loggers include an Argos-GPS satellite tag (SPLASH10 FastLoc GPS, Wildlife Computers, Redmond, WA), a TDR-accelerometer data logger (MK10-X, Wildlife Computers) and an Acousonde device (Greenridge Sciences). Overall, these data allow a high-resolution quantification of SES-dependent physiological signals (e.g., SES local movements and dive phases) as well as of the surrounding acoustic signals. With a function similar to that of classical underwater gliders, we expect to extract from these data very high resolution (approximately 30 min / 400 m) ocean ambient noise measurements. Currently, a total of 10 hours of acoustic recordings (measurement campaigns of 2011 and 2012) is used in our project. At this conference, we will first present our approach of using such a “migrating single hydrophone” to provide an overall picture of the low-to-medium frequency (10-6000 Hz) ambient noise distribution and its variability in time and space at a regional scale within the Indian Ocean, where acoustic studies are very scarce in comparison with the northern oceans. We will detail our methodology to robustly extract standard acoustic metrics characterizing ocean ambient noise and to analyse inter-dependency relations (through multivariate time series analysis such as multilinear regression analysis) between these metrics and auxiliary data collected on the environment. Past studies have shown that the different mechanisms of sound generation associated with wind, rain and shipping produce differently shaped acoustic spectra. In particular, because of the different spectral character of the underwater sound produced by different air-sea processes, the sound field from 500 Hz to 25 kHz can be used to classify weather type. We then evaluate the use of classical metrics, e.g. the ratio of sound pressure levels at 1 and 6 kHz, to estimate wind speed. In addition, we present preliminary efforts on testing different supervised classification methods, especially deep learning approaches, to estimate the presence of different classes of winds (e.g., below and above 10 m/s) from underwater ocean noise.
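
A small sketch of the type of spectral metric mentioned above: band sound pressure levels estimated from a hydrophone record and the 1 kHz vs 6 kHz level ratio used as a wind-speed proxy. The signal below is synthetic noise, calibration is ignored, and the band edges are illustrative.

```python
# Band sound pressure levels from a (synthetic, uncalibrated) hydrophone record.
import numpy as np
from scipy.signal import welch

fs = 16000                                   # sampling frequency (Hz)
rng = np.random.default_rng(2)
x = rng.normal(size=fs * 60)                 # one minute of synthetic noise

f, pxx = welch(x, fs=fs, nperseg=4096)       # power spectral density

def band_level_db(f_lo, f_hi):
    """Relative sound pressure level (dB) integrated over a frequency band."""
    band = (f >= f_lo) & (f < f_hi)
    return 10.0 * np.log10(np.sum(pxx[band]) * (f[1] - f[0]))

spl_1k = band_level_db(800, 1200)
spl_6k = band_level_db(5000, 7000)
print("SPL(1 kHz) - SPL(6 kHz) =", round(spl_1k - spl_6k, 2), "dB")
```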

 


 

Romain Chailan (Twin Solutions): "Combining multiple classifiers to address environmental and open-world classification problems: A Bayesian perspective"


In real-world data classification problems, various initiatives, such as the Zooniverse platform, have been put together to benefit from both human expertise and the huge computational power of today's infrastructures. This area is known as crowdsourced machine learning and is widely used to address environmental classification questions. In such applications, one combines the recommendations of imperfect classifiers to reach a better classification accuracy. Where one could use a standard (weighted) average model, the best performance has been obtained with Bayesian approaches (Simpson et al., 2013; Servajean et al., 2017b). These models work out the relationship between each classifier's output and the possible true labels (classes). We show here how to use the Bayesian framework to address such a problem both in a parametric fashion (as in Simpson et al., 2013) and in a non-parametric way (as in Servajean et al., 2017a) for open-world problems. We describe in particular the models and the approximate inference methods that can be used to learn them. We illustrate the performance of both approaches on datasets related to plant classification. Our experiments show that the non-parametric model is not only able to determine the true number of labels, but also largely outperforms the parametric classifier combination by modelling more complex confusions, in particular when few or no training data are available.
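
A greatly simplified sketch of the parametric idea: each classifier is described by a confusion matrix P(output | true label), and the outputs are combined with Bayes' rule. The real models referenced above learn these matrices (and, in the non-parametric case, the number of classes) with approximate inference; here the matrices and priors are fixed by hand purely for illustration.

```python
# Combining imperfect classifier outputs via per-classifier confusion matrices.
import numpy as np

n_classes = 3
prior = np.full(n_classes, 1.0 / n_classes)

# one confusion matrix per classifier: rows = true label, columns = predicted label
confusions = [
    np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]]),
    np.array([[0.6, 0.3, 0.1], [0.1, 0.8, 0.1], [0.2, 0.2, 0.6]]),
]

def combine(predictions):
    """Posterior over the true label given one hard prediction per classifier."""
    post = prior.copy()
    for conf, pred in zip(confusions, predictions):
        post *= conf[:, pred]          # likelihood of this output for each true label
    return post / post.sum()

print(combine([0, 1]))                 # disagreement resolved by the classifiers' reliabilities
```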

 


 

Trang Chau (University of Rennes I): "Non-parametric state-space model for missing-data imputation"


Missing data are present in many environmental data sets, and this work aims at developing a general method for imputing them. State-space models (SSMs) have already been used extensively in this framework. The basic idea consists in introducing the true environmental process, which we aim to reconstruct, as a latent process, and modelling the data available at neighboring sites in space and/or time conditionally on this latent process. A key input of SSMs is a stochastic model which describes the temporal evolution of the environmental process of interest. In many applications, the dynamics are complex and can hardly be described using a tractable parametric model. Here we investigate a data-driven method where the dynamical model is learned using a non-parametric approach and historical observations of the environmental process of interest. From a statistical point of view, we will address various aspects related to SSMs in a non-parametric framework. First we will discuss the estimation of the filtering and smoothing distributions, that is, the distribution of the latent state given the observations, using sequential Monte Carlo approaches in conjunction with local linear regression. Then, a more difficult and original question consists in building a non-parametric estimate of the dynamics which takes into account the measurement errors present in the historical data. We will propose an EM-like algorithm where the historical data are corrected recursively. The methodology will be illustrated and validated on a univariate toy example.
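
To make the non-parametric dynamics concrete, the sketch below propagates particles with a local linear regression fitted on their nearest neighbours in a catalogue of historical (x_t, x_{t+1}) pairs. The catalogue, noise level and neighbourhood size are invented, and the surrounding sequential Monte Carlo machinery is omitted.

```python
# One forecast step of a particle cloud using locally fitted linear dynamics.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
x_hist = rng.uniform(-2, 2, 2000)
x_next = np.sin(2 * x_hist) + 0.05 * rng.normal(size=x_hist.size)   # unknown "true" dynamics

knn = NearestNeighbors(n_neighbors=50).fit(x_hist[:, None])

def propagate(particles):
    """Forecast each particle via local linear regression on its analogs."""
    out = np.empty_like(particles)
    for i, p in enumerate(particles):
        _, idx = knn.kneighbors([[p]])
        lr = LinearRegression().fit(x_hist[idx[0], None], x_next[idx[0]])
        out[i] = lr.predict([[p]])[0] + 0.05 * rng.normal()   # add model noise
    return out

particles = rng.normal(0.5, 0.2, 100)
print("forecast mean:", propagate(particles).mean(), "vs sin(2*0.5) =", np.sin(1.0))
```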

 


 

Angelina El Ghaziri (University of Nantes): "A study of professional fishing activities using sampling survey analysis to reduce data collection"


Fisheries nowadays are dealing with a continuous growth of offshore activities due to increasing population demands (e.g. aggregate extraction and aquaculture) and the rise of new projects based on marine renewable energies (e.g. offshore wind and sea current energy). The situation is more critical for small-scale fisheries, which are not well represented and could be led to limit or change their practices. Therefore precise data collection on fishers' activities is needed in order to (i) map fishing grounds at a relevant time-space scale, (ii) give fishers opportunities to emphasize their interests and finally (iii) obtain a better share of marine space. In this context, the VALPENA project was launched in 2010 as a collaboration between scientists from the University of Nantes and the professional fishers' representatives. Today, it involves seven of the ten Regional Committees of Maritime Fisheries (CRPMEM) at the national metropolitan level. Although the VALPENA objectives are clear and essential, it proved difficult to ask the fishers about their activities every year, since they are frequently solicited for surveys. To remedy this problem, VALPENA decided to alternate between two years of exhaustive survey and one year of sampling survey. Different sampling strategies have been tested based on Monte Carlo simulation. In order to increase the homogeneity and the representativeness of the selected samples, several stratifications of the fishing fleet have been proposed based on registration ports, types of gear or ‘métier’, and vessel length, among others. Tests have also been made on optimal and proportional sampling. The results of these tests led to the choice of a sampling plan strategy implemented in 2016 in four CRPMEM. In this presentation we will cover (i) the challenges of using sampling survey analysis in VALPENA, (ii) the construction of a sampling plan based on the results of testing several sampling strategies and (iii) an illustration of the implementation and analysis of a sampling plan currently being carried out.
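
A toy sketch of proportional stratified sampling of a fishing fleet; the strata (registration port crossed with a vessel-length class) and the 20% sampling rate are invented, not the VALPENA design.

```python
# Proportional stratified sampling of a synthetic fleet register.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
fleet = pd.DataFrame({
    "vessel_id": range(500),
    "port": rng.choice(["A", "B", "C"], 500),
    "length_class": rng.choice(["<10m", "10-15m", ">15m"], 500),
})

fleet["stratum"] = fleet["port"] + "_" + fleet["length_class"]
# proportional allocation: sample the same fraction inside every stratum
sample = fleet.groupby("stratum", group_keys=False).sample(frac=0.20, random_state=0)
print(sample["stratum"].value_counts().sort_index())
```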

 


 

Youssef El Habouz (University of Agadir): "Automatic recognition of otoliths based on a combination of features"


In recent years, the technological development of automatic systems for otolith shape analysis has expanded significantly, thanks to advances in fields such as image processing and computer science. New research areas have thus been opened with the evolution of pattern recognition techniques. Furthermore, shape analysis of otoliths has attracted much attention due to its ease of implementation, efficiency and robustness. The main topic of this work is the shape analysis of fish otoliths for species identification and stock discrimination. In this context, we used several fish otolith databases, in particular databases created through collaborations between laboratories working on the same theme, including our scientific project with the Norwegian Institute of Marine Research, Tromso department (IMR), and our cooperation with biological experts of the Moroccan institute INRH, with whom we extended a local otolith database. This national database is composed of 450 otolith images from 15 different fish species. We also used another national database of Sardina pilchardus, aimed at stock discrimination, which contains 150 images of this species from 4 different fishery zones. These databases were used to evaluate and test the results of our approaches for species identification and stock discrimination. In this work, we propose two main approaches for high-performance automatic recognition of fish otoliths, which have contributed to improving the classification performance. The first approach is based on the median distances of the contour, using internal contour characteristics to recognize the otolith; the second is based on the Fourier descriptors of the contour's normal angles to identify different otoliths. Both approaches perform well on the otolith databases used in this study. However, they have limitations in differentiating between otoliths with very similar shapes. In order to overcome this limitation, we developed an automatic otolith recognition system based on the combination of these two approaches. This combination showed high performance on the tested otolith databases. The work carried out contributes to improving the decision-making tools available to fisheries managers for the correct management of fisheries resources. Our proposed approaches mainly aim at the efficient identification of fish species using otolith shape analysis, in order to determine the diet and the food web of a fish species and hence to help the inventory of fish stocks.
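
A generic sketch of contour-based Fourier descriptors (complex coordinate representation), one standard family of shape features; the paper's own descriptors (median distances, normal angles) would be derived similarly from the extracted contour. The contour below is a synthetic ellipse standing in for an otolith outline.

```python
# Fourier descriptors of a closed contour, made translation/scale invariant.
import numpy as np

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
x, y = 3.0 * np.cos(t), 1.5 * np.sin(t)          # stand-in for an otolith contour

z = x + 1j * y                                    # complex contour representation
coeffs = np.fft.fft(z)

coeffs[0] = 0.0                                   # remove translation
coeffs = coeffs / np.abs(coeffs[1])               # scale invariance
descriptors = np.abs(coeffs[1:11])                # rotation/start-point invariant magnitudes
print(descriptors)
```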

 


 

Bijan Fallah Hassanabadi (Free University of Berlin): "A cheap data assimilation approach for expensive numerical simulations"


Concerns about climate change require a deeper understanding of the uncertainties in future predictions of the climate system. However, our cutting-edge science still struggles to describe processes which take place beyond the yearly cycle of the climate system. Available observational climate data sets are the most accurate source of knowledge about the climate system. However, they suffer from structural shortcomings: (i) their time span is usually less than a century, (ii) their accuracy changes in time (environmental changes at station locations, use of different measuring devices, urbanization), (iii) their density changes in time (more stations have become available in recent years), and (iv) there are more stations over land than over the oceans. Using a very cheap Data Assimilation (DA) method within a perfect model experiment, I investigate an alternative approach to classical DA for numerical climate models and time-averaged observations (seasonal means). The problematic features of a state-of-the-art high-resolution Regional Climate Model are highlighted. One of the shortcomings is the sensitivity of such models to slightly different initial and boundary conditions, which could be corrected by assimilating scattered observational data. This method might help to reduce the bias of numerical climate models based on the available observations within the model domain, especially for time-averaged observations and long-term climate simulations, in order to create cheap climate reanalysis data sets. By applying a fast and cheap DA method, I demonstrate that, despite its simplicity, it can significantly reduce the bias of the regional model for monthly averaged values of near-surface temperature.
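
As a generic toy illustration (not the author's scheme), the snippet below nudges a biased simulated month toward an observed monthly-mean temperature, weighting model and observation by assumed error variances; all numbers are invented.

```python
# Toy correction of a model trajectory toward a time-averaged observation.
import numpy as np

rng = np.random.default_rng(5)
model_daily = 15.0 + rng.normal(0, 2, 30) + 1.5          # biased model month (+1.5 K)
obs_monthly_mean, obs_var = 15.2, 0.3 ** 2               # observed monthly mean and its variance
model_mean, model_var = model_daily.mean(), 1.0 ** 2     # assumed model error variance

gain = model_var / (model_var + obs_var)                 # scalar Kalman-like weight
increment = gain * (obs_monthly_mean - model_mean)
corrected_daily = model_daily + increment                # shift the whole month uniformly
print("bias before:", round(model_mean - 15.2, 2),
      "after:", round(corrected_daily.mean() - 15.2, 2))
```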

 


 

Piyush Garg (University of Illinois): "Observed structure and characteristics of cold pools over tropical oceans using scatterometer vector wind retrievals"


Precipitation-driven cold pools over the oceans contribute to the air-sea exchange of heat and moisture. The cold pools generated in the wake of convective activity can enhance the surface sensible heat flux and latent heat flux, and also change evaporation. Recent studies have highlighted the important role that cold-pool outflow boundaries, and their intersections, play in initiating and maintaining oceanic convective clouds; however, large-scale observations of oceanic cold pools and their roles in initiating, organizing, and maintaining oceanic convection remain elusive. The primary goal of this study is to understand the extent to which the structure and characteristics of cold pools in the tropical oceans can be determined using routinely available ocean vector winds, as well as to understand cold-pool characteristics and their meteorological environments. Using ASCAT vector wind retrievals, regions of large gradients in the winds were identified as gradient features (GFs). The regional variations in GF properties in the tropics are described. Corresponding to these GFs, surface sensible and latent heat fluxes were calculated using winds from ASCAT as well as thermodynamic and wind conditions from MERRA-2 data products. In addition, using the NOAA CMORPH and Remote Sensing Systems microwave precipitation products, rainfall characteristics in the vicinity of the observed GFs are estimated. To examine the detectability of observed cold pools, as well as to physically understand their thermodynamic and kinematic characteristics, simulations were carried out using the Weather Research and Forecasting model on a nested 27-9-3 km grid, with convection resolved explicitly on the 3-km domain, during the DYNAMO field experiment over the northern Indian Ocean. The areas of cold pools were identified in the model using virtual temperature (Tv), which is a direct measure of air density, and GFs were identified using model-simulated winds. It is anticipated that an improved understanding of cold pools, which are a primary triggering mechanism of oceanic shallow and deep convection, will improve understanding and prediction of this important part of the climate system.
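
The sketch below illustrates the gradient-feature idea in its simplest form: compute the magnitude of the wind gradient on a regular grid and flag cells above a threshold. The wind field, grid spacing and percentile threshold are synthetic placeholders, not the study's detection criteria.

```python
# Flagging gradient features (GFs) in a synthetic ocean vector wind field.
import numpy as np

rng = np.random.default_rng(6)
u = rng.normal(5, 1, (100, 100))          # zonal wind (m/s) on a regular grid
v = rng.normal(0, 1, (100, 100))          # meridional wind (m/s)
u[:, 50:] += 4.0                          # embed a sharp, outflow-like wind shift

dx = 12.5e3                               # grid spacing (m), ASCAT-like
dudy, dudx = np.gradient(u, dx)           # gradients along rows (y) and columns (x)
dvdy, dvdx = np.gradient(v, dx)
grad_mag = np.sqrt(dudx**2 + dudy**2 + dvdx**2 + dvdy**2)

gf_mask = grad_mag > np.percentile(grad_mag, 99)   # simple percentile threshold
print("flagged GF pixels:", int(gf_mask.sum()))
```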

 


 

Priscila Gianoni (University of Rio de Janeiro): "Data-driven modeling of storm surges and tidal currents in Guanabara Bay, Rio de Janeiro, Brazil"


In the last decade, accurate forecasts of storm surges and tidal currents in coastal regions have become the subject of many studies because of the vulnerability and socioeconomic relevance of these regions. Flooding, abrupt changes in the shoreline due to the intensification of tidal flows, destruction of bars and structures, and the remobilization of large amounts of sediment are some examples of the impacts caused by high-intensity storms. The physical mechanism behind storm surges is still poorly understood, partly because of the complex interaction between astronomical tides, waves and meteorological forcing, and partly because of the dynamical behavior of the weather. Since conventional hydrodynamic models still fail to reproduce weather effects in coastal environments - due to the complexity of the nonlinear processes involved and the high number of variables - neural network modeling may represent a promising alternative. This study used a Nonlinear AutoRegressive with eXogenous inputs (NARX) neural network in order to reproduce the meteorological effects on water levels and tidal velocities within Guanabara Bay, Rio de Janeiro, Brazil. Preliminary results show that accurate predictions of nonlinear fluctuations of water levels can be obtained for up to 48 h. Forecasts of 24 h, 36 h and 48 h presented correlations of 99.8%, 99.7% and 93.6% with the respective measured data. The good performance of the configured NARX models seems to be related to the time series analysis and transformations applied before the series were used in the neural network models.
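
A minimal NARX-style sketch: the water level is predicted from its own lagged values and lagged exogenous meteorological inputs through a small feed-forward network. The synthetic "surge" dynamics, lag depth and network size are illustrative assumptions, not the configured models of the study.

```python
# NARX-style regression on synthetic water-level and wind data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n = 3000
wind = rng.normal(0, 1, n)                              # exogenous input (e.g. wind stress)
level = np.zeros(n)
for t in range(2, n):                                   # synthetic "surge" dynamics
    level[t] = 0.8 * level[t - 1] - 0.2 * level[t - 2] + 0.5 * wind[t - 1] + 0.05 * rng.normal()

lags = 3
X = np.column_stack(
    [level[lags - k - 1:n - k - 1] for k in range(lags)] +   # lagged water levels
    [wind[lags - k - 1:n - k - 1] for k in range(lags)]      # lagged exogenous inputs
)
y = level[lags:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:-500], y[:-500])
print("test R^2:", round(model.score(X[-500:], y[-500:]), 3))
```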

 


 

Nina Kargapolova (Institute of Computational Mathematics and Mathematical Geophysics of Russia): "Stochastic model of non-stationary time series of precipitation indicators and daily maximum and minimum air temperature"


For the solution of various applied problems in scientific areas such as hydrology, agricultural meteorology and population biology, it is often necessary to take into account the statistical properties of different meteorological processes. For example, it may be necessary to estimate the probability of occurrence of combinations of meteorological elements conducive to the spread of forest fires, the probability of frost occurrence in spring and summer, the average number of dry days, etc. Real data samples are usually small, and this leads to statistical unreliability of the estimates. Therefore, instead of small samples of real data, it is necessary to use samples of simulated data. In this regard, in recent decades many scientific groups all over the world have worked on the development of so-called “stochastic weather generators”. At their core, these generators are software packages that numerically simulate long sequences of random numbers whose statistical properties reproduce the basic properties of real meteorological series. Most often, series of surface air temperature, daily minimum and maximum temperatures, precipitation and solar radiation are simulated. In this talk, a stochastic parametric simulation model that provides daily values of precipitation indicators and maximum and minimum temperature at a single site is presented. The model is constructed on the assumption that these weather elements are non-stationary random processes and that their one-dimensional distributions vary from day to day. The parameters of the model (parameters of the one-dimensional distributions, auto- and cross-correlation functions) are chosen for each location on the basis of real data from a weather station situated at this location. A case in which real data give unreliable statistical estimates of the model parameters is also considered. Several examples of model applications are given. It is shown that the simulated data may be used to estimate the probability of occurrence of extreme weather events (e.g. sharp temperature drops, extended periods of high temperature without precipitation).
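
A toy sketch of the general recipe behind such generators: simulate a correlated standard-normal driver and map it through day-dependent marginal distributions (here a seasonally varying normal for daily maximum temperature and a seasonally varying wet-day probability). All parameters are invented rather than fitted to a weather station.

```python
# Non-stationary single-site weather generator (toy version).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
days = np.arange(365)
rho = 0.7                                        # lag-1 autocorrelation

# correlated standard-normal driver (AR(1))
z = np.zeros(365)
for t in range(1, 365):
    z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * rng.normal()

# day-dependent marginals: non-stationary mean/std for Tmax, wet-day probability
mean_tmax = 10 + 15 * np.sin(2 * np.pi * (days - 80) / 365)
std_tmax = 3 + 1 * np.cos(2 * np.pi * days / 365)
p_wet = 0.3 + 0.15 * np.cos(2 * np.pi * days / 365)

tmax = mean_tmax + std_tmax * z                  # Gaussian marginal, varying by day
wet = norm.cdf(z) < p_wet                        # precipitation indicator by thresholding

print("simulated mean Tmax:", round(tmax.mean(), 1), "wet-day fraction:", round(wet.mean(), 2))
```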

This work was supported by the Russian Foundation for Basic Research (grants No 15-01-01458-a, 16-31-00123-mol-a, 16-31-00038-mol-a) and the President of the Russian Federation (grant No MK-659.2017.1).

 


 

Riwal Lefort (ENSTA Bretagne): "Coherence loss in passive underwater acoustics source localization: Combining sub-antenna techniques and sparse model"


In passive underwater acoustics, inversion techniques remain the reference baselines for source localization. These techniques are all based on the assumption that the sound propagation properties are perfectly known, so that a replica model of the captured signal can be proposed. In practice, however, the environmental properties are difficult to assess fully because of their spatio-temporal dynamics. For instance, both the seabed and the temperature profile are constantly evolving. A consequence is that the replica model of the captured signal does not correspond to what is observed in situ. The difference between the captured signal and the replica model results in what is called a “coherence loss”. In this work, we consider a linear antenna composed of acoustic sensors. In that case, the coherence loss can take the form of a correlated phase perturbation. More precisely, the phase perturbation between two close sensors depends only on the antenna geometry, while it also becomes dependent on the propagation environment in the case of remote sensors. The maximal distance at which two sensors are still statistically dependent is called the “coherence length”. Formally, the loss of coherence is modeled by a multiplicative colored noise. Given such a noise, with a given coherence length, we investigate new methods for source localization. On the one hand, sub-antenna processing seems to be a good strategy to deal with such multiplicative colored noise. A set of shorter sub-antennas is built from the main antenna, the length of each antenna segment being shorter than the coherence length. Consequently, a local assumption of coherence holds on each sub-antenna. However, this method significantly worsens the antenna resolution. On the other hand, source localization has recently been tackled by means of sparse techniques, taking the form of constrained optimization problems. These approaches drastically limit the possible source positions, the consequence being a much finer resolution. In return, sparse techniques as such are not designed to be robust to coherence loss: they increase the antenna resolution, but the coherence loss decreases the localization performance. In this work, we propose a new formalism to combine sub-antennas and sparse techniques. In doing so, we wish to benefit from the strengths of both methods. More precisely, we expect a new localization method that is robust to coherence loss and that has a finer antenna resolution. The proposed algorithm first applies a sparse solver to each sub-antenna, and then the sparse solutions are combined through a mixed norm. Using numerical simulations of plane waves, we demonstrate that these objectives are achieved.
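
The sketch below only illustrates the coherence-loss model and plain sub-antenna processing; the proposed sparse solver and mixed-norm combination are not reproduced. A plane wave on a uniform linear array is perturbed by a spatially correlated phase noise whose correlation length plays the role of the coherence length, and Bartlett beamforming is applied per sub-antenna; every numerical value is a placeholder.

```python
# Coherence-degraded plane wave and simple sub-antenna beamforming.
import numpy as np

rng = np.random.default_rng(9)
n_sensors, d, wavelength = 64, 0.5, 1.0          # half-wavelength spacing
theta_true = 20.0                                # source direction (degrees)
positions = np.arange(n_sensors) * d

def steering(theta_deg):
    return np.exp(2j * np.pi * positions * np.sin(np.radians(theta_deg)) / wavelength)

# spatially correlated phase perturbation (Gaussian covariance, coherence length L)
L = 16 * d
cov = np.exp(-((positions[:, None] - positions[None, :]) ** 2) / (2 * L**2))
phase = np.linalg.cholesky(cov + 1e-6 * np.eye(n_sensors)) @ rng.normal(size=n_sensors)
y = steering(theta_true) * np.exp(1j * 0.8 * phase)      # coherence-degraded snapshot

angles = np.linspace(-90, 90, 361)
sub_len = 8                                              # shorter than the coherence length
spectrum = np.zeros(angles.size)
for start in range(0, n_sensors, sub_len):               # Bartlett beamforming per sub-antenna
    ys = y[start:start + sub_len]
    A = np.array([steering(a)[start:start + sub_len] for a in angles])
    spectrum += np.abs(A.conj() @ ys) ** 2                # combine sub-antenna outputs (simple sum)

print("estimated direction:", angles[np.argmax(spectrum)], "deg (true:", theta_true, "deg)")
```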

 


 

Redouane Lguensat (IMT-Atlantique): "Analog data assimilation for short space-time scales in mapping along-track ocean altimetry"


In this work, we investigate the utility of historical datasets for along-track sea level altimetry mapping. We state the problem as a missing-data interpolation issue and present a data-driven strategy that enhances mesoscale data. Based on the Multiscale Analog Data Assimilation paradigm, our data-driven strategy starts by considering the Optimal Interpolation solution for the large-scale component of the field, then uses the Analog Data Assimilation framework to estimate the fine-scale component of the field.
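
A schematic sketch of the two-step decomposition: a smooth large-scale estimate (here a Gaussian filter standing in for Optimal Interpolation) plus a data-driven estimate of the fine-scale residual learned from a catalogue of past fields via nearest-neighbor analogs. The catalogue and smoothing scale are synthetic placeholders.

```python
# Large-scale component + analog-based fine-scale residual (toy version).
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(10)
catalog = rng.normal(size=(500, 64))                 # historical along-track fields (synthetic)
catalog_large = gaussian_filter(catalog, sigma=(0, 5))
catalog_fine = catalog - catalog_large               # fine-scale residuals to learn from

field = rng.normal(size=64)                          # field to reconstruct
large = gaussian_filter(field, sigma=5)              # step 1: large-scale component

# step 2: estimate the fine-scale component from analogs of the large-scale field
knn = KNeighborsRegressor(n_neighbors=10).fit(catalog_large, catalog_fine)
fine = knn.predict(large[None, :])[0]

reconstruction = large + fine
print("reconstruction shape:", reconstruction.shape)
```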

 


 

Veronica Martin Gomez (University of the Republic of Uruguay): "A complex network perspective of the past and future coupling of the tropical oceans and precipitation over southeastern South America"


Several previous studies have already shown that Southeastern South America (SESA) rainfall is impacted by SST anomalies in the tropical Pacific, Atlantic and Indian oceans. In addition, these tropical oceans can interact with each other, inducing SST anomalies in remote basins through atmospheric and oceanic teleconnections. However, it is currently not clear how these tropical oceans interact with each other and, in turn, collectively induce rainfall variability over SESA, nor how this “collective behavior” among the tropical oceans and SESA precipitation could change during the next century as a consequence of anthropogenic forcing. We address these issues from a complex network perspective. We construct a climate network whose nodes are different indices that characterize the SST variability over the tropical oceans (El Niño3.4, the Tropical North Atlantic (TNA) and the Indian Ocean Dipole (IOD)) as well as an index that represents the precipitation (PCP) variability over SESA. We investigate their collective behavior focusing on the detection of synchronization periods, which are defined through the mean network distance and can be understood as those periods of time in which the network's nodes were more connected. Results show that during the last century there were two synchronization periods (in the 1930s and 1970s) characterized by different interactions among the network's nodes: while during the 1930s the interacting nodes were El Niño3.4, the TNA and the PCP, during the 1970s they were El Niño3.4, the IOD and the PCP. An analysis of the network behavior under a global warming scenario suggests that anthropogenic forcing would increase the number of synchronization periods, their duration and the node connectivity. The stronger connectivity of SESA PCP under a global warming scenario suggests an increase of the tropical oceans' influence on SESA PCP as a consequence of anthropogenic forcing. These results are based on the grand ensemble mean of seven CMIP5 models and should be taken with caution because of the large disparity in individual model behavior.
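
The sketch below shows one way such a network analysis can be set up: nodes are climate indices, links are sliding-window correlations, and a mean network distance (here taken as 1 - |r| averaged over all node pairs, which may differ from the authors' exact definition) is tracked to spot candidate synchronization periods. The index series are synthetic and the window length is arbitrary.

```python
# Sliding-window correlation network over synthetic climate indices.
import numpy as np

rng = np.random.default_rng(11)
n_months = 1200                                       # 100 years of monthly data
data = np.column_stack([rng.normal(size=n_months) for _ in range(4)])  # Nino3.4, TNA, IOD, SESA PCP
window = 120                                          # 10-year sliding window

mean_distance = []
for start in range(0, n_months - window):
    r = np.corrcoef(data[start:start + window].T)     # correlation matrix of the 4 nodes
    iu = np.triu_indices_from(r, k=1)
    mean_distance.append(np.mean(1.0 - np.abs(r[iu])))   # small distance = strong connectivity

mean_distance = np.array(mean_distance)
sync = mean_distance < np.percentile(mean_distance, 10)  # candidate synchronization periods
print("fraction of windows flagged as synchronized:", round(sync.mean(), 2))
```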

 


 

Platon Patlakas (University of Athens): "Studying the upper and lower tails of the wind speed probability distribution: Extreme value analysis"


The highly competitive framework of research and industry activities in sectors associated with environmental science, climatology and meteorology requires advanced information that can be considered a key factor for decision makers and stakeholders. Among the different options that can contribute in this direction, atmospheric datasets of high accuracy, along with a statistical analysis beyond the conventional standards, can make a difference. More specifically, estimates of the magnitude and the probability of occurrence of extreme wind conditions are important for many activities such as wind farm siting, marine applications, pollutant dispersion (associated with accidents), etc. In this work, a stochastic approach based on the principles of Extreme Value Theory is tested. The approach focuses on both the upper and lower tails of the wind speed probability distribution. The concept of Intensity-Duration-Frequency (IDF) curves is applied in order to present the results; these depict the relation between wind speed and the duration of the event for different return periods. The obtained results are also compared with other methodologies and evaluated using different sensitivity tests. At the same time, various tools and techniques are employed for the fine tuning of the proposed methodologies and the quantification of the associated uncertainties. For the needs of the study, the database of the Atmospheric Modeling and Weather Forecasting Group of the University of Athens, constructed within the framework of the FP7 European Program “MARINA Platform”, is utilized. The database consists of atmospheric data resulting from a 10-year hindcast simulation with the numerical atmospheric model Skiron coupled with the wave model WAM. The use of this database is critical for the analysis, as it gives us the opportunity to examine the results over wide areas at high resolution. The study area is the Mediterranean and the Atlantic coastline of Europe, with a particular focus on regions of increased interest for renewable energy activities.
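
A sketch of a standard building block of such an analysis: fit a GEV distribution to annual wind speed maxima and compute return levels for chosen return periods. The maxima below are synthetic; the IDF analysis described above adds the event-duration dimension on top of this kind of fit.

```python
# GEV fit and return levels for synthetic annual wind speed maxima.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(12)
annual_maxima = 18 + 4 * rng.gumbel(size=40)          # 40 years of synthetic annual maxima (m/s)

shape, loc, scale = genextreme.fit(annual_maxima)
for T in (10, 50, 100):                               # return periods in years
    level = genextreme.isf(1.0 / T, shape, loc, scale)
    print(f"{T:4d}-year return level: {level:5.1f} m/s")
```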

 


 

Patrick Raanes (NERSC/CEREA): "Gaussian scale mixtures and adaptive inflation for the EnKF"


The EnKF-N addresses the issue of sampling error in the EnKF, including the resulting covariance bias, nullifying the need for inflation in the perfect-model scenario. Moreover, its implementation requires only minor additions to existing square-root EnKFs, and the additional computational costs are negligible. However, the idealism of the perfect-model assumption means that the method is still reliant on ad hoc inflation tuning in real-world, operational use. The EnKF-N is also highly attractive from a theoretical perspective: its derivation originates in the rejection of a simplifying assumption of the standard EnKF, namely that the sample covariance matrix is the true forecast covariance. However, its hierarchical, Bayesian nature has made it challenging to disseminate. The obscurity of the link between sampling error and nonlinearity also poses an obstacle to its wider adoption. This work aims to address the above problems. Firstly, the issue of sampling error is elucidated in a telling example. The EnKF-N is then derived and explained from an inflation-centric perspective. Further, by building on the form of its adaptive inflation, the EnKF-N is hybridized with adaptive inflation methods aimed at contexts where model error is present and/or imperfectly parameterized. Benchmarks from twin experiments indicate significant promise in comparison with the adaptive inflation techniques currently in use.
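
For orientation, the sketch below shows where inflation enters a plain stochastic EnKF analysis step with a fixed multiplicative factor; the EnKF-N and the adaptive schemes discussed above replace that fixed factor with one inferred from the data. Dimensions, the observation operator, error covariances and the inflation value are arbitrary, and this is not the EnKF-N itself.

```python
# Stochastic EnKF analysis step with fixed multiplicative inflation.
import numpy as np

rng = np.random.default_rng(13)
n, m, N = 40, 10, 20                                   # state dim, obs dim, ensemble size
E = rng.normal(size=(n, N))                            # forecast ensemble (columns = members)
H = np.zeros((m, n)); H[np.arange(m), np.arange(0, n, n // m)] = 1.0
R = 0.5 * np.eye(m)
y = rng.normal(size=m)

# multiplicative inflation of the forecast anomalies
infl = 1.05
x_mean = E.mean(axis=1, keepdims=True)
E = x_mean + infl * (E - x_mean)

# analysis with perturbed observations
A = E - E.mean(axis=1, keepdims=True)
Pf = A @ A.T / (N - 1)
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, N).T
E_analysis = E + K @ (Y - H @ E)
print("analysis spread:", E_analysis.std(axis=1).mean())
```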

 


 

Ayda Regueb (IFREMER): "Spatial statistical processing of extreme sea-states"


In ocean engineering, estimating the probability of occurrence of extreme sea-states is crucial to the design of Marine Renewable Energy (MRE) structures. This probability is usually estimated for a return period which is considerably longer than the period of observation, which leads to uncertain estimates at a single site. In order to reduce these uncertainties, we propose a regional statistical analysis which takes into account events observed at neighboring sites with identical or similar extreme climate. To estimate the probability of extreme significant wave heights in cyclonic areas, we use Extreme Value Theory (EVT), which models the behavior of the tail of the distribution, where extreme events are very rare. We used a synthetic database simulated with the ADCIRC-SWAN model, in which a total of 686 cyclones were isolated, corresponding to a period of 3200 years and covering a total of 127670 grid locations near Guadeloupe island. A first approach is to use the point-wise POT (Peaks Over Threshold) technique to model exceedances over a threshold. Generally, results obtained from this technique are very rough. We propose here a spatial smoothing of the fitted statistical model in order to improve the evaluation of return values. First, we need to homogenize the climate of extremes by taking into account covariate effects (seasonality, depth, wave direction, geographic position). Then, the goal is to estimate the parameters of the Generalized Pareto Distribution by means of a Maximum Penalized Likelihood Estimator (MPLE). This estimator uses cubic splines and tries to recover a smooth function which maximizes the penalized likelihood. This maximization establishes a compromise between our desire to stay close to the given data and our desire to obtain a smooth spatial evolution of the extreme climate. However, the implementation of this method faces many challenges, including the choice of the threshold, the estimation method and the smoothing constant. In particular, the choice of the smoothing constant depends on which of these two conflicting goals we accord the greater importance. The described spatial smoothing method is then compared with point-wise POT results to assess its quality and efficiency with respect to current practice.
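
A point-wise POT sketch at a single grid location: exceedances over a threshold are fitted with a Generalized Pareto Distribution and a return level is derived; the spatially smoothed, penalized-likelihood version described above then constrains these parameters to vary smoothly across locations. The wave heights, the threshold choice and the notional 3200-year duration are synthetic placeholders.

```python
# Point-wise POT / GPD fit and 100-year return level on synthetic data.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(14)
hs = rng.weibull(1.5, 20000) * 2.0               # synthetic significant wave heights (m)
threshold = np.quantile(hs, 0.98)
excess = hs[hs > threshold] - threshold

shape, loc, scale = genpareto.fit(excess, floc=0.0)
rate = excess.size / 3200.0                      # mean number of exceedances per year (nominal 3200 years)

T = 100.0                                        # return period (years)
return_level = threshold + genpareto.isf(1.0 / (rate * T), shape, loc=0.0, scale=scale)
print(f"100-year Hs at this location: {return_level:.1f} m")
```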

 


 

Stéphane Saux-Picart (Météo France): "A machine learning approach for MSG/SEVIRI SST bias estimation"


It is increasingly important for applications such as data assimilation or climate studies to have some knowledge of the uncertainties associated with the data being used. The GHRSST has long recommended that SST data producers include Single Sensor Error Statistics (SSES) within their SST products. However, there is no recommendation as to which method should be used to provide SSES. They are usually understood as the mean and standard deviation of the difference between the satellite retrieval and a reference. This work is an attempt at using advanced statistical methods of machine learning to predict the bias between Ocean and Sea Ice Satellite Application Facility (OSI SAF) Meteosat Second Generation (MSG) SST products and a ground truth taken to be drifting buoy measurements. The current OSI SAF MSG product is derived using a multilinear algorithm based on the 10.8 and 12 μm channels, to which a correction is applied in the case of high concentrations of atmospheric Saharan dust. An algorithm correction method based on radiative transfer simulations is also used to account for seasonal and regional biases. A complete description of the retrieval methodology can be found in Le Borgne et al. (2011). However, for this study, the two corrections mentioned above have been removed. This was done to simplify the interpretation of the results of statistical models for predicting the bias in the retrieved SST. Here we present the results obtained using four different statistical methods: linear regression, the Least Absolute Shrinkage and Selection Operator (LASSO), Random Forest and the Generalized Additive Model (GAM).
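
A sketch of the comparison framework: predict the SST retrieval bias from a few candidate predictors with linear regression, LASSO and a random forest (a GAM would be fitted analogously with a dedicated package). The data and predictors below are synthetic stand-ins, not the operational match-up database.

```python
# Comparing regression methods for predicting a (synthetic) SST retrieval bias.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(15)
n = 5000
X = np.column_stack([
    rng.uniform(0, 60, n),         # satellite zenith angle (deg)
    rng.uniform(0, 6, n),          # total column water vapour proxy
    rng.uniform(-2, 2, n),         # BT(10.8) - BT(12) difference (K)
])
bias = 0.02 * X[:, 0] / 60 - 0.1 * X[:, 1] + 0.15 * X[:, 2] + rng.normal(0, 0.2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, bias, random_state=0)
for name, model in [("linear", LinearRegression()),
                    ("lasso", Lasso(alpha=0.01)),
                    ("random forest", RandomForestRegressor(n_estimators=100, random_state=0))]:
    model.fit(X_tr, y_tr)
    print(f"{name:14s} R^2 = {model.score(X_te, y_te):.2f}")
```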

 


 

Xiao Tang (Institute of Atmospheric Physics, Chinese Academy of Sciences): "Estimating emission change of CO and NOx during the Victory Day military parade with an Ensemble Kalman Filter"


During the China Victory Day Parade in 2015, temporary emission control measures were implemented over Beijing and the surrounding regions to guarantee the air quality of Beijing. This offers a great opportunity to explore the abilities and limitations of top-down emission estimation. In this study, we employed an ensemble Kalman filter (EnKF) coupled with the Nested Air Quality Prediction Modeling System (NAQPMS) to establish a high temporal and spatial resolution emission inversion scheme. The scheme assimilates more than 400 surface observations of carbon monoxide (CO) and nitrogen dioxide (NO2) into a 5 km × 5 km resolution model to inversely adjust the a priori emission inventory, which is based on the Multi-resolution Emission Inventory for China (MEIC) for the base year 2010. Fifty ensemble members and an offline hourly inverse analysis were employed during the four-week inversion period. Results suggest that the inverse estimation scheme significantly reduced the biases in the a priori emission inventory. A new emission inventory was therefore obtained and served as the baseline for comparison with the inverse emission inventory during the China Victory Day Parade. The comparison between the new base emission inventory and the inverse emission inventory during the parade revealed the temporal and spatial characteristics of the emission control measures over Beijing and the surrounding areas. Significant emission reductions were found in Beijing-Tianjin-Hebei and surrounding areas. Meanwhile, NOx showed larger reductions in areas around Beijing due to more rigorous vehicle controls. This study highlights the advantages and limitations of the EnKF-based emission estimation scheme. The uncertainties related to the observation network, the sampling strategy, and meteorological errors are also discussed.
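
A highly simplified sketch of the inversion idea: an ensemble of emission scaling factors is updated from concentration observations with an EnKF-style gain. The linear "source-receptor" matrix below is a stand-in for the chemistry-transport model, and all dimensions and error statistics are invented.

```python
# EnKF-style update of emission scaling factors from concentration observations.
import numpy as np

rng = np.random.default_rng(16)
n_cells, n_obs, N = 50, 20, 50                      # emission grid cells, obs sites, members
truth_scale = np.ones(n_cells); truth_scale[:10] = 0.6   # "parade" reduction in some cells

M = rng.uniform(0, 1, (n_obs, n_cells))             # surrogate source-receptor matrix
obs = M @ truth_scale + rng.normal(0, 0.1, n_obs)   # observed concentrations

E = np.clip(rng.normal(1.0, 0.3, (n_cells, N)), 0.1, None)   # prior scaling-factor ensemble
Y = M @ E                                           # simulated concentrations per member

A = E - E.mean(axis=1, keepdims=True)
HA = Y - Y.mean(axis=1, keepdims=True)
R = 0.1**2 * np.eye(n_obs)
K = (A @ HA.T / (N - 1)) @ np.linalg.inv(HA @ HA.T / (N - 1) + R)
E_post = E + K @ (obs[:, None] + rng.normal(0, 0.1, (n_obs, N)) - Y)

print("prior mean scale (controlled cells):", round(E.mean(axis=1)[:10].mean(), 2),
      "posterior:", round(E_post.mean(axis=1)[:10].mean(), 2))
```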

 


 

Christos Tsalis (University of Athens): "Non-stationary processes: Application to wind speed design values"


Several approaches for the estimation of extreme values of random quantities have been developed, the most fundamental of which is considered to be the block maxima (BM) approach, which is closely related to the generalized extreme value (GEV) distribution. One of the important issues in describing the GEV distribution is the estimation of the relevant distributional parameters. Specifically, for a stationary process the probability characteristics do not change systematically in time, while for non-stationary processes the parameters of the GEV distribution are allowed to vary through time. When studying environmental processes, non-stationarity is often necessary due to seasonal effects or longer-term climate change. When incorporating non-stationarity into the GEV parameters, the points of concern are which time-dependent model to select, which characteristics the parametric model must satisfy in order to adequately describe the extreme events, and whether all parameters truly vary in time. In this work, an assessment of various parametric models for the description of the GEV parameters is made, considering linear, quadratic and cubic trends through time (for the location and scale parameters) and a time-independent model for the shape parameter. The most common method used for the estimation of the time-dependent GEV parameters is the maximum likelihood (ML) method. The likelihood-ratio test, the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) are used for comparing the relative quality of the selected statistical models. Effective return levels for specific values of the covariates are estimated and compared with the stationary case. For this purpose, the non-parametric Mann-Kendall (M-K) test will be implemented using observational data from NOAA's buoy network. The aim of the M-K test is to examine the hypothesis that there is a monotonic upward or downward trend in the associated model variable. The benefit of modeling trends in the parameters of the distribution of the extremes is that the original data no longer have to be de-trended and can be used directly. This study is expected to contribute to practical applications in meteorology, ocean and coastal engineering and climate change, given the lack of a universal robust theory for working with extremes when the stochastic process is non-stationary.
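
As a sketch of the simplest non-stationary case mentioned above, the snippet below fits a GEV whose location parameter has a linear trend in time by minimizing the negative log-likelihood, and compares its AIC with the stationary fit. The block maxima are synthetic, and the parameterization is only one of the candidate models discussed in the abstract.

```python
# Non-stationary GEV fit with a linear trend in the location parameter.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(17)
years = np.arange(50)
maxima = genextreme.rvs(-0.1, loc=20 + 0.05 * years, scale=2.0, random_state=1)

def neg_loglik(params):
    mu0, mu1, log_scale, shape = params
    mu = mu0 + mu1 * years                       # linear trend in the location parameter
    return -np.sum(genextreme.logpdf(maxima, shape, loc=mu, scale=np.exp(log_scale)))

fit = minimize(neg_loglik, x0=[maxima.mean(), 0.0, np.log(maxima.std()), -0.1],
               method="Nelder-Mead")
aic_nonstat = 2 * 4 + 2 * fit.fun

c, loc, scale = genextreme.fit(maxima)           # stationary fit for comparison
aic_stat = 2 * 3 - 2 * genextreme.logpdf(maxima, c, loc=loc, scale=scale).sum()

print("AIC stationary:", round(aic_stat, 1), " AIC non-stationary:", round(aic_nonstat, 1))
print("estimated trend in location (per year):", round(fit.x[1], 3))
```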

 


 

Chen Wang (IFREMER): "Preliminary analysis of Sentinel-1A SAR images for marine atmospheric phenomena study"


Marine atmospheric phenomena (MAP) modify the local roughness of the ocean surface, so that they can be observed in high-resolution Synthetic Aperture Radar (SAR) images. The Sentinel-1A wave mode provides global acquisitions over the open ocean, which contain many sorts of MAP, e.g., wind streaks, the atmospheric boundary layer, rain cells, atmospheric gravity waves, atmospheric fronts and cold-air outbreaks. These SAR images, with a swath width of 20 km and 5 m resolution, give us an opportunity to study the statistical characteristics of MAP globally, and may further benefit the development of high-resolution atmospheric boundary layer models. In addition, the influence of MAP on ocean waves can also be investigated through SAR measurements. Therefore, how to discriminate between SAR images with and without MAP becomes a top priority. As a first step, the normalized variance of the smoothed sigma-nought (Nvs) is exploited to describe the homogeneity of Sentinel-1A wave mode SAR images. To better interpret the global distribution of Nvs, ancillary data such as wind speed, wind divergence, wind curl and rain rate are examined qualitatively. We find that large Nvs values correspond to heterogeneous images and mainly result from atmospheric motions. Thus, with this parameter, heterogeneous and homogeneous images can be roughly distinguished.
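
The sketch below computes an image-homogeneity measure in the spirit of Nvs: the sigma-nought image is smoothed and the variance of the smoothed image is normalized by its squared mean. The exact operational definition may differ, and the "images" here are synthetic speckle-like fields, one with an injected rain-cell-like feature.

```python
# Normalized variance of a smoothed backscatter image as a homogeneity measure.
import numpy as np
from scipy.ndimage import uniform_filter

def nvs(sigma0, box=25):
    smoothed = uniform_filter(sigma0, size=box)       # remove speckle-scale variability
    return smoothed.var() / smoothed.mean() ** 2

rng = np.random.default_rng(18)
homogeneous = rng.gamma(4.0, 0.25, (512, 512))        # speckle-like image without MAP
rain_cell = homogeneous.copy()
yy, xx = np.mgrid[:512, :512]
rain_cell *= 1 + 0.8 * np.exp(-((yy - 256) ** 2 + (xx - 256) ** 2) / (2 * 60.0 ** 2))

print("Nvs homogeneous:", round(nvs(homogeneous), 4))
print("Nvs with rain-cell-like feature:", round(nvs(rain_cell), 4))
```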

 


 

Huangjian Wu (Institute of Atmospheric Physics, Chinese Academy of Sciences): "An automated quality control method for surface air quality observations in China"


With increasing concern about air quality and advances in sensor technologies, the amount of air quality observational data has been increasing rapidly. In the past, quality control was conducted manually. However, manual control is becoming more challenging due to the increasing data size, and it is even more difficult to provide quality assurance for real-time publication and data assimilation. By analyzing the features of anomalous observation data from 2014 to 2016 in China, we present a fully automated quality control system for air quality observations of PM2.5, PM10, SO2, NO2, CO and O3. Most of the anomalies can be identified by the spatio-temporal continuity check, which uses a low-pass filter and the index of agreement to calculate estimates based on continuity in time and space. The variances of the estimation errors are calculated using a sliding window, so that the confidence interval can automatically adjust for different periods, regions and pollutants. To address the limitations of the spatio-temporal continuity check, we design additional algorithms to reduce false removals resulting from drastic concentration changes, and to recognize abnormal observations that appear regularly or remain unchanged. Applying this method to three years of monitoring from the Chinese ambient air quality monitoring network, the deleted abnormal data account for about 1% of the original data. There are significant differences in the annual average values and diurnal variations between the data sets with and without quality control at some stations.
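
A simplified sketch of a spatio-temporal continuity check: each hourly value is compared against a temporal estimate and a spatial-neighbour estimate, and flagged when the residual exceeds an adaptive threshold derived from a sliding-window standard deviation. The window lengths, the 4-sigma threshold and the data are illustrative, not the operational settings of the system described above.

```python
# Spatio-temporal continuity check on synthetic hourly station data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(19)
hours = pd.date_range("2016-01-01", periods=24 * 30, freq="h")
base = 50 + 20 * np.sin(np.arange(hours.size) * 2 * np.pi / 24)     # shared diurnal cycle
stations = pd.DataFrame({f"s{i}": base + rng.normal(0, 5, hours.size) for i in range(5)},
                        index=hours)
stations.iloc[300, 0] += 200                                        # inject a spike at station s0

target = stations["s0"]
temporal_est = target.rolling(window=7, center=True, min_periods=3).median()
spatial_est = stations.drop(columns="s0").mean(axis=1)              # neighbouring stations
residual = target - 0.5 * (temporal_est + spatial_est)

sigma = residual.rolling(window=24 * 7, min_periods=24).std()       # adaptive, sliding-window spread
flags = residual.abs() > 4 * sigma
print("flagged values:\n", target[flags])
```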