Completed Projects
Modern power grids are exposed to increasing levels of uncertainty due to the integration of intermittent renewable energy sources. Sensitivity analysis is a crucial component for quantifying this uncertainty and thus increasing the security of grid operations [1, 2]. In this project, new modules will be implemented within the Verified Exascale Computing for Multiscale Applications (VECMA) toolkit [3]. These modules will enable sensitivity analysis of applications with correlated input parameters, which is common practice in power system applications. Validation of the new modules rests on two components: (i) analyzing the convergence and error of the VECMA toolkit using the new modules, and (ii) a qualitative comparison of the two developed methods for addressing the correlation. The computation comprises thousands of simultaneous, independent model evaluations and can therefore scale to an arbitrary number of computing cores on supercomputing systems. We will use pilot-job mechanisms and meta-schedulers to manage and execute these independent processes.
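As a minimal sketch of the computational pattern involved, the snippet below draws correlated input samples via a Cholesky factor and evaluates a model on them in parallel, the way a pilot job would farm out independent evaluations. The toy model, correlation matrix, and sample size are illustrative assumptions, not part of the VECMA toolkit itself.

```python
# Minimal sketch: sensitivity analysis with correlated inputs.
# The toy model and correlation structure are illustrative placeholders,
# not the VECMA toolkit implementation.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def model(x):
    # Placeholder for an expensive power-grid simulation.
    return x[0] ** 2 + 0.5 * x[0] * x[1] + np.sin(x[1])

def correlated_samples(n, mean, cov, seed=0):
    # Draw correlated Gaussian inputs via the Cholesky factor of cov.
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov)
    return mean + rng.standard_normal((n, len(mean))) @ L.T

if __name__ == "__main__":
    cov = np.array([[1.0, 0.6], [0.6, 1.0]])   # assumed input correlation
    samples = correlated_samples(10_000, np.zeros(2), cov)
    # Independent evaluations: embarrassingly parallel, as a pilot job runs them.
    with ProcessPoolExecutor() as pool:
        outputs = np.fromiter(pool.map(model, samples, chunksize=256), dtype=float)
    print("output mean:", outputs.mean(), "output variance:", outputs.var())
```

Because every evaluation is independent, the same pattern scales from a local process pool to pilot-job managers and meta-schedulers on a supercomputer.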
Efficient energy trading relies on high-fidelity price prediction systems with short response times. This project will advance state-of-the-art computational tools for market pricing mechanisms that meet the real-time response requirements of smart power grid analysis.
Progress in modern computing platforms and storage systems, electronic devices, and monitoring equipment has resulted in exponential growth in the volume of data produced in several areas of science and engineering. These areas include the environmental sciences, biology and medicine, satellite imaging, geospatial data, climate data, and transaction data, among many others. Data processing commonly employs sophisticated statistical methods that aim to deepen understanding of the mechanisms governing the underlying physical processes and to improve statistical models. Statistical analysis of such models has traditionally been carried out using Markov chain Monte Carlo (MCMC) methods, which can represent complex dependency structures in data. MCMC methods provide a relatively simple approach to computing large hierarchical models that require integration over several thousands of unknown parameters. Although MCMC methods are asymptotically exact, they converge slowly, do not scale well, and may fail for some complex models. It was soon realized that MCMC would not be able to meet modern and future big-data challenges.

In particular, we need to focus on extending the RINLA software ecosystem by advancing direct sparse linear solvers designed for Bayesian inference computations. The sparse matrix algorithms and software implementations will be co-designed with data science applications in mind. By combining accelerated matrix algorithms and Bayesian inference at large scale, we plan to develop an algorithmic tool that serves as part of a virtual laboratory for spatial and spatio-temporal models. This will pave the way for the next generation of data science applications based on INLA, in ways not possible before. Our goal is to make our software ecosystem as productive and sustainable as possible by focusing on algorithmic improvements that increase quality and speed while, at the same time, evaluating the potential benefits in various data science applications. This research project will therefore focus on solving the fundamental challenges posed by large-scale analytics, deep analysis, and precise predictions by advancing and preparing the foundation for the next generation of RINLA.
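INLA-style inference hinges on factorizing large sparse precision matrices of latent Gaussian fields. The sketch below shows such a sparse-solver kernel using SciPy; the first-order random-walk prior and the observation precision are illustrative assumptions, not the RINLA implementation.

```python
# Minimal sketch of the sparse linear algebra at the core of INLA-style
# inference: factorize a sparse precision matrix, obtain its log-determinant,
# and solve for the posterior mean. The random-walk prior and noise level
# below are illustrative assumptions.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 2000
# Precision of a first-order random walk (tridiagonal, hence very sparse).
Q_prior = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
                   offsets=[-1, 0, 1], format="csc")
tau = 4.0                                   # assumed observation precision
y = np.sin(np.linspace(0, 6, n)) + np.random.default_rng(1).normal(0, 0.5, n)

# Posterior precision and mean of the latent field x | y:
#   Q_post = Q_prior + tau * I,   Q_post @ mu = tau * y
Q_post = (Q_prior + tau * sp.identity(n, format="csc")).tocsc()
lu = splu(Q_post)                           # sparse LU factorization
mu = lu.solve(tau * y)                      # posterior mean

# log|Q_post| from the factor diagonals (needed for marginal likelihoods).
logdet = (np.sum(np.log(np.abs(lu.U.diagonal())))
          + np.sum(np.log(np.abs(lu.L.diagonal()))))
print("posterior mean computed; log-determinant =", logdet)
```

The factorization and log-determinant are exactly the operations whose scalability this project targets: in real applications the precision matrix has millions of rows, and the fill-reducing orderings and accelerated solvers replace the simple LU call used here.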
In this research project, we plan to analyze possible economic responses to climate change in a heterogeneous-agent, multi-region, stochastic general equilibrium model. Climate change, as well as carbon taxation, will have drastically different effects on aggregate production and consumption across regions. Moreover, a lack of international risk sharing, together with high migration costs, implies that the predicted global warming can have much larger adverse effects than a single-region model would suggest. An obvious policy response to global warming is a carbon tax, which will naturally hurt some individuals and help others. We plan to compute the optimal carbon tax over time, as well as the region- and cohort-specific side payments needed to make carbon taxation a global, that is, inter-temporal and inter-regional, win-win. To do so in an accurate quantitative fashion, we will need to (i) develop large-scale, multi-region dynamic stochastic economic models with overlapping generations that incorporate state-of-the-art climate physics, and (ii) develop high-performance computing codes capable of solving such models on a human time scale. An essential aspect of the research project is to develop economic models that can help us understand how researchers and society can tackle the significant uncertainties associated with climate change. In this context, we also plan to address the question of how new financial assets or new forms of social insurance can help to share climate risks and mitigate climate uncertainties.

Our project lies at the intersection of economics, climate science, and computational science. The main questions we ask are economic questions. However, to model climate change appropriately, and in particular to quantify regional differences and the uncertainties associated with climate change, we need to engage closely with the climate modeling community. To compute the effects of taxes and climate risks on individuals' welfare, we plan to develop a modular code framework, with one module to model the evolution of the climate, one module that links changes in climate to economic damages, and one module that solves for prices and quantities in the economy. For this, we need to interact closely with the computational science community.
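As a hedged illustration of that modular structure, the sketch below wires three placeholder modules together. The linear carbon-climate response and the quadratic damage function are deliberately simple textbook assumptions for illustration, not the models the project will ultimately use.

```python
# Illustrative sketch of the three-module framework described above.
# The linear temperature response and quadratic damage function are
# deliberately simple placeholder assumptions.
from dataclasses import dataclass

@dataclass
class ClimateModule:
    tcre: float = 0.0018  # assumed warming (deg C) per GtC of cumulative emissions
    def temperature(self, cumulative_emissions: float) -> float:
        return self.tcre * cumulative_emissions

@dataclass
class DamageModule:
    coeff: float = 0.0023  # assumed quadratic damage coefficient
    def output_loss(self, temperature: float) -> float:
        # Fraction of gross output lost at a given warming level.
        return self.coeff * temperature ** 2

@dataclass
class EconomyModule:
    gross_output: float = 100.0  # stylized regional output
    def net_output(self, loss_fraction: float, tax_revenue: float = 0.0) -> float:
        return self.gross_output * (1.0 - loss_fraction) + tax_revenue

# Wiring the modules: emissions -> temperature -> damages -> net output.
climate, damages, economy = ClimateModule(), DamageModule(), EconomyModule()
T = climate.temperature(cumulative_emissions=2500.0)   # GtC, illustrative
print("net output:", economy.net_output(damages.output_loss(T)))
```

Keeping the climate, damage, and economy components behind narrow interfaces like these lets each module be swapped for a higher-fidelity version, or parallelized, without touching the others.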
At the turn of the 21st century, scientists came to realise that a major ingredient in many modern economic, epidemiological, ecological, and biological questions is understanding the network structure of the entities they study; for example, interbank lending is crucial for oiling the global economy, and modern transport networks facilitate the spread of infectious diseases. Unfortunately, even in the era of big data, computational bottlenecks have meant that only the simplest analyses have been applied to these large datasets, while methodological bottlenecks have prevented an integrative view of complex phenomena. In short, inferring and analyzing complex networks has proven extremely difficult. Rather than simplifying the methodology before seeing the data, modern techniques from high-dimensional inference allow the data to select the appropriate level of complexity. The aim of this project is to integrate these techniques into the field of network analysis.
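As one concrete instance of letting the data choose the level of complexity, penalized high-dimensional estimators such as the graphical lasso infer a sparse network of conditional dependencies, with the sparsity level selected by cross-validation. The sketch below uses scikit-learn's GraphicalLassoCV on synthetic data; it illustrates the general technique, not the specific methods this project will develop.

```python
# Sketch: inferring a sparse dependency network with the graphical lasso,
# letting cross-validation pick the penalty (i.e., the model complexity).
# Synthetic data only; illustrative of the technique, not the project's methods.
import numpy as np
from sklearn.covariance import GraphicalLassoCV
from sklearn.datasets import make_sparse_spd_matrix

rng = np.random.default_rng(0)
precision = make_sparse_spd_matrix(20, alpha=0.95, random_state=0)
samples = rng.multivariate_normal(np.zeros(20), np.linalg.inv(precision), size=500)

model = GraphicalLassoCV().fit(samples)    # CV selects the sparsity penalty
edges = np.abs(model.precision_) > 1e-6    # nonzero partial correlations = edges
np.fill_diagonal(edges, False)
print("estimated edges:", edges.sum() // 2, "chosen alpha:", model.alpha_)
```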
People
Statistical Computing Laboratory (Prof. Wit)