Master of Computational Science Projects

MSc internship in industry

Positions are regularly available for an internship as part of the MSc program (CSL) at ING Bank, E&Y, Ortec Finance, ABN AMRO Bank, KPMG, and others. Topics range from Monte Carlo and PDE-based methods for the pricing and risk measurement of derivatives to network and advanced analytics methods for liquidity, systemic risk, and trading algorithms.
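
As a flavour of the Monte Carlo pricing topics, here is a minimal sketch (all parameter values are illustrative) that prices a European call under Black-Scholes dynamics by simulating terminal asset prices:

```python
import numpy as np

def mc_european_call(s0, strike, rate, sigma, maturity, n_paths=100_000, seed=0):
    """Monte Carlo price of a European call under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal asset price under the risk-neutral measure.
    s_t = s0 * np.exp((rate - 0.5 * sigma**2) * maturity
                      + sigma * np.sqrt(maturity) * z)
    payoff = np.maximum(s_t - strike, 0.0)
    # Discounted expected payoff; the standard error indicates MC accuracy.
    price = np.exp(-rate * maturity) * payoff.mean()
    stderr = np.exp(-rate * maturity) * payoff.std(ddof=1) / np.sqrt(n_paths)
    return price, stderr

price, err = mc_european_call(s0=100.0, strike=105.0, rate=0.01,
                              sigma=0.2, maturity=1.0)
print(f"call price ~ {price:.3f} +/- {err:.3f}")
```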

Keywords : Computational Finance,

Track : Computational Finance

Contact 1 : Drona Kandhai b.d.kandhai@uva.nl

Valid Date : Dec. 31, 2019

What is the role of vasoconstriction in initial platelet aggregation?

Our blood vessels contain smooth muscle cells (oriented circumferentially) that regulate the diameter of the blood vessel, and in this way the blood pressure in our body. Regulation of the vessel diameter is also important when a vessel is damaged: to reduce blood loss, the smooth muscle cells contract, a process called vasoconstriction. This contraction of the vessel is part of hemostasis, the process of blood clot formation. Vasoconstriction also causes an increase in shear rate, which could be of value during initial platelet aggregation: under high shear rates a protein called von Willebrand factor is important in the binding of platelets to the vessel wall and to each other. In our current study we observed a cell-free layer (CFL) at the site where platelet aggregation is initiated, and we hypothesize that this CFL is needed for the von Willebrand factor to uncoil and bind platelets. In this study we are therefore interested in the relationship between the amount of vessel constriction and the thickness of the CFL. In addition, we want to know whether vasoconstriction is important only in hemostasis or also in thrombus formation. During this project you will work on this topic with the cell-based model Hemocell. If you are interested, please contact Alfons Hoekstra or Britt van Rooij.
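
As a rough quantitative illustration of why constriction raises the shear rate (an idealised Poiseuille estimate, not part of Hemocell itself): at fixed volumetric flow rate Q the wall shear rate scales as 4Q/(pi R^3), so halving the radius increases it eightfold.

```python
import numpy as np

def wall_shear_rate(flow_rate, radius):
    """Wall shear rate of Poiseuille flow in a cylindrical vessel: 4Q / (pi R^3)."""
    return 4.0 * flow_rate / (np.pi * radius**3)

q = 1.0e-9    # volumetric flow rate in m^3/s (illustrative)
r0 = 100e-6   # resting vessel radius: 100 micrometres (illustrative)
for constriction in (0.0, 0.25, 0.5):  # fractional reduction of the radius
    r = r0 * (1.0 - constriction)
    print(f"{constriction:4.0%} constriction -> "
          f"wall shear rate {wall_shear_rate(q, r):9.1f} 1/s")
```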

Keywords : Computational Biomedicine,

Track : Computational Bio-Medicine

Contact 1 : Alfons Hoekstra A.G.Hoekstra@uva.nl
Contact 2 : Britt van Rooij b.j.m.vanrooij@uva.nl

Valid Date : Dec. 31, 2019

Shear Thickening Simulations Using Model Cornstarch Particles

Shear thickening is the phenomenon in which the viscosity of a suspension increases with the shear rate. Two types of shear thickening are observed in experiments: continuous shear thickening (CST) and discontinuous shear thickening (DST). During CST the viscosity of the suspension increases continuously with the imposed shear rate, while in DST the viscosity increases by orders of magnitude with a small increment in shear rate. A cornstarch and water suspension, commonly known as Oobleck, exhibits both: CST at lower cornstarch volume fractions, and DST at higher volume fractions (~0.4). The DST volume fraction for cornstarch suspensions is significantly lower than that for spherical particle suspensions (~0.56), which has been attributed to the non-spherical shapes of cornstarch particles. Our suspension simulation model (SuSi) can handle different particle shapes and simulate large numbers of particles immersed in any fluid medium. Using the system currently implemented in SuSi, one can create almost any shape from overlapping spheres, and this technique will be used to model the shapes of cornstarch particles. In this project, we propose to model the cornstarch particles by observing their shapes in microscopic images. The shear thickening that arises from the cornstarch particle model will then be studied using SuSi.
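
A minimal sketch of the overlapping-sphere idea (the class and its fields are illustrative, not SuSi's actual API): a non-spherical particle is described by a set of sphere centres and radii in the particle's local frame.

```python
import numpy as np

class CompositeParticle:
    """A rigid particle built from overlapping spheres (centres in the local frame)."""
    def __init__(self, centres, radii):
        self.centres = np.asarray(centres, dtype=float)  # shape (n, 3)
        self.radii = np.asarray(radii, dtype=float)      # shape (n,)

    def contains(self, point):
        """A point is inside the particle if it lies inside any constituent sphere."""
        d = np.linalg.norm(self.centres - point, axis=1)
        return bool(np.any(d <= self.radii))

# A crude, illustrative 'cornstarch-like' irregular particle: a core sphere
# with smaller overlapping bumps, mimicking shapes seen in microscope images.
particle = CompositeParticle(
    centres=[[0, 0, 0], [0.6, 0, 0], [-0.4, 0.5, 0], [0, -0.5, 0.4]],
    radii=[1.0, 0.5, 0.4, 0.45],
)
print(particle.contains(np.array([1.0, 0.0, 0.0])))  # True: inside the first bump
```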

Keywords : shear thickening, microstructure, simulation, modeling,

Track : Others

Contact 1 : Alfons Hoekstra A.G.Hoekstra@uva.nl
Contact 2 : Vishnu Sivadasan v.sivadasan@uva.nl

Valid Date : Dec. 31, 2019

Holographic projector

When mentioning the word 'hologram', many people think of 'Star Wars'-like technology that creates projections floating in free space. This is actually not too far off from what is theoretically possible; in reality, for viewing, the projector, the projection, and the observer must be in line. In daily life, holograms are static images that can be found in some museums and exhibitions. By making use of the wave-like nature of light, by means of interference, it is possible to create intensity distributions in free space. A holographic projector can be made with a 'screen' in which each 'pixel' emits light with precisely set intensity and phase. The light from all these pixels results in a three-dimensional intensity distribution that represents the object to be projected.

A difficulty in generating holograms is the required dense pixel pitch. A 'conventional' holographic projector would require pixels of less than 200 nanometres. With larger pixels, artefacts occur due to spatial undersampling, and the useful angle for projection becomes smaller (only a few degrees for state-of-the-art systems). A pixel pitch of 200 nm or less is difficult, if not impossible, to achieve, especially over larger areas. Another challenge is the massive amount of computing power and data handling that would be required to control such a dense pixel matrix.

A new holographic projection method has been developed that reduces 'conventional' undersampling artefacts, regardless of pixel density. The trick is to create 'pixels' at random but known positions, resulting in a 'pixel matrix' that lacks (or strongly suppresses) spatial periodicity. As a result, a holographic emitter can be built with a significantly lower sample density and less required computing power. This could bring holography within reach for many applications such as display, lithography, 3D printing, and metrology. A short description (in Dutch) can be found through the following link: https://www.nikhef.nl/nieuws/prototype-holografische-videoprojector-in-de-maak/

Our goal is to quantify the relation between hologram 'quality' and the density and positioning of the 'pixels'. The quality of a hologram is defined by factors such as noise, contrast, suppression of undersampling artefacts, and resolution. One of the challenges a student can work on is to make a proper simulation model of this system. The programming language and the modelling method(s) depend, to a large extent, on the preference of the student(s). The purpose of this simulation model is to determine the requirements of a holographic projector for a given set of requirements for the holograms (for example, a contrast ratio better than 1:100). In addition, a proof-of-concept holographic emitter is being built. This set-up will be used to verify simulation results (and, of course, to project some cool holograms). If you want additional information, please contact Martin Fransen: martinfr<at>nikhef.nl
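
A minimal sketch of the underlying physics (wavelength, aperture, and emitter count are illustrative assumptions): each 'pixel' contributes a spherical wave, and choosing emitter phases so the waves arrive in phase at a target point produces a bright spot there. Placing the emitters at random rather than on a grid is exactly the sampling question this project addresses.

```python
import numpy as np

wavelength = 500e-9                  # green light (illustrative)
k = 2 * np.pi / wavelength
rng = np.random.default_rng(1)

# Emitters at random but known positions in a 1 mm x 1 mm aperture at z = 0.
n_emitters = 2000
emitters = np.column_stack([
    rng.uniform(-0.5e-3, 0.5e-3, n_emitters),
    rng.uniform(-0.5e-3, 0.5e-3, n_emitters),
    np.zeros(n_emitters),
])

target = np.array([0.0, 0.0, 0.1])   # focus 10 cm in front of the aperture
# Set each emitter's phase so its wave arrives at the target with zero phase.
dist_to_target = np.linalg.norm(emitters - target, axis=1)
phases = -k * dist_to_target

def intensity(point):
    """Coherent sum of spherical waves from all emitters at a field point."""
    r = np.linalg.norm(emitters - point, axis=1)
    field = np.sum(np.exp(1j * (k * r + phases)) / r)
    return np.abs(field) ** 2

on_axis = intensity(target)
off_axis = intensity(target + np.array([50e-6, 0, 0]))
print(f"contrast on/off axis: {on_axis / off_axis:.1f}")
```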

Keywords : High Performance Computing, Computational geometry, Image Processing, Monte Carlo simulation, modeling,

Track : Others

Contact 1 : Michael Lees M.H.Lees@uva.nl

Valid Date : Sept. 1, 2019

Network Modelling in Finance

In the past few years, our group has been involved in several theoretical and applied research projects involving network modelling in finance. As a follow-up, we would like to further explore network reconstruction algorithms and early-warning signals.
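
As one concrete illustration of network reconstruction (a standard fitness-model approach from the literature, not necessarily the group's own method): given only each node's total exposure (strength), link probabilities can be estimated as p_ij = z s_i s_j / (1 + z s_i s_j), with z calibrated to a target link density.

```python
import numpy as np
from scipy.optimize import brentq

def reconstruct_link_probs(strengths, target_links):
    """Fitness-model reconstruction: p_ij = z*s_i*s_j / (1 + z*s_i*s_j)."""
    s = np.asarray(strengths, dtype=float)
    i, j = np.triu_indices(len(s), k=1)

    def expected_links(z):
        p = z * s[i] * s[j] / (1.0 + z * s[i] * s[j])
        return p.sum() - target_links

    z = brentq(expected_links, 1e-12, 1e12)  # calibrate z to the target density
    p = np.zeros((len(s), len(s)))
    p[i, j] = p[j, i] = z * s[i] * s[j] / (1.0 + z * s[i] * s[j])
    return p

# Illustrative: 5 banks with known total exposures, ~6 expected interbank links.
probs = reconstruct_link_probs([10, 7, 5, 2, 1], target_links=6)
print(np.round(probs, 2))
```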

Keywords : Big Data, Agent Based Model, Machine Learning, Information Theory, Economics, Risk Management, Network-based modelling, Computational Finance,

Track : Computational Finance

Contact 1 : Sumit Sourabh s.sourabh@uva.nl
Contact 2 : Drona Kandhai b.d.kandhai@uva.nl
Contact 3 : Ioannis Anagnostou i.anagnostou@uva.nl

Valid Date : Dec. 31, 2019

Agent Based Models in Economics and Finance

Agent Based Models have been successfully applied to understand macro behaviour in financial markets as the result of their micro-level constituents and their interactions. More recently, such models have been developed by the Medical University of Vienna. We would like to understand and extend these methods to model prepayment risk in retail lending.
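
A minimal agent-based sketch of prepayment behaviour (the decision rule and all parameters are illustrative assumptions, not the Vienna models): each borrower prepays each month with a probability that grows with the gap between their contract rate and the prevailing market rate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents = 10_000
contract_rate = rng.normal(0.04, 0.01, n_agents)  # fixed mortgage rates (illustrative)
active = np.ones(n_agents, dtype=bool)

def prepay_probability(incentive, base=0.005, slope=2.0):
    """Monthly prepayment probability rising with the rate incentive (assumed form)."""
    return np.clip(base + slope * np.maximum(incentive, 0.0), 0.0, 1.0)

market_rates = np.linspace(0.04, 0.02, 24)  # market rates fall over 24 months
for month, market_rate in enumerate(market_rates):
    incentive = contract_rate - market_rate   # refinancing gain per agent
    prepaying = active & (rng.random(n_agents) < prepay_probability(incentive))
    active &= ~prepaying
    if month % 6 == 0:
        print(f"month {month:2d}: {active.mean():.1%} of mortgages still active")
```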

Keywords : Agent Based Model, Economics, Network-based modelling, Computational Finance,

Track : Computational Finance

Contact 1 : Drona Kandhai b.d.kandhai@uva.nl

Valid Date : Dec. 31, 2019

Surrogate modelling of damage stability of ships

This internship position is offered by the SARC.nl company, which develops models and software tools for ship design, fairing, and on-board load calculations. ABSTRACT: The ship design process is governed by computation-intensive analyses. Well known are CFD and FEM, but there are also specific topics, such as damage stability analysis. At SARC, 3D models and simulation software are available for this task. However, a problem is that the damage stability properties cannot be assessed at an early stage of ship design, because modelling the ship and making the computations are too time-consuming. The aim of this project is therefore to develop a surrogate model for damage stability prediction at the early stage of ship design, by employing existing 3D computational models and machine learning methods.
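
A minimal sketch of the surrogate idea, with made-up ship parameters and a placeholder in place of SARC's real damage stability computation: fit a fast regressor on a sample of expensive simulation runs, so that early-stage designs can be assessed instantly.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Hypothetical early-stage design parameters: length, beam, draught (metres).
X = rng.uniform([80, 12, 4], [200, 32, 12], size=(60, 3))

def expensive_damage_stability(x):
    """Placeholder for the real SARC computation (a made-up smooth response)."""
    length, beam, draught = x.T
    return beam / draught + 0.01 * length + rng.normal(0, 0.05, len(x))

y = expensive_damage_stability(X)

# Fit a Gaussian-process surrogate on the sampled designs.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X, y)

# Instant prediction (with uncertainty) for a new design, no full simulation needed.
mean, std = gp.predict([[150, 24, 8]], return_std=True)
print(f"predicted stability index: {mean[0]:.2f} +/- {std[0]:.2f}")
```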

Keywords : Machine Learning, simulation, modeling, FEM,

Track : Others

Contact 1 : Valeria Krzhizhanovskaya V.Krzhizhanovskaya@uva.nl
Contact 2 : Herbert Koelman H.J.Koelman@sarc.nl

Valid Date : Aug. 31, 2019

Decision-support for patient after-care choices for burn wounds

The Brandwondenstichting (brandwondenstichting.nl, the Dutch Burns Foundation) has a large amount of longitudinal patient data, from the moment of injury throughout hospitalization and further health care. During after-care, patients still have to make many decisions. For instance, will the patient choose plastic surgery to reduce scar tissue, and if so, which type? This is a complex decision with many factors at play, such as physiology, but also what the patient finds important, whether or not it creates discomfort, the risks associated with the surgery, and what is and is not achievable. The goal of this project is to analyze the data and create an optimal decision-tree model, based on the data as well as on expert opinions. The idea is to let the patient answer questions which allow the system to find 'similar' patients, who can then be used to provide historical evidence for the patient's choice. The outcome of this tool will be discussed between the patient and a doctor. The deliverable of this project is a proof of principle, which will subsequently be implemented.
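
A minimal sketch of the 'similar patients' retrieval, assuming made-up clinical features (the real registry attributes would differ): standardise the features, then fetch the k nearest historical patients for a new patient's answers.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical historical records: [age, burn surface %, scar size cm^2].
historical = rng.uniform([18, 1, 1], [80, 40, 200], size=(500, 3))

scaler = StandardScaler().fit(historical)  # put features on a comparable scale
index = NearestNeighbors(n_neighbors=10).fit(scaler.transform(historical))

new_patient = np.array([[45, 12, 60]])
dist, idx = index.kneighbors(scaler.transform(new_patient))
print("indices of the 10 most similar historical patients:", idx[0])
```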

Keywords : Machine Learning, modeling,

Track : Complex Systems Theory

Contact 1 : Rick Quax r.quax@uva.nl

Valid Date : June 29, 2019

Predictive Simulation of Punctuality in Public Transport

Influencing punctuality is often a difficult task, as many agents influence this outcome. With this project we aim to move from result-driven indicators to performance-driven indicators. Punctuality is therefore seen as a complex system with dynamic agents, whose influence may be positive or negative. The focus is on understanding these factors as well as on using them for predictive capability. The predictive emphasis is to help steer the KPI decision-makers in taking the right, preemptive measures to ensure the ultimate goal of public transportation: optimal punctuality.

In this project, the student will look at the system dynamics behind the capability of a public transport vehicle to meet optimal punctuality. The student will investigate two main points:
(1) Detecting outliers and the influencing factors: develop a formula to detect and describe outliers (see the sketch below); create a dynamic table showing the relevant factors playing an active role in punctuality.
(2) Simulation of the system dynamics: map the feedback loops and key modifiers; visualize using simulation software (e.g. MATSim); calculate the impact of a decision on punctuality.

The goal is to produce a working predictive tool/module that can simulate punctuality starting from predefined constraints (e.g. traffic, ridership, etc.). Data sources: vehicle data from public transport companies (e.g. ridership, maintenance, planning, etc.) and publicly available data sources (e.g. weather, traffic, etc.). Contact: Mert Dekkers m.dekkers@zight.nl
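
One possible starting point for point (1) above (the data and thresholds are illustrative assumptions, not the project's prescribed method): flag delays that fall outside a per-stop Tukey fence.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical delay observations (seconds) for a few stops on one line.
df = pd.DataFrame({
    "stop": rng.choice(["A", "B", "C"], size=1000),
    "delay_s": rng.normal(60, 30, 1000).round(),
})
df.loc[rng.choice(len(df), 15), "delay_s"] += 600  # inject some genuine outliers

def iqr_outliers(group, k=1.5):
    """Tukey fence: outlier if outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = group["delay_s"].quantile([0.25, 0.75])
    iqr = q3 - q1
    return group[(group["delay_s"] < q1 - k * iqr) | (group["delay_s"] > q3 + k * iqr)]

outliers = df.groupby("stop", group_keys=False).apply(iqr_outliers)
print(f"{len(outliers)} outlying delays detected across {df['stop'].nunique()} stops")
```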

Keywords : Mobility, Smart City, Big Data, Transportation, Agent Based Model,

Track : Complex Systems Theory, Urban Complex System

Contact 1 : Valeria Krzhizhanovskaya V.Krzhizhanovskaya@uva.nl

Valid Date : Aug. 31, 2019

Care after burn wounds: shared decision making tool as part of a patient portal for burn survivors

Aim: In this project, complex medical information regarding scar tissue of burn survivors is transformed into a shared decision making (SDM) tool that will be available to burn survivors in a patient portal. Burn survivors often face difficult decisions regarding the treatment and management of (one of) their scars, for example whether or not to treat the scar by means of (multiple occasions of) reconstructive surgery. The aim of the SDM tool is to provide burn survivors who face such a decision with information that can aid them in making a decision that suits their personal context. This can be done by 1) providing information on burn patients who have similar (clinical) characteristics to the burn survivor facing the decision (i.e. wound size, type of wound, etc.) and informing them of what treatment these similar patients received and what the outcomes of these treatments were (e.g. scar quality and quality of life). Subsequently, 2) this information can be personalized further by selecting the subset of these similar patients whose patient-related factors, such as quality-of-life expectations and personal values, are also similar to those of the burn survivor.

The basis of the shared decision making tool is an algorithm built upon registry data from the three Dutch burn care facilities, if needed supplemented with data from clinical research studies; if needed, priors may be obtained from expert opinion and previous research. This data contains clinical and etiological attributes (such as total burn surface area (TBSA) at time of admission, type of burn, location, depth of burn, scar size, number of surgeries, whether the wound was infected with bacteria, etc.) as well as patient-reported outcome measures (PROMs) on quality of life, scar quality, mental health, social role functioning, and physical function. The PROMs registry has started recently; the data gathered so far may carry too little statistical power for the algorithm at the moment, but will grow over time. Ideally, machine learning is included in the final shared decision making tool: the choices and outcomes of the burn survivors using the tool are fed back into the algorithm to optimize quality and accuracy.

Deliverables: Basic SDM tool: an algorithm that can perform step 1 and step 2, accessible through an SDM tool interface. The SDM interface is created by a third party together with the Dutch Burns Foundation in order to integrate it into the patient portal. Final SDM tool: an algorithm performing step 1, step 2, and machine learning, likewise integrated into the patient portal by a third party and the Dutch Burns Foundation.

Planning: This project is funded by the National Healthcare Institute for a period of two years, 2018-2020. By September 2020 the SDM tool, including the patient portal, needs to be implemented in all three Dutch burn care facilities. A first version of the SDM tool is anticipated by mid-2019.
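
A minimal sketch of the two-step retrieval described above (all attribute names and data are hypothetical placeholders, not the actual registry schema): step 1 filters the registry on clinical similarity, step 2 re-ranks that subset on patient-reported factors.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical registry: clinical attributes plus a patient-reported factor.
registry = pd.DataFrame({
    "tbsa": rng.uniform(1, 40, 300),           # total burn surface area (%)
    "scar_size": rng.uniform(1, 150, 300),     # cm^2
    "qol_importance": rng.uniform(0, 1, 300),  # patient-reported weighting
})

def similar_patients(patient, registry, n_clinical=50, n_final=10):
    # Step 1: clinical similarity on standardised TBSA and scar size.
    clinical = registry[["tbsa", "scar_size"]]
    z = (clinical - clinical.mean()) / clinical.std()
    zp = (patient[["tbsa", "scar_size"]] - clinical.mean()) / clinical.std()
    step1 = registry.loc[((z - zp) ** 2).sum(axis=1).nsmallest(n_clinical).index]
    # Step 2: re-rank the clinical matches on the patient-reported factor.
    gap = (step1["qol_importance"] - patient["qol_importance"]).abs()
    return step1.loc[gap.nsmallest(n_final).index]

patient = pd.Series({"tbsa": 15.0, "scar_size": 40.0, "qol_importance": 0.8})
print(similar_patients(patient, registry))
```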

Keywords : Bioinformatics, Machine Learning, Data Engineering, Data quality, modeling,

Track : Others, Computational Bio-Medicine, Complex Systems Theory

Contact 1 : Rick Quax r.quax@uva.nl

Valid Date : Dec. 31, 2019

Mapping and understanding energy poverty in The Netherlands through statistical modelling and machine learning

The availability of energy in developed countries like the Netherlands is almost taken for granted. In reality there are households that, while having access to modern energy infrastructure, struggle to pay their electricity and gas bills. Such situations may be overcome, for example, by providing suitable subsidies and/or customized energy tariffs. It is, however, very difficult for policy makers and energy companies to target the right segments of the population with, respectively, subsidy schemes and special tariffs. This is due to a fundamental lack of ability to detect in time which households might soon become unable to pay their bills.

The main objective of this internship is to explore to what extent machine learning and statistical modelling could help us figure out where and when energy poverty is most likely to occur in the Netherlands. Specific objectives are:
- Build an indicator to track energy poverty in the Netherlands based on freely available bottom-up statistical data;
- Create a machine learning workflow to link the energy poverty indicator with "measurable" features (e.g. satellite images, smart-meter data);
- Use statistical analysis to determine dependencies between parameters and gain insight into the main drivers of energy poverty;
- (Optional) Create a machine learning workflow that links past data (statistics or measurable data) with present energy poverty levels.

As a starting point, a simple indicator can be built at neighbourhood level, based on income, energy consumption, and energy price data (see the sketch below). More nuance can then be added by factoring in additional information, such as income distribution, building stocks, demographics, etc. The student is expected to spend some time familiarizing her/himself with the literature on energy poverty in the Netherlands (and developed countries in general), and to come up with her/his own ideas on how to improve the initial indicator. The student will create machine learning workflows that link energy poverty with features that can easily be obtained (measured) without having to rely on bottom-up statistical data, for example based on the analysis of features from satellite images or smart-meter data (if available). The student will then develop and perform a range of statistical analyses to gain deep insight into the drivers of energy poverty. The basic datasets will be provided by TNO; however, the student is encouraged to look broadly and explore alternative data sources.

We are looking for a highly motivated individual who has:
- affinity with the energy sector and sustainable development
- excellent programming skills (Python, R)
- experience with statistics and big data analytics
- experience with machine learning toolkits (e.g. sklearn, caffe, tensorflow)
- experience working on Linux
- the ability to work independently to drive innovative research solutions
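
A minimal version of the initial neighbourhood-level indicator suggested above (column names, prices, and the threshold are illustrative assumptions; a common rule of thumb flags households spending more than 10% of income on energy):

```python
import pandas as pd

# Hypothetical neighbourhood-level statistics (e.g. derived from open data).
df = pd.DataFrame({
    "neighbourhood": ["A", "B", "C", "D"],
    "median_income_eur": [38000, 24000, 52000, 18000],
    "gas_m3": [1400, 1600, 1200, 1700],
    "electricity_kwh": [2800, 3100, 2500, 3300],
})

GAS_PRICE, ELEC_PRICE = 0.80, 0.22  # EUR per m^3 / per kWh (illustrative)

df["energy_cost_eur"] = df["gas_m3"] * GAS_PRICE + df["electricity_kwh"] * ELEC_PRICE
df["energy_burden"] = df["energy_cost_eur"] / df["median_income_eur"]
df["energy_poor"] = df["energy_burden"] > 0.10  # 10%-of-income rule of thumb

print(df[["neighbourhood", "energy_burden", "energy_poor"]])
```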

Keywords : Smart City, Big Data, Machine Learning, Housing,

Track : Others, Urban Complex System

Contact 1 : Bob van der Zwaan bob.vanderzwaan@tno.nl

Valid Date : Dec. 23, 2019

Supporting energy poverty eradication in the developing world through statistical analysis and machine learning

Context and objectives:
- The 7th of the Sustainable Development Goals (SDGs) aims to ensure affordable, reliable, and modern energy for all.
- Roughly 1 billion people have no access to electricity, while another billion have limited access to electricity or other energy-related resources such as non-polluting cooking fuel.
- In large parts of the world it is unclear where energy poverty is most prevalent and most severe.
- This hinders the development of effective strategies by governments, NGOs, and commercial parties that seek to eradicate energy poverty.

The increasing availability of high-quality satellite imagery has raised interest in applications that can estimate SDG indicators from a variety of remotely sensed data. We are looking to develop a product that combines ML-driven satellite image feature extraction, NOAA nightlights, and large-scale survey data to estimate energy poverty (see the sketch below). For this purpose we are looking for an intern to work on some or all of the following topics:
- Building an ML pipeline that combines satellite feature extraction with other data sources to estimate on-the-ground energy poverty in developing countries;
- Conducting statistical analyses to determine the main drivers of energy poverty in developing countries, and (optionally) assessing the effectiveness of policies that aim to fight energy poverty;
- Incorporating data-driven energy poverty insights into advice for electrification programs;
- Building an application to visualize and distribute data and insights on energy poverty from the other work packages.

We are looking for a highly motivated individual who has:
- affinity with the energy sector and sustainable development
- excellent programming skills (Python, R)
- experience with statistics and big data analytics
- experience with machine learning toolkits (e.g. sklearn, caffe, tensorflow)
- experience working on Linux
- the ability to work independently to drive innovative research solutions
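
A minimal sketch of the estimation pipeline (features, labels, and data are placeholders; in practice the image features would come from a trained feature extractor and the labels from survey data): combine satellite-derived features with nightlight intensity and fit a regressor against surveyed electricity-access rates.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder features per survey cluster: a satellite-image embedding (8 dims)
# plus NOAA nightlight intensity; labels are surveyed electricity-access rates.
image_features = rng.normal(size=(400, 8))
nightlights = rng.gamma(2.0, 1.0, size=(400, 1))
X = np.hstack([image_features, nightlights])
y = np.clip(0.2 * nightlights.ravel() + rng.normal(0, 0.1, 400), 0, 1)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```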

Keywords : Smart City, Big Data, Machine Learning, Housing,

Track : Others, Urban Complex System

Contact 1 : Bob van der Zwaan bob.vanderzwaan@tno.nl

Valid Date : Dec. 23, 2019