The 2018 Annual Meeting of the SIAM UKIE Section took place on Thursday, 11th of January 2018 at the National Oceanography Centre in Southampton. The meeting featured five invited speakers whose talks covered a broad range of topics in industrial and applied mathematics. With more than 70 participants, it was one of the best-attended SIAM UKIE Annual Meetings ever.
A key factor in the good attendance was the generous travel support for PhD students and postdocs provided by SIAM. Travel awards were given based on the submitted poster abstracts, and thirteen students from all over the UK and Ireland benefited from this support. The poster session started off with a one-hour poster blitz, in which each of the 37 presenters was given 50 seconds to draw attention to their poster. The poster blitz was well received and is likely to become a fixed component of future meetings.
The following students and postdocs received prizes for their poster presentations: Joe Wallwork (Imperial College), Nikoleta E. Glynatsi (Cardiff), Yuji Nakatsukasa (Oxford), Georgia Kouyialis (Imperial College), and Danny Groves (Cardiff).
Photos of the prize winners are available (courtesy of Nick Higham).
The meeting concluded with a guided tour of the National Oceanography Centre, the UK’s primary centre for oceanographic science. The underwater vehicles and robots were particularly interesting, with “Boaty McBoatface” getting most of the attention. In the evening, some participants gathered for an all-you-can-eat buffet at the COSMO Restaurant.
We would like to thank the local organiser Dr Alain Zemkoho and his helpers for all the work that went into organising such an enjoyable meeting. We are grateful to the National Oceanography Centre for its hospitality and the guided tour. Finally, we thank both SIAM and the IMA for their sponsorship of this meeting.
Ivan Graham, Stefan Güttel, and John Mackenzie
Below are reports on the talks of the five invited speakers. These reports have been contributed by Silvia Gazzola, James Hook, Alastair Spence, Alain Zemkoho, and Ivan Graham.
Frederic Dias: Recent progress in the evaluation of impact pressures
This talk presented a range of practical and theoretical problems related to impact pressures arising in fluid dynamics applications. Some of the material covered is included in the speaker’s recent joint paper with Ghidaglia, which appeared in the Annual Review of Fluid Mechanics (2017). Applications of note included “fluid slamming” (the violent impact of fluid on solid structures) and “sloshing” (e.g. fluid behaviour in tanks of liquified gas). Three classical idealised problems (Rankine–Hugoniot 1887, Wagner 1932, and Bagnold–Mitsuyasu 1937) were presented and discussed. A variety of mathematical models was then given, and the problem of identifying scaling laws was discussed. A survey of numerical methods, including interface tracking, potential flow and smoothed particle hydrodynamics, and the corresponding need for numerical benchmarks were discussed. In the final part of the talk, experiments on wave impact on a structure and on wave energy converters were presented, some of this being joint work with Queen’s University Belfast. An exciting new project on wave impact on cliffs, with pictures from the Aran Islands (Galway), was outlined. Future prospects for work on elasticity, phase transition, scaling laws, extreme phenomena and statistical methods were discussed.
Aretha Teckentrup: Surrogate models in large-scale Bayesian inverse problems
This talk focused on novel Bayesian approaches for the solution of inverse problems involving the estimation of unknown model parameters from observed data. Problems like these are of core importance in a variety of applications (such as inverse seismology and non-destructive testing) and they are challenging to solve for two main reasons: firstly, they are ill-posed (in the Hadamard sense); secondly, they may be computationally quite demanding (e.g., they may require the repeated numerical solution of large-scale ODEs or PDEs). In this setting, given a prior distribution on the unknown model parameters, the Bayesian approach to inverse problems consists of computing their so-called posterior distribution, i.e., the probability distribution for the unknown model parameters conditioned on the observed data. In order to reduce the computational cost associated with the solution of Bayesian inverse problems, a variety of deterministic or stochastic approximation strategies can be devised. This talk considered the use of surrogate models (i.e., simplified mathematical models) to approximate the posterior distribution. After introducing a generic framework based on bounds of the Hellinger distance for the analysis of the approximation errors associated with the use of surrogate models, the properties of specific surrogate models, such as randomized misfit models and Gaussian process emulators, were discussed.
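The surrogate idea can be illustrated on a toy problem. The following numpy sketch (all models and parameter values are hypothetical, chosen only for illustration) compares the posterior built from an "expensive" 1D forward map with the posterior built from a cheap polynomial surrogate of that map, and measures the discrepancy in the Hellinger distance, the metric used in the analysis framework mentioned above:

```python
import numpy as np

# Hypothetical 1D Bayesian inverse problem: recover a parameter u from one
# noisy observation y = G(u) + noise.
def forward(u):
    return np.sin(3 * u) + u**2            # stand-in for an expensive PDE solve

rng = np.random.default_rng(0)
u_true, sigma = 0.5, 0.1
y_obs = forward(u_true) + sigma * rng.normal()

# Surrogate: cheap polynomial least-squares fit to G on a small design grid.
design = np.linspace(-1, 1, 13)
coeffs = np.polyfit(design, forward(design), deg=8)
surrogate = lambda u: np.polyval(coeffs, u)

def posterior(u, G):
    # Unnormalized posterior: Gaussian likelihood times standard normal prior.
    return np.exp(-0.5 * ((y_obs - G(u)) / sigma) ** 2 - 0.5 * u**2)

# Discretize both posteriors on a grid and compute their Hellinger distance.
grid = np.linspace(-1, 1, 2001)
p = posterior(grid, forward);   p /= p.sum()
q = posterior(grid, surrogate); q /= q.sum()
hellinger = np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
```

In the full theory, a bound on the forward-model error translates into a bound on the Hellinger distance between the exact and surrogate posteriors; here the accurate polynomial fit keeps that distance small.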
Gunnar Martinsson: Randomized methods for accelerating matrix factorization algorithms
This talk concerned a topic of huge current interest with a host of interesting applications in, for example, data mining, principal component analysis and model reduction. Much of this interest was generated by the article in SIAM Review, vol. 53 (2011), entitled “Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions” by Halko, Martinsson and Tropp. A key idea is that much of the important information about a very large matrix (e.g. its rank or singular values) is contained in a much smaller matrix constructed by projection onto a low-dimensional random subspace. The talk was wide-ranging, starting with the randomised SVD and ending with very recent work on accelerated full factorisation of matrices using a randomised column-pivoted QR algorithm.
Ruth Misener: Optimisation for gradient boosted trees with risk control
Gradient boosted trees are a machine learning technique commonly used to learn highly non-linear functions from noisy data. Although these trees can perform very well in regression and classification tasks, they can be difficult to work with in any subsequent computation. In particular, it is very difficult to find an input value that minimizes the output of a gradient boosted tree. In her talk, Ruth Misener presented a study of large-scale, industrially relevant mixed-integer quadratic optimization techniques to solve this problem. The approach involves: (i) gradient-boosted pre-trained regression trees modeling catalyst behavior, (ii) penalty functions mitigating risk, and (iii) penalties enforcing composition constraints. Other developments of the work discussed during the talk include heuristic methods and an exact branch-and-bound algorithm leveraging structural properties of gradient-boosted trees and penalty functions.
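A tiny example shows why this is hard and how tree structure can be exploited. The numpy sketch below (toy data, depth-1 trees, exhaustive enumeration; the talk's actual approach uses mixed-integer quadratic programming, not this brute-force search) boosts regression stumps on noisy data and then minimizes the ensemble. Because the ensemble is piecewise constant, gradient-based minimization is useless, but the minimum must be attained on one of the pieces delimited by the split thresholds:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(-2, 2, 200))
y = (x - 0.7) ** 2 + 0.1 * rng.normal(size=200)   # noisy target, minimum near 0.7

def fit_stump(x, r):
    # Best single split (threshold, left value, right value) for residual r,
    # found by exhaustive search over midpoints of consecutive sample points.
    best = None
    for t in (x[:-1] + x[1:]) / 2:
        left, right = r[x <= t].mean(), r[x > t].mean()
        sse = ((r - np.where(x <= t, left, right)) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left, right)
    return best[1:]

# Gradient boosting for least squares: each stump fits the current residual.
stumps, pred, lr = [], np.zeros_like(y), 0.1
for _ in range(200):
    t, l, r = fit_stump(x, y - pred)
    stumps.append((t, l, r))
    pred += lr * np.where(x <= t, l, r)

def ensemble(u):
    return sum(lr * (l if u <= t else r) for t, l, r in stumps)

# The ensemble is piecewise constant with breakpoints at the split thresholds,
# so its global minimum is found by evaluating one point per piece.
candidates = np.concatenate([[x.min()], sorted(t for t, _, _ in stumps), [x.max()]])
u_star = min(candidates, key=ensemble)
```

Enumerating pieces is exponential in general (with many trees and features), which is why the talk's mixed-integer formulations and branch-and-bound methods are needed at industrial scale.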
Jacek Gondzio: Continuation in optimization: From interior point methods to big data
This talk surveyed a variety of second-order optimization methods suitable for linear and quadratic convex programming appearing in large-scale applications. The main focus was on matrix-free inexact interior point methods (IPM) and on a preconditioned Newton conjugate gradient method for sparse approximation: these algorithms are applied to minimization problems involving a 1-norm penalty term, and arising in binary classification and in compressive sensing. Well-established ways of overcoming the non-differentiability of the 1-norm penalty term were presented, which leverage splitting or “huberization” together with continuation techniques. In particular, for problems arising in compressive sensing and associated with measurement operators that enjoy the restricted isometry property (RIP), the devised IPM and corresponding linear solvers make smart use of the RIP to incorporate efficient and effective preconditioning techniques. The theory underlying these interior point optimization methods is supported by extensive testing: their performance is excellent when compared to other popular first-order methods on problems collected in the SPARCO toolbox, and they are also extremely effective when used to solve 1-norm-regularized least squares problems with different conditioning and sparsity, and of increasing dimensions (up to one trillion).
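The "huberization with continuation" idea can be illustrated on a small 1-norm-regularized least squares problem. The numpy sketch below (toy sizes and parameter values are hypothetical, and plain gradient descent stands in for the preconditioned Newton conjugate gradient method of the talk) replaces each |x_i| by a Huber function that is quadratic near zero and linear beyond, then drives the smoothing parameter towards zero over a continuation schedule:

```python
import numpy as np

def huber(t, mu):
    # Smooth approximation of |t|: quadratic for |t| <= mu, linear beyond.
    return np.where(np.abs(t) <= mu, t**2 / (2 * mu), np.abs(t) - mu / 2)

def grad_huber(t, mu):
    return np.clip(t / mu, -1.0, 1.0)

# Toy sparse recovery problem: min 0.5*||Ax - b||^2 + tau*||x||_1.
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]    # sparse signal
b = A @ x_true
tau = 0.1
lipschitz_A = np.linalg.norm(A, 2) ** 2                 # smooth-part Lipschitz bound

# Gradient descent on the huberized objective, warm-started across a
# continuation schedule mu -> 0 (smaller mu = tighter approximation of |.|).
x = np.zeros(20)
for mu in [1.0, 0.1, 0.01, 1e-4]:
    step = 1.0 / (lipschitz_A + tau / mu)
    for _ in range(3000):
        x -= step * (A.T @ (A @ x - b) + tau * grad_huber(x, mu))
```

The point of the continuation is the warm start: each smoothed problem is easy for a second-order (or here, first-order) method, and its solution is a good initial guess for the next, less-smoothed one.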