Jaco van de Pol received his PhD on termination of higher-order rewriting in 1996 from Utrecht University. After positions at LMU Munich and TU Eindhoven, he became a senior researcher at CWI (2000-2007). He was a full professor heading the chair of Formal Methods and Tools at the University of Twente (2007-2018), where he was head of department from 2014 to 2017, and an invited professor in Paris (2016-2019). Since 2018, he has been a full professor of Computer Science at Aarhus University. Van de Pol publishes regularly in conferences such as TACAS, CAV and CONCUR. He is a member of the steering committees of SPIN and FMICS, and serves on the editorial boards of the journals SCP and STTT. His research interests are in distributed concurrent systems and automated reasoning. In particular, he has investigated parallel and symbolic algorithms for efficient model checking. He and his team built the award-winning toolset LTSmin for high-performance model checking. Application domains include safety and performance of embedded and distributed systems, security protocols and risk analysis, and signaling pathways in biological systems.
Talk: Concurrent Algorithms and Data Structures for Model Checking
Model checking is a successful method for checking properties on the state space of concurrent, reactive systems. Since it is based on exhaustive search, scaling the method to industrial systems has been a challenge from its conception. Research has focused on clever data structures and algorithms, to reduce the size of the state space or its representation; smart search heuristics, to reveal potential bugs and counterexamples early; and high-performance computing, to deploy the brute-force processing power of clusters of compute servers.
The main challenge is to combine a brute-force approach with clever algorithms: brute force alone (when implemented carefully) can bring a linear speedup in the number of processors. This is great, since it reduces model-checking times from days to minutes. On the other hand, proper algorithms and data structures can lead to exponential gains. Therefore, the parallelization bonus is only real if we manage to speed up the clever algorithms.
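As a back-of-the-envelope illustration of the "days to minutes" claim, consider ideal linear scaling; the state count, exploration rate and worker count below are invented round numbers, not figures from the talk:

```python
# Illustration of ideal linear speedup in explicit-state exploration.
# All numbers are made-up round figures, not measurements from LTSmin.

states = 10**12           # size of an example state space
rate = 10**6              # states explored per second by one worker
workers = 1000

sequential = states / rate            # seconds on one worker
parallel = sequential / workers       # seconds under ideal linear speedup

print(sequential / 86400)   # about 11.6 days
print(parallel / 60)        # about 16.7 minutes
```

Note that a good reduction technique can shrink `states` itself by an exponential factor, which is why the talk stresses parallelizing the clever algorithms rather than replacing them.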
There are some obstacles: many linear-time graph algorithms depend on a depth-first exploration order, which is hard to parallelize. Examples include the detection of strongly connected components (SCCs) and the nested depth-first search (NDFS) algorithm; both are used in model checking LTL properties. Symbolic representations, like binary decision diagrams (BDDs), reduce model checking to “pointer-chasing”, leading to irregular memory-access patterns. This poses severe challenges to achieving actual speedup on (clusters of) modern multi-core computer architectures.
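For reference, the classic sequential NDFS that the parallel variants build on can be sketched as below. The graph encoding (a successor function plus a set of accepting states) is an illustrative choice for this sketch, not LTSmin's actual interface:

```python
# Sketch of sequential nested depth-first search (NDFS) for detecting an
# accepting cycle in a Buchi automaton. The "blue" search explores the
# graph; in post-order, each accepting state seeds a "red" search that
# looks for a cycle back to it. The DFS post-order is exactly what makes
# this hard to parallelize.

def ndfs(init, succ, accepting):
    blue, red = set(), set()

    def dfs_red(s, seed):
        for t in succ(s):
            if t == seed:
                return True               # cycle through accepting seed
            if t not in red:
                red.add(t)
                if dfs_red(t, seed):
                    return True
        return False

    def dfs_blue(s):
        blue.add(s)
        for t in succ(s):
            if t not in blue and dfs_blue(t):
                return True
        # post-order: launch the nested search from accepting states
        return s in accepting and dfs_red(s, s)

    return dfs_blue(init)
```

For example, on the three-state graph `{0: [1], 1: [2], 2: [1]}` with state 1 accepting, `ndfs` reports the accepting cycle 1 → 2 → 1.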
This talk will present some of the solutions found over the last 10 years, leading to the high-performance model checker LTSmin. These include parallel NDFS (based on the PhD thesis of Alfons Laarman), the parallel detection of SCCs with concurrent union-find (based on the PhD thesis of Vincent Bloemen), and concurrent BDDs and other decision diagrams (based on the PhD thesis of Tom van Dijk). This functionality is provided in a specification-language-agnostic manner, while exploiting the locality typical for asynchronous distributed systems (based on the PhD thesis of Jeroen Meijer).
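To give a feel for the union-find structure at the heart of the SCC algorithm, here is a minimal sequential sketch with path compression; the actual LTSmin version is a lock-free concurrent variant interleaved with the state-space search, which this sketch deliberately omits:

```python
# Minimal sequential union-find with path compression: states found to lie
# on a common cycle are united, so each SCC ends up as one set. This is an
# illustrative sketch, not the lock-free concurrent structure used in LTSmin.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        root = self.parent.setdefault(x, x)
        while self.parent[root] != root:       # chase pointers to the root
            root = self.parent[root]
        while self.parent[x] != root:          # path compression
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx != ry:
            self.parent[rx] = ry               # merge the two sets
```

In an SCC search, `union(s, t)` is called when states `s` and `t` are discovered to be mutually reachable; afterwards `find(s) == find(t)` identifies their common component representative.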
Finally, I will sketch a perspective on moving forward from high-performance model checking to high-performance synthesis algorithms. Examples include parameter synthesis for stochastic and timed systems, and strategy synthesis for (stochastic and timed) games.
Holger Hermanns is a full professor at Saarland University, Saarbrücken, Germany, holding the chair of Dependable Systems and Software on Saarland Informatics Campus, and a Distinguished Professor at the Institute of Intelligent Software, Guangzhou, China. He has previously held positions at Universität Erlangen-Nürnberg, Germany, at Universiteit Twente, the Netherlands, and at INRIA Grenoble Rhône-Alpes, France, and is a former Dean of the Faculty of Mathematics and Computer Science at Saarland University. His research interests include modeling and verification of concurrent systems, resource-aware embedded systems, compositional performance and dependability evaluation, and their applications to energy informatics. Holger Hermanns has authored or co-authored more than 200 peer-reviewed scientific papers (ha-index 92, h-index 51). He co-chaired the program committees of major international conferences such as CAV, CONCUR, TACAS and QEST, serves on the steering committees of ETAPS and TACAS, is president of the association Friends of Dagstuhl, and vice president of the ETAPS association. Holger Hermanns is a member of Academia Europaea and holds an ERC Advanced Grant.
Talk: Power in Low Earth Orbit. Verified.
There is an increasing interest across the space industry in deploying large-scale Low-Earth Orbit (LEO) satellite constellations for traffic observation, Earth monitoring, and communication services across the globe. Current in-orbit technology demonstrators, such as the GomX-4A and GomX-4B satellites from GomSpace, make it obvious that the main operational bottleneck in such missions is the electric power budget.
This keynote provides a survey of past and ongoing work to master the operational limitations of LEO satellites and satellite constellations using formal methods. After presenting the major technological characteristics and challenges of the LEO domain, I will discuss how optimal reachability analysis in priced timed automata can be combined with stochastic battery kinetics to best exploit a satellite’s operational capabilities in orbit while keeping the risk of draining the on-board battery provably low. I then turn to the question of how to attack the problem for constellations with dozens of communicating satellites. This requires profound support for extrapolating the electric power budget as part of the inter-satellite and satellite-to-ground communication design. I present a solution embedded in the construction process of contact plans in delay-tolerant networking, and finally address scalability aspects of the proposed solution, important for upcoming mega-constellations consisting of hundreds of satellites.
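The battery kinetics mentioned above refer to two-well models in the style of the kinetic battery model (KiBaM), in which charge is split between an available and a bound well. A deterministic Euler-integration sketch is given below; the capacity, well ratio, rate constant and load profile are invented example values, and the stochastic treatment and automata-based analysis from the talk are not reproduced here:

```python
# Illustrative deterministic simulation of a kinetic battery model (KiBaM).
# Charge sits in an "available" well a (directly usable) and a "bound"
# well b that replenishes a at a rate governed by k. All parameter values
# below are invented for illustration.

def simulate_kibam(load, total=10.0, c=0.5, k=0.1, dt=0.01, horizon=60.0):
    """Run load(t) (charge units per second) for `horizon` seconds.
    Return the remaining available charge, or None if the battery drains."""
    a, b = c * total, (1 - c) * total     # initial well contents
    t = 0.0
    while t < horizon:
        diff = k * (b / (1 - c) - a / c)  # flow from bound to available well
        a += dt * (diff - load(t))
        b -= dt * diff
        if a <= 0:
            return None                   # available charge exhausted
        t += dt
    return a
```

A light constant load survives the horizon, while a heavy one drains the available well long before the bound charge is used up, which is precisely the depletion risk the verification work bounds.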