The current trend toward fast, byte-addressable non-volatile memory (NVM), with latencies and write endurance closer to those of SRAM and DRAM than to flash, positions NVM as a possible replacement for established volatile technologies. On the one hand, non-volatility and low leakage, in addition to other advantageous features, make NVM an attractive candidate for new system designs; on the other hand, there are major challenges, especially in programming such systems. For example, power failures in combination with NVM used to preserve the computing state result in control flows that can unexpectedly turn a sequential program into a non-sequential one: a program has to deal with its own state from earlier, interrupted runs.
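To make the last point concrete, the following C sketch shows a restart-aware program that consults its own persistent state before resuming work. All names and the layout are invented for illustration; a plain static struct stands in for an NVM region.

```c
#include <stdint.h>

/* Hypothetical layout of a program's state as it would live in NVM;
 * here a static struct stands in for the persistent region. */
struct pstate {
    uint32_t valid;     /* magic value: state stems from an earlier run */
    uint32_t progress;  /* number of work units already completed */
    int      acc;       /* partial result that survives power failures */
};

#define PSTATE_MAGIC 0xC0FFEEu

static struct pstate nvm;   /* simulated NVM region (zero-initialised) */

/* Restart-aware computation: instead of always starting at step 0, the
 * program inspects its own state from an earlier, possibly interrupted
 * run and resumes where that run left off. */
int run(int steps)
{
    uint32_t start = (nvm.valid == PSTATE_MAGIC) ? nvm.progress : 0;
    if (start == 0)
        nvm.acc = 0;
    for (uint32_t i = start; i < (uint32_t)steps; i++) {
        nvm.acc += (int)i;        /* one unit of work */
        nvm.progress = i + 1;     /* record progress after each unit */
        nvm.valid = PSTATE_MAGIC; /* mark the state as resumable */
    }
    return nvm.acc;
}

/* Test helper: fake a power failure after `done` completed units. */
void simulate_power_failure(uint32_t done)
{
    nvm.valid = PSTATE_MAGIC;
    nvm.progress = done;
    nvm.acc = 0;
    for (uint32_t i = 0; i < done; i++)
        nvm.acc += (int)i;
}
```

A run that was interrupted after three units yields the same final result as an uninterrupted one, because the program recognises and reuses its earlier state instead of restarting blindly.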

If programs can be executed directly in NVM, conventional volatile main memory becomes functionally superfluous. Volatile memory then remains only in the caches and in device/processor registers ("NVM-pure"). An operating system designed for this setting can dispense with many, if not all, persistence measures that would otherwise have to be implemented, and thereby reduce its level of background noise. In detail, this makes it possible to reduce energy demand, increase computing power, and lower latencies. In addition, the elimination of these persistence measures means that an "NVM-pure" operating system is leaner than its functionally identical twin of conventional design. On the one hand, this contributes to better analysability of non-functional properties of the operating system; on the other hand, it results in a smaller attack surface and a smaller trusted computing base.

The project follows an "NVM-pure" approach. An imminent power failure triggers an interrupt request (power-failure interrupt, PFI), upon which a checkpoint of the unavoidably volatile system state is created. In addition, to tolerate possible PFI losses, sensitive operating-system data structures are secured transactionally, analogous to methods of non-blocking synchronisation. Furthermore, methods of static program analysis are applied to (1) rid the operating system of superfluous persistence measures, which otherwise only generate background noise, (2) break up uninterruptible instruction sequences whose excessive interrupt latencies can cause the PFI-based checkpointing to fail, and (3) define the scope of the dynamic energy-demand analysis. To demonstrate that an "NVM-pure" operating system can operate more efficiently than its functionally identical conventional twin, both in terms of time and energy, the work is carried out with Linux as an example.
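The idea of securing a data structure transactionally in the style of non-blocking synchronisation can be sketched as follows. This is a minimal illustration assuming C11 atomics, not the project's actual code: the new version is built out of place and published with a single atomic pointer swap, so a PFI at any point leaves either the complete old or the complete new version behind.

```c
#include <stdatomic.h>

struct config { int quantum; int prio; };

/* Two versions of a sensitive structure; `current` always points at
 * the one that is complete and consistent. */
static struct config slot_a = { 10, 1 };
static struct config slot_b;
static _Atomic(struct config *) current = &slot_a;

/* Transactional update: prepare the inactive slot, then commit with a
 * single atomic store.  A power failure before the store leaves the
 * old version valid; after it, the new one is complete.  (On real NVM,
 * the new slot would additionally be flushed and fenced before the
 * commit store.) */
void update_config(int quantum, int prio)
{
    struct config *cur = atomic_load(&current);
    struct config *alt = (cur == &slot_a) ? &slot_b : &slot_a;
    alt->quantum = quantum;          /* build new version out of place */
    alt->prio    = prio;
    atomic_store(&current, alt);     /* single commit point */
}

int current_quantum(void) { return atomic_load(&current)->quantum; }
```

The single commit point is what makes the scheme tolerant of interruption: there is no intermediate state in which readers after a restart could observe a half-written structure.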


ANTILLAS (Automated Network Telecom Infrastructure with Intelligent Autonomous Systems) [1] is one of three sub-projects of the AI-NET research project [2]. The project as a whole investigates challenges and solutions for modern communication networks.

The ANTILLAS project in particular has two research focuses. The first is the monitoring and control of system parameters of network components. The dynamic and automated (re-)allocation of hardware resources requires adaptive (network) operating-system components. A prerequisite for this is comprehensive system monitoring (e.g., load, bandwidth allocation, latency, power demand). Machine-learning techniques, for example deep neural networks, are powerful tools to analyse the incoming monitoring data and make fast, anticipatory decisions that improve functional and non-functional properties of the system.
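As a deliberately simple stand-in for the learned decision models mentioned above, the following sketch shows the basic shape of such a monitoring-driven control loop: recent load samples are aggregated and drive a reallocation decision. All names and thresholds are illustrative, not part of the project.

```c
/* Sliding window over the most recent load samples. */
#define WINDOW 4

static double samples[WINDOW];
static int    next;

/* Feed one monitoring sample (e.g., CPU load in [0,1]) into the window. */
void record_load(double load)
{
    samples[next] = load;
    next = (next + 1) % WINDOW;
}

/* Aggregate the window into a single indicator. */
double average_load(void)
{
    double sum = 0.0;
    for (int i = 0; i < WINDOW; i++)
        sum += samples[i];
    return sum / WINDOW;
}

/* Decision: should more resources be allocated to the component? */
int should_scale_up(double threshold)
{
    return average_load() > threshold;
}
```

A learned model would replace the fixed threshold with a prediction, but the control structure, i.e. monitor, aggregate, decide, adapt, stays the same.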

The second focus is on new hardware technologies such as non-volatile memory, which enable novel implementation paradigms and systems that can achieve, for example, better availability guarantees and more efficient runtime properties (e.g., latency, power demand). However, such systems require fundamentally new operating-system components to ensure data integrity and consistency due to the persistence of the underlying main-memory technology.
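The consistency problem on persistent main memory is, at its core, an ordering problem: a payload must be durable before any flag that declares it valid. The following sketch illustrates that discipline; `pwb` and `pfence` are stand-ins for real persistence instructions (e.g., CLWB and SFENCE on x86) and are defined as no-ops here so the ordering itself can be shown portably.

```c
/* Stand-ins for real persistence instructions; no-ops in this sketch. */
#define pwb(addr)  ((void)(addr))   /* write one cache line back to NVM */
#define pfence()   ((void)0)        /* order preceding write-backs */

struct record { int data; int committed; };

/* Crash-consistent update: the payload must be durable *before* the
 * commit flag is set, otherwise a power failure could leave a record
 * that claims to be valid but contains stale data. */
void persist_update(struct record *r, int value)
{
    r->committed = 0;       /* invalidate the record first ...        */
    pwb(&r->committed);
    pfence();
    r->data = value;        /* ... then write the payload             */
    pwb(&r->data);
    pfence();               /* payload durable ...                    */
    r->committed = 1;       /* ... before the record becomes valid    */
    pwb(&r->committed);
    pfence();
}
```

Without the fences, caches may write lines back in any order, which is exactly the failure mode that new operating-system components for persistent memory have to rule out.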




Dagstuhl Seminar PEACHES

Power and Energy-aware Computing on Heterogeneous Systems (PEACHES) is a Dagstuhl seminar [1], which takes place at Schloss Dagstuhl on August 21-26, 2022, and is organised by

Julian De Hoog (The University of Melbourne, AU)

Kerstin I. Eder (University of Bristol, GB)

Timo Hönig (Ruhr-Universität Bochum, DE)

Daniel Mosse (University of Pittsburgh, US)

There is an urgent need to understand how computing fits into the broader picture of our energy consumption, and what role there is for computing to reduce our carbon footprint and help to accelerate the transition to renewables. This requires new ways of thinking across different domains, and highly energy-efficient hardware and software designs that adapt to changing operating conditions. Collaboration is increasingly required across the entire system stack – from system designers to programmers and operators. This Dagstuhl Seminar aims to bring together experts from computer science and computer engineering that share a common vision for power and energy efficient computing. Five principal topic areas will be discussed:

Energy Transparency from Hardware to Software 

Energy Optimisation and Management

Computing for Sustainability

Saving Joules: “Green computing” Hackathons

Disruptive Paradigms

The seminar aims to (i) identify the intellectual challenges to significantly lower the energy consumed by computing, (ii) create new cross-domain collaborations to address these challenges, and (iii) generate new knowledge and understanding of how computational processes may interact to reduce the carbon footprint of computing.

(Creative Commons BY 4.0)

(Copyright Julian De Hoog, Kerstin I. Eder, Timo Hönig, and Daniel Mosse)




The Transregional Collaborative Research Center Invasive Computing (InvasIC) investigates novel design and programming paradigms for resource-aware programming of future parallel many-core computing systems [1]. It explores the challenges and requirements to efficiently utilise hundreds, thousands, or even more cores on a single chip, where traditional system software and programming techniques reach their limits.

Subproject C1 [2], which includes groups at the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), the Karlsruhe Institute of Technology (KIT), and this group, investigates the enforcement of quality criteria of mixed criticality with respect to timing and energy consumption at the system-software level. This includes, for example, resource corridors (e.g., time, power), which are enforced by the invasive run-time support system (iRTSS). Effective enforcement requires accurate and efficient resource models and measurements that introduce little or no operating-system noise; these are developed in the context of this project.
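The notion of a resource corridor can be sketched as follows; the names are invented for illustration and are not the iRTSS API. A measured quantity (e.g., power draw) must stay within a band [lo, hi], and leaving the band in either direction triggers a corrective action by the run-time system.

```c
/* A resource corridor: the measured value must stay within [lo, hi]. */
struct corridor { double lo, hi; };

enum action { ACTION_NONE, ACTION_THROTTLE, ACTION_BOOST };

/* Enforcement step: map one measurement to a corrective action. */
enum action enforce(const struct corridor *c, double measured)
{
    if (measured > c->hi)
        return ACTION_THROTTLE;   /* e.g., reduce clock or parallelism */
    if (measured < c->lo)
        return ACTION_BOOST;      /* e.g., release reserved resources  */
    return ACTION_NONE;           /* within the corridor: do nothing   */
}
```

The quality of such enforcement hinges on the measurements feeding it, which is why low-noise, efficient resource models are a central concern of the subproject.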