Distributed Systems Research Areas
Distributed information systems are concerned with the distribution of computation across several hosts and their coordination via message passing. Looking at today's information systems, one notices that most, if not all, exhibit some form of distribution. The key research issues that emerge are heterogeneity, scalability, and run-time adaptation.
In the context of distributed systems, the group focuses on a number of sub-areas:
- Service-Oriented and Cloud Computing,
- Pervasive Computing,
- Middleware for Pervasive Computing and Adaptive Communications,
- Network-centric Real-time Analytics, and
- Sensor Networks.
Interesting application areas for the group are:
- Domotics, and
- Smart Energy Systems.
For information on our research projects, please visit our project overview page. For our specific lines of research, see below:
Service-Oriented and Cloud Computing
Service-Oriented Computing (SOC) is a popular computing paradigm for building distributed information systems in which the concepts of distribution, openness, asynchronous messaging and loose coupling take a leading role. In this context, applications are built out of individual services that expose functionality by publishing their interfaces in appropriate repositories, abstracting entirely from the underlying implementation. Published interfaces may be searched for by other services or users and subsequently be invoked. The interest in SOC is a consequence of the shift from a vision of a Web based on the presentation of information to a vision of the Web as a computational infrastructure, where systems and services can interact in order to fulfill users' requests. Web Services (WS), the best-known example, are the realization of service-oriented systems based on open standards and infrastructures, extending the XML syntax. The "servicization" of software envisioned with SOC has led to the idea of Cloud Computing. In the latter approach, services are further abstracted and clustered in opaque and remote "clouds" of computational and storage services. This allows for virtually infinite scalability from the service consumer's perspective, while promoting the offering of underutilized resources on the producer's side. Our group is active on three main lines of research: (1) Artificial Intelligence (AI) planning for taking advantage of the dynamicity of SOC and Cloud frameworks; (2) Cloud provisioning; and (3) service-based business process management.
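The publish/discover/invoke cycle described above can be illustrated with a minimal sketch. The registry, its methods, and the example service are invented here for illustration; they do not correspond to any real WS standard or to our group's implementations.

```python
# Minimal sketch of service publication and discovery, the core SOC pattern.
# ServiceRegistry, publish and lookup are illustrative names, not a real API.

class ServiceRegistry:
    """An in-memory stand-in for a service repository (e.g., a UDDI-like registry)."""
    def __init__(self):
        self._services = {}

    def publish(self, interface_name, implementation):
        # Consumers see only the interface name, never the implementation.
        self._services[interface_name] = implementation

    def lookup(self, interface_name):
        return self._services.get(interface_name)


registry = ServiceRegistry()
registry.publish("CurrencyConversion", lambda amount, rate: amount * rate)

# A consumer discovers the service by interface and invokes it,
# fully abstracted from how it is implemented.
convert = registry.lookup("CurrencyConversion")
print(convert(100, 0.9))  # 90.0
```

The point of the pattern is the indirection: the consumer binds to an interface name at run time, which is what enables the dynamic substitution that our AI-planning line of research exploits.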
Distributed and Network-Centric Data Analytics
Many applications in the Internet of Things depend on extracting insights from data in real time. As part of this research theme, we investigate methods to execute real-time analytics that fulfill a wide range of challenging requirements. Exemplary research problems include:
- Accelerating real-time data analytics in networks (e.g., MAKI)
- Energy-efficient in-network processing (e.g., Cognigron)
- Principles for privacy engineering (e.g., Parrot)
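The idea behind in-network processing can be sketched in a few lines: instead of forwarding every raw sensor reading, a node inside the network summarizes a window of readings and forwards a single value. The window size and the mean reduction are illustrative choices, not parameters from the projects named above.

```python
# Sketch of in-network aggregation: an edge node summarizes each window of
# raw sensor readings and forwards one value, cutting network traffic.

def in_network_aggregate(readings, window=10):
    """Yield one mean per window instead of forwarding every raw reading."""
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        yield sum(chunk) / len(chunk)

raw = [20.0, 21.0, 19.0, 20.5, 20.5, 21.0, 19.5, 20.0, 20.5, 19.0,
       22.0, 22.5, 21.5, 22.0, 22.0, 23.0, 21.0, 22.0, 22.5, 21.5]
summarised = list(in_network_aggregate(raw, window=10))
print(summarised)  # two forwarded values instead of twenty raw readings
```

Trading raw fidelity for bandwidth and energy in this way is exactly the kind of balance the research problems above study, including how to do it without leaking privacy-sensitive detail.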
Automation and Decision Making in Buildings
The vision of a future in which environments support the people occupying them requires computing facilities that can help these environments evolve. At the same time, the overall objective of saving as much energy as possible should be maintained. All of this can be accomplished by bringing a degree of sophistication to the processing of, and reasoning over, the information provided by a network of diverse devices and sensors. The processing refers to achieving a goal (or performing a task), and to the selection and combination of tasks at run-time. In this way, goal achievement can result in different solutions depending on the current state of the environment.
Modern indoor spaces form an environment that is particularly suitable for the application of techniques such as constraint satisfaction and automated planning. While the environment is well structured and usually well defined, it is also partially controllable, which makes the added value gained by non-trivial automated composition and monitoring of operations feasible and realistic. In such a setting, constraint satisfaction and planning can perform powerful reasoning for complex tasks, considerably advancing the level of environment intelligence. For example, an HTN planner in our own office building, the Bernoulliborg, computes a plan as a sequence of actions that controls the restaurant lamps to make maximal use of natural light while taking into account the presence of people in the restaurant, allowing considerable energy savings. Similarly, our dynamic constraint satisfaction rule maintenance engine, deployed in the Potentiaal building of the Eindhoven University of Technology, successfully recognizes different context situations and adapts the environment to ensure maximum comfort, while choosing the most energy-efficient way to achieve it.
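The lighting scenario can be phrased as a tiny constrained optimization. The sketch below is illustrative only: it is not the HTN planner or rule engine mentioned above, and the target lux value and lamp levels are assumed numbers.

```python
# Illustrative sketch: choose lamp levels so that occupied zones reach a
# target light level while exploiting natural light, minimizing the total
# artificial output. Exhaustive search is fine at this toy scale.

from itertools import product

TARGET_LUX = 500             # assumed comfort threshold
LAMP_LEVELS = [0, 250, 500]  # lux each lamp can contribute (assumed)

def plan_lighting(zones):
    """zones: list of (natural_lux, occupied). Returns one lamp level per zone."""
    best, best_energy = None, float("inf")
    for levels in product(LAMP_LEVELS, repeat=len(zones)):
        # Constraint: every occupied zone must reach the target light level.
        if all(nat + lvl >= TARGET_LUX
               for (nat, occ), lvl in zip(zones, levels) if occ):
            energy = sum(levels)  # objective: minimize artificial light
            if energy < best_energy:
                best, best_energy = levels, energy
    return best

# Zone 1 is bright and occupied, zone 2 dim and occupied, zone 3 empty.
plan = plan_lighting([(450, True), (100, True), (50, False)])
print(plan)  # (250, 500, 0): top up zone 1, fully light zone 2, skip zone 3
```

A real deployment replaces the brute-force loop with a constraint solver or planner, but the declarative shape of the problem, constraints plus an objective, is the same.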
Digital Twins for Smart Buildings
A digital twin is a virtual model of a physical environment or object that synchronizes the state of the physical counterpart with its virtual representation using various Internet of Things (IoT) sensors and actuators. This link can be maintained continuously, making it possible to experiment with, simulate and optimize the properties of the physical counterpart at any time. We aim to reduce the complexity of creating and maintaining digital twins for smart environments, such as homes, office buildings or manufacturing facilities, by developing a dynamic and scalable infrastructure platform that explicitly defines the quality attributes of the (trained) models and of the gathered data itself. This way, we can: (1) digitally reconstruct existing real-world conditions and assess whether the actual sensors provide adequate coverage of the real-world context situation; inadequate information will necessarily lead to uncertainty during simulations of a constructed digital twin model, decreasing (potentially severely) the accuracy of the results; (2) propose necessary adjustments to the existing sensor coverage; (3) automatically identify the best model for a given data science question on given data streams in production; (4) automatically adjust the chain of data models according to changes in the environment (e.g., updated assets or building conditions, new sensors); (5) realise enhanced remote or automated actuation based on the full context-aware digital reconstruction of the environment, e.g. by means of declarative constraint satisfaction and AI planning techniques. Declarative automation, in particular, is necessary to avoid hard-coding decision logic, which quickly becomes a bottleneck as automation complexity grows.
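The synchronization loop at the heart of a digital twin can be sketched minimally. The class, its methods, and the room identifier below are invented for illustration and are not part of our platform's actual interface.

```python
# Minimal sketch of a digital twin's synchronization loop: the virtual model
# mirrors IoT sensor updates and can answer what-if queries without touching
# the physical room. All names here are illustrative.

class RoomTwin:
    def __init__(self, room_id):
        self.room_id = room_id
        self.state = {}  # virtual representation of the room

    def sync(self, sensor_reading):
        """Apply one IoT sensor update to the virtual state."""
        self.state[sensor_reading["sensor"]] = sensor_reading["value"]

    def simulate_setpoint(self, delta):
        """What-if query: predicted temperature if heating changes by delta,
        evaluated entirely on the twin, not on the physical room."""
        return self.state.get("temperature", 0.0) + delta

twin = RoomTwin("office-42")          # hypothetical room id
twin.sync({"sensor": "temperature", "value": 19.5})
twin.sync({"sensor": "occupancy", "value": 3})
print(twin.simulate_setpoint(+1.5))   # 21.0
```

The value of the pattern is that experiments and optimizations run against `state`, while the physical environment only ever receives actuation commands that have already been vetted on the twin.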
Model Checking for Business Process Variability and Compliance
Business processes are collaborations between actors that aim at achieving a specific value-added goal. When automated, a business process is modeled as interlinked sets of tasks with decision points allowing for different or parallel execution paths. Different tasks can be assigned to actors, which can be either fully automated services or require human input. Business process management aims to increase the performance of companies by managing these business processes. Originally designed to support rigid and repetitive work, business processes have evolved to support loosely coupled service compositions. In service compositions, each task is implemented as an independent, self-contained, and well-defined modular service. Although tasks in service compositions are implemented in a modular fashion, the composition itself remains rigid, without any possible change. Compliance requirements, i.e., sets of rules imposed on a process, can stem from national law or international regulations, and compliance verification confirms whether a business process satisfies them. As a result, companies must ensure that their business processes remain compliant with regulations, regardless of process modularity or complexity, or face severe penalties and lawsuits. At the same time, the increasing demand for customization, both in smart industry and in service compositions in cloud configurations, demonstrates the need for a next generation of customizable business processes. When customization in compositions is defined as 'compliance to sets of rules describing all possible options', it enables this next generation of customizable business processes that simultaneously remain compliant with regulations.
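A compliance check in the spirit of model checking can be sketched by enumerating the execution paths of a small process model and verifying a rule on each path. The process, its task names, and the rule below are made up for illustration.

```python
# Illustrative compliance check: enumerate all execution paths of a small
# process model (a DAG with a decision point) and test a rule on each path.

def paths(model, node, end):
    """All execution paths from node to end in a DAG given as adjacency lists."""
    if node == end:
        return [[node]]
    return [[node] + rest for nxt in model[node] for rest in paths(model, nxt, end)]

# Tasks: receive order -> (credit check | skip it) -> ship.
model = {
    "receive": ["credit_check", "ship"],  # decision point: check or skip
    "credit_check": ["ship"],
    "ship": [],
}

# Compliance rule: every path must perform a credit check before shipping.
def compliant(path):
    return "credit_check" in path

violations = [p for p in paths(model, "receive", "ship") if not compliant(p)]
print(violations)  # [['receive', 'ship']]: the skipping variant violates the rule
```

Real model checkers avoid explicit enumeration (state spaces explode quickly), but the question they answer is the same: does every reachable execution satisfy the imposed rules?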
Pervasive Computing and Sensor Networks
Pervasive computing envisions a future in which computers seamlessly blend into the fabric of daily life and eventually "disappear" into the environment. Domotics and building automation consider indoor environments used daily by humans, in which large numbers of small, inexpensive and networked processing units are embedded into everyday objects. These units are organised in and interoperate via Wireless Sensor Networks (WSNs). Applications range from support for healthy ageing and for people needing medical assistance, to saving energy by means of automatic control.
Energy Savings in Buildings
We strive to apply our smart solutions to new buildings to make them even more energy-efficient. To realise such solutions, we believe that buildings should be able to understand not only their outside and inside environmental conditions (e.g., temperature, natural light) but also their occupants' activity, so that they can adapt their operations to reduce their energy consumption while maintaining the same level of service. To monitor the environment and gather context information, we rely on wireless sensor networks (WSNs), which we also use to actuate and control the environment accordingly.
Existing buildings are responsible for more than 40% of the world's total primary energy consumption. Office buildings account for a significant fraction of the energy consumption and greenhouse gas emissions worldwide. Moreover, current building management systems fail to reduce unnecessary energy consumption while preserving user comfort, because they are unable to cope with the dynamic changes caused by users' interaction with the environment. Therefore, to cope with these dynamic changes in the building's environment and the users' activity, we study, both theoretically and practically, middleware for sensing and controlling buildings, with emphasis on human activity recognition, occupancy detection, and adaptive control, reaching savings of up to 35% in building energy consumption.
We further explore the opportunity of utilizing mobile phones and power meters as context sources. These sources are chosen because of their availability for various purposes (e.g., mobile phones to support the productivity of employees and power meters for energy monitoring in a building). Specifically, the mobile phone is used to explore room-level localization and to infer the occupancy of particular rooms based on received signals, while wireless power meter nodes measure electricity consumption as a sign of occupancy. We work on room-level power metering, i.e., measuring the aggregated consumption of all individuals in a room, to reduce sensor deployment costs as well as the level of intrusiveness. Moreover, we study and implement fusion approaches that combine the two aforementioned modalities to improve occupancy recognition. The improved occupancy recognition (in terms of accuracy and information detail) helps raise energy efficiency by adjusting the configuration of electrical devices based on user preferences and by analyzing movement patterns to adjust the Heating, Ventilation, and Air Conditioning (HVAC) system of a building.
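One simple way to fuse the two modalities is a naive Bayes combination of the per-sensor occupancy probabilities. This sketch is illustrative only: it assumes the two estimates are independent, and the probability values are invented, not results from our deployments.

```python
# Sketch of a naive Bayes fusion of two occupancy estimates: one from
# phone-based localization, one from a room-level power meter. Assumes the
# two modalities are conditionally independent.

def fuse_occupancy(p_phone, p_power, prior=0.5):
    """Combine two occupancy probabilities via likelihood-ratio (odds) fusion."""
    odds = (prior / (1 - prior)) \
        * (p_phone / (1 - p_phone)) \
        * (p_power / (1 - p_power))
    return odds / (1 + odds)

# Phone signals weakly suggest presence, the power meter strongly does;
# the fused estimate is more confident than either source alone.
p_fused = fuse_occupancy(p_phone=0.6, p_power=0.9)
print(round(p_fused, 3))  # 0.931
```

When both sources agree, fusion sharpens the estimate; when they disagree, the odds partially cancel, which is exactly the robustness gain that motivates combining modalities.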
Smart Energy Infrastructures
Our focus is on studying the effects of varying topologies by evaluating them on test feeders using the Monte Carlo method. From a topological point of view, we compare the performance of different topology models based on the previously defined optimization solutions. We evaluate a complete graph, a random graph, a small-world graph, and a radial graph. This research direction provides further evidence of the potential of constructing practical and efficient distribution networks for prosumers.
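The flavour of such a Monte Carlo comparison can be sketched in pure Python: sample random node pairs and estimate the average hop distance for two of the topology families named above. The graphs, sizes, and metric here are toy illustrations, not our actual test feeders or optimization criteria.

```python
# Sketch of a Monte Carlo topology comparison: estimate the average hop
# distance between random node pairs for a complete graph and a ring
# (a simple stand-in for a radial topology), using plain BFS.

import random
from collections import deque

def bfs_dist(adj, src, dst):
    seen, q = {src}, deque([(src, 0)])
    while q:
        node, d = q.popleft()
        if node == dst:
            return d
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                q.append((nb, d + 1))

def monte_carlo_avg_dist(adj, samples=500, seed=42):
    rng = random.Random(seed)
    nodes = list(adj)
    return sum(bfs_dist(adj, *rng.sample(nodes, 2)) for _ in range(samples)) / samples

n = 20
complete = {i: [j for j in range(n) if j != i] for i in range(n)}
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(monte_carlo_avg_dist(complete), monte_carlo_avg_dist(ring))
# The complete graph always needs exactly 1 hop; the ring needs several.
```

Replacing the hop-distance metric with power-flow or loss computations on the test feeders gives the actual evaluation, but the sampling structure stays the same.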
In future work, we intend to introduce distributed energy storage systems (DESS, such as home batteries and electric vehicles) and to add the optimization of charge/discharge cycles to the model. With DESS, prosumers can decide whether excess energy should be sold for an immediate profit or stored for later use.
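The sell-versus-store decision can be sketched as a one-step rule: store surplus when the expected future price, discounted by battery losses, beats the current sell price. All prices, capacities, and the efficiency figure below are illustrative assumptions, not parameters of the planned model.

```python
# Sketch of a prosumer's sell-vs-store decision for surplus energy with a
# home battery. Numbers are illustrative assumptions only.

def decide_surplus(surplus_kwh, sell_price, expected_future_price,
                   battery_free_kwh, round_trip_efficiency=0.9):
    """Return (stored_kwh, sold_kwh) for the current time step."""
    # Storing only pays off if the future value survives round-trip losses.
    worth_storing = expected_future_price * round_trip_efficiency > sell_price
    stored = min(surplus_kwh, battery_free_kwh) if worth_storing else 0.0
    return stored, surplus_kwh - stored

# Evening prices are expected to rise, so most surplus goes to the battery;
# the battery can only take 4 of the 5 surplus kWh, so 1 kWh is sold now.
print(decide_surplus(5.0, sell_price=0.08, expected_future_price=0.25,
                     battery_free_kwh=4.0))  # (4.0, 1.0)
```

The full optimization would reason over the whole charge/discharge cycle rather than one step at a time, but the trade-off it balances is the one shown here.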
Modern Multi-Energy Systems
To date, electricity and natural gas systems have mostly been planned and operated independently. However, today's ambitious energy policies motivate a more integrated study of the energy system, in order to achieve energy efficiency and a low-carbon footprint while increasing the security and reliability of the system. Natural gas, indeed, has unquestionable advantages, from both an economic and an environmental point of view, compared with other fossil fuels. At the same time, an increasing number of medium- and small-scale renewable plants are connected to the electricity network, urging a focus on the physical and operational integration of the natural gas and power sectors. A new vision of the energy system is required, in which different infrastructures are integrated, such as natural gas and electricity grids, heating devices and small-scale renewable (solar) plants, while taking into account end-users' preferences and behavior.
In 2019, we developed a model for the optimal scheduling of both thermal and electrical loads in large communities of smart homes. The model solves a multi-objective problem that aims at minimizing CO2 emissions and/or energy costs, depending on the preferences of the users, by integrating the use of multiple energy carriers to supply residential energy demands. These goals can be achieved by promoting hybrid appliances and hybrid thermal loads, with the former using multiple energy carriers interchangeably and the latter being satisfied by diverse technologies.
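The core trade-off a hybrid appliance faces can be sketched as a weighted score over its carrier options. This is a toy illustration of the multi-objective idea, not the 2019 model itself; the emission and cost figures are invented.

```python
# Sketch of carrier selection for a hybrid thermal load: score each option
# by a weighted sum of CO2 and cost, where the weight encodes the user's
# preference between the two objectives. Figures are illustrative.

def choose_carrier(options, co2_weight=0.5):
    """options: {carrier: (kg_co2, euro_cost)}. Lower weighted score wins."""
    def score(item):
        _, (co2, cost) = item
        return co2_weight * co2 + (1 - co2_weight) * cost
    return min(options.items(), key=score)[0]

heating_options = {
    "gas_boiler": (2.0, 0.9),   # (kg CO2, EUR) per demand unit, assumed
    "heat_pump":  (0.8, 1.3),
}
print(choose_carrier(heating_options, co2_weight=0.8))  # emission-minded user
print(choose_carrier(heating_options, co2_weight=0.1))  # cost-minded user
```

Sweeping the weight from 0 to 1 traces the Pareto front between the two objectives, which is how a multi-objective scheduler exposes the choice to the user instead of hard-coding it.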
Moreover, we investigated the complexity of sustainable energy choices in daily tasks and the impact of using different CO2 emission intensities. Starting from a simple example, cooking pasta, we evaluated the environmental impact, in terms of CO2 emissions, of several alternative solutions, i.e., starting from cold or hot water and using different (combinations of) appliances. Although the emission savings are small for a single instance of an activity, simple daily choices that many people make can have a major sustainability impact. Yet, the complexity of such choices is overwhelming and our cognitive resources are too limited to cope with it. Automated systems and technologies are key elements to help and support the end-user in those decisions. However, those systems can achieve the desired goal only if they are accepted and used as intended by large groups of users.
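A toy version of the comparison makes the arithmetic concrete. The energy need, efficiencies, and emission intensities below are rough illustrative assumptions, not the figures from our study.

```python
# Worked toy version of the pasta example: compare the CO2 of delivering the
# same amount of heat with different appliances and emission intensities.
# All numbers are rough illustrative assumptions.

def co2_grams(energy_kwh, intensity_g_per_kwh, efficiency):
    """Emissions to deliver energy_kwh of useful heat with a given appliance."""
    return energy_kwh / efficiency * intensity_g_per_kwh

heat_needed_kwh = 0.1   # assumed: bring one litre of water to the boil
grid_intensity = 400    # g CO2/kWh, assumed average electricity mix
gas_intensity = 200     # g CO2/kWh, assumed for burning natural gas

kettle = co2_grams(heat_needed_kwh, grid_intensity, efficiency=0.85)
gas_hob = co2_grams(heat_needed_kwh, gas_intensity, efficiency=0.40)
print(round(kettle, 1), round(gas_hob, 1))
# Per boil the difference is a few grams; repeated daily across millions of
# households, the aggregate effect is what makes the choice matter.
```

Note how sensitive the outcome is to the assumed grid intensity: with a cleaner electricity mix the kettle wins by a wider margin, which is precisely why the choice of emission intensity values matters in such analyses.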
In collaboration with the University of Stuttgart, we investigated the design and development of a service-oriented architecture for the control and management of an intelligent microgrid. Microgrids are operated by energy management systems whose development is driven mainly by the contributions of technologies for distributed energy resources and techniques for the optimal planning of loads. Yet, little attention is given to the design of such systems as software tools, whose capabilities are often limited by a monolithic structure. We postulate that such systems need to intertwine both perspectives if the potential of microgrids is to be realized. We therefore proposed a novel design of an energy management system that considers both perspectives and is based on the principles of service-orientation. We implemented the proposed design in a service-oriented energy management system prototype and ran simulations with real data to test its feasibility.
Evolutionary Changes in Data Analysis
Modern data analysis platforms all too often assume that the application and the underlying data flow are static. That is, such platforms generally do not implement the capabilities to update individual components of running pipelines without restarting the pipeline, and they rely on data sources to remain unchanged while they are being used. In reality, however, these assumptions do not hold: data scientists come up with new methods to analyze data all the time, and data sources are almost by definition dynamic. Companies performing data science analyses must either accept that their pipeline goes down during an update, or run a duplicate of their often costly infrastructure to keep the pipeline operating.
In this research we present the Evolutionary Changes in Data Analysis (ECiDA) platform, with which we show how evolution and data science can go hand in hand. ECiDA aims to bridge the gap between engineers who build large-scale computation platforms on the one hand, and data scientists who perform analyses on large quantities of data on the other, while making change a first-class citizen. ECiDA allows data scientists to build their data science pipelines on scalable infrastructures and to make changes to them while they remain up and running. Such changes can range from parameter changes in individual pipeline components to general changes in the network topology. Changes may also be initiated by an ECiDA pipeline itself as part of a diagnostic response: for instance, it may dynamically replace a data source that has become unavailable with one that is available. To make sure the platform remains in a consistent state while performing these updates, ECiDA uses a set of automatic formal verification methods, such as constraint programming and AI planning, to transparently check the validity of updates and prevent undesired behavior.
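The kind of consistency check performed before applying a pipeline update can be sketched as two structural conditions: the new topology must remain acyclic, and every connection must be type-compatible. The pipeline model and component names below are invented for illustration; they are not ECiDA's actual API.

```python
# Sketch of a pre-update validity check on a pipeline topology: the rewired
# graph must stay a DAG, and each edge must connect type-compatible
# components. Model and names are invented for illustration.

def is_acyclic(edges, nodes):
    # Kahn's algorithm: a topological order exists iff the graph is a DAG.
    indeg = {n: 0 for n in nodes}
    for _, dst in edges:
        indeg[dst] += 1
    ready = [n for n, d in indeg.items() if d == 0]
    seen = 0
    while ready:
        n = ready.pop()
        seen += 1
        for src, dst in edges:
            if src == n:
                indeg[dst] -= 1
                if indeg[dst] == 0:
                    ready.append(dst)
    return seen == len(nodes)

def update_is_valid(components, edges):
    """components: {name: (input_type, output_type)}; edges: (src, dst) pairs."""
    types_ok = all(components[src][1] == components[dst][0]
                   for src, dst in edges)
    return types_ok and is_acyclic(edges, list(components))

components = {"source": (None, "rows"), "clean": ("rows", "rows"),
              "model": ("rows", "scores")}
print(update_is_valid(components, [("source", "clean"), ("clean", "model")]))
# An incompatible or cyclic rewiring is rejected before the running
# pipeline is touched.
```

Checking updates against such invariants before deployment is what lets a running pipeline evolve without ever entering an inconsistent state.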
Large-Scale Combinatorial Optimisation
Finding the optimal solution to a given problem is an important objective that arises in many different domains, ranging from making the best use of resources for a set of computational tasks to transporting packages in the least amount of time at the lowest possible cost. Constraint satisfaction problems offer a powerful mathematical framework to define such optimisation problems without having to specify explicitly how the problem must be optimised. Unfortunately, these problems are NP-complete, meaning that, in the worst case, an algorithm must consider all possible combinations of potential solutions. This computational complexity makes the approach, in general, impossible to apply to large-scale domains. However, problems sometimes have a particular structure that can be used to find a solution far more efficiently than the worst-case exponential complexity suggests. This research focuses on a class of problems with such a structure, which is a natural model for certain large-scale domains such as smart building systems, and on an algorithm that can solve problems of this class efficiently. After the principles of the algorithm have been established and the initial performance has been tested on synthetic problems, the aim is to apply the optimisation techniques to real-world problems to verify the results. Developing an algorithm that can determine whether a given constraint satisfaction problem belongs to the aforementioned class is another useful research direction, as it enables a more informed decision about which algorithm is most suitable for solving a given problem.
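A classic illustration of structure beating worst-case complexity is a tree-shaped constraint graph, which can be solved without any backtracking by enforcing directional arc consistency. The sketch below uses this well-known case as a stand-in for the structured problem class discussed above; the toy instance (zone temperatures within one degree of each other) is invented.

```python
# Sketch: a tree-structured CSP solved in polynomial time via directional
# arc consistency (backward pruning pass, then a backtrack-free forward
# assignment pass). The instance is a toy example.

def solve_tree_csp(domains, constraints, order):
    """order: variables parent-before-child; constraints: {(parent, child): ok(a, b)}."""
    # Backward pass: remove parent values with no consistent child value.
    for child in reversed(order[1:]):
        for (p, c), ok in constraints.items():
            if c == child:
                domains[p] = [a for a in domains[p]
                              if any(ok(a, b) for b in domains[c])]
    # Forward pass: assign greedily; by construction no backtracking is needed.
    assignment = {order[0]: domains[order[0]][0]}
    for child in order[1:]:
        for (p, c), ok in constraints.items():
            if c == child:
                assignment[c] = next(b for b in domains[c]
                                     if ok(assignment[p], b))
    return assignment

# Toy problem: three zone temperatures, each child within 1 degree of its parent.
domains = {"a": [18, 21], "b": [19, 23], "c": [20, 25]}
constraints = {("a", "b"): lambda x, y: abs(x - y) <= 1,
               ("b", "c"): lambda x, y: abs(x - y) <= 1}
print(solve_tree_csp(domains, constraints, ["a", "b", "c"]))
# Every assigned value ends within 1 degree of its parent's value.
```

The general problem remains NP-complete; the point is that once the constraint graph's structure is recognized, a dedicated algorithm replaces exponential search with two linear passes, which is the kind of gain this research pursues for its broader problem class.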