MODELS 2019 Accepted Papers
Foundations Track
Technical Papers
Authors: Erwan Bousse and Manuel Wimmer
Executable Domain-Specific Languages (DSLs) are commonly defined with either operational semantics (i.e. interpretation) or translational semantics (i.e. compilation). An interpreted DSL relies on domain concepts to specify the possible execution states and steps, which enables the observation and control of executions using the very same domain concepts. In contrast, a compiled DSL relies on a transformation to an arbitrarily different target language. This creates a conceptual gap, where the execution can only be observed and controlled through target domain concepts, to the detriment of experts or tools that only understand the source domain. To address this problem, we propose a language engineering pattern for compiled DSLs that enables the observation and control of executions using source domain concepts. The pattern requires the definition of the source domain execution steps and states, along with a feedback manager that translates steps and states of the target domain back to the source domain. We evaluate the pattern by applying it to two different compiled DSLs, and show that it does enable domain-level observation and control while multiplying the execution time by 2.0 in the worst observed case.
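As an illustrative sketch (not taken from the paper), the feedback idea can be approximated by a traceability map from generated target elements back to source elements, through which observed target-level steps are translated into source-level steps; all names below are hypothetical:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class FeedbackManager:
        """Maps target-level execution steps back to source-domain elements
        using trace links recorded during compilation (hypothetical sketch)."""
        trace: dict = field(default_factory=dict)  # target element id -> source element id

        def record(self, target_id: str, source_id: str) -> None:
            self.trace[target_id] = source_id

        def on_target_step(self, target_id: str, target_state: dict) -> Optional[dict]:
            source_id = self.trace.get(target_id)
            if source_id is None:
                return None  # this target step has no source-domain counterpart
            return {"source_element": source_id, "state": target_state}

    # Usage: the compiler records trace links; a source-level debugger consumes the translated steps.
    fm = FeedbackManager()
    fm.record("petrinet.Place_1", "activity.Action_buy")
    print(fm.on_target_step("petrinet.Place_1", {"tokens": 2}))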
Authors: Valentin Besnard, Ciprian Teodorov, Frédéric Jouault, Matthias Brun and Philippe Dhaussy
The increasing complexity of embedded systems renders the verification of software programs more complex and may require applying monitoring and formal techniques, like model-checking. However, to use such techniques, system engineers usually need formal methods experts to express software requirements in a formal language. To facilitate the use of model-checking tools by system engineers, our approach consists of using a UML model interpreter with which the software requirements can be directly expressed as observer automata in UML as well. These observer automata are synchronously composed with the system, and can be used unchanged both for model verification and runtime monitoring. Our approach has been evaluated on the user interface model of a cruise control system. The observer verification results are consistent with the verification of equivalent LTL properties. The runtime overhead of the monitoring infrastructure is 6.5%, with only 1.2% memory overhead.
Authors: Phuong Nguyen, Juri Di Rocco, Davide Di Ruscio, Alfonso Pierantonio and Ludovico Iovino
Manual classification of metamodels in repositories requires highly trained personnel, and the results are usually influenced by the subjectivity of human perception. Therefore, automated metamodel classification is highly desirable. In this work, machine learning techniques have been employed for automated metamodel classification. In particular, a tool implementing a feed-forward neural network is introduced to classify metamodels. An experimental evaluation over a dataset of 555 metamodels demonstrates that the technique can learn from manually classified data and effectively categorize incoming unlabeled data with a considerably high prediction rate: the best performance comprises a 95.40% success rate, 0.945 precision, 0.938 recall, and 0.942 F1 score.
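As an illustrative sketch (not the authors' tool), a feed-forward classifier over bag-of-words features built from metamodel element names could be set up as follows; the example metamodels, labels, and layer sizes are assumptions for illustration only:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    # Each "document" is the concatenated element names of one metamodel (hypothetical data).
    metamodels = [
        "StateMachine State Transition Trigger",
        "Class Attribute Operation Association",
        "Place Transition Arc Token",
    ]
    labels = ["behavioural", "structural", "behavioural"]

    # TF-IDF features feeding a small feed-forward (multi-layer perceptron) network.
    clf = make_pipeline(TfidfVectorizer(),
                        MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))
    clf.fit(metamodels, labels)
    print(clf.predict(["Actor UseCase Association"]))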
Authors: Nelly Bencomo and Luis Garcia-Paucar
A model at runtime can be defined as an abstract representation of a system, including its structure and behaviour, which exists alongside the running system. Runtime models provide support for decision-making and reasoning based on design-time knowledge, but also on information that may emerge at runtime and was not foreseen before execution. [Questions/Problems] A challenge that persists is the update of runtime models during execution to provide up-to-date information for reasoning and decision-making. New techniques based on machine learning (ML) and Bayesian inference offer great potential to support the update of runtime models during execution. Runtime models can be updated using these new techniques to, therefore, offer better-informed decision-making based on evidence collected at runtime. The techniques we use in this paper are a novel implementation of Partially Observable Markov Decision Processes (POMDPs). [Contribution] In this paper, we demonstrate how, given the requirements specification, a Requirements-aware runtime model based on POMDPs (RaM-POMDP) is defined. We study in detail the nature of such runtime models coupled with consideration of the Bayesian inference algorithms and tools that provide evidence of changes in the environment. We show how the RaM-POMDPs and the MAPE-K loop offer the basis of the software architecture presented and how the required causal connection of runtime models is realized. Specifically, we demonstrate how, according to evidence of changes in the system collected by the monitoring infrastructure and using Bayesian inference, the runtime models are updated and inferred (i.e. the first aspect of the causal connection). We also demonstrate how the running system changes its runtime model, thereby producing the corresponding self-adaptations. These self-adaptations are reflected on the managed system (i.e. the second aspect of the causal connection) to better satisfice the requirements specifications and improve conformance to its service level agreements (SLAs). The experiments have been applied to a real case study in the networking application domain.
Authors: Assylbek Jumagaliyev and Yehia Elkhatib
Multi-tenancy enables efficient resource utilization by sharing application resources across multiple customers. In cloud applications, the data layer is often the prime candidate for multi-tenancy, and usually comprises a combination of different cloud storage solutions such as relational and non-relational databases, and blob storage. These storage types are quite divergent, and thus each requires its own partitioning scheme to ensure tenant isolation and scalability. Currently, multi-tenant data architectures are implemented through manual coding techniques that tend to be time consuming and error prone. In this paper, we propose a domain-specific modeling language, CadaML, that provides concepts and notations to model a multi-tenant data architecture in an abstract way, and also provides tools to validate the data architecture and automatically produce application code. Using an experiment of re-architecting the data layer of an industrial business process analysis application, we observe that developers using CadaML were more productive by a factor of 3.5. We also report on the benefits gained in development effort, reliability of generated code, and usability.
Authors: Parsa Pourali and Joanne M. Atlee
Model-Driven Engineering has been proposed to increase the productivity of developing a software system. Despite its benefits, it has not been fully adopted in the software industry. Research has shown that modelling tools are amongst the top barriers for the adoption of MDE by industry. Recently, researchers have conducted empirical studies to identify the most severe cognitive difficulties of modellers when using UML model editors. Their analyses show that users’ prominent challenges are in remembering contextual information when performing a particular modelling task, and in locating, understanding, and fixing errors in the models. To alleviate these difficulties, we propose two Focus+Context user interfaces that provide enhanced cognitive support and automation in the user’s interaction with a model editor. Moreover, we conducted two empirical studies to assess the effectiveness of our interfaces on human users. Our results reveal that our interfaces help users 1) improve their ability to successfully fulfil their tasks, 2) avoid unnecessary switches among diagrams, 3) produce more error-free models, 4) remember contextual information, and 5) reduce time on tasks.
Authors: Esther Guerra, Juan De Lara and Jesús Sánchez Cuadrado
Ensuring the correctness of model transformations is crucial to obtain high-quality solutions in model-driven engineering. Testing is a common approach to detect errors in transformations, which requires having methods to assess the effectiveness of the test cases and improve their quality. Mutation testing permits assessing the quality of a test suite by injecting artificial faults in the system under test. These emulate common errors made by competent developers and are modelled using mutation operators. Some researchers have proposed sets of mutation operators for transformation languages like ATL. However, their suitability for an effective mutation testing process has not been investigated, and there is no automated mechanism to generate test models that increase the quality of the tests. In this paper, we use transformations created by third parties to evaluate the effectiveness of sets of ATL mutation operators proposed in the literature, and of other operators that we have devised based on empirical evidence of real errors made by developers. Likewise, we evaluate the effectiveness of commonly used test model generation techniques. For the cases in which a test suite does not detect an injected fault, we propose an automated method to synthesize new test models able to detect it. Finally, as a technical contribution, we make available a framework that automates this process for ATL.
Authors: Alexandru Burdusel, Steffen Zschaler and Stefan John
Recently there has been increased interest in combining the fields of Model-Driven Engineering (MDE) and Search-Based Software Engineering (SBSE). Such approaches use meta-heuristic search guided by search operators (model mutators and sometimes breeders) implemented as model transformations. The design of these operators can substantially impact the effectiveness and efficiency of the meta-heuristic search. Currently, designing search operators is left to the person specifying the optimisation problem. However, developing consistent and efficient search-operator rules requires not only domain expertise but also in-depth knowledge about optimisation, which makes the use of model-based meta-heuristic search challenging and expensive. In this paper, we propose a generalized approach to automatically generate atomic consistency preserving search operators (aCPSOs) for a given optimisation problem. This reduces the effort required to specify an optimisation problem and shields optimisation users from the complexity of implementing efficient meta-heuristic search mutation operators. We evaluate our approach with a set of case studies, and show that the automatically generated rules are comparable to, and in some cases better than, manually created rules at guiding evolutionary search towards near-optimal solutions.
Authors: Byron Devries and Betty Cheng
Non-functional goals specify a quality attribute of the functional goals for the system-to-be (e.g., cost, performance, security, and safety). However, non-functional goals are often cross-cutting and do not naturally fit within the default decomposition expressed by a functional goal model. Further, any functional mitigations that ensure the satisfaction of a non-functional goal, or occur in the event a non-functional goal is violated, are conditionally applicable to the remainder of the system-to-be. Rather than modeling non-functional goals and their associated mitigations as a part of the system-to-be goal model, we introduce a method for modeling and analyzing non-functional goals and their associated mitigations as separate models. We illustrate our approach by applying our method to model non-functional goals related to an industry-based automotive braking system and analyze them for non-functional violations.
Authors: Antonio Garcia-Dominguez, Nelly Bencomo, Juan Marcelo Parra-Ullauri and Luis Garcia Paucar
Models are not static entities: they evolve over time due to changes. Changes may inadvertently violate imposed constraints. Therefore, models need to be monitored for compliance. On the one hand, in traditional design-time applications, new and evolving requirements impose changes on a model over time. These changes may accidentally break design rules. Further, the growing complexity of the models may need to be tracked for manageability. On the other hand, newer applications use models at runtime, building runtime abstractions that are used to control a system. Adopters of these approaches need to query the history of the system to check whether the models evolved as expected, or to find out the reasons for a particular behavior. Changes to models at runtime are more frequent than changes to design models. To cover these demands, we argue that a flexible and scalable approach for querying the history of models is needed to study their evolution and to check compliance. This paper presents a set of extensions to a model query language inspired by the Object Constraint Language (the Epsilon Object Language) for traversing the history of a model, and for making temporal assertions that allow the elicitation of historic information. As querying long histories may be costly, the paper presents an approach that annotates versions of interest as they are observed, in order to provide efficient recalls in possible future queries. The approach has been implemented in a model indexing tool, and is demonstrated through a case study from the autonomous and self-adaptive systems domain.
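As an illustrative sketch (not the paper's EOL extensions), a temporal assertion over a recorded model history can be expressed as simple "always"/"eventually" checks over a list of versions; the version representation below is a hypothetical stand-in for an indexed model history:

    from typing import Callable, Iterable

    Version = dict  # hypothetical: one snapshot of the model's properties

    def always(history: Iterable[Version], pred: Callable[[Version], bool]) -> bool:
        """True if the predicate holds in every recorded version."""
        return all(pred(v) for v in history)

    def eventually(history: Iterable[Version], pred: Callable[[Version], bool]) -> bool:
        """True if the predicate holds in at least one recorded version."""
        return any(pred(v) for v in history)

    history = [
        {"timestamp": 0, "cpu_load": 0.2, "servers": 1},
        {"timestamp": 1, "cpu_load": 0.9, "servers": 1},
        {"timestamp": 2, "cpu_load": 0.4, "servers": 2},
    ]

    # Did the self-adaptive system ever scale out, and did load always stay below 95%?
    print(eventually(history, lambda v: v["servers"] > 1))
    print(always(history, lambda v: v["cpu_load"] < 0.95))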
Authors: Evgeny Kusmenko and Bernhard Rumpe
The field of deep learning has become more and more pervasive in recent years, as we have seen varieties of problems being solved using neural processing techniques. Image analysis and detection, control, speech recognition, and translation are only a few prominent examples tackled successfully by neural networks. Thereby, the discipline imposes a completely new problem-solving paradigm requiring a rethinking of classical software development methods. The high demand for deep learning technology has led to a large amount of competing frameworks, mostly having a Python interface, a quasi-standard in the community. Although the existing tools often provide great flexibility and high performance, they still lack a completely domain-oriented view of the problem. Furthermore, using neural networks as reusable building blocks with clear interfaces in productive systems is still a challenge. In this work we propose a domain-specific modeling methodology tackling design, training, and integration of deep neural networks. Thereby, we distinguish between the three main modeling concerns: architecture, training, and data. We integrate our methodology in a component-based modeling toolchain allowing one to employ and reuse neural networks in large software architectures.
Authors: Sven Peldszus, Katja Tuma, Daniel Strüber, Riccardo Scandariato and Jan Jürjens
During the development of security-critical software, the system implementation must capture the security properties postulated by the architectural design. This paper presents an approach to support secure data-flow compliance checks between design models and code. To iteratively guide the developer in discovering such compliance violations, we introduce automated mappings. These mappings are created by searching for correspondences between a design-level model (Security Data Flow Diagram) and an implementation-level model (Program Model). We limit the search space by considering name similarities between model elements and code elements as well as by the use of heuristic rules for matching data-flow structures. The main contributions of this paper are three-fold. First, the automated mappings support the designer in an early discovery of implementation absence, convergence, and divergence with respect to the planned software design. Second, the mappings also support the discovery of secure data-flow compliance violations in terms of illegal asset flows in the software implementation. Third, we present our implementation of the approach as a publicly available Eclipse plugin and its evaluation on five open source Java projects (including Eclipse secure storage).
Authors: Beatriz A. Sanchez, Dimitris Kolovos, Richard Paige, Athanasios Zolotas and Horacio Hoyos
MATLAB/Simulink is a tool for dynamic system modelling widely used across industries such as aerospace and automotive. Model management languages such as OCL, ATL, and the languages of the Epsilon platform enable the validation, model-to-model transformation, and model-to-text transformation of models, but tend to focus on the Eclipse Modelling Framework (EMF), a de facto standard for domain-specific modelling. As Simulink models are built on an entirely different technical stack, the current solution to manipulate them with such languages requires transforming them into an EMF-compatible representation. This approach is expensive as (a) the cost of the transformation can be crippling for large models, (b) it requires synchronizing the native Simulink model and its EMF counterpart, and (c) the EMF representation may be an incomplete copy of the model, potentially hampering model management operations. In this paper we propose an alternative approach that uses the MATLAB API to bridge Simulink models with existing model management languages, relying on the “on-the-fly” translation of model management language constructs into MATLAB/Simulink commands. Our approach not only eliminates the cost of the transformation and of the co-evolution of the EMF-compatible representation, but also enables full access to all aspects of Simulink models. We evaluate the performance of both approaches using a set of model validation constraints executed on a sample of the largest Simulink models available on GitHub. Our evaluation suggests that the on-the-fly translation approach can reduce the model validation time by up to 80%.
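As an illustrative sketch (not the paper's implementation), the on-the-fly idea can be seen as an adapter that turns property access on a model element into a backend command at the moment it is evaluated; `run_matlab` below is a hypothetical callable standing in for the MATLAB API:

    from typing import Any, Callable

    class SimulinkBlock:
        """Proxy that resolves property reads lazily via a backend command
        instead of a pre-materialized EMF copy (hypothetical sketch)."""

        def __init__(self, path: str, run_matlab: Callable[[str], Any]):
            self._path = path
            self._run = run_matlab

        def get(self, prop: str) -> Any:
            # Translate the model management construct into a Simulink command on demand.
            return self._run(f"get_param('{self._path}', '{prop}')")

    # Usage with a fake backend; a real one would forward the command to MATLAB.
    fake_backend = lambda cmd: {"get_param('demo/Gain1', 'Gain')": "2"}.get(cmd)
    block = SimulinkBlock("demo/Gain1", fake_backend)
    print(block.get("Gain"))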
New Ideas and Vision
Authors: Loli Burgueño, Jordi Cabot and Sebastien Gerard
Model transformations are a key element in any model-driven engineering approach. But writing them is a time-consuming and error-prone activity that requires specific knowledge of the transformation language semantics.
We propose to take advantage of the advances in Artificial Intelligence and, in particular, Long Short-Term Memory (LSTM) neural networks, to automatically infer model transformations from sets of input-output model pairs. Once the transformation mappings have been learned, the LSTM system is able to autonomously transform new input models into their corresponding output models without the need to write any transformation-specific code. We evaluate the correctness and performance of our approach and discuss its advantages and limitations.
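As an illustrative sketch (not the paper's architecture), one common way to set up such sequence-to-sequence learning is an LSTM encoder-decoder over serialized models; the vocabulary size and dimensions below are arbitrary assumptions:

    from tensorflow.keras import layers, Model

    num_tokens = 64    # size of a hypothetical vocabulary of serialized model elements
    latent_dim = 256

    # Encoder: reads the serialized input model and summarizes it into a state.
    enc_in = layers.Input(shape=(None, num_tokens))
    _, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(enc_in)

    # Decoder: emits the serialized output model, conditioned on the encoder state.
    dec_in = layers.Input(shape=(None, num_tokens))
    dec_seq, _, _ = layers.LSTM(latent_dim, return_sequences=True, return_state=True)(
        dec_in, initial_state=[state_h, state_c])
    dec_out = layers.Dense(num_tokens, activation="softmax")(dec_seq)

    model = Model([enc_in, dec_in], dec_out)
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    # model.fit([encoder_inputs, decoder_inputs], decoder_targets, epochs=...)  # one-hot tensors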
Authors: Thomas Hartmann, Assaad Moawad, Cedric Schockaert, Francois Fouquet and Yves Le Traon
Although artificial intelligence and machine learning are currently extremely fashionable, applying machine learning to real-life problems remains very challenging. Data scientists need to evaluate various learning algorithms and tune their numerous parameters, based on their assumptions and experience, against concrete problems and training data sets. This is a long, tedious, and resource-expensive task. Meta-learning is a recent technique to overcome, i.e. automate, this problem. It aims at using machine learning itself to automatically learn the most appropriate algorithms and parameters for a machine learning problem. As it turns out, there are many parallels between meta-modelling (in the sense of model-driven engineering) and meta-learning. Both rely on abstractions, the meta data, to model a predefined class of problems and to define the variabilities of the models conforming to this definition. Both are used to define the output and input relationships and then to fit the right models to represent that behaviour. In this paper, we envision what a meta-model for meta-learning can look like. We discuss possible variabilities, what types of learning it could be appropriate for, how concrete learning models can be generated from it, and how models can be finally selected. Last but not least, we discuss a possible integration into existing modelling tools.
Authors: Thomas Brand and Holger Giese
A structural runtime model is a causally connected abstract representation of a system that allows monitoring the system and adapting its configuration. Systems are often constructed to operate continuously. Thus, the corresponding runtime model instances need to be long-living and available without interruptions, too. Interruptions occur, e.g., if a model needs to be re-instantiated with a new version of the modeling language to support new kinds of domain-specific information. Adaptive runtime models render such interruptions unnecessary and enable changing information demands at runtime. They support multiple abstraction levels and allow adjusting over time which details of different system or environment parts are represented. This helps to focus attention for effective and efficient decision making. In this vision paper we present the fundamental idea of a generic modeling language for structural runtime models and propose requirements and quality characteristics as criteria for its evaluation.
Authors: Márton Búr and Daniel Varro
Recent approaches in runtime monitoring and live data analytics have started to use expressive graph queries at runtime to capture and observe properties of interest at a high level of abstraction. However, in a critical context, such applications often require timeliness guarantees, which have not been investigated yet for query-based solutions due to limitations of existing static worst-case execution time (WCET) analysis techniques. One limitation is the lack of support for dynamic memory allocation, which is required by the dynamically evolving runtime models on which the queries are evaluated. Another open challenge is to compute WCET for asynchronously communicating programs such as distributed monitors. This paper introduces our vision about how to assess such timeliness properties and how to provide tight WCET estimates for query execution at runtime over a dynamic model. Furthermore, we present an initial solution that combines state-of-the-art parametric WCET estimations with model statistics and search plans of queries.
Authors: Aren Babikian, Csaba Hajdu, István Majzik, Kristóf Marussy, Zoltan Micskei, Oszkár Semeráth, Zoltán Szatmári, Daniel Varro and András Vörös
Since safety-critical autonomous vehicles need to interact with an immensely complex and continuously changing environment, their assurance is a major challenge. While systems engineering practice necessitates assurance on multiple levels, existing research focuses dominantly on component-level assurance while neglecting complex system-level traffic scenarios. In this paper, we aim to address the system-level testing of the situation-dependent behavior of autonomous vehicles by combining various model-based techniques on different levels of abstraction. (1) Safety properties are continuously monitored in challenging test scenarios (obtained in simulators or field tests) using graph query and complex event processing techniques. (2) To precisely quantify the coverage of an existing test suite with respect to regulations of safety standards, we provide qualitative abstractions of causal, temporal, or geospatial data recorded in individual runs into situation graphs, which allows us to systematically measure system-level situation coverage (on an abstract level) with respect to safety concepts captured by domain experts. (3) Moreover, by adapting consistent graph generation techniques, we can systematically derive new challenging (abstract) situations that justifiably lead to runtime behavior which has not been tested so far, thus increasing situation coverage. Finally, (4) such abstract test cases are concretized so that they can be investigated in a real or simulated context.
Practice & Innovation Track
Authors: Mauricio Alferez, Fabrizio Pastore, Mehrdad Sabetzadeh, Lionel Briand and Jean-Richard Riccardi
Acceptance criteria (AC) are implementation-agnostic conditions that a system must meet to be consistent with its requirements and be accepted by its stakeholders. Each acceptance criterion is typically expressed as a natural language statement with a clear pass or fail outcome. Writing AC is a tedious and error-prone activity, especially when the requirements specifications evolve and there are different analysts and testing teams involved. Analysts and testers must iterate multiple times to ensure that AC are understandable and feasible, and accurately address the most important requirements and workflows of the system being developed.
In many cases, analysts express requirements through models, along with natural language, typically in some variant of the UML. AC must then be derived by developers and testers from such models. In this paper, we bridge the gap between requirements models and AC by providing a UML-based modeling methodology and an automated solution to generate AC. We target AC in the form of behavioral specifications in the context of Behavior-Driven Development (BDD), a widely used agile practice in many application domains. More specifically, we target the well-known Gherkin language to express AC, which can then be used to generate executable test cases.
We evaluate our modeling methodology and AC generation solution through an industrial case study in the financial domain. Our results suggest that (1) our methodology is feasible to apply in practice, and (2) the additional modeling effort required by our methodology is outweighed by the benefits the methodology brings in terms of automated and systematic AC generation and improved model precision.
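The abstract above targets the Gherkin language for expressing AC. As an illustrative sketch (not the paper's generator), Gherkin scenarios can be emitted from a simple structured representation of a use-case flow; the dictionary format below is a hypothetical stand-in for the UML-based models:

    # Hypothetical structured scenario extracted from a requirements model.
    scenario = {
        "feature": "Open investment fund",
        "name": "Valid subscription request",
        "given": ["the investor has a verified account"],
        "when": ["the investor submits a subscription request of 1000 EUR"],
        "then": ["the request is accepted", "a confirmation is sent to the investor"],
    }

    def to_gherkin(s: dict) -> str:
        """Render one scenario as Gherkin text (illustrative only)."""
        lines = [f"Feature: {s['feature']}", "", f"  Scenario: {s['name']}"]
        for kw in ("given", "when", "then"):
            for i, step in enumerate(s[kw]):
                prefix = kw.capitalize() if i == 0 else "And"
                lines.append(f"    {prefix} {step}")
        return "\n".join(lines)

    print(to_gherkin(scenario))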
Authors: Damiano Torre, Ghanem Soltana, Mehrdad Sabetzadeh, Lionel C. Briand, Yuri Auffinger and Peter Goes
The General Data Protection Regulation (GDPR) harmonizes data privacy laws and regulations across Europe. Through the GDPR, individuals are able to better control their personal data in the face of new technological developments. While the GDPR is highly advantageous to individuals, complying with it poses major challenges for organizations that control or process personal data. Since no automated solution with broad industrial applicability currently exists for GDPR compliance checking, organizations have no choice but to perform costly manual audits to ensure compliance. In this paper, we share our experience building a UML representation of the GDPR as a first step towards the development of future automated methods for assessing compliance with the GDPR. Given that a concrete implementation of the GDPR is affected by the national laws of the EU member states, GDPR’s expanding body of case law and other contextual information, we propose a two-tiered representation of the GDPR: a generic tier and a specialized tier. The generic tier captures the concepts and principles of the GDPR that apply to all contexts, whereas the specialized tier describes a specific tailoring of the generic tier to a given context, including the contextual variations that may impact the interpretation and application of the GDPR. We further present the challenges we faced in our modeling endeavor, the lessons we learned from it, and future directions for research.
Authors: Sam Procter and Lutz Wrage
Advances in model-based systems engineering have greatly increased the predictive power of models and the analyses that can be run on them. At the same time, designs have become more modular and component-based. It can be difficult to manually explore all possible system designs due to the sheer number of possible architectures and configurations; design space exploration has arisen as a solution to this challenge. In this work, we present the Guided Architecture Trade Space Explorer (GATSE), a software tool that connects an existing model-based engineering language (AADL) and tool (OSATE) to an existing design space exploration tool (ATSV). GATSE, AADL, and OSATE are all designed to be easily extended by users, which enables relatively straightforward domain customizations. ATSV, combined with these customizations, lets system designers “shop” for candidate architectures and interactively explore the architectural trade space according to any quantifiable quality attribute or system characteristic. We evaluate GATSE according to an established framework for variable system architectures, and demonstrate its use on an avionics subsystem.
Authors: Antonio Bucchiarone, Antonio Cicchetti and Annapaola Marconi
Gamification is increasingly used to build solutions for driving the behaviour of target user populations. Gameful systems are typically exploited to keep users involved in certain activities and/or to modify an initial behaviour through game-like elements, such as awarding points, submitting challenges, and/or fostering competition and cooperation with other players. Gamification mechanisms are well-defined and composed of different ingredients that have to be correctly amalgamated; among these we find single/multi-player challenges targeted at reaching a certain goal and providing an adequate award as compensation. Since current approaches are largely based on hand-coding/tuning, when the game grows in complexity, keeping track of all the mechanisms and maintaining the implementation can become error-prone and tedious activities. In this paper, we describe a multi-level modelling approach for the definition of gamification mechanisms, from their design to their deployment and runtime adaptation. The approach is implemented by means of JetBrains MPS, a text-based metamodelling framework, and validated using two gameful systems in the Education and Mobility domains.
Authors: Brice Morin and Nicolas Ferry
A recurring issue in generative approaches, in particular if they generate code for multiple target languages, is logging. How to ensure that logging is performed consistently for all the supported languages? How to ensure that the specific semantics of the source language, e.g. a modeling language or a domain-specific language, is reflected in the logs? How to expose logging concepts directly in the source language, so as to let developers specify what to log? This paper reports on our experience developing a concrete logging approach for ThingML, a textual modeling language built around asynchronous components, statecharts and a first-class action language, as well as a set of “compilers” targeting C, Go, Java and JavaScript.
Authors: Muhammad Zohaib Iqbal, Hassan Sartaj, Muhammad Uzair Khan, Fitash Ul Haq and Ifrah Qaisar
Avionics are highly critical systems that require extensive testing governed by international safety standards. Cockpit Display Systems (CDS) are an essential component of modern aircraft cockpits and display information from the user application (UA) using various widgets. A significant step in the testing of avionics is to evaluate whether these CDS are displaying the correct information. A common industrial practice is to manually test the information on these CDS by taking the aircraft into different scenarios during simulation. Such testing is required very frequently and whenever the avionics change. Given the large number of scenarios to test, manual testing of such behavior is a laborious activity. In this paper, we propose a model-based strategy for automated testing of the information displayed on CDS. Our testing approach focuses on evaluating that the information from the user applications is being displayed correctly on the CDS. For this purpose, we develop a profile for capturing the details of different widgets of the display screens using models. The profile is based on the ARINC 661 standard for Cockpit Display Systems. The expected behavior of the CDS visible on the screens of the aircraft is captured using constraints written in the Object Constraint Language. We apply our approach to an industrial case study of a Primary Flight Display (PFD) developed for an aircraft. Our results show that the proposed approach is able to automatically identify faults in the simulation of the PFD. Based on the results, we conclude that the proposed approach is useful in finding display faults in avionics CDS.
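As an illustrative sketch (not the paper's profile or constraints), an OCL-style invariant such as "the altitude readout must display the value provided by the user application" can be checked over a simple widget representation; all names and the tolerance are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class Label:            # hypothetical stand-in for an ARINC 661 label widget
        name: str
        displayed_value: float

    def check_altitude_invariant(widget: Label, ua_altitude: float, tolerance: float = 0.5) -> bool:
        """OCL-like invariant: the PFD altitude label must match the UA-provided altitude."""
        return abs(widget.displayed_value - ua_altitude) <= tolerance

    pfd_altitude = Label(name="AltitudeReadout", displayed_value=10000.0)
    print(check_altitude_invariant(pfd_altitude, ua_altitude=10002.0))  # -> False (display fault)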
Authors: Nadia Hammoudeh Garcia, Ludovic Delval, Mathias Lüdtke, André Santos, Björn Kahl and Mirko Bordignon
Authors: Cesar Augusto Ribeiro dos Santos, Amr Hany Saleh, Tom Schrijvers and Mike Nicolai
It is difficult to maintain consistency between artifacts produced during the development of mechatronic systems, and to ensure the successful integration of independently developed parts. The difficulty stems from the complex, multidisciplinary nature of the problem, with multiple artifacts produced by each engineering domain, throughout the design process, and across supplier chains. In this work, we develop a methodology and a tool, CONDEnSe, that, given a set of Assume/Guarantee (A/G) contracts capturing the system requirements and a high-level decomposition of the system model, automatically generates design variants that respect the requirements and exports those variants to different engineering tools for analysis. Our methodology makes use of a contract-based design algebra to ensure that all generated artifacts for all design variants are consistent by construction, even when the process is modularized and independently developed parts are only later integrated. In contrast with earlier work, our approach reduces the search space to models that comply with the captured design requirements.
Authors: Philipp Obergfell, Stefan Kugele and Eric Sax
Automotive software architectures describe distributed functionality through an interplay of software components. One drawback of today’s architectures is their strong integration into the onboard communication network based on dependencies predefined at design time. To foster independence, the idea of service-oriented architecture (SOA) provides a suitable prospect, as network communication is established dynamically at run-time. Aim: We aim to provide a model-based design methodology for analysing and synthesising hardware resources of automotive service-oriented architectures. Approach: We apply the concepts of design space exploration and simulation to analyse and synthesise deployment configurations at an early stage of development. Result: We present an architecture candidate for an example function from the domain of automated driving. Based on corresponding simulation results, we gained insights into the feasibility of implementing this candidate within our currently considered next E/E architecture generation. Conclusion: The introduction of service-oriented architectures strictly requires early run-time assessments. To get there, the use of models and model transformations represents a reasonable way forward, additionally accounting for quality and development speed.
Authors: Gopi Krishnan Rajbahadur, Gustavo Oliva, Ahmed Hassan and Juergen Dingel
Data science pipelines are a sequence of data processing steps that aim to derive knowledge and insights from raw data. Data science pipeline tools simplify the creation and automation of data science pipelines by providing reusable building blocks that users can drag and drop into their pipelines. Such a graphical, model-driven approach enables users with limited data science expertise to create complex pipelines. However, recent studies show that there exist several data science pitfalls that can yield spurious results and, consequently, misleading insights. Yet, none of the popular pipeline tools have built-in quality control measures to detect these pitfalls. Therefore, in this paper, we propose an approach called Pitfalls Analyzer to detect common pitfalls in data science pipelines. As a proof-of-concept, we implemented a prototype of the Pitfalls Analyzer for KNIME, which is one of the most popular data science pipeline tools. Our prototype is itself model-driven engineered, since the detection of pitfalls is accomplished using pipelines that were created with KNIME building blocks. To showcase the effectiveness of our approach, we ran our prototype on 11 pipelines that were created by KNIME experts for 3 Internet-of-Things (IoT) projects. The results indicate that our prototype flags all and only those instances of the pitfalls that we were able to flag while manually inspecting the pipelines.
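As an illustrative sketch (not the KNIME-based prototype), one well-known pitfall of this kind, fitting a preprocessing step on the whole dataset before the train/test split (data leakage), can be detected by checking step ordering in a simple pipeline representation; the step names are hypothetical:

    # Hypothetical flattened view of a data science pipeline: an ordered list of step kinds.
    pipeline = ["load_csv", "normalize", "train_test_split", "train_model", "evaluate"]

    def has_leakage_via_preprocessing(steps: list[str]) -> bool:
        """Pitfall: preprocessing fitted on all data before the train/test split."""
        preprocessing = {"normalize", "impute_missing", "select_features"}
        for i, step in enumerate(steps):
            if step == "train_test_split":
                return any(s in preprocessing for s in steps[:i])
        return False  # no split step found; that is a different pitfall, not flagged here

    print(has_leakage_via_preprocessing(pipeline))  # -> True: 'normalize' precedes the split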
Authors: Ennio Visconti, Christos Tsigkanos, Zhenjiang Hu and Carlo Ghezzi
Technological advances enable new kinds of smart environments exhibiting complex behaviors; smart cities are a notable example. Smart functionalities heavily depend on space and need to be aware of entities typically found in the spatial domain, e.g. roads, intersections or buildings in a smart city. We advocate a model-based development, where the model of physical space, coming from the architecture and civil engineering disciplines, is transformed into an analyzable model upon which smart functionalities can be embedded. Such models can then be formally analyzed to assess a composite system design. We focus on how a model of physical space specified in the CityGML standard language can be transformed into a model amenable to analysis and how the two models can be automatically kept in sync after possible changes. This approach is essential to guarantee safe model-driven development of composite systems inhabiting physical spaces. We showcase transformations of real CityGML models in the context of scenarios concerning both design time and runtime analysis of space-dependent systems.
Authors: Dennis Priefer, Peter Kneisel, Wolf Rost, Daniel Strüber and Gabriele Taentzer
Content Management Systems (CMSs) such as Joomla and WordPress dominate today’s web. Enabled by standardized extensions, administrators can build powerful web applications for diverse customer demands. However, developing CMS extensions requires sophisticated technical knowledge, and the highly schematic code structure of an extension gives rise to errors during typical development and migration scenarios. Model-driven development (MDD) seems to be a promising paradigm to address these challenges; however, it has not yet found adoption in the CMS domain. Systematic evidence of the benefit of applying MDD in this domain could facilitate its adoption; however, an empirical investigation of this benefit is currently lacking.
In this paper, we present a mixed-method empirical investigation of applying MDD in the CMS domain, based on an interview suite, a controlled experiment, and a field experiment. We consider three scenarios of developing new (both independent and dependent) CMS extensions and of migrating existing ones to a new major platform version. The experienced developers in our interviews acknowledge the relevance of these scenarios and report on experiences that render them suitable candidates for a successful application of MDD. We found a particularly high relevance of the migration scenario. Our experiments largely confirm the potentials and limits of MDD as identified for other domains. In particular, we found a productivity increase of up to a factor of 17 during the development of CMS extensions. Furthermore, our observations highlight the importance of good tooling that seamlessly integrates with already used tool environments and processes.
SoSyM First Papers
Authors: Sabine Wolny, Alexandra Mazak, Christine Carpella, Verena Geist and Manuel Wimmer
Authors: Jenny Ruiz, Estefanía Serral and Monique Snoeck
Authors: Patrick Leserf, Pierre de Saqui-Sannes and Jérôme Hugues
Authors: Iván Ruiz-Rube, Tatiana Person, Juan Manuel Dodero, José Miguel Mota and Javier Merchán Sánchez-Jara
Authors: Alvaro Miyazawa, Pedro Ribeiro, Wei Li, Ana Cavalcanti, Jon Timmis and Jim Woodcock
Authors: Yentl Van Tendeloo, Simon Van Mierlo and Hans Vangheluwe
Authors: Fazilat Hojaji, Tanja Mayerhofer, Bahman Zamani, Abdelwahab Hamou-Lhadj and Erwan Bousse