Program Overview

The exact times may still be adjusted slightly.

Monday, 5.3.18:    13:00–14:45 · Coffee · 15:15–17:15
Tuesday, 6.3.18:   9:00–10:15 · Coffee · 10:45–12:15 · Lunch · 13:45–15:15 · Coffee · 15:45–17:15
Wednesday, 7.3.18: 9:00–10:15 · Coffee · 10:45–12:15 · Lunch · 13:00–15:45 · Coffee · 16:15–17:45
Thursday, 8.3.18:  9:00–10:00 · Coffee · 10:30–11:30 · 11:30–12:30 · Lunch · 13:30–15:00 · Coffee · 15:30–17:00 · 18:30–19:30 Reception · Walk · 20:00 Dinner
Friday, 9.3.18:    9:00–10:00 · 10:00–10:30 Award winner's talk · Coffee · 11:00–12:30 · 12:30 SE 19 · Closing

Accepted Papers

List of accepted papers of the SE track
List of accepted papers of the special track "Erklärbare Software"
List of accepted papers of the SWM track

Monday, 5.3.2018

13:00 SEERTS Workshop
15:15 Coffee break
15:45 SEERTS Workshop

Tuesday, 6.3.2018

9:00 ISEE Workshop
EMLS Workshop
SEELS Workshop
10:15 Coffee break
10:45 ISEE Workshop
EMLS Workshop
SEELS Workshop
12:15 Lunch break
13:45 ISEE Workshop
EMLS Workshop
CSE Workshop (from 14:00)
15:15 Coffee break
15:45 ISEE Workshop
EMLS Workshop
CSE Workshop

Wednesday, 7.3.2018

9:00–9:30 GI informatiCup welcome and task presentation
9:15 Keynote Brian Fitzgerald (Chair: Matthias Tichy)
10:00 GI informatiCup Presentation Team 1
10:15 Coffee break
10:45
  • SE Session 1 - Software Product Lines (Chair: Grischa Liebel)
    • Gabriele Taentzer, Rick Salay, Daniel Strüber and Marsha Chechik. Transformation of Software Product Lines: A Generalizing Framework based on Category Theory

      We present our paper from the proceedings of the 2017 edition of the MODELS conference. Software product lines are used to manage the development of highly complex software with many variants. In the literature, various forms of rule-based product line modifications have been considered. However, when considered in isolation, their expressiveness for specifying combined modifications of feature models and domain models is limited. In this paper, we present a formal framework for product line transformations that is able to combine several kinds of product line modifications presented in the literature. Moreover, it defines new forms of product line modifications supporting various forms of product lines and transformation rules. Our formalization of product line transformations is based on category theory, and concentrates on properties of product line relations instead of their single elements. Our framework provides improved expressiveness and flexibility of software product line transformations while abstracting from the considered type of model.

    • Alexander Knüppel, Thomas Thüm, Stephan Mennicke, Jens Meinicke and Ina Schaefer. Is There a Mismatch between Real-world Feature Models and Product-line Research?

      Feature modeling has emerged as the de-facto standard to capture variability of a software product line in a compact and understandable fashion. Multiple feature modeling languages that evolved over the last decades to manage industrial-size product lines have been proposed. However, less expressive languages, solely permitting require and exclude constraints, are permanently and carelessly used in product-line research. We address the problem whether those less expressive languages are sufficient for industrial product lines. We developed an algorithm to eliminate complex cross-tree constraints in a feature model, enabling the combined usage of tools and algorithms working with different feature model dialects in a plug-and-play manner. However, the scope of our algorithm is limited. Our evaluation on large feature models, including the Linux kernel, gives evidence that require and exclude constraints are not sufficient to express real-world feature models. Hence, we promote that research on feature models needs to consider arbitrary propositional formulas as cross-tree constraints prospectively.

    • Philipp Hohl, Javad Ghofrani, Juergen Muench, Michael Stupperich and Kurt Schneider. Searching for Common Ground: Existing Literature on Automotive Agile Software Product Lines

      Automotive development processes are significantly influenced by digital transformation and need to be adapted. Agile methods are a promising approach, but they are not tailored to the specific characteristics of the automotive domain such as product line development. Although there have been efforts to apply agile methods in the automotive domain, widespread adoption has not yet taken place. This literature review gives an overview of agile methods for embedded software development in the automotive domain, especially with respect to software product lines (SPLs). A mapping study was conducted to analyze the relation between agile software development (ASD), automotive embedded software development, and SPLs. Three research questions were defined and 68 papers were evaluated. The study shows that ASD and SPL approaches tailored for the automotive domain are not yet fully explored in the literature. Only few approaches for combining ASD and SPLs in the automotive domain were found; these findings were valuable for identifying research gaps and provide insights into how existing approaches can be combined, extended, and tailored to suit the characteristics of the automotive domain.

  • SE Session 2 - Security (Chair: Uwe Zdun)
    • Zoltan Mann and Andreas Metzger. Optimized Cloud Deployment of Multi-tenant Software Considering Data Protection Concerns - Abridged Version

      Concerns about protecting personal data and intellectual property are major obstacles to the adoption of cloud services. To ensure that a cloud tenant’s data cannot be accessed by malicious code of another tenant, critical software components of different tenants are traditionally deployed on separate physical machines. However, such physical separation limits hardware utilization, leading to cost overheads due to inefficient resource usage. Secure enclaves offer mechanisms to protect code and data from potentially malicious code deployed on the same machine, thereby offering an alternative to physical separation. We show how secure enclaves can be employed to address data protection concerns of cloud tenants during resource optimization in software deployment. We provide a model, formalization and experimental evaluation of an efficient algorithmic approach to compute an optimized deployment of software components and virtual machines, taking into account data protection concerns and the availability of secure enclaves. Our experimental results show that even if only 20% of the physical machines offer secure enclaves, savings of energy consumption (a major cost driver) may be as high as 47.5%.
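
      A first-fit sketch of the core placement constraint (our illustration in Python, not the paper's optimization algorithm; all names and numbers are invented): on machines without a secure enclave, a tenant's critical components must not share the machine with another tenant's components.

        def first_fit(components, machines):
            # greedily place each component on the first machine that has a
            # free slot and satisfies the data-protection constraint
            assignment = {m["name"]: [] for m in machines}
            for comp in components:
                for m in machines:
                    placed = assignment[m["name"]]
                    separated = m["enclave"] or all(
                        c["tenant"] == comp["tenant"]
                        or not (c["critical"] or comp["critical"])
                        for c in placed)
                    if len(placed) < m["slots"] and separated:
                        placed.append(comp)
                        break
                else:
                    raise RuntimeError("no feasible machine for " + comp["name"])
            return assignment

        machines = [{"name": "m1", "slots": 2, "enclave": False},
                    {"name": "m2", "slots": 2, "enclave": True}]
        components = [{"name": "a1", "tenant": "A", "critical": True},
                      {"name": "b1", "tenant": "B", "critical": True},
                      {"name": "b2", "tenant": "B", "critical": False}]
        placement = first_fit(components, machines)
        print({m: [c["name"] for c in cs] for m, cs in placement.items()})
        # {'m1': ['a1'], 'm2': ['b1', 'b2']} - B's components avoid the
        # non-enclave machine m1, which already hosts A's critical a1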

    • Björn Mathis, Vitalii Avdiienko, Ezekiel Soremekun, Marcel Böhme and Andreas Zeller. Detecting Information Flow by Mutating Input Data

      Analyzing information flow is central in assessing the security of applications. However, static and dynamic analyses of information flow are easily challenged by non-available or obscure code. We present a lightweight mutation-based analysis that systematically mutates dynamic values returned by sensitive sources to assess whether the mutation changes the values passed to sensitive sinks. If so, we found a flow between source and sink. In contrast to existing techniques, mutation-based flow analysis does not attempt to identify the specific path of the flow and is thus resilient to obfuscation. In its evaluation, our MUTAFLOW prototype for Android programs showed that mutation-based flow analysis is a lightweight yet effective complement to existing tools. Compared to the popular FLOWDROID static analysis tool, MUTAFLOW requires less than 10% of source code lines but has similar accuracy; on 20 tested real-world apps, it is able to detect 75 flows that FLOWDROID misses.
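
      A minimal sketch of the idea (ours, not the MUTAFLOW implementation; the example functions are invented): mutate the value returned by a sensitive source and check whether the value arriving at the sink changes.

        # If mutating the source value changes what reaches the sink, there
        # is a flow from source to sink - no path reconstruction needed.
        def detect_flow(program, value, mutated_value):
            return program(value) != program(mutated_value)

        def app(device_id):                    # device_id: sensitive source
            return "model=x1&id=" + device_id  # return value: sensitive sink

        def app_safe(device_id):
            return "model=x1"                  # sink independent of the source

        print(detect_flow(app, "1234", "9999"))       # True  -> flow detected
        print(detect_flow(app_safe, "1234", "9999"))  # False -> no flow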

    • Qusai Ramadan, Mattia Salnitri, Daniel Strüber, Jan Jürjens and Paolo Giorgini. Integrating BPMN- and UML-based Security Engineering via Model Transformation

      We present our paper from the proceedings of the 2017 edition of the MODELS conference. Tracing and integrating security requirements throughout the development process is a key challenge in security engineering. In socio-technical systems, security requirements for the organizational and technical aspects of a system are currently dealt with separately, giving rise to substantial misconceptions and errors. In this paper, we present a model-based security engineering framework for supporting the system design on the organizational and technical level. The key idea is to allow the involved experts to specify security requirements in the languages they are familiar with: business analysts use BPMN for procedural system descriptions; system developers use UML to design and implement the system architecture. Security requirements are captured via the language extensions SecBPMN2 and UMLsec. We provide a model transformation to bridge the conceptual gap between SecBPMN2 and UMLsec. Using UMLsec policies, various security properties of the resulting architecture can be verified. In a case study featuring an air traffic management system, we show how our framework can be practically applied.

  • SWM Session 1 - Scientific Program (Chair: Marco Kuhrmann)
    • Sixten Schockert and Georg Herzwurm. Agile Software Quality Function Deployment

      User stories are the essential artifact for communicating requirements in agile development. And regardless of whether they are suitable as precise requirements for developers, it is on the basis of the user stories in the product backlog that it is decided what gets implemented in the next iteration and what does not. Agile requirements engineering must therefore show ways to find and design good user stories and to select those most promising in terms of business value for implementation in the next iteration. This is crucial for an agile development that is not only efficient but also effective, one that starts from the most important requirements instead of "merely" implementing plausible user stories quickly.
      This contribution presents Agile Software Quality Function Deployment (QFD) for this purpose. It is based on 27 design requirements derived from the principles and values of agile software development, from the handling of requirements in agile development models, and from empirical sources of agile requirements engineering. The proposed Agile Software QFD is characterized by its seamless embedding into the agile iteration cycle and by specific methodological features such as the incrementally growing prioritization matrix and the priority map. Assessed against the design requirements and compared with widely used techniques of agile requirements engineering, Agile Software QFD can add value to agile development through its consistent orientation towards the most important stakeholder needs, its search for alternative and better solutions, and its close collaboration with customers/users. Agile Software QFD thus embodies the design ambition of requirements engineering in agile software development and is an expression of agile requirements engineering oriented towards business value.

    • Felix Schönhofen, Sixten Schockert and Georg Herzwurm. Das Business Model House of Quality: Bewertung plattformbasierter Geschäftsmodelle mit Quality Function Deployment

      Companies in the IT industry are exposed to permanent change. These changes relate in particular to emerging markets and business models. To ensure a company's long-term success, its business models have to be evaluated continuously. Due to the high number of different stakeholders in multisided markets, this challenge becomes even greater for companies whose business models are based on multisided platforms. The following paper shows how business models in multisided markets can be evaluated using Quality Function Deployment (QFD). In cooperation with companies in the sharing economy, QFD workshops were held to reach this goal and to evaluate the QFD approach for examining business models in multisided markets.

    • Andreas Kaufmann and Dirk Riehle. The QDAcity-RE Method for Structural Domain Modeling Using Qualitative Data Analysis

      The creation of domain models from qualitative input relies heavily on experience. An uncodified ad-hoc modeling process is still common and leads to poor documentation of the analysis. In this article we present a new method for domain analysis based on qualitative data analysis (QDA). The method helps identify inconsistencies, ensures a high degree of completeness, and inherently provides traceability from analysis results back to stakeholder input. These traces do not have to be documented after the fact. We evaluate our approach using four exploratory studies.

10:30 GI informatiCup Presentation Team 2
11:00 Break
11:30 GI informatiCup Presentation Team 3
12:00 GI informatiCup Presentation Team 4
12:15 Lunch break
12:30 Lunch break
13:00 Panel: 50 Years of SE
14:00 GI informatiCup jury meeting
14:15
  • SE Session 3 - Processes and Evolution of Software Engineering (Chair: Wilhelm Hasselbring)
    • Andreas Metzger and Philipp Bohn. Predictive Business Process Monitoring unter Berücksichtigung von Prognoseverlässlichkeit und Risiko

      We present techniques and experimental results for taking prediction reliability and risk into account in predictive business process monitoring. Considering these quantities provides additional decision support for proactive process adaptation.
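
      A minimal sketch (ours, not the authors' technique; all numbers are invented) of how prediction reliability can enter the decision to adapt a process proactively:

        # Adapt only if the reliability-weighted expected violation penalty
        # exceeds the cost of the proactive adaptation itself.
        def should_adapt(p_violation, reliability, penalty, adaptation_cost):
            return p_violation * reliability * penalty > adaptation_cost

        print(should_adapt(0.6, 0.8, 1000.0, 300.0))  # True: 480.0 > 300.0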

    • Kim Lauenroth. Softwareentwicklung braucht mehr Gestaltungskompetenz: Digital Design als neues Rollenideal im Software Engineering

      In software development, design mostly happens implicitly, e.g., through the formulation of requirements by a so-called business department / product owner as input for an implementing IT organization / team. In classical development situations (i.e., the transfer of well-understood analog processes into IT systems), this division of labor was practical and sensible. The challenge essentially consisted in suitably feeding the understood domain knowledge into a development process. For new development situations, commonly described as digitization or digital transformation, this no longer works (cf. [La17]): In digitization, analog precedents exist, but it is by no means guaranteed that users will prefer the digital to the analog. For example, it was not clear that users would prefer buying books on the internet to the bookstore. Consequently, digital processes must not only be technically well made; users must also accept them. In digital transformation, no analog precedents exist, since entirely new structures emerge. Acceptance is therefore at least as uncertain. An example is Second Life. This project was revolutionary in its early years but did not live up to expectations over time and has meanwhile become uninteresting compared to other developments. Digitization and digital transformation are consequently a challenge for many companies [Br17]. Those responsible must design not only software but usually an entire ecosystem. Furthermore, the risk of failure is considerably higher, since digital endeavors by their nature rest much more strongly on conjectures and assumptions about the success and impact of the endeavor.

  • Erklärbare Software 1 - Understandable Verification (Chair: Michael Striewe)
    • Eric Bodden and Lisa Nguyen Quang Do. Explainable Static Analysis

      Static code analysis is an important tool that aids in the early detection of programming mistakes, including functional flaws, performance bottlenecks and security vulnerabilities. Past research in static analysis has mainly focused on the precise and efficient detection of programming mistakes, allowing new analyses to return more accurate results in a shorter time. However, end-user experience in industry has shown high abandonment rates for static analysis tools. Previous work has shown that current analysis tools are ill-adapted to meet the needs of their users, taking a long time to yield results and causing warnings to be frequently misinterpreted. This can quickly make the overall benefit of static analyses deteriorate.
      In this work, we argue for the need of developing a line of research on aiding users of static analysis tools, e.g., code developers, to better understand the findings reported by those tools. We outline how we plan to address this problem space by a novel line of research that ultimately seeks to change static analysis tools from being tools for static analysis experts to tools that can be mastered by general code developers. To achieve this goal, we plan to develop novel techniques for formulating, inspecting and debugging static analyses and the rule sets they validate programs against.

    • Julia Padberg, Alexander Schlaefer and Sibylle Schupp. Ein Ansatz zur nachvollziehbaren Verifikation medizinisch-cyber-physikalischer Systeme

      Medical cyber-physical systems require, on the one hand, adaptation to patient-individual parameters during a treatment and, on the other hand, proof of safe system behavior. We propose to combine verifiability, by means of online model checking, with comprehensibility, through the application of rule-based transformations.

    • Florian Auer and Michael Felderer. Shifting Quality Assurance of Machine Learning Algorithms to Live Systems

      A fundamental weakness of existing solutions to assess the quality of machine learning algorithms is the assumption that test environments sufficiently mimic the later application. Given the data-dependent behavior of these algorithms, only limited reasoning about their later performance is possible. Thus, meaningful quality assurance is not possible with test environments. A shift from the traditional testing environment to the live system is needed: costly test environments are replaced with available live systems that constantly execute the algorithm.

  • SWM Session 2 - Case Studies in Industry (Chair: Regina Hebig)
    • Andreas Rösel. Innovationsschub – Erfahrungen am Fallbeispiel "IT Products"

      The IT environment and its challenges have changed. User expectations have changed, which shows in areas such as usability and the availability of applications on mobile devices. For typical in-house enterprise software this is a major challenge, and mastering it requires a cultural shift: a push towards a new, stronger innovation culture. Ultimately, this is expressed in a higher speed of innovation cycles and is often accompanied by a finer granularity of delivery units. But how do the many possible elements such as agile methods, design thinking, intrapreneurship etc. actually come to be implemented in a company, and how do they interact? These aspects are examined in a current case study from an IT company. Our focus is on the company-internal orientation and innovation culture, a challenge that many companies face today.

    • Sebastian Klepper and Bernd Bruegge. Impact of Hypothesis-Driven Development on Effectiveness, Quality, and Efficiency in Innovation Projects

      We evaluate the impact of hypothesis-driven development in innovation projects dealing with complex problems, focusing on decision making processes instead of experimentation methods. Our findings show that this type of empirical research used for decision support can enhance effectiveness and quality without negatively impacting efficiency.

15:45 Coffee break
16:15
  • SE Session 4 - Testing (Chair: Michael Felderer)
    • Helge Spieker, Arnaud Gotlieb, Dusica Marijan and Morten Mossige. Reinforcement Learning for Automatic Test Case Prioritization and Selection in Continuous Integration

      Selecting the most promising test cases to detect bugs is hard if there are uncertainties on the impact of committed code changes or if traceability links between code and tests are not available. This paper introduces Retecs, a new method for automatically learning test case selection and prioritization in CI with the goal to minimize the round-trip time between code commits and developer feedback on failed test cases. Retecs uses reinforcement learning to select and prioritize test cases according to their duration, previous last execution and failure history. In a constantly changing environment, where new test cases are created and obsolete test cases are deleted, the Retecs method learns to prioritize error-prone test cases higher under the guidance of a reward function and by observing previous CI cycles. By application on three industrial case studies, we show for the first time that reinforcement learning enables fruitful automatic adaptive test case selection and prioritization in CI and regression testing.
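
      A toy reward-driven prioritizer (our sketch; Retecs itself uses richer state, reward functions, and learned policies): tests that failed in recent CI cycles float to the front of the next cycle.

        def prioritize(scores):
            # run the most error-prone tests first
            return sorted(scores, key=scores.get, reverse=True)

        def update(scores, failed, alpha=0.3):
            # move each test's score toward 1 on failure, toward 0 otherwise
            for t in scores:
                reward = 1.0 if t in failed else 0.0
                scores[t] += alpha * (reward - scores[t])

        scores = {"t1": 0.0, "t2": 0.0, "t3": 0.0}
        for failed in [{"t2"}, {"t2", "t3"}, set()]:  # three CI cycles
            order = prioritize(scores)
            update(scores, failed)
        print(order)  # ['t2', 't3', 't1'] in the last cycle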

    • José Miguel Rojas, Thomas White, Benjamin Clegg and Gordon Fraser. Code Defenders: Crowdsourcing Effective Tests and Subtle Mutants with a Mutation Testing Game

      This paper was presented at the 39th International Conference on Software Engineering (ICSE 2017), where it received an ACM SIGSOFT Distinguished Paper Award: Writing good software tests is difficult and not every developer's favorite occupation. Mutation testing aims to help by seeding artificial faults (mutants) that good tests should identify, and test generation tools help by providing automatically generated tests. However, mutation tools tend to produce huge numbers of mutants, many of which are trivial, redundant, or semantically equivalent to the original program; automated test generation tools tend to produce tests that achieve good code coverage, but are otherwise weak and have no clear purpose. In this paper, we present an approach based on gamification and crowdsourcing to produce better software tests and mutants: The Code Defenders web-based game lets teams of players compete over a program, where attackers try to create subtle mutants, which the defenders try to counter by writing strong tests. Experiments in controlled and crowdsourced scenarios reveal that writing tests as part of the game is more enjoyable, and that playing Code Defenders results in stronger test suites and mutants than those produced by automated tools.
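
      Mutation testing in miniature (our sketch, independent of the Code Defenders implementation): an "attacker" seeds a subtle fault, and a "defender" test kills the mutant by failing on it while passing on the original.

        def clamp(x, lo, hi):
            return max(lo, min(x, hi))

        def clamp_mutant(x, lo, hi):     # subtle mutant: hi replaced by lo
            return max(lo, min(x, lo))

        def defender_test(f):
            assert f(5, 0, 10) == 5      # this assertion kills the mutant
            assert f(-3, 0, 10) == 0
            assert f(42, 0, 10) == 10

        defender_test(clamp)             # passes on the original
        try:
            defender_test(clamp_mutant)
        except AssertionError:
            print("mutant killed")       # the test suite detects the fault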

    • Lutz Prechelt, Holger Schmeisky and Franz Zieris. Quality Experience: A Grounded Theory of Successful Agile Projects Without Dedicated Testers

      Context: While successful conventional software development regularly employs separate testing staff, there are successful agile teams with as well as without separate testers.
      Question: How does successful agile development work without separate testers? What are advantages and disadvantages?
      Method: A case study, based on Grounded Theory evaluation of interviews and direct observation of three agile teams; one having separate testers, two without. All teams perform long-term development of parts of e-business web portals.
      Results: Teams without testers use a "quality experience" work mode centered around a tight field-use feedback loop, driven by a feeling of responsibility, supported by test automation, resulting in frequent deployments.
      Conclusion: In the given domain, hand-overs to separate testers appear to hamper the feedback loop more than they contribute to quality, so working without testers is preferred. However, Quality Experience is achievable only with modular architectures and in suitable domains.

  • Erklärbare Software 2 - Understandable Decisions (Chair: Matthias Riebisch)
    • Verena Klös, Thomas Göthel and Sabine Glesner. Comprehensible Decisions in Complex Self-Adaptive Systems

      To cope with uncertain and statically unforeseen environment behaviour of complex systems, self-adaptivity has gained wide acceptance. While adaptation decisions are required to be close to optimal decisions, they at the same time should be efficient, comprehensible, and reusable. To achieve this, we have developed an engineering and analysis approach for self-learning self-adaptive systems based on our notion of timed adaptation rules. Through continuous evaluation and learning, inaccurate rules can be improved and new rules can be learned at run-time to cope with changing environments and system goals. A separate verification phase enables us to provide offline and online guarantees of evolving adaptation logics based on human-comprehensible formal models. Our approach, which incorporates the precise retracing of previous adaptation decisions, enables understanding the contexts in which certain adaptation decisions have been made and assessing whether they have achieved their expected effect in time within the system. This comprehensibility of complex decisions in self-adaptive systems enables the precise understanding and reuse of adaptation logics and provides trust in autonomous decision making.

    • Kurt Schneider. Erklärungen (nur) nach Bedarf

      Software is complicated. It is difficult to understand how it works, how it achieves certain results, or how it arrives at an automated decision. This contribution argues that it is nevertheless not sensible to collect or generate as much information as possible for potential later explanations, since a large portion of it will never be needed. Explainability may be an important software quality aspect, but it is still a "secondary virtue": effort and time should only be invested in it when the foreseeable benefit justifies this. It is important to first understand precisely the intended benefit.

    • Daniel Braun and Florian Matthes. Generating Explanations for Algorithmic Decisions of Usage-Based Insurances using Natural Language Generation

      Usage-based insurances are becoming more and more popular, especially for cars. These so-called telematics insurances use different sensors installed in a car to track the individual driving style of the driver. Instead of calculating insurance premiums based on statistical risk groups, insurance companies can use these data to create individual risk profiles and calculate insurance premiums accordingly. We present an approach that uses Natural Language Generation (NLG) to explain to customers which aspects of their behaviour influenced the assessment of the algorithm. In this way, we can not only increase the acceptance of customers regarding such systems, but also positively influence their future behaviour.

  • SWM Session 3 - New Ideas (Chair: Jürgen Münch)
    • Sebastian Klepper, Christian Grimm and Bernd Bruegge. Continuous Innovation and Experimentation in Complex Problem Domains: Problem Solving and Decision Support as a Starting Point for a Unified Process Framework

      Continuous software engineering enables experimentation and empirical research in complex problem domains. Existing process models describe different approaches for exploration, innovation, and refinement. We propose a new approach based on problem solving techniques and focused on decision support to serve as the starting point for a unified process framework.

    • Christopher Jud and Georg Herzwurm. Herausforderungen für das IT-Produktmanagement durch externe Plattformen

      Platforms form the foundation of many software products. Companies often have to serve several platforms in order to reach relevant customers and market share. Various influencing factors act on the companies' software products and increase the complexity of development and distribution. Product management offers a way to address the influence of platforms. In this talk, an extension of product management is introduced and discussed that allows companies to meet the challenges posed by platforms.

    • Katrin Kahle and Alexander Götze. Agile Produktentwicklung bei Software-Spin-Offs an der Universität

      Using software spin-offs of TU Dresden as an example, we present how agile project management can be used to accelerate the entire pre-founding process. More precisely, the goal is to develop, as quickly as possible under limited time and with limited means, a minimum viable product whose first feature set delivers real value to the most promising customer group. To reach this goal, countless decisions on diverse topics such as product conception, product development, business model development, customer development, and fundraising have to be made by three to four team members in a short time. In these decisions, the team is supported by the software incubation program DeltaHochDrei of TU Dresden, which contributes economic and entrepreneurial background knowledge to setting up and prioritizing the backlog and to sprint planning. Typical decision situations and experiences to date with applying agile management principles are presented.

17:45

Thursday, 8.3.2018

9:00 Keynote Sven Apel (Chair: Eric Bodden)
10:00 Coffee break
10:30 Keynote Julien Siebert (Chair: Stefan Wagner)
11:30
  • SE Session 5 - Empirical 1 (Chair: Birgit Demuth)
    • Arne Johanson and Wilhelm Hasselbring. Empirical Evaluation of a domain-specific language for high-performance marine ecosystem simulation

      In this paper, we report on the empirical evaluation of domain-specific languages by evaluating the Sprat Ecosystem DSL for its effectiveness and efficiency.

    • Fabian Fagerholm, Marco Kuhrmann and Juergen Muench. Guidelines for using empirical studies in software engineering education

      Software engineering education is under constant pressure to provide students with industry-relevant knowledge and skills. Educators must address issues beyond exercises and theories that can be directly rehearsed in small settings. Industry training has similar requirements of relevance as companies seek to keep their workforce up to date with technological advances. Real-life software development often deals with large, software-intensive systems and is influenced by the complex effects of teamwork and distributed software development, which are hard to demonstrate in an educational environment. A way to experience such effects and to increase the relevance of software engineering education is to apply empirical studies in teaching. In this paper, we show how different types of empirical studies can be used for educational purposes in software engineering. We give examples illustrating how to utilize empirical studies, discuss challenges, and derive an initial guideline that supports teachers to include empirical studies in software engineering courses. Furthermore, we give examples that show how empirical studies contribute to high-quality learning outcomes, to student motivation, and to the awareness of the advantages of applying software engineering principles. Having awareness, experience, and understanding of the actions required, students are more likely to apply such principles under real-life constraints in their working life.

  • SE Session 6 - Modeling (Chair: Sibylle Schupp)
    • Hamed Shariat Yazdi, Lefteris Angelis, Timo Kehrer and Udo Kelter. A framework for capturing, statistically modeling and analyzing the evolution of software models

      In this work, we report on a recently developed framework for capturing, statistically modeling and analyzing the evolution of software models, published in the Journal of Systems and Software, Vol. 118, August 2016. State-of-the-art approaches to understand the evolution of models of software systems are based on software metrics and similar static properties; the extent of the changes between revisions of a software system is expressed as differences of metrics values, and statistical analyses are based on these differences. Unfortunately, such approaches do not properly reflect the dynamic nature of changes. In contrast to this, our framework captures the changes between revisions of models in terms of both low-level (internal) and high-level (developer-visible) edit operations applied between revisions. Evolution is modeled statistically by using ARMA, GARCH and mixed ARMA-GARCH time series models. Forecasting and simulation aspects of these time series models are thoroughly assessed, and the suitability of the framework is shown by applying it to a large set of design models of real Java systems. A main motivation for, and application of, the resulting statistical models is to control the generation of realistic model histories which are intended to be used for testing model versioning tools. Further usages of the statistical models include various forecasting and simulation tasks.

    • Regina Hebig, Djamel Eddine Khelladi and Reda Bendraou. Reporting on a Survey on Approaches to Co-Evolution of Metamodels and Models

      Modeling languages, just as all software artifacts, evolve. This poses the risk that legacy models of a company get lost, when they become incompatible with the new language version. To address this risk, a multitude of approaches for metamodel-model co-evolution were proposed in the last 10 years. However, the high number of solutions makes it difficult for practitioners to choose an appropriate approach. In this paper, we present a survey on 31 approaches to support metamodel-model co-evolution. We introduce a taxonomy of solution techniques and classify the existing approaches. To support researchers, we discuss the state of the art, in order to better identify open issues. Furthermore, we use the results to provide a decision support for practitioners, who aim to adopt solutions from research.

  • SE Session 14 - Software Architecture (Chair: Michael Goedicke)
    • Kiana Busch, Robert Heinrich, Axel Busch and Ralf Reussner. Automated Analysis of the Co-evolution of Software Systems and Business Processes

      Software systems are an essential part of business processes. As business processes and the corresponding software systems mutually affect each other, they co-evolve during their life cycle. Thus, to adequately predict the impact of a change, their mutual dependencies have to be considered. However, existing approaches to change propagation analysis consider one domain in isolation and neglect the mutual dependencies between the domains. In this paper, we propose the Karlsruhe Architectural Maintainability Prediction for Business Processes (KAMP4BP) to analyze the change propagation in business processes and the corresponding software systems.

    • Axel Busch and Anne Koziolek. Using Architecture Knowledge to Improve Automated Software Architecture Design Space Exploration

      The quality of a software system depends to a large extent on the quality of its underlying software architecture. During the software development process, developers frequently have to make trade-offs between different quality attributes, some of which, as a rule, conflict with each other. For example, a better response time of the system can often only be achieved at higher cost for development or for the hardware used. For trade-off decisions of this kind, developers can already draw on various solutions that support them in optimizing their software architecture, evaluating either quantified or non-quantified quality attributes. If well-automatable approaches are to be used, it is usually necessary to quantify all quality attributes. However, the high cost of applying the metrics required for this often makes a complete quantification of all necessary quality attributes impossible.
      The approach of this work therefore combines methods that require quantified quality attributes with qualitative approaches in order to reduce the effort of taking all actually required quality attributes into account in decision support. The developer can decide which quality attributes are worth quantifying (at some expense) in the individual case and for which a qualitative estimate is sufficient. The approach then allows processing both quantified and non-quantified quality attributes in combination and thereby supports design decisions. The approach was demonstrated on two systems, considering runtime behavior as a quantified quality together with security and usability as non-quantified qualities.

12:30 Lunch break
13:30
  • SE Session 7 - Empirical 2 (Chair: Thorsten Berger)
    • Vincent Bertram, Shahar Maoz, Jan Oliver Ringert, Bernhard Rumpe and Michael von Wenckstern. Component and Connector Views in Practice: An Experience Report

      At the MoDELS 2017 conference, we presented an experience report on applying Component and Connector (C&C) view verification with an industrial partner. C&C views are a means for formal yet intuitive structural specification of C&C models. We report on our experience of how C&C views and their verification help to address challenges of traceability and evolution in the automotive industry. We analyzed the development process at Daimler AG and evaluated our C&C views verification tool on five Simulink models with more than 7700 subsystems in total and C&C views created for 183 textual requirements provided by Daimler AG. We describe our experience in detail and discuss a list of lessons learned, including, e.g., a missing abstraction concept in C&C models and C&C views that we have identified and added to the views language and tool, that engineers can create graphical C&C views quite easily, and how verification algorithms scale on real-size industry models.
      Furthermore, we report on the non-negligible technical effort needed to translate Simulink block diagrams to C&C models.
      We make all materials mentioned and used in our experience electronically available for inspection and further research.

    • Marcel Böhme, Ezekiel Soremekun, Sudipta Chattopadhyay, Emamurho Ugherughe and Andreas Zeller. Wo ist der Fehler und wie wird er behoben? Ein Experiment mit Softwareentwicklern.

      This contribution is an abridged German version of our article "Where is the Bug and How is it Fixed? An Experiment with Practitioners", published in the proceedings of the eleventh joint meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering.

    • Marco Kuhrmann, Philipp Diebold, Juergen Muench, Paolo Tell, Vahid Garousi, Michael Felderer, Kitija Trektere, Fergal Mc Caffery, Oliver Linssen, Eckhart Hanser and Christian Prause. Hybrid Software and System Development in Practice: Waterfall, Scrum, and Beyond

      This paper investigates which software and system development approaches are used in practice. The underlying HELENA study (Hybrid dEveLopmENt Approaches in software systems development) moves away from the debate of "traditional" versus "agile" development and examines actual combinations of processes and methods in so-called hybrid development approaches. This paper reports on the results of the first stage of the HELENA study, in which 69 mostly European participants took part in an online survey. The participants were asked to rate 40 different development approaches and to describe how and why these approaches are used in their companies/projects. The results show that in practice a structured traditional process often forms the organizational frame for a project, while fine-grained tasks are addressed with individually adopted agile practices. This combination is found in almost all companies, regardless of company size and industry sector. Furthermore, by applying affinity propagation clustering and spectral clustering, different combination patterns were derived from the data and successfully checked for realism against further external studies. Our study shows how widespread hybrid development processes are and that they usually emerge through an individual learning process.

  • SE Session 8 - Model evolution and transformation (Chair: Steffen Becker)
    • Roland Kluge, Michael Stein, Gergely Varró, Andy Schürr, Matthias Hollick and Max Mühlhäuser. A systematic approach to constructing families of incremental topology control algorithms using graph transformation

      In this talk, we present results on integrating support for variability modeling into a correct-by-construction development methodology for topology control algorithms, as appeared online in the Software & Systems Modeling journal in 2017. A topology control algorithm reduces the size of the visible neighborhood of a node in a wireless communication network. At the same time, it must fulfill important consistency properties to ensure a high quality of service. In previous work, we proposed a constructive, model-driven methodology for designing individual topology control algorithms based on declarative graph constraints and graph transformation rules; the resulting algorithms are guaranteed to preserve the specified properties. Even though many topology control algorithms share substantial (structural) parts, few works leverage these commonalities at design time. In this work, we generalize our proposed construction methodology by modeling variability points to support the construction of families of algorithms. We show the applicability of our approach by reengineering six existing topology control algorithms and developing e-kTC, a novel energy-efficient variant of the topology control algorithm kTC. Finally, we evaluate a subset of the algorithms using a novel integration of a wireless network simulator and a graph transformation tool.

    • Daniel Strüber, Vlad Acretoaie and Jennifer Plöger. Clone Detection for Rule-Based Model Transformation Languages

      Cloning is a convenient mechanism to enable reuse across and within software artifacts. On the downside, it is also a practice related to long-term maintainability impediments, thus generating a need to identify clones in affected artifacts. A large variety of clone detection techniques has been proposed for programming and modeling languages; yet no specific ones have emerged for model transformations. We explore clone detection for rule-based model transformation languages, including graph-based and hybrid ones. We introduce use cases for such techniques in the context of quality assurance, and a set of key requirements derived from these use cases. To address these requirements, we describe our customization of existing model clone detection techniques. We compare these techniques in a comprehensive experimental evaluation, based on three realistic Henshin rule sets, and a body of examples from the ATL transformation zoo. Our results indicate that our customization of ConQAT enables the efficient detection of the considered clones, without sacrificing accuracy. With our contributions, we pave the way for future research efforts at the intersection of clone detection and model transformation.
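
      A minimal sketch of the underlying detection idea (ours, not the customized ConQAT technique evaluated in the paper): normalize identifiers away, then compare token shingles of two rules with Jaccard similarity.

        IDENTIFIERS = {"Attribute", "Reference", "a", "r"}  # toy symbol table

        def normalize(tokens):
            # abstract concrete names so renamed copies still match
            return ["$id" if t in IDENTIFIERS else t for t in tokens]

        def shingles(tokens, k=2):
            return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

        def jaccard(a, b):
            return len(a & b) / len(a | b)

        rule1 = "match Class c create Attribute a attach a to c".split()
        rule2 = "match Class c create Reference r attach r to c".split()
        sim = jaccard(shingles(normalize(rule1)), shingles(normalize(rule2)))
        print(round(sim, 2))  # 1.0 - identical up to renaming, i.e. a clone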

    • Christos Tsigkanos, Timo Kehrer and Carlo Ghezzi. Modeling and Verification of Evolving Cyber-Physical Spaces

      In this work, we report on recent research results on the Modeling and Verification of Evolving Cyber-Physical Spaces, published in ESEC/FSE17. We increasingly live in cyber-physical spaces – spaces that are both physical and digital, and where the two aspects are intertwined. Such spaces are highly dynamic and typically undergo continuous change. Software engineering can have a profound impact in this domain, by defining suitable modeling and specification notations as well as supporting design-time formal verification. In this paper, we present a methodology and a technical framework which support modeling of evolving cyber-physical spaces and reasoning about their spatio-temporal properties. We utilize a discrete, graph-based formalism for modeling cyber-physical spaces as well as primitives of change, giving rise to a reactive system consisting of rewriting rules with both local and global application conditions. Formal reasoning facilities are implemented adopting logic-based specification of properties and according model checking procedures, in both spatial and temporal fragments. We evaluate our approach using a case study of a disaster scenario in a smart city.

  • Erklärbare Software 3 - Understandable Software (Chair: Leen Lambers)
    • Jonas Winkler and Andreas Vogelsang. "What Does My Classifier Learn?" A Visual Approach to Understanding Natural Language Text Classifiers

      Neural Networks have been utilized to solve various tasks such as image recognition, text classification, and machine translation and have achieved exceptional results in many of these tasks. However, understanding the inner workings of neural networks and explaining why a certain output is produced are no trivial tasks. Especially when dealing with text classification problems, an approach to explain network decisions may greatly increase the acceptance of neural network supported tools. In this paper, we present an approach to visualize reasons why a classification outcome is produced by convolutional neural networks by tracing back decisions made by the network. The approach is applied to various text classification problems, including our own requirements engineering related classification problem. We argue that by providing these explanations in neural network supported tools, users will use such tools with more confidence and also may allow the tool to do certain tasks automatically.
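
      A minimal occlusion-style sketch (ours; the paper instead traces decisions back through the network itself): estimate each token's influence by removing it and measuring the drop in the classifier's score.

        def relevance(classify, tokens):
            base = classify(tokens)
            return {tokens[i]: round(base - classify(tokens[:i] + tokens[i + 1:]), 2)
                    for i in range(len(tokens))}

        # stand-in "classifier": a keyword-weight table instead of a CNN
        WEIGHTS = {"shall": 0.7, "system": 0.2}
        def classify(tokens):
            return sum(WEIGHTS.get(t, 0.0) for t in tokens)

        print(relevance(classify, "the system shall log errors".split()))
        # {'the': 0.0, 'system': 0.2, 'shall': 0.7, 'log': 0.0, 'errors': 0.0}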

    • Regina Hebig. UI-Tracer: A Lightweight Approach to Help Developers Tracing User Interface Elements to Source Code

      The ability to understand software systems is crucial to identify hidden threats or maintain software systems over many years. Still, software comprehension activities take up around 58% of software development time. While most approaches support the comprehension of a software system's code perspective, its connection to the user perspective is barely explored. We present UI-Tracer, a lightweight support for tracing user interface elements to source code using the version history of a system. The evaluation on two open source systems shows that the approach can cover all UI elements that have been changed or added within the accessible part of the version history. Furthermore, the median number of files flagged as potentially responsible for a UI element is 8 and 3 for the two studied systems, respectively. Thus, UI-Tracer provides an easy starting ground for developers to identify files relevant for future UI changes.
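
      A minimal sketch of the tracing idea (our guess at one possible mechanism, not the actual UI-Tracer implementation): use git's pickaxe search to collect the files touched by commits that introduced or changed a UI element's label.

        import subprocess

        def candidate_files(repo, ui_label):
            # list files changed by commits that added/removed the label text
            out = subprocess.run(
                ["git", "-C", repo, "log", "-S" + ui_label,
                 "--name-only", "--pretty=format:"],
                capture_output=True, text=True, check=True).stdout
            return sorted({line for line in out.splitlines() if line})

        print(candidate_files(".", "Save As"))  # e.g. ['src/ui/menu.py', ...]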

    • Patrick Holl, Elena Scepankova and Florian Matthes. Smart Contract based API usage tracking on the Ethereum Blockchain

      API service providers usually charge their customers based on internally kept usage protocols. The whole process is highly intransparent for consumers because they are dependent on the providers’ honesty. Using smart contracts on the Ethereum blockchain to log API usage creates an immutable and trustless single source of truth between consumers and providers. Leveraging the blockchain makes the whole process of API usage logging more transparent and comprehensible.
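
      As a language-neutral illustration of why such a log is tamper-evident (our sketch, not the paper's Ethereum contract), here is a hash-chained usage log:

        import hashlib, json

        class UsageLog:
            # append-only log: each entry commits to its predecessor's hash,
            # like blocks on a blockchain
            def __init__(self):
                self.entries = []

            def record(self, consumer, endpoint):
                prev = self.entries[-1]["hash"] if self.entries else "0" * 64
                body = {"consumer": consumer, "endpoint": endpoint, "prev": prev}
                digest = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                self.entries.append({**body, "hash": digest})

            def verify(self):
                prev = "0" * 64
                for e in self.entries:
                    body = {k: v for k, v in e.items() if k != "hash"}
                    digest = hashlib.sha256(
                        json.dumps(body, sort_keys=True).encode()).hexdigest()
                    if e["prev"] != prev or e["hash"] != digest:
                        return False
                    prev = e["hash"]
                return True

        log = UsageLog()
        log.record("tenant-42", "/v1/search")
        log.record("tenant-42", "/v1/geocode")
        print(log.verify())                      # True
        log.entries[0]["endpoint"] = "/v1/free"  # tampering...
        print(log.verify())                      # ...is detected: False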

15:00 Coffee break
15:30
  • SE Session 9 - Program analysis and failure prediction (Chair: Thomas Thüm)
    • Leonid Glanz, Sven Amann, Michael Eichberg, Michael Reif and Mira Mezini. CodeMatch: Obfuscation Won’t Conceal Your Repackaged App

      Popular mobile apps are regularly installed by millions of users. This fact attracts malicious actors to create altered, repackaged versions of those apps to steal the original owner's revenue or to trick users into infecting their devices with malware. Detecting such repackaged apps is, therefore, necessary for a secure and viable app market but is challenging due to the use of code obfuscation and the widespread usage of libraries. Because of this widespread library usage, non-repackaged, legitimate apps often share a majority of their code base and are classified as repackaged by state-of-the-art detectors. We, therefore, propose a new library filtering approach that relies on code representations at five different abstraction levels to achieve resilience against code obfuscation. Additionally, we propose to use the most abstract representation in combination with fuzzy hashing to detect repackaged apps. Our evaluation shows that the overall approach improves the detection rate by up to 50%.

    • Michael Reif, Michael Eichberg and Mira Mezini. Call Graph Construction for Java Libraries

      Today, every application uses software libraries. Yet, while a lot of research exists w.r.t. analyzing applications, research that targets the analysis of extensible libraries and frameworks as such is scarce. This is unfortunate because developers of libraries (e.g. the Java Development Kit) want to ensure that the library behaves as intended regardless of how it is used. To fill this gap, we discuss the construction of call graphs that abstract over all potential usages of the respective library. Additionally, we show that the current practice of using call graph algorithms designed for applications leads to call graphs that, at the same time, lack relevant call edges and contain unnecessary edges when used for libraries. Unlike algorithms for applications, call graph construction algorithms for libraries must also take the goals of subsequent analyses into consideration. We will discuss the differences between an analysis for vulnerabilities vs. one for general software quality attributes, e.g., dead methods. This distinction affects the decision about what constitutes the library-private implementation, which, therefore, needs special treatment.

    • Teerat Pitakrat, Dušan Okanović, André van Hoorn and Lars Grunske. Architecture-Aware Online Failure Prediction for Distributed Software Systems

      Today’s software systems are complex. They comprise an immense number of distributed hardware and software components to deliver desired functionalities. Failures during production are inevitable despite successful approaches for quality assurance during software development. A failure in one component, e.g., memory leak or slow response time, can create a chain of failures propagating to other components and the users [Av04]. Online failure prediction [SLM10] aims to foresee imminent failures by making predictions based on system parameters from monitoring data. Existing approaches employ prediction models that predict failures either for the whole system or for individual components without considering software architecture. We propose an architecture-aware online failure prediction approach that combines failure prediction with architectural knowledge. The failure probabilities of individual components are predicted based on continuously collected monitoring data [He17]. The prediction results are forwarded to a failure propagation model, which periodically computes a system failure probability. The model uses a Bayesian network to represent architectural dependencies extracted automatically from architectural knowledge. The results can, for instance, be used for proactive maintenance. The experimental evaluation shows that the prediction quality is improved when software architectural knowledge is included in the prediction.
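
      A minimal sketch of architecture-aware propagation (ours; the paper uses a Bayesian network extracted from architectural knowledge): under a noisy-OR assumption, a component fails either intrinsically or because a failed dependency propagates its failure.

        def failure_prob(node, p_intrinsic, deps, p_prop=0.8):
            # assumes the dependency graph is acyclic
            p_ok = 1.0 - p_intrinsic[node]
            for d in deps.get(node, []):
                p_ok *= 1.0 - p_prop * failure_prob(d, p_intrinsic, deps, p_prop)
            return 1.0 - p_ok

        p_intrinsic = {"frontend": 0.01, "service": 0.02, "db": 0.05}
        deps = {"frontend": ["service"], "service": ["db"]}  # who calls whom
        print(round(failure_prob("frontend", p_intrinsic, deps), 4))  # 0.0569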

  • SE Session 10 - Software Process (Chair: Stephan Krusche)
    • Dirk Riehle, Maximilian Capraro, Detlef Kips and Lars Horn. Inner Source in Platform-Based Product Engineering

      Inner source is an approach to collaboration across intra-organizational boundaries for the creation of shared reusable assets. Prior project reports on inner source suggest improved code reuse and better knowledge sharing. Using a multiple-case case study research approach, we analyze the problems that three major software development organizations were facing in their product line engineering efforts. We find that a root cause, the separation of product units as profit centers from a platform organization as a cost center, leads to delayed deliveries, increased defect rates, and redundant software components. All three organizations assume that inner source can help solve these problems. The article analyzes the expectations that these companies were having towards inner source and the problems they were experiencing in its adoption. Finally, the article presents our conclusions on how these organizations should adapt their existing engineering efforts.

    • Franz Zieris and Lutz Prechelt. Observations on Knowledge Transfer of Professional Software Developers during Pair Programming

      Context: Software development is knowledge-intense work, and so is pair programming. However, the importance of knowledge transfer in pair programming is usually only stressed for expert-novice constellations and not for general software development situations.
      Goal: Understand how pair programmers deal with their knowledge deficits and eventually provide guidance for practitioners.
      Method: Detailed qualitative data analysis of full-length recordings of industrial pair programming sessions.
      Results: Expert software developers need to transfer knowledge, too, in order to conduct productive pair programming sessions. There is a diversity of beneficial and potentially problematic patterns, which even good pairs do not steadily apply or avoid, respectively.
      Conclusions: Pair programming is a versatile practice that even experts can profit from. Knowledge transfer skills do not automatically emerge from good software development skills, but can probably be learned.

    • Mazen Mohamad, Grischa Liebel and Eric Knauss. LoCo CoCo: Automatically Constructing Coordination and Communication Networks from Model-Based Systems Engineering Data

      Context: Communication and coordination are essential ingredients to successful requirements and software engineering. However, especially in large organisations, it is difficult to establish and maintain communication channels.
      Objective: In order to facilitate communication, we investigate automatic construction of social network models from existing requirements and systems engineering models.
      Method: We conducted a design science research study in three iterative cycles at a large automotive company, and evaluated the outcome based on 15 interviews with practitioners and a survey with 12 participants.
      Results: The resulting approach, denoted LoCo CoCo, automatically creates and visualises social networks based on selected systems engineering components of real-life, productive systems engineering models. Our results indicate that automatic construction and visualisation of social network models could be feasible and useful to overcome existing communication challenges.
      Conclusion: Despite a lack of quality in existing social data at the case company, practitioners found LoCo CoCo potentially helpful to overcome existing communication challenges. Additionally, the visualisation could trigger practitioners to keep their social data up to date.

17:00
18:30 Reception
19:30 Walk to the Bootshaus
20:00 Dinner

Friday, 9.3.2018

9:00 Keynote Gerald Stieglbauer (Chair: Matthias Tichy)
10:00 Talk by the winner of the Software Engineering Award
10:30 Coffee break
11:00
  • SE Session 11 - Requirements and Traceability (Chair: Stefan Sauer)
    • Rashidah Kasauli, Grischa Liebel, Eric Knauss, Swathi Gopakumar and Benjamin Kanagwa. Requirements Engineering Challenges in Large-Scale Agile System Development

      Motivated by their success in software development, companies implement agile methods and their practices increasingly for software-intense, large products, such as cars, telecommunication infrastructure, and embedded systems. As a consequence of safety and regulative concerns as well as different development cycles of hardware and software, detailed, up-front requirements engineering (RE) can be at odds with agile (software) development. We present results from a multiple case study with two car manufacturers, a telecommunications company, and a technology company that are on the journey to introduce organization-wide continuous integration and continuous delivery to customers. Based on 20 qualitative interviews, 5 focus groups, and 2 cross-company workshops, we discuss possible scopes of agile methods within system development, the consequences this has on the role of requirements, and the challenges arising from the interplay of RE and agile methods in large-scale system development. These relate to communicating and managing knowledge about a) customer value and b) the system under development. Better alignment of a holistic requirements model with agile development practices promises rich gains in development speed, flexibility, and overall quality of software and systems.

    • Bastian Tenbergen, Thorsten Weyer and Klaus Pohl. Hazard Relation Diagrams: a diagrammatic representation to increase validation objectivity of requirements-based hazard mitigations

      During the development of safety-critical systems, the development process must ensure that requirements defined to mitigate a hazard are adequate. Adequacy of such hazard-mitigating requirements (HMRs) means that the requirements may not oppose the system’s operational purpose and must sufficiently avoid, reduce, or control the occurrence of the conditions that trigger the hazard. However, information about the occurrence of the hazard’s trigger conditions is a work product of hazard analyses during early stages of safety assessment, while HMRs are a work product of requirements engineering. Dependencies between HMRs and hazard analysis results are implicit and tacit. In consequence, there is a risk that, during validation, inadequacy of HMRs regarding their ability to mitigate a hazard goes undetected. The result may be that the system is assumed to be safe but in fact may still cause injury or death. We introduced Hazard Relation Diagrams (HRDs) as a means to integrate and graphically visualize hazard analysis results with HMRs. Herein, we also provide insights into their empirical evaluation and show that HRDs increase objectivity in rationales containing adequacy judgments.

    • Andreas Demuth, Roland Kretschmer, Michael Tröls, Georgio Kanakis, Davy Maes and Alexander Egyed. Experiences on Traceability and Consistency Checking across Engineering Tools in an Automation Solution Company

      The engineering of systems is unimaginable without software tools. Engineers use them to capture and analyze engineering problems; to specify, implement, test, and maintain engineering solutions; and to manage engineering processes. Yet, there is a gap between the capabilities of independently working engineers and the needs of a collaborative engineering team. The existing tool landscape emphasizes the former. Most engineering tools are single-user applications – often of excellent quality, but limited in that they support the work of individual engineers rather than that of a group of engineers. And herein lies one of the most fundamental problems of software and systems engineering: engineers know well the engineering knowledge they capture, but they often lack awareness of the many implications their work has for other engineers or other engineering domains. This is a problem because in today’s engineering projects, companies continuously have to adapt their systems to changing customer or market requirements. This requires a flexible, iterative development process in which different parts of the system under construction are built and updated concurrently. However, concurrent engineering is quite problematic where different engineering domains and different engineering tools come together. In this paper, we discuss experiences with Van Hoecke Automation, a leading company in the areas of production automation and product processing, in maintaining the consistency between electrical models and the corresponding software controller when both are subject to continuous change. The paper discusses how we let engineers describe the relationships between the electrical model and the software code in the form of links and consistency rules, and how, through continuous consistency checking, our approach then notified those engineers of the erroneous impact of changes in either the electrical model or the code.

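As a rough illustration of the link-and-rule approach described in the abstract above, the toy check below flags trace links whose endpoints no longer exist after a change. The electrical model, the code-identifier index, and the single consistency rule are all hypothetical; the infrastructure used with Van Hoecke Automation is far more elaborate.

    # Hedged sketch: continuous consistency checking over explicit links
    # between a (hypothetical) electrical model and controller code.
    electrical_model = {          # signal name -> properties
        "valve_open":  {"type": "bool"},
        "motor_speed": {"type": "int"},
    }
    code_identifiers = {"valve_open", "motor_speed", "conveyor_on"}

    # Trace links: (model element, code identifier). The last link is
    # stale on purpose: "pressure_ok" was removed from the model.
    links = [
        ("valve_open",  "valve_open"),
        ("motor_speed", "motor_speed"),
        ("pressure_ok", "conveyor_on"),
    ]

    # Consistency rule: both endpoints of every link must still exist.
    # Re-running this after each change approximates continuous checking.
    def check(links, model, identifiers):
        for model_elem, code_id in links:
            if model_elem not in model:
                print(f"broken link: model element '{model_elem}' is gone")
            if code_id not in identifiers:
                print(f"broken link: code identifier '{code_id}' is gone")

    check(links, electrical_model, code_identifiers)
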
  • SE Session 12 - Design (Chair: Timo Kehrer)
    • Sven Peldszus, Géza Kulcsár, Malte Lochau and Sandro Schulze. On Continuous Detection of Design Flaws in Evolving Object-Oriented Programs using Incremental Multi-Pattern Matching

      Design flaws in object-oriented programs may seriously degrade code quality, thus increasing the risk of introducing subtle errors during software maintenance and evolution. Most recent approaches identify design flaws in an ad-hoc manner, focusing either on software metrics, on locally restricted code smells, or on coarse-grained architectural anti-patterns. In this work, we utilize an abstract program model capturing high-level object-oriented code entities, further augmented with qualitative and quantitative design-related information such as coupling and cohesion. Based on this model, we propose a comprehensive methodology for specifying object-oriented design flaws by means of compound rules integrating code metrics, code smells, and anti-patterns in a modular way. This approach allows for efficient, automated design-flaw detection through incremental multi-pattern matching by facilitating systematic information reuse among multiple detection rules as well as between subsequent detection runs on continuously evolving programs. Our tool implementation comprises well-known anti-patterns for Java programs. The results of our experimental evaluation show high detection precision, scalability to real-size programs, and a remarkable gain in efficiency due to information reuse.

    • Joel Greenyer and Timo Gutjahr. Symbolic Execution for Realizability-Checking of Scenario-based Specifications

      Scenario-based specification with the Scenario Modeling Language (SML) is an intuitive approach to formally specifying the behavior of reactive systems. SML is close to how humans conceive and communicate requirements, yet SML is executable, and simulation and formal realizability checking can find specification flaws early. The complexity of realizability checking is, however, exponential in the number of scenarios and variables. Therefore, algorithms relying on explicit-state exploration do not scale and, especially when specifications have message parameters and variables over large domains, cannot realize their potential. In this paper, we present a technique for the symbolic execution of SML specifications that interprets integer message parameters and variables symbolically. It can be used for symbolic realizability checking and interactive symbolic simulation. We implemented the technique in ScenarioTools. Our evaluation shows drastic performance improvements over the explicit-state approach for a range of examples. Moreover, symbolic checking produces more concise counterexamples, which eases the comprehension of specification flaws. (An illustrative sketch of this symbolic treatment follows this session's listing.)

    • Birgit Vogel-Heuser, Juliane Fischer, Stefan Feldmann, Sebastian Ulewicz, Susanne Rösch and Safa Bougouffa. Modularity and architecture of PLC-based software for automated production Systems: An analysis in industrial companies

      Adaptive and flexible production systems require modular and reusable software, especially considering their long-term life cycle of up to 50 years. We introduce a benchmark process – SWMAT4aPS – to measure the maturity of industrial control software for automated production systems.

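To give a flavour of the symbolic treatment of integer parameters mentioned in the Greenyer and Gutjahr abstract above, the sketch below derives path conditions for a small guard chain over an integer parameter instead of enumerating concrete values. The guard chain is invented and is not SML; ScenarioTools' actual symbolic execution is far more general.

    # Hedged sketch: symbolic paths through "if x > 100 / elif x > 10 / else".
    # Each path records constraints on x rather than concrete values of x.
    guards = [("x > 100", "alarm"), ("x > 10", "warn")]
    default_reaction = "ok"

    paths = []
    negated = []                                   # negations of earlier guards
    for guard, reaction in guards:
        paths.append((negated + [guard], reaction))    # take this branch
        negated = negated + [f"not ({guard})"]         # ... or skip it
    paths.append((negated, default_reaction))

    for constraints, reaction in paths:
        print(" and ".join(constraints), "->", reaction)

Three path conditions cover every possible value of x, which is why a symbolic exploration can be drastically smaller than an explicit-state one and can report counterexamples as constraints rather than as long concrete runs.
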
  • SE Session 13 - Program editing and comprehension (Chair: Regina Hebig)
    • Norman Peitek, Janet Siegmund, Chris Parnin, Sven Apel, Johannes Hofmeister, Christian Kästner, Andrew Begel, Anja Bethmann and André Brechmann. Neural Efficiency of Top-Down Program Comprehension

      We observed program comprehension with functional magnetic resonance imaging (fMRI) and found a difference in neural efficiency between top-down and bottom-up comprehension, but failed to find a significant effect of beacons. Furthermore, we were able to replicate the results of a previous fMRI study, thereby strengthening the role of fMRI as a measurement technique for observing program comprehension and other related cognitive processes.

    • Thorsten Berger, Markus Voelter, Hans Peter Jensen, Taweesap Dangprasert and Janet Siegmund. Efficiency of Projectional Editing

      Projectional editors are editors in which a user's editing actions directly change the abstract syntax tree without using a parser. They promise essentially unrestricted language composition as well as flexible notations, which supports aligning languages with their respective domain and constitutes an essential ingredient of model-driven development. Such editors have existed since the 1980s and gained widespread attention with the Intentional Programming paradigm, which used projectional editing at its core. However, despite these benefits, programming still mainly relies on editing textual code, and projectional editors imply a very different -- typically perceived as worse -- editing experience, often seen as the main challenge prohibiting their widespread adoption. We present an experiment on code-editing activities in a projectional editor, conducted with 19 graduate computer-science students and industrial developers. We investigate the effects of projectional editing on editing efficiency, editing strategies, and error rates -- each of which we also compare to conventional, parser-based editing. We observe that editing is efficient for basic editing tasks, but that editing strategies and typical errors differ. More complex tasks require substantial experience and a better understanding of the abstract-syntax-tree structure -- then, projectional editing is also efficient. We also observe a tradeoff between fewer typing mistakes and an increased complexity of code editing. (A small illustrative sketch of direct AST editing follows this session's listing.)

    • Benjamin Behringer, Jochen Palz and Thorsten Berger. PEoPL: Projectional Editing of Product Lines

      The features of a software product line -- a portfolio of system variants -- can be realized using various implementation techniques (a.k.a. variability mechanisms). Each technique represents the software artifacts of features differently; techniques are typically classified into annotative (e.g., C preprocessor) and modular (e.g., feature modules) representations, each with distinct advantages and disadvantages. Annotative representations are easy to realize, but annotations clutter source code and hinder program comprehension. Modular representations support comprehension, but are difficult to realize. Most importantly, to engineer feature artifacts, developers need to choose one representation and adhere to it for evolving and maintaining the same artifacts. We present PEoPL, an approach that combines the advantages of annotative and modular representations. When engineering a feature artifact, developers can choose the most-suited representation and even use different representations in parallel. PEoPL relies on separating a product line into an internal and an external representation, the latter realized by editable projections used by the developers. We contribute a programming-language-independent internal representation of variability, five editable projections reflecting different variability representations, a supporting IDE, and a tailoring to Java. We evaluate PEoPL's expressiveness, scalability, and flexibility on eight Java-based product lines, finding that all of them can be realized, that projections are feasible, and that variant computation is fast (< 45 ms on average for our largest subject, Berkeley DB).

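The core mechanism behind both papers above can be illustrated in a few lines: in a projectional editor, edit actions manipulate the abstract syntax tree directly, and the textual view is merely a projection of that tree; no parser is involved. The AST encoding below is invented for this sketch and does not reflect the internals of any real projectional editor.

    # Hedged sketch: a tiny expression AST, a textual projection of it,
    # and an edit action that rewrites the tree directly (no parsing).
    def project(node):
        """Render a textual projection of the AST."""
        kind = node[0]
        if kind == "lit":
            return str(node[1])
        op = {"add": "+", "mul": "*"}[kind]
        return f"({project(node[1])} {op} {project(node[2])})"

    ast = ("add", ("lit", 1), ("lit", 2))
    print(project(ast))       # (1 + 2)

    # Edit action: wrap the right operand in a multiplication by replacing
    # the subtree, rather than editing and re-parsing the printed text.
    edited = ("add", ast[1], ("mul", ast[2], ("lit", 3)))
    print(project(edited))    # (1 + (2 * 3))
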
12:30 Closing