11:00
- SE Session 11 - Requirements and Traceability (Chair: Stefan Sauer)
Rashidah Kasauli, Grischa Liebel, Eric Knauss, Swathi Gopakumar and Benjamin Kanagwa. Requirements Engineering Challenges in Large-Scale Agile System Development
Motivated by their success in software development, companies increasingly apply agile methods and practices to large, software-intensive products such as cars, telecommunication infrastructure, and embedded systems. As a consequence of safety and regulatory concerns, as well as the different development cycles of hardware and software, detailed up-front requirements engineering (RE) can be at odds with agile (software) development. We present results from a multiple-case study with two car manufacturers, a telecommunications company, and a technology company that are on the journey to introduce organization-wide continuous integration and continuous delivery to customers. Based on 20 qualitative interviews, 5 focus groups, and 2 cross-company workshops, we discuss possible scopes of agile methods within system development, the consequences this has for the role of requirements, and the challenges arising from the interplay of RE and agile methods in large-scale system development. These relate to communicating and managing knowledge about a) customer value and b) the system under development. Better alignment of a holistic requirements model with agile development practices promises rich gains in development speed, flexibility, and overall quality of software and systems.
Bastian Tenbergen, Thorsten Weyer and Klaus Pohl. Hazard Relation Diagrams: A Diagrammatic Representation to Increase Validation Objectivity of Requirements-Based Hazard Mitigations
During the development of safety-critical systems, the development process must ensure that requirements which are defined to mitigate a hazard are adequate. Adequacy of such hazard-mitigating requirements (HMRs) means that the requirements may not oppose the system's operational purpose and must sufficiently avoid, reduce, or control the occurrence of the conditions that trigger the hazard. However, information about the occurrence of the hazard's trigger conditions is a work product of hazard analyses during early stages of safety assessment, while HMRs are a work product of requirements engineering. Dependencies between HMRs and hazard analysis results are implicit and tacit. In consequence, there is a risk that, during validation, inadequacy of HMRs regarding their ability to mitigate a hazard remains covert. The result may be that the system is assumed to be safe, but in fact may still cause injury or death. We introduced Hazard Relation Diagrams (HRDs) as a means to integrate and graphically visualize hazard analysis results together with HMRs. Herein, we also provide insights into their empirical evaluation and show that HRDs increase objectivity in rationales containing adequacy judgments.
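For illustration, the implicit dependency the abstract describes can be pictured as explicit, traceable links between a hazard's trigger conditions and the requirements that mitigate them; a trigger with no linked HMR is a candidate adequacy problem. The following Java sketch is a hypothetical simplification for this program booklet, not the authors' HRD notation:

    // Minimal sketch (not the authors' HRD notation) of making the implicit
    // dependency between hazard-analysis results and hazard-mitigating
    // requirements (HMRs) explicit as links. All names are hypothetical.
    import java.util.List;

    public class HazardRelationSketch {
        record TriggerCondition(String id, String description) {}
        record Hazard(String id, String description, List<TriggerCondition> triggers) {}
        record Requirement(String id, String text, List<String> mitigatedTriggerIds) {}

        // A trigger condition left unmitigated hints at an inadequate set of HMRs.
        static List<TriggerCondition> unmitigated(Hazard hazard, List<Requirement> hmrs) {
            return hazard.triggers().stream()
                    .filter(t -> hmrs.stream()
                            .noneMatch(r -> r.mitigatedTriggerIds().contains(t.id())))
                    .toList();
        }

        public static void main(String[] args) {
            Hazard h = new Hazard("H1", "Unintended acceleration",
                    List.of(new TriggerCondition("T1", "Stuck throttle sensor"),
                            new TriggerCondition("T2", "Pedal misread")));
            List<Requirement> hmrs = List.of(
                    new Requirement("R1", "Cross-check throttle sensors", List.of("T1")));
            System.out.println("Unmitigated triggers: " + unmitigated(h, hmrs)); // T2 uncovered
        }
    }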
Andreas Demuth, Roland Kretschmer, Michael Tröls, Georgio Kanakis, Davy Maes and Alexander Egyed. Experiences on Traceability and Consistency Checking across Engineering Tools in an Automation Solution Company
The engineering of systems is unimaginable without software tools. Engineers use them to capture and analyze engineering problems; to specify, implement, test, and maintain engineering solutions; and to manage engineering processes. Yet there is a gap between the capabilities of independently working engineers and the needs of a collaborative engineering team, and the existing tool landscape emphasizes the former. Most engineering tools are single-user applications, often of excellent quality but limited in that they support the work of individual engineers rather than that of a group of engineers. Herein lies one of the most fundamental problems of software and systems engineering: engineers know well the engineering knowledge they capture, but they often lack awareness of the many implications their work has on other engineers and/or other engineering domains. This is a problem because, in today's engineering projects, companies continuously have to adapt their systems to changing customer or market requirements. This requires a flexible, iterative development process in which different parts of the system under construction are built and updated concurrently. However, concurrent engineering is quite problematic in settings where different engineering domains and different engineering tools come together. In this paper, we discuss experiences with Van Hoecke Automation, a leading company in the areas of production automation and product processing, in maintaining the consistency between electrical models and the corresponding software controller when both are subject to continuous change. The paper discusses how we let engineers describe the relationships between the electrical model and software code in the form of links and consistency rules, and how, through continuous consistency checking, our approach then notified those engineers of the erroneous impact of changes in either the electrical model or the code.
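To give a flavor of the approach described here, a consistency rule can be read as a predicate over two artifacts that is re-evaluated whenever either side changes. The Java sketch below is a strongly simplified illustration under assumed names; it does not reflect the actual tooling or rule language used with Van Hoecke Automation:

    // Minimal sketch: rules link an electrical model (signal -> I/O port)
    // to the set of ports referenced by controller code, and are re-checked
    // after every change. All names are hypothetical.
    import java.util.*;
    import java.util.function.BiPredicate;

    public class ConsistencySketch {
        record Rule(String name, BiPredicate<Map<String, String>, Set<String>> check) {}

        static List<String> violations(Map<String, String> model, Set<String> codePorts,
                                       List<Rule> rules) {
            List<String> broken = new ArrayList<>();
            for (Rule r : rules)
                if (!r.check().test(model, codePorts)) broken.add(r.name());
            return broken;
        }

        public static void main(String[] args) {
            List<Rule> rules = List.of(new Rule(
                    "every wired signal is referenced by the controller",
                    (model, ports) -> ports.containsAll(model.values())));
            Map<String, String> model = Map.of("conveyorMotor", "DO_3", "lightBarrier", "DI_7");
            // After a code change, DI_7 is no longer referenced -> the rule fires
            // and the affected engineers would be notified.
            System.out.println(violations(model, Set.of("DO_3"), rules));
        }
    }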
- SE Session 12 - Design (Chair: Timo Kehrer)
Sven Peldszus, Géza Kulcsár, Malte Lochau and Sandro Schulze. On Continuous Detection of Design Flaws in Evolving Object-Oriented Programs using Incremental Multi-Pattern Matching
Design flaws in object-oriented programs may seriously corrupt code quality, thus increasing the risk of introducing subtle errors during software maintenance and evolution. Most recent approaches identify design flaws in an ad-hoc manner, focusing either on software metrics, on locally restricted code smells, or on coarse-grained architectural anti-patterns. In this work, we utilize an abstract program model that captures high-level object-oriented code entities, further augmented with qualitative and quantitative design-related information such as coupling and cohesion. Based on this model, we propose a comprehensive methodology for specifying object-oriented design flaws by means of compound rules that integrate code metrics, code smells, and anti-patterns in a modular way. This approach allows for efficient, automated design-flaw detection through incremental multi-pattern matching, by facilitating systematic information reuse among multiple detection rules as well as between subsequent detection runs on continuously evolving programs. Our tool implementation comprises well-known anti-patterns for Java programs. The results of our experimental evaluation show high detection precision, scalability to real-size programs, and a remarkable gain in efficiency due to information reuse.
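The key idea of compound rules can be illustrated with a toy example: partial predicates (a metric threshold, a structural smell) are specified once and composed into flaw rules, so their results can be shared across rules and across detection runs. The Java sketch below is purely illustrative; the rule, thresholds, and names are invented, not the paper's rule set:

    // Minimal sketch of compound design-flaw rules over an abstract program
    // model. In the actual approach, the partial matches would be maintained
    // incrementally as the program evolves.
    import java.util.List;
    import java.util.function.Predicate;

    public class DesignFlawSketch {
        record ClassFacts(String name, int methods, int fields,
                          double cohesion, List<String> dependsOn) {}

        // Reusable partial predicates, shareable among multiple rules.
        static final Predicate<ClassFacts> LARGE = c -> c.methods() > 30 || c.fields() > 20;
        static final Predicate<ClassFacts> LOW_COHESION = c -> c.cohesion() < 0.2;

        // Compound rule "God Class" = large AND low cohesion.
        static final Predicate<ClassFacts> GOD_CLASS = LARGE.and(LOW_COHESION);

        public static void main(String[] args) {
            ClassFacts suspect = new ClassFacts("Dispatcher", 42, 25, 0.1,
                    List.of("Db", "Ui", "Net"));
            System.out.println("God class? " + GOD_CLASS.test(suspect)); // true
        }
    }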
Joel Greenyer and Timo Gutjahr. Symbolic Execution for Realizability-Checking of Scenario-based Specifications
Scenario-based specification with the Scenario Modeling Language (SML) is an intuitive approach for formally specifying the behavior of reactive systems. SML is close to how humans conceive and communicate requirements, yet it is executable, and simulation and formal realizability checking can find specification flaws early. The complexity of realizability checking is, however, exponential in the number of scenarios and variables. Therefore, algorithms relying on explicit-state exploration do not scale and, especially when specifications have message parameters and variables over large domains, fail to unfold their potential. In this paper, we present a technique for the symbolic execution of SML specifications that interprets integer message parameters and variables symbolically. It can be used for symbolic realizability checking and interactive symbolic simulation. We implemented the technique in ScenarioTools. Our evaluation shows drastic performance improvements over the explicit-state approach for a range of examples. Moreover, symbolic checking produces more concise counterexamples, which eases the comprehension of specification flaws.
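The gain over explicit-state exploration comes from representing many concrete values by one constraint. The following Java sketch shows only this core idea, using a simple interval as the symbolic constraint; it is a hypothetical illustration, not SML or the ScenarioTools implementation:

    // Minimal sketch: instead of enumerating every concrete value of an
    // integer message parameter, a symbolic run keeps a constraint (here an
    // interval) and narrows it as scenario guards are interpreted.
    public class SymbolicRunSketch {
        // A symbolic integer value: all integers in [lo, hi].
        record Interval(int lo, int hi) {
            boolean isEmpty() { return lo > hi; }
            Interval assumeLessThan(int k) { return new Interval(lo, Math.min(hi, k - 1)); }
            Interval assumeAtLeast(int k)  { return new Interval(Math.max(lo, k), hi); }
        }

        public static void main(String[] args) {
            // One symbolic state covers what would be 1000 explicit states.
            Interval speed = new Interval(0, 999);
            // Guard from one scenario: speed < 50; from another: speed >= 30.
            Interval path = speed.assumeLessThan(50).assumeAtLeast(30);
            // A non-empty constraint means the path is feasible for some value,
            // and the constraint itself yields a concise counterexample.
            System.out.println("Feasible: " + !path.isEmpty()
                    + ", witness range: [" + path.lo() + "," + path.hi() + "]");
        }
    }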
Birgit Vogel-Heuser, Juliane Fischer, Stefan Feldmann, Sebastian Ulewicz, Susanne Rösch and Safa Bougouffa. Modularity and Architecture of PLC-based Software for Automated Production Systems: An Analysis in Industrial Companies
Adaptive and flexible production systems require modular and reusable software, especially considering their long-term life cycle of up to 50 years. We introduce a benchmark process, so-called SWMAT4aPS, to measure software maturity for the industrial control software of automated production systems.
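As a rough intuition for this style of benchmark, maturity can be reported as a weighted aggregation of scored criteria. The Java sketch below is entirely invented (criteria, scale, and weights are placeholders); the actual SWMAT4aPS questionnaire and scoring are described in the paper:

    // Purely illustrative sketch of a maturity-benchmark style of evaluation:
    // per-company criterion scores are aggregated into one maturity value.
    import java.util.Map;

    public class MaturityScoreSketch {
        static double score(Map<String, Integer> answers, Map<String, Double> weights) {
            // Each answer is on a 0..4 scale; weights sum to 1, so the result
            // is a weighted average on the same scale.
            return answers.entrySet().stream()
                    .mapToDouble(e -> e.getValue() * weights.getOrDefault(e.getKey(), 0.0))
                    .sum();
        }

        public static void main(String[] args) {
            Map<String, Integer> answers = Map.of("library reuse", 3, "module coupling", 2);
            Map<String, Double> weights = Map.of("library reuse", 0.6, "module coupling", 0.4);
            System.out.printf("Maturity score: %.2f / 4%n", score(answers, weights)); // 2.60
        }
    }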
- SE Session 13 - Program Editing and Comprehension (Chair: Regina Hebig)
Norman Peitek, Janet Siegmund, Chris Parnin, Sven Apel, Johannes Hofmeister, Christian Kästner, Andrew Begel, Anja Bethmann and André Brechmann. Neural Efficiency of Top-Down Program Comprehension
We observed program comprehension with functional magnetic resonance imaging (fMRI) and found a difference in neural efficiency between top-down and bottom-up comprehension, but failed to find a significant effect of beacons. Furthermore, we were able to replicate the results of a previous fMRI study, thereby strengthening the role of fMRI as a measurement technique to observe program comprehension and other related cognitive processes.
Thorsten Berger, Markus Voelter, Hans Peter Jensen, Taweesap Dangprasert and Janet Siegmund. Efficiency of Projectional Editing
Projectional editors are editors where a user's editing actions directly change the abstract syntax tree without using a parser. They promise essentially unrestricted language composition as well as flexible notations, which supports aligning languages with their respective domains and constitutes an essential ingredient of model-driven development. Such editors have existed since the 1980s and gained widespread attention with the Intentional Programming paradigm, which used projectional editing at its core. Despite these benefits, however, programming still mainly relies on editing textual code, and projectional editors imply a very different, typically perceived as worse, editing experience, often seen as the main challenge prohibiting their widespread adoption. We present an experiment on code-editing activities in a projectional editor, conducted with 19 graduate computer-science students and industrial developers. We investigate the effects of projectional editing on editing efficiency, editing strategies, and error rates, each of which we also compare to conventional, parser-based editing. We observe that editing is efficient for basic editing tasks, but that editing strategies and typical errors differ. More complex tasks require substantial experience and a better understanding of the abstract-syntax-tree structure; with that experience, projectional editing is also efficient. We also witness a tradeoff between fewer typing mistakes and an increased complexity of code editing.
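To make the contrast with parser-based editing concrete: in a projectional editor an edit action replaces a subtree of the AST, and the textual notation is merely rendered from the tree, never parsed back. The Java sketch below illustrates only this principle; it is not MPS or the editor studied in the paper:

    // Minimal sketch of projectional editing: edits mutate the tree, and the
    // text the user sees is a projection of that tree.
    public class ProjectionalEditSketch {
        sealed interface Expr permits Num, Add {}
        record Num(int value) implements Expr {}
        record Add(Expr left, Expr right) implements Expr {}

        // Projection: render the tree as text (the text is never parsed back).
        static String project(Expr e) {
            return switch (e) {
                case Num n -> Integer.toString(n.value());
                case Add a -> "(" + project(a.left()) + " + " + project(a.right()) + ")";
            };
        }

        public static void main(String[] args) {
            Expr program = new Add(new Num(1), new Num(2));
            // An "edit action" replaces a subtree directly; no parser involved.
            Expr edited = new Add(new Num(1), new Add(new Num(2), new Num(3)));
            System.out.println(project(program) + "  ->  " + project(edited));
        }
    }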
Benjamin Behringer, Jochen Palz and Thorsten Berger. PEoPL: Projectional Editing of Product Lines
The features of a software product line -- a portfolio of system variants -- can be realized using various implementation techniques (a.k.a. variability mechanisms). Each technique represents the software artifacts of features differently, typically classified into annotative (e.g., C preprocessor) and modular representations (e.g., feature modules), each with distinct advantages and disadvantages. Annotative representations are easy to realize, but annotations clutter source code and hinder program comprehension. Modular representations support comprehension, but are difficult to realize. Most importantly, to engineer feature artifacts, developers need to choose one representation and adhere to it for evolving and maintaining the same artifacts. We present PEoPL, an approach to combine the advantages of annotative and modular representations. When engineering a feature artifact, developers can choose the most-suited representation and even use different representations in parallel. PEoPL relies on separating a product line into an internal and an external representation, the latter provided as editable projections used by the developers. We contribute a programming-language-independent internal representation of variability, five editable projections reflecting different variability representations, a supporting IDE, and a tailoring to Java. We evaluate PEoPL's expressiveness, scalability, and flexibility in eight Java-based product lines, finding that all can be realized, that projections are feasible, and that variant computation is fast (<45ms on average for our largest subject Berkeley DB).
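The separation into one internal representation and several projected views can be illustrated in a few lines: code fragments tagged with features form the internal model, from which an annotated view and a single-variant view are both rendered. The Java sketch below is a hypothetical simplification of this idea, not PEoPL's actual representation or projections:

    // Minimal sketch: one internal variability model, two projected views.
    import java.util.List;
    import java.util.Set;

    public class VariabilityProjectionSketch {
        record Fragment(String code, String feature) {} // feature == null: shared code

        // Annotative projection: render feature code between preprocessor-style markers.
        static String annotatedView(List<Fragment> internal) {
            StringBuilder sb = new StringBuilder();
            for (Fragment f : internal)
                sb.append(f.feature() == null ? f.code()
                        : "#ifdef " + f.feature() + "\n" + f.code() + "\n#endif").append('\n');
            return sb.toString();
        }

        // Variant projection: show only the fragments of the selected features.
        static String variantView(List<Fragment> internal, Set<String> selected) {
            StringBuilder sb = new StringBuilder();
            for (Fragment f : internal)
                if (f.feature() == null || selected.contains(f.feature()))
                    sb.append(f.code()).append('\n');
            return sb.toString();
        }

        public static void main(String[] args) {
            List<Fragment> db = List.of(
                    new Fragment("open();", null),
                    new Fragment("encrypt();", "ENCRYPTION"),
                    new Fragment("close();", null));
            System.out.println(annotatedView(db));         // annotative projection
            System.out.println(variantView(db, Set.of())); // variant without ENCRYPTION
        }
    }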