Monitoring Aspects for the Customization of Automatically Generated Code for Big-Step Models
Shahram Esmaeilsabzali, Bernd Fischer and Joanne Atlee
Abstract:
The output of a code generator is assumed to be correct and is not
usually intended to be read or modified; yet programmers often need
to do exactly that, e.g., to monitor a system property. Here, we consider code customization for a family of code generators associated
with big-step executable modelling languages (e.g., statecharts).
We introduce a customization language that allows us to express
customization scenarios for the generated code independently of a
specific big-step execution semantics. These customization scenarios are all different forms of runtime monitors, which lend themselves to a principled, uniform implementation for observation and
code extension. A monitor is given in terms of the enabledness and
execution of the transitions of a model and a reachability relation
between two states of the execution of the model during a big step.
For each monitor, we generate the aspect code that is incorporated
into the output of a code generator to implement the monitor at the
generated-code level. Thus, we provide a means for code analysis
that uses the vocabulary of a model rather than the details of
the generated code. Our technique both requires code generators to reveal only limited information about their code-generation
mechanisms and keeps the structure of the generated code intact. We demonstrate how various useful properties of a model, or
a language, can be checked using our monitors.
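
To make the abstract's notion of a monitor concrete, the following is a minimal sketch in plain Java of how a monitor over transition enabledness and execution might observe generated statechart code. All names here (MonitorAspect, BigStepEngine, onEnabled, onExecuted) and the explicit callback hooks are illustrative assumptions, not the paper's actual customization language or the code generators' APIs; in the paper's approach the monitor would instead be compiled to aspect code woven into the generator's output.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical observation interface; the generated aspect code would
    // invoke these hooks at the corresponding points of a big step.
    interface MonitorAspect {
        void onEnabled(String transition);   // a transition became enabled
        void onExecuted(String transition);  // a transition was executed
        void onBigStepEnd();                 // a big step completed
    }

    // A monitor for an ordering property within one big step:
    // transition "t2" may execute only after "t1" has executed.
    class OrderingMonitor implements MonitorAspect {
        private boolean t1Executed = false;
        public void onEnabled(String t) { /* enabledness could be logged here */ }
        public void onExecuted(String t) {
            if (t.equals("t1")) t1Executed = true;
            if (t.equals("t2") && !t1Executed)
                System.err.println("violation: t2 executed before t1 in this big step");
        }
        public void onBigStepEnd() { t1Executed = false; } // reset per big step
    }

    // Stand-in for the generated code: the hook calls below mark the
    // sites where aspect code would be incorporated.
    class BigStepEngine {
        private final List<MonitorAspect> monitors = new ArrayList<>();
        void attach(MonitorAspect m) { monitors.add(m); }

        void bigStep(String[] enabledTransitions) {
            for (String t : enabledTransitions) {
                monitors.forEach(m -> m.onEnabled(t));
                // ... generated transition body would run here ...
                monitors.forEach(m -> m.onExecuted(t));
            }
            monitors.forEach(MonitorAspect::onBigStepEnd);
        }

        public static void main(String[] args) {
            BigStepEngine engine = new BigStepEngine();
            engine.attach(new OrderingMonitor());
            engine.bigStep(new String[] { "t2", "t1" }); // reports a violation
        }
    }

The sketch illustrates the abstract's key point: the monitor is stated in the model's vocabulary (transitions, big steps), while only the placement of the hooks, not the structure of the generated code, depends on the generator.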