Self-Adaptive Software Performance Monitoring
In addition to studying the construction and evolution of software services, the software engineering discipline needs to address the operation of continuously running software services. Robust operation requires effective monitoring of software runtime behavior. In contrast to profiling during construction activities, the monitoring of operational services should impose only a small performance overhead. Furthermore, instrumentation should be as non-intrusive to the business logic as possible. Monitoring of continuously operating software services is essential for achieving high availability and high performance of these services. A main issue for dynamic analysis techniques is the amount of monitoring data that is collected and processed at runtime. On the one hand, more data allows for more accurate and precise analyses. On the other hand, probe instrumentation, data collection, and analysis may cause significant overhead. Consequently, a trade-off between analysis quality and monitoring coverage has to be reached. In this paper, we present a method for self-adaptive, rule-based performance monitoring. Our approach aims at flexible instrumentation for monitoring a software system's timing behavior. The performance engineer's task is to specify rules that define the monitoring goals for a specific software system. An inference engine then decides at which granularity level each component is observed. We employ the Object Constraint Language (OCL) to specify the monitoring rules. Our goal-oriented, self-adaptive method is based on the continuous evaluation of these rules. The implementation builds on the Eclipse Modeling Framework and the Kieker monitoring framework. In our evaluation, this implementation is applied to the iBATIS JPetStore and the SPECjEnterprise2010 benchmark.
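To illustrate the idea of rule-driven adaptation of monitoring granularity, the following is a minimal, hypothetical Java sketch. It does not use the authors' OCL rules, the Eclipse Modeling Framework, or the actual Kieker API; the class, rule, and threshold are illustrative assumptions. A single hard-coded rule evaluates recent response times over a sliding window and switches a component's instrumentation between coarse- and fine-grained observation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch (not the paper's implementation): a rule inspects recent
// response times and an "inference" step adjusts the monitoring granularity.
public class AdaptiveMonitor {
    public enum Granularity { COARSE, FINE }

    private final Deque<Double> recentMillis = new ArrayDeque<>();
    private final int window;            // number of samples considered
    private final double thresholdMillis; // rule threshold (assumed value)
    private Granularity granularity = Granularity.COARSE;

    public AdaptiveMonitor(int window, double thresholdMillis) {
        this.window = window;
        this.thresholdMillis = thresholdMillis;
    }

    // Record one observed response time, evict samples beyond the window,
    // and re-evaluate the rule (mirroring continuous rule evaluation).
    public void record(double millis) {
        recentMillis.addLast(millis);
        if (recentMillis.size() > window) {
            recentMillis.removeFirst();
        }
        evaluateRule();
    }

    // The "rule": if the mean response time exceeds the threshold, observe the
    // component at fine granularity; otherwise fall back to coarse monitoring.
    private void evaluateRule() {
        double mean = recentMillis.stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(0.0);
        granularity = (mean > thresholdMillis) ? Granularity.FINE : Granularity.COARSE;
    }

    public Granularity granularity() {
        return granularity;
    }

    public static void main(String[] args) {
        AdaptiveMonitor m = new AdaptiveMonitor(3, 100.0);
        m.record(50); m.record(60); m.record(70);  // mean 60 ms -> stay coarse
        System.out.println(m.granularity());       // COARSE
        m.record(300);                             // window now 60,70,300 -> fine
        System.out.println(m.granularity());       // FINE
    }
}
```

In the paper's approach the rule itself is expressed in OCL and evaluated by an inference engine rather than hard-coded as above; the sketch only conveys the feedback loop between observed timing behavior and instrumentation granularity.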