Generation of Random Software Models for Benchmarks

Abstract—As model-driven engineering (MDE) is applied to ever larger and more complex systems, the memory and execution-time performance of model processing tools and frameworks has become important. Benchmarks are a valuable tool to evaluate performance and hence assess scalability. However, benchmarks rely on reasonably large models that are unbiased, can be shaped to distinct use-case scenarios, and are "real" enough (e.g. non-uniform) to cause real-world behavior (especially when mechanisms that exploit repetitive patterns, such as caching, compression, or JIT compilation, are involved). Creating large models by hand is expensive and error-prone, and neither existing models nor uniform synthetic models satisfy all three of these desired properties. In this paper, we use randomness to generate unbiased, non-uniform models. Furthermore, we use probability distributions and parametrization to shape these models to simulate different use-case scenarios. We present a meta-model-based framework that allows us to describe and create randomly generated models based on a meta-model and a description written in a specifically developed generator DSL. We use a random code generator for an object-oriented programming language as a case study and compare our results to non-randomly, synthetically created code, as well as to existing Java code.
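
The paper describes the dedicated generator DSL in detail; the sketch below is neither that DSL nor RandomEMF's actual API, but a minimal, hypothetical Java illustration of the core idea: drawing model element counts from a parameterized distribution (here a geometric distribution) so that the generated model is non-uniform, with a fixed seed for reproducible benchmark runs. All class and method names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical sketch: build a non-uniform "class model" by drawing
// structural feature counts from parameterized distributions instead
// of using fixed (uniform) sizes. Not RandomEMF's actual API.
public class RandomModelSketch {
    static final Random RNG = new Random(42); // fixed seed: reproducible runs

    // Geometric distribution: many small values with a long tail of large
    // ones, mimicking the skewed size distributions found in real code.
    static int geometric(double p) {
        int k = 0;
        while (RNG.nextDouble() >= p) k++;
        return k;
    }

    record Method(String name, int statements) {}
    record Clazz(String name, List<Method> methods) {}

    static Clazz randomClass(int id) {
        int methodCount = 1 + geometric(0.3); // shape parameter per use case
        List<Method> methods = new ArrayList<>();
        for (int m = 0; m < methodCount; m++) {
            methods.add(new Method("m" + m, 1 + geometric(0.1)));
        }
        return new Clazz("C" + id, methods);
    }

    public static void main(String[] args) {
        List<Clazz> model = new ArrayList<>();
        for (int i = 0; i < 1000; i++) model.add(randomClass(i));
        int total = model.stream().mapToInt(c -> c.methods().size()).sum();
        System.out.println("classes=" + model.size() + " methods=" + total);
    }
}
```

Under this scheme, shaping a model to a different use-case scenario amounts to changing the distribution parameters rather than rewriting the generator.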

Keywords—EMF, Benchmarks, Generation, Large models

Presentation
Download Paper
RandomEMF at GitHub

BibTeX

@inproceedings{Scheidgen2015RandomEMF,
  author = {Scheidgen, Markus},
  booktitle = {Proceedings of the 3rd Workshop on Scalable Model Driven Engineering},
  editor = {Kolovos, Dimitris S and Di Ruscio, Davide and Matragkas, Nicholas and Cuadrado, Jes\'{u}s S\'{a}nchez and R\'{a}th, Istv\'{a}n and Tisi, Massimo},
  pages = {1--10},
  publisher = {CEUR},
  title = {{Generation of Large Random Models for Benchmarking}},
  url = {http://ceur-ws.org/Vol-1406/paper1.pdf},
  year = {2015}
}
