
In large programming classes, it takes significant effort for teachers to evaluate exercises and provide detailed feedback. Test cases are not sufficient to assess systems programming exercises, since concurrency and resource management bugs are difficult to reproduce. This paper presents an experience report on the automatic evaluation of systems programming exercises using static analysis. We present the design of the systems programming assignments and of static analysis rules tailored to each assignment, so that the feedback is detailed and accurate. Our evaluation shows that static analysis can identify a significant number of erroneous submissions missed by test cases.
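To give a flavor of what an assignment-specific rule can look like, here is a minimal, purely illustrative sketch (not the actual rules from the paper): a lexical check that flags C functions in a submission where pthread_mutex_lock() and pthread_mutex_unlock() calls are unbalanced, a typical concurrency bug that test cases rarely expose. All names and heuristics below are hypothetical.

    # Toy, assignment-specific static check (illustrative only, not the
    # authors' actual rules). It flags functions in a C submission that
    # call pthread_mutex_lock() more often than pthread_mutex_unlock().
    import re
    import sys

    # Rough lexical pattern for the start of a C function definition.
    FUNC_RE = re.compile(r'^\w[\w\s\*]+\([^;{]*\)\s*\{', re.MULTILINE)

    def check_lock_balance(source: str) -> list[str]:
        """Return warnings for functions with unbalanced lock/unlock calls."""
        warnings = []
        # Split the file into rough per-function chunks (lexical heuristic).
        starts = [m.start() for m in FUNC_RE.finditer(source)]
        starts.append(len(source))
        for begin, end in zip(starts, starts[1:]):
            body = source[begin:end]
            locks = body.count("pthread_mutex_lock")
            unlocks = body.count("pthread_mutex_unlock")
            if locks != unlocks:
                header = body.splitlines()[0].strip()
                warnings.append(
                    f"{header}: {locks} lock(s) vs {unlocks} unlock(s)")
        return warnings

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            with open(path) as f:
                for warning in check_lock_balance(f.read()):
                    print(f"{path}: {warning}")

A real rule would of course work on parsed code rather than raw text, but the underlying idea is the same: a narrow check, tailored to the resources and synchronization primitives that a specific assignment asks students to use, can produce precise and actionable feedback.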

I am an assistant professor at the Federico II University of Naples, Italy, and co-founder of the Critiware s.r.l. academic spin-off company. My research interests include software fault injection, security/robustness testing, dependability benchmarking, and software aging and rejuvenation, along with their applications to operating systems and virtualization technologies. My work has been supported by national, European, and industry-funded research projects in cooperation with Leonardo-Finmeccanica, CRITICAL Software, and Huawei Technologies. I have authored more than 60 publications in journals and conferences on dependable computing and software engineering. I have served on the steering committee of the IEEE International Workshop on Software Certification (WoSoCer) and as Program Committee Chair of the IEEE International Symposium on Software Reliability Engineering (ISSRE).

More information about my research activities, scientific publications, and tools is available on my personal website (http://wpage.unina.it/roberto.natella).