
Many computer science students complete their undergraduate degrees with insufficient testing skills and knowledge. To understand the gaps in students' testing skills and knowledge, we analyzed 1014 software tests written by 12 groups in an undergraduate Software Quality Assurance (SQA) course project. In the project, the student groups were provided with a requirements document and were instructed to follow Test-Driven Development (TDD) practices using black-box tests. To understand how the groups applied black-box testing in their project, we created an automatic tool that sorts the tests into categories, or "test buckets." By analyzing the test bucket data, we were able to assess the effectiveness and efficiency of the student-written tests. We observed that the student groups were significantly more likely to test for explicit requirements than for implicit requirements, and significantly more likely to test happy paths than invalid inputs. Furthermore, the students tested happy paths, invalid inputs, and explicit requirements inefficiently, resulting in a higher proportion of software tests with duplicate intent. Based on these results, we provide insights into how black-box test education can be improved.
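To make the "test bucket" idea concrete, here is a minimal sketch of how tests might be sorted into buckets and how duplicate intent might be counted. The abstract does not describe the actual tool, so the bucket names, the `classify_input` heuristic, the hypothetical `set_age()` requirement, and the sample tests are all illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

# Hypothetical bucket labels mirroring the distinctions in the abstract:
# happy path vs. invalid input. (A real tool would also separate
# explicit vs. implicit requirements.)
HAPPY = "happy_path"
INVALID = "invalid_input"

def classify_input(value):
    """Toy heuristic: bucket a test by whether its input is valid.

    Assumes an illustrative requirement such as "age must be an
    integer between 0 and 120"; both the requirement and the rule
    below are hypothetical.
    """
    if isinstance(value, int) and 0 <= value <= 120:
        return HAPPY
    return INVALID

# Each tuple is (test name, input under test) for a hypothetical set_age() API.
student_tests = [
    ("test_set_age_typical", 30),
    ("test_set_age_boundary_low", 0),
    ("test_set_age_negative", -1),
    ("test_set_age_non_numeric", "thirty"),
    ("test_set_age_typical_again", 30),  # duplicate intent: same bucket, same input
]

# Bucket counts reveal skew, e.g. more happy-path tests than invalid-input tests.
buckets = Counter(classify_input(value) for _, value in student_tests)
print(buckets)  # e.g. Counter({'happy_path': 3, 'invalid_input': 2})

# Duplicate intent: extra tests that exercise the same bucket with the same input.
seen = Counter((classify_input(v), v) for _, v in student_tests)
duplicates = sum(n - 1 for n in seen.values() if n > 1)
print(f"tests with duplicate intent: {duplicates}")  # 1
```

Under this sketch, two tests share intent when they fall in the same bucket with the same input; a real classifier would need a richer notion of intent (for example, which requirement clause a test exercises), but the counting step would look the same.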