Understanding students’ testing processes in a CS1 course is crucial for helping instructors of introductory courses determine what content to teach. Prior work highlights the urgency of teaching testing practices, as there is considerable concern about students’ testing abilities upon graduation from a university CS program. Given that testing is an implicit programming process, we aim to examine how students in CS1 go about testing their code in programming assignments. Because of consistent research showing an achievement gap between students with and without prior experience in introductory classes, our analysis also aims to understand specific differences in testing processes between the two groups. Leveraging a dataset of over 300 students with over 50,000 snapshots of student code captured during development, we applied metrics related to incremental testing and measured the use of diagnostic print statements and of test cases designed beyond the provided tests (custom test cases). A large majority of the students used neither diagnostic print statements nor custom test cases in their programming assignments. Additionally, the three testing practices we examined do not appear to contribute significantly to the achievement gap associated with prior experience, suggesting a need for further investigation into \textit{which} practices \textit{do} account for the success of students with prior experience.