In many CS education research studies, students are surveyed to understand their reactions to a particular pedagogical approach or tool. These surveys, as well as other types of evaluations, often invite students to provide open-ended feedback about their experiences. However, analyzing these comments can be challenging, especially for CS educators who may not have strong expertise in qualitative research methods. In addition, in a large study, evaluating all of the comments can consume a significant amount of researcher time. In this work, we undertook two separate conversations with ChatGPT in which we prompted it to perform qualitative analysis of a set of comments collected in an earlier study. This allowed us to begin to judge how effectively a modern large language model can serve as an assistant in such qualitative analysis. We found that, with the prompts we used, ChatGPT can reliably build a set of reasonable labels (codes) for a set of comments, but its application of those labels to specific comments may or may not be effective, and human researchers still need to apply care and their own understanding when interpreting its output.
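As an illustrative aside, the sketch below shows one way a researcher might script this kind of LLM-assisted coding. It is a minimal sketch, not the authors' method: the study used interactive ChatGPT conversations rather than the API, and the model name, prompt wording, and sample comments here are all assumptions introduced for illustration.

```python
# Minimal sketch of LLM-assisted qualitative coding via the OpenAI Python
# client. NOT the study's actual setup: the paper describes interactive
# ChatGPT conversations, and these prompts and comments are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical open-ended survey comments standing in for real study data.
comments = [
    "The autograder feedback helped me find my mistakes quickly.",
    "The pace of the labs was too fast for me to absorb the material.",
]

# Step 1: ask the model to propose a codebook (a set of candidate labels).
codebook = client.chat.completions.create(
    model="gpt-4o",  # assumed model choice, for illustration only
    messages=[
        {
            "role": "system",
            "content": "You assist with qualitative analysis of student "
                       "survey comments.",
        },
        {
            "role": "user",
            "content": "Propose a short set of codes (labels) capturing the "
                       "themes in these comments:\n"
                       + "\n".join(f"- {c}" for c in comments),
        },
    ],
)
print(codebook.choices[0].message.content)

# Step 2: per the paper's caution, a human researcher should review both the
# proposed codes and how the model applies them to individual comments,
# since label application may or may not be accurate.
```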