Open-ended code-writing exercises are commonly used in large-scale introductory programming courses because they can be autograded against test cases. However, code writing requires many skills at once, from planning out a solution to applying the intricacies of syntax. Because autograding evaluates only code correctness, it cannot provide feedback that addresses each of these skills separately. In this work, we explore methods to detect which high-level patterns (i.e., programming plans) have been used in a submission, so that learners can receive feedback on their planning skills even when their code is not completely correct. Our preliminary results show that LLMs with few-shot prompting can detect the use of programming plans in 95% of correct and 86% of partially correct submissions. Incorporating LLMs into the grading of open-ended programming exercises can enable more fine-grained feedback to students, even in cases where their code does not compile due to other errors.
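
To give a sense of the approach, the following is a minimal sketch of how few-shot plan detection could be set up. The plan names, example snippets, and the `query_llm` placeholder are illustrative assumptions, not the prompts or plan catalogue used in our study.

```python
# Illustrative sketch only: building a few-shot prompt that asks an LLM which
# programming plans appear in a student submission. Plan names and examples
# below are hypothetical.

FEW_SHOT_EXAMPLES = [
    {
        "code": "total = 0\nfor x in nums:\n    total += x",
        "plans": ["accumulator"],
    },
    {
        "code": "best = nums[0]\nfor x in nums:\n    if x > best:\n        best = x",
        "plans": ["max-so-far"],
    },
]

def build_prompt(submission: str) -> str:
    """Assemble a few-shot prompt asking which plans a submission uses."""
    parts = ["Identify which programming plans are used in each code snippet."]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Code:\n{ex['code']}\nPlans: {', '.join(ex['plans'])}")
    # The student's (possibly non-compiling) code is appended last, unanswered.
    parts.append(f"Code:\n{submission}\nPlans:")
    return "\n\n".join(parts)

def detect_plans(submission: str, query_llm) -> list[str]:
    """query_llm is a placeholder for any chat-completion call returning text."""
    response = query_llm(build_prompt(submission))
    return [plan.strip() for plan in response.split(",") if plan.strip()]
```

Because the prompt operates on the raw source text, this kind of detection can still run on submissions that fail the autograder's test cases or do not compile at all.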