Clear, well-chosen names for variables and functions significantly enhance code readability and maintainability. In computer science education, teaching students to select appropriate identifiers is a critical task, especially in CS1. This study explores how large language models (LLMs) could assist in teaching this skill. While prior research has examined the use of LLMs in programming education, their precision and consistency in teaching code quality, particularly identifier selection, remain largely unexplored. To this end, we investigated how well different LLMs can detect and report misleading identifiers. We manually labeled the misleading identifiers in a dataset of 33 code samples, then tested five different LLMs on their ability to detect them, measuring overall accuracy, precision, recall, and F-score. Results revealed that the most successful model, ChatGPT-4o, correctly detected most of the manually flagged misleading variable names. However, it also tended to flag variable identifiers in cases where the human evaluators would not, and refined prompting did not discourage this behavior.
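To illustrate the kind of problem the study targets, consider a minimal hypothetical Python example (not drawn from the study's dataset): the name `total` suggests a sum, but the variable actually holds a count, which is exactly the sort of mismatch between a name's implication and a value's actual role that a human evaluator would flag as misleading.

```python
def count_positive(numbers):
    # Misleading: "total" suggests a sum, but this variable counts elements.
    total = 0
    for n in numbers:
        if n > 0:
            total += 1
    return total


def count_positive_clear(numbers):
    # Clearer: the name states what the value actually represents.
    positive_count = 0
    for n in numbers:
        if n > 0:
            positive_count += 1
    return positive_count
```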