
The advent of generative AI, large language models, and related technologies has created many opportunities for our community, both in support of new kinds of research and in helping improve writing. Along with the opportunities come some challenges, particularly regarding the appropriate use of these technologies.

Using generative AI to support writing papers

The ACM Policy on Authorship indicates that

The use of generative AI tools and technologies to create content is permitted but must be fully disclosed in the Work. For example, the authors could include the following statement in the Acknowledgements section of the Work: "ChatGPT was utilized to generate sections of this Work, including text, tables, graphs, code, data, citations, etc." If you are uncertain about the need to disclose the use of a particular tool, err on the side of caution, and include a disclosure in the acknowledgements section of the Work.

We expect authors of SIGCSE TS papers to abide by these—and all—ACM guidelines.

Generative AI as a research and teaching tool

Many SIGCSE TS authors are exploring possible roles of generative AI in their research and teaching. In some cases, questions have been raised about the appropriateness of using copyrighted work, such as problems from a course’s website or from a service like HackerRank. Submitting copyrighted material to an LLM may make that material available as training data, and the output of an LLM trained on such material may be considered a derivative work. Authors of papers that use copyrighted works as input to or training data for LLMs must therefore obtain explicit permission from the copyright holders and must disclose that permission in the acknowledgements section of the paper.

We expect reviewers who encounter papers in which an LLM or other tool has been trained on copyrighted data to verify that the acknowledgements section indicates that permission was sought and obtained.

Generative AI in the review process

Reviewers may not submit papers to LLMs, plagiarism detectors, summarizers, or other such tools. As the ACM Peer Review Policy indicates,

[Reviewers may not upload] confidential submissions, technical approaches described by authors in their submissions, or any information about the authors into any system managed by a third party, including LLMs, that does not promise to maintain the confidentiality of that information by reviewers, since the storage, indexing, learning, and utilization of such submissions may violate the author’s right to confidentiality.

Obviously, reviewers may not use generative AI tools to write their reviews. However, since “writing helpers”, broadly defined, are now built into most major editors and word processors, reviewers may use such tools to polish the writing in their reviews.

Similarly, APCs may not use generative AI tools to synthesize the reviews and discussion into a metareview, although they may use such tools to improve the writing or structure of the metareview.

Checking citations

Unfortunately, we have seen the occasional “hallucinated reference” in SIGCSE TS submissions. While we do not expect reviewers to check every reference, we ask that they pay additional attention to references for the time being. If possible, reviewers should check a few randomly selected references in each paper they review.