To help researchers use generative AI tools like ChatGPT while upholding academic standards of transparency, accuracy, and originality, and guarding against plagiarism, Cambridge University Press has released its AI ethics policy.
The rules prohibit treating artificial intelligence as an author of scholarly papers and books released by Cambridge. Amid both excitement about the potential of powerful large language models in research and worries about their improper or deceptive use, Cambridge's action offers clarification to academics.
The Cambridge principles for generative AI in research publishing stipulate that AI must be declared and explicitly explained in publications and that AI does not satisfy Cambridge’s authorship requirements.
The principles also state that any use of AI must not violate Cambridge’s policy against plagiarism and that authors remain responsible for the accuracy, integrity, and originality of their research papers.
Each year, Cambridge publishes 1,500 research monographs, reference volumes, and higher-education textbooks, in addition to tens of thousands of research papers across more than 400 peer-reviewed journals.
By launching this AI ethics policy, Cambridge University Press aims to help the academic community navigate AI’s potential biases, shortcomings, and intriguing opportunities.