Editorial statement on the use of generative AI and other cases of professional misconduct

2025-03-24

The Israel Journal of Entomology strives to maintain its high standard and reputation by following the best practices in scholarly publishing (COPE 2022). The Editorial Office of the journal reminds prospective contributors that any case of professional misconduct, including, but not limited to, undeclared use of generative Artificial Intelligence (AI), data fabrication, falsification and plagiarism, may result in immediate rejection of the manuscript, as stipulated in the journal’s Publication Ethics and Malpractice Statement.

Authors who use generative AI tools must (1) transparently disclose their use in the Material and Methods section and clearly describe the details of that use, (2) add relevant citations and references, and (3) record and submit their relevant interactions with AI tools (prompts, etc.) as appendices. AI algorithms should, to the extent possible, be transparent, and their limitations acknowledged. Authors using generative AI tools must bear in mind that Large Language Models are known to fabricate facts, quotes and references (Emsley 2023; Goddard 2023).

Paraphrasing US District Judge Brantley Starr (Brodkin 2023), the Editorial Office of the Israel Journal of Entomology sets a new rule with immediate effect: all authors submitting manuscripts to the journal for consideration must simultaneously submit a declaration attesting either that no portion of their contribution was drafted by generative AI, or that any portion drafted by generative AI has been checked for accuracy by a human, using print media or traditional databases, with all details disclosed as outlined above.

References

Brodkin, J. 2023. Federal judge: No AI in my courtroom unless a human verifies its accuracy. Ars Technica, May 31, 2023. https://arstechnica.com/tech-policy/2023/05/federal-judge-no-ai-in-my-courtroom-unless-a-human-verifies-its-accuracy

COPE [The Committee on Publication Ethics]. 2022. Principles of transparency and best practice in scholarly publishing — English. https://doi.org/10.24318/cope.2019.1.12 (accessed 24.03.2025)

Emsley, R. 2023. ChatGPT: these are not hallucinations – they’re fabrications and falsifications. Schizophrenia (Heidelberg) 9 (1): Art. 52. https://doi.org/10.1038/s41537-023-00379-4

Goddard, J. 2023. Hallucinations in ChatGPT: A cautionary tale for biomedical researchers. The American Journal of Medicine 136 (11): 1059–1060. https://doi.org/10.1016/j.amjmed.2023.06.012