Furqan Tayyab1, Hira Kamal2 and Aqsa Parvaiz3*

1Centre of Agricultural Biochemistry and Biotechnology (CABB), University of Agriculture Faisalabad, Pakistan 2Department of Plant Pathology, Washington State University, Pullman, WA, USA 3Department of Biochemistry and Biotechnology, The Women University Multan, Multan. Pakistan


Artificial intelligence (AI) chatbots such as OpenAI's ChatGPT, built on the GPT-3 model, have remarkable potential to revolutionize academic writing and scientific publication. Their capabilities include automatic draft generation, literature searches, and general improvements to research productivity. However, the ethical implications must not be overlooked, given concerns about AI authorship, accuracy, and the potential spread of pseudoscience. Maximizing the benefits of ChatGPT in scientific writing requires responsible use, adherence to strict attribution guidelines, and continuous monitoring for bias. To create a productive and ethical research environment, a balance must be struck between AI assistance and human expertise as the technology continues to advance.


Keywords: Artificial intelligence, ChatGPT, Scientific writing

Copyright 2023 TBPS



Artificial intelligence (AI) has progressed significantly over the past decade, with the Chat Generative Pre-trained Transformer (ChatGPT), particularly OpenAI's GPT-3 model, serving as a prime example of this advancement (Lee, 2023). It has been recognized that disruptive developments like ChatGPT have the potential to revolutionize scholarly publishing and academic institutions. The chatbot's use in scientific writing has proved highly advantageous. ChatGPT appears to be a promising and effective tool for a variety of activities, including automatic draft generation, article summarization, and language translation, all of which may significantly ease the writing process and thereby support academic endeavors. Text generated by ChatGPT closely resembles text written by humans. The model goes through two key stages: pre-training, during which it is exposed to a vast amount of data from the internet, and fine-tuning, during which it is optimized for particular tasks (Shen et al., 2023). However, the use of this tool in scientific writing raises ethical questions that argue for proper regulation. This review critically examines ChatGPT's position within scientific writing, considering both its promising applications and the ethical issues that must be addressed to ensure appropriate and beneficial use.


1.1. Promising Applications of ChatGPT in Scientific Writing

ChatGPT has demonstrated its capacity to support academic research by assisting in forming and developing existing ideas. Although it cannot independently develop novel ideas, its output can serve as an initial draft that still requires human input and knowledge. It is important to remember that human expertise, imagination, and critical thinking cannot be replaced by content produced by AI models like ChatGPT. AI-supported research assistants such as ChatGPT and similar web-based tools can be helpful during a literature search, assisting with activities such as reviewing scholarly articles, summarizing results, and identifying areas of confusion. With this automated assistance, researchers can swiftly grasp the current state of knowledge on particular issues and identify potential research gaps, although the summaries provided may lack critical analysis of differences among studies (Suverein et al., 2023). ChatGPT is also useful for generating first drafts of scientific articles. It can contribute to the development of the methods section by organizing raw data, explaining sample sizes, and outlining data analysis procedures. According to researchers, ChatGPT has proven useful in the editing process, helping to summarize entire papers for appropriate abstract composition (Hosseini et al., 2023). Its output may not always be sufficient, but it saves time and effort.

There are numerous benefits to using ChatGPT and other natural language processing (NLP) technologies in academic research. They can process massive amounts of textual data, which helps scholars save time and effort. For instance, ChatGPT can examine research papers, extract crucial information such as authors and publication dates, and even summarize findings in a fair and unbiased manner. This automation spares researchers from searching for publications manually, which also saves time. Additionally, ChatGPT helps scholars develop new research ideas and encourages them to work on novel projects (Hosseini et al., 2023). However, caution must be exercised when utilizing ChatGPT and related NLP technologies. Researchers should use these tools alongside other research techniques and keep their limitations in mind. Crucial issues include properly attributing sources, handling sensitive or controversial subjects, and keeping up with NLP advancements.

Before addressing the ethical issues raised by the application of ChatGPT in scholarly publishing, it is worth highlighting its potential advantages. ChatGPT and related language models can help journal editors with repetitive work, such as correcting grammar mistakes, and can help prevent biased assessments of manuscripts (Hosseini et al., 2023). Additionally, ChatGPT's role in collaborative review may be useful in developing an intellectual community in an educational environment. These models can enhance the dissemination of research ideas, making them more accessible to experts in the field. They may speed up searches and identify relevant studies based on user queries by improving metadata and indexing and by providing plain-language summaries of findings, thereby facilitating multidisciplinary study (Gilat and Cole, 2023). Researchers also benefit from ChatGPT by saving time and improving how they communicate their work. Researchers across fields of study, including the social sciences, biological sciences, medicine, business, and engineering, can strategically use ChatGPT at different stages of the research process (Lund and Wang, 2023). However, it is essential to use ChatGPT appropriately, taking its limits into consideration and integrating it with domain expertise. As ChatGPT develops, it has the potential to evolve into an e-Research Assistant, providing support throughout the entire research process, including result interpretation.


1.2. Ethical Considerations in Using ChatGPT for Scientific Writing

As ChatGPT is used more frequently in research, several issues must be resolved for its potential to be fully realized. One major issue relates to AI authorship, as there is debate about whether ChatGPT may be considered a research co-author. Some argue that AI is ineligible for authorship since it cannot be held responsible for the research outcome. Publishing companies must create and follow strict AI authorship criteria to handle this issue (Owens, 2023). The generation of fictitious references by ChatGPT is another concern. The tool has occasionally been reported to hallucinate, producing false or nonexistent references. This can result in inaccurate research papers and damage the credibility of the published content. Avoiding unintentional plagiarism requires accurate citation and acknowledgment, but ChatGPT's tendency to reproduce content without proper citation or attribution presents a substantial challenge for researchers using the tool (Stokel-Walker and Van Noorden, 2023). Researchers and developers must address this issue to ensure that ChatGPT generates correct and ethical outputs that follow scholarly guidelines.

Additional challenges include ownership disputes and copyright issues with AI-generated content. It remains unclear who owns the copyright to text generated by tools like ChatGPT. To overcome these issues and clarify who owns AI-generated content, clear guidelines must be established. Ethical issues are also crucial when using ChatGPT for research: fairness, transparency, potential misuse, and data protection and confidentiality are among the concerns that need to be considered (Xames and Shefa, 2023). To use ChatGPT responsibly and ethically, researchers must explicitly mention and acknowledge its use in their articles.

Using ChatGPT also raises concerns regarding bias and accuracy. OpenAI has acknowledged that the software occasionally generates responses that appear reasonable but are inaccurate or illogical. Consequently, researchers, editorial staff, and reviewers risk unintentionally accepting biased and false information (Lund and Wang, 2023). Moreover, because the AI language system was trained on a dataset containing data only up to 2021, it may fail to answer queries with the most recent and accurate information. This limitation affects researchers who use ChatGPT as a tool for their research.

The extensive use of ChatGPT in research and publication also raises concerns about the proliferation of junk science. Without adequate peer review, ChatGPT could produce a large number of fabricated research articles, spreading pseudoscience in scholarly publications. The academic community must be proactive in developing AI tools that can identify ChatGPT-generated material and in addressing unethical publication practices to reduce these difficulties (Xames and Shefa, 2023). Additionally, there are concerns about the potential effects on global inequities in scientific publishing. While the ChatGPT platform's availability has made it simple for scholars worldwide to produce scholarly papers, the possibility of commercialization raises issues of unequal access, especially for researchers from low- and middle-income countries. Another crucial factor to take into account is the verifiability and integrity of content produced by AI. Because ChatGPT and other AI tools rely extensively on text from the internet, it remains difficult to determine the novelty, verifiability, and correctness of their outputs.


Fig. 1: Schematic representation of the pros and cons of using ChatGPT in scientific writing



1.3. Future Prospects

ChatGPT, created by OpenAI, is an AI tool that has the potential to revolutionize academic publishing. Its remarkable ability to generate coherent and grammatically correct text has opened doors to creating accessible and engaging content. However, challenges remain: its current robotic style and superficial content may not captivate readers or meet the high standards of scientific research in complex disciplines such as neuroscience.

One way for researchers to address this issue is by incorporating diverse writing styles to maintain a human touch and effectively communicate with a wider audience. Although ChatGPT helps in identifying similar ideas, its failure to provide clear information about its data sources raises doubts about the credibility and dependability of the text it generates. As we move forward, it is crucial to find a harmonious balance between utilizing the potential of AI and maintaining human creativity. By working alongside AI, we can unlock higher-value tasks and discover new solutions to problems. It is important to be cautious of relying too heavily on AI-generated text. While it can be helpful, it cannot replace the valuable critical thinking and analytical skills that expert scientists possess.

As the world rapidly changes, the future of academic publishing will rely on how researchers and institutions adopt and adapt to AI's potential. With AI tools like ChatGPT available as supportive aids, researchers can concentrate on more complex tasks and explore innovative ideas, leading to scientific breakthroughs that were previously unimaginable. Nevertheless, it's crucial to keep in mind that science is a combination of human expertise and AI's technological capabilities. This ensures that our quest for knowledge remains limitless and remarkable.


Gilat, R. & Cole, B. J. (2023). How will artificial intelligence affect scientific writing, reviewing and editing? The future is here. Arthroscopy, 39(5), 1119–1120.
Hosseini, M., Rasmussen, L. M. & Resnik, D. B. (2023). Using AI to write scholarly publications. Accountability in Research, 1–9.
Lee, J. Y. (2023). Can an artificial intelligence chatbot be the author of a scholarly article? Journal of Educational Evaluation for Health Professions, 20.
Lund, B. D. & Wang, T. (2023). Chatting about ChatGPT: how may AI and GPT impact academia and libraries? Library Hi Tech News, 40(3), 26–29.
Owens, B. (2023). How Nature readers are using ChatGPT. Nature, 615(7950), 20.
Shen, Y., Heacock, L., Elias, J., Hentel, K. D., Reig, B., Shih, G. & Moy, L. (2023). ChatGPT and other large language models are double-edged swords. Radiology, 307(2), e230163.
Stokel-Walker, C. & Van Noorden, R. (2023). What ChatGPT and generative AI mean for science. Nature, 614(7947), 214–216.
Suverein, M. M., Delnoij, T. S. R., Lorusso, R., Brandon Bravo Bruinsma, G. J., Otterspoor, L., Elzo Kraemer, C. V, Vlaar, A. P. J., van der Heijden, J. J., Scholten, E. & den Uil, C. (2023). Early extracorporeal CPR for refractory out-of-hospital cardiac arrest. New England Journal of Medicine, 388(4), 299–309.
Xames, M. D. & Shefa, J. (2023). ChatGPT for research and publication: Opportunities and challenges. Available at SSRN 4381803.
