We call artificial intelligence any machine that processes information with some purpose, complying with the logical rules of computation that Turing described more than 70 years ago 1. These machines work with algorithms: finite, well-defined sequences of information-processing instructions implemented by automata (computers) or any digital technology to optimize a process 2. This means that the purpose of artificial intelligence is optimization.
Optimization is the ability to do or solve something in the most efficient way possible and, ideally, with the least amount of resources. The intended optimization is programmed and preset by humans; therefore, these technologies are tools that humans create for human purposes 3.
The optimization capability of artificial intelligence is staggering. It is estimated that the use of artificial intelligence will facilitate the achievement of 134 of the 169 goals agreed in the 2030 Agenda for Sustainable Development 4. However, the same evaluation projected that it could negatively affect progress toward 59 goals of that agreement, with social, economic, educational, legal and gender inequality being the phenomena most affected by artificial intelligence.
This projection shows that it is necessary to counterbalance the development and implementation of processes mediated by artificial intelligence, to remain reflective and question the influence of these technological tools, and, above all, to ground them in human intelligence. In the context of data science and artificial intelligence, human intelligence can be defined as a collection of contextual tacit knowledge about human values, responsibility, empathy, intuition, or care for another living being that algorithms cannot describe or execute 5.
Improving the care capacity of health systems, producing more accurate diagnoses, optimizing medical treatments, and generating more efficient and appropriate public health measures are the promises of artificial intelligence. The World Health Organization recognizes these expectations but warns of the need to guarantee transparency, explainability and understanding of each artificial intelligence application implemented in health, with permanent evaluation, ensuring equity, inclusion, and sustainability 6.
Artificial intelligence is already part of the research supporting the manuscripts submitted to the editorial process of scientific journals in the health area. Fortunately, there are guidelines that help authors report their manuscripts completely; these allow peer reviewers and the editors to judge their publication better. So far, the Equator Network website has published twelve guidelines for manuscripts reporting artificial intelligence-based research, and in all of them, concern is present for transparency about the population from which the data were acquired, the design and development of the algorithm, the training of the model, and the external validity of the optimized processes (Table 1).
| Guideline | Name | Date |
|---|---|---|
| PRIME | Requirements for Cardiovascular Imaging-Related Machine Learning Evaluation | 2020 10 |
| MI-CLAIM | Clinical artificial intelligence modeling | 2020 11 |
| — | Artificial intelligence in dental research | 2021 12 |
| SPIRIT-AI | Guidelines for clinical trial protocols for interventions involving artificial intelligence | 2020 13 |
| CONSORT-AI | Reporting guidelines for clinical trial reports for interventions involving artificial intelligence | 2020 14 |
| MINIMAR | Reporting standards for artificial intelligence in health care | 2020 15 |
| CAIR | Guideline of Clinical AI Research | 2021 16 |
| CLEAR | EvaluAtion of Radiomics research | 2023 17 |
| — | Reporting machine learning analyses in clinical research | 2020 18 |
| CLAIM | Checklist for Artificial Intelligence in Medical Imaging | 2020 19 |
| DECIDE-AI | Guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence | 2022 20 |
| STREAM-URO | Reporting of Machine Learning Applications in Urology | 2021 21 |
However, the writing and editorial process lacks equivalent guidelines. Authors, peer reviewers and editors are dazzled by algorithms that promise efficiency in their work. This fascination exposes us to the risk of absolute trust in artificial intelligence, known as algorithmocracy: a government in which humans and machines obey algorithms 2.
We have signs that algorithms are not ideal in scientific publishing. For years, we have questioned the algorithms with which bibliometric indices classify (or disqualify?) scientific journals, yet we accept that research supervisory bodies consider them the gold standard for measuring scientific productivity. Authors frequently resort to artificial intelligence writing tools, such as ChatGPT, Bard and Bing, with little reflection on their limitations, which may introduce factual and reasoning errors into scientific writing 7. Editors may mistakenly accept the similarity percentage issued by anti-plagiarism algorithms as the rule for evaluating the originality of a manuscript, completely replacing expert judgment. Whenever artificial intelligence optimization is used, it should be remembered that technology does not change society; human intelligence defines the creation of applications, their use, and how they affect society. The opposite is to accept the thesis of technological determinism, which, although it will not lead us to an apocalyptic future like the one proposed by Skynet in the Terminator saga, will affect the equality, truth and originality of science 8.
The editorial guidelines of the Revista Colombia Médica accept the use of artificial intelligence in research, and the authors' adherence to the guidelines for publication of research based on artificial intelligence available on the Equator Network website will be the norm for the journal.
Additionally, Colombia Médica, as a member of the ICMJE (International Committee of Medical Journal Editors) and the WAME (World Association of Medical Editors), welcomes their recommendations regarding the definition of authorship and the use of artificial intelligence programs in the preparation and review of manuscripts submitted to the journal 9. These recommendations, explained in an article reproduced from the WAME, are:
- Non-human authors are not accepted.
- Authors should be transparent when using chatbots and provide information on their use.
- Authors are responsible for the information produced with a chatbot in their article (including accuracy and absence of plagiarism) and for proper attribution of all sources.
- Reviewers and editors should inform authors if they used chatbots to evaluate the manuscript or to generate reviews and correspondence, and should explain how they used them.
- Editors need appropriate tools to help them detect AI-generated or AI-altered content for the sake of science and the public and to help ensure the integrity of health information and reduce the risk of adverse health outcomes.
Colophon: If artificial intelligence optimizes our work, why do we have less free time?