
Abstract
Assisted artificial intelligence (A-AI) has rapidly become a gold-standard approach for conducting data analysis and medical writing [1]. From executing complex statistical tasks to extracting relevant information and supporting manuscript drafting, A-AI is increasingly embedded in the daily workflow of clinical and translational researchers [2]. However, the fine line between being assisted by AI and becoming reliant on it, or even fully driven by it, is growing harder to define. This ambiguity raises essential questions about authorship, accountability, and the integrity of scientific output in the AI era. A growing number of early-career researchers are engaging with high-dimensional machine learning (ML) methodologies, often without comprehensive training in classical statistics or computational foundations. While the integration of such advanced tools reflects a welcome shift towards data-driven research, it also raises concerns about methodological rigor.
In certain instances, traditional statistical frameworks are overlooked, and AI-generated outputs, particularly those derived from large language models, are interpreted without sufficient critical appraisal. To illustrate this point, two examples of A-AI analysis follow.