JMIR AI

A new peer-reviewed journal focused on research and applications for the health artificial intelligence (AI) community.

Editors-in-Chief:

Khaled El Emam, PhD, Canada Research Chair in Medical AI, University of Ottawa; Senior Scientist, Children’s Hospital of Eastern Ontario Research Institute; Professor, School of Epidemiology and Public Health, University of Ottawa, Canada

Bradley Malin, PhD, Accenture Professor of Biomedical Informatics, Biostatistics, and Computer Science; Vice Chair for Research Affairs, Department of Biomedical Informatics; Affiliated Faculty, Center for Biomedical Ethics & Society, Vanderbilt University Medical Center, Nashville, Tennessee, USA


Impact Factor: 2.0 | CiteScore: 2.5

JMIR AI is a new journal that focuses on the applications of AI in health settings. This includes contemporary developments as well as historical examples, with an emphasis on sound methodological evaluations of AI techniques and authoritative analyses. It is intended to be the main source of reliable information for health informatics professionals to learn about how AI techniques can be applied and evaluated. 

JMIR AI is indexed in DOAJ, PubMed, PubMed Central, Web of Science Core Collection, and Scopus.

JMIR AI received an inaugural Journal Impact Factor of 2.0 in the 2025 release of the Journal Citation Reports from Clarivate.

JMIR AI received an inaugural Scopus CiteScore of 2.5 (2024), placing it in the 68th percentile as a Q2 journal.

 

Recent Articles

Reviews in AI

Large language model (LLM)–based chatbots have rapidly emerged as tools for digital mental health (MH) counseling. However, evidence on their methodological quality, evaluation rigor, and ethical safeguards remains fragmented, limiting interpretation of clinical readiness and deployment safety.

Foundation Models and Their Applications in AI

Large language model (LLM)–based conversational agents have been increasingly used in digital health interventions. However, their specific application to physical activity (PA) and cognitive training—two critical well-being domains—has not been systematically mapped. In fact, these domains share an important need for personalized, adaptive support and conversational engagement, making them relevant targets for examining how LLM-based agents are currently conceptualized and deployed.

Applications of AI

Low back pain (LBP) is a leading cause of disability worldwide, affecting people of all ages while showing increasing prevalence among younger demographics. Patients may present with different symptoms and treatment responses despite identical magnetic resonance imaging results, making it difficult to determine whether surgical and medical interventions are appropriate.

Reviews in AI

High-quality nursing services are essential for improving patient satisfaction and health outcomes. Today, artificial intelligence (AI) applications such as ChatGPT offer potential solutions to enhance patient education and assist nurses in providing more accurate and personalized information. Despite its promising potential in nursing education, concerns regarding information accuracy, privacy, and ethical considerations must be addressed.

Reviews in AI

Type 2 diabetes mellitus (T2D) is a rapidly growing global health concern requiring innovative treatment methods. Ozempic (semaglutide), a glucagon-like peptide-1 receptor agonist, has shown consistent effectiveness in lowering blood glucose levels, supporting weight loss, and minimizing cardiovascular complications. In parallel, artificial intelligence (AI) complements these efforts by converting raw data from wearable devices, electronic health records, and medical imaging into practical insights for efficient, personalized treatment plans.

Applications of AI

Artificial intelligence (AI) models are increasingly being used in medical education. Although models like ChatGPT have previously demonstrated strong performance on United States Medical Licensing Examination (USMLE)–style questions, newer AI tools with enhanced capabilities are now available, necessitating comparative evaluations of their accuracy and reliability across different medical domains and question formats.

Reviews in AI

Artificial intelligence (AI) tools are being developed within a rapidly evolving technological landscape. The convergence of ethical, technical, and research-methods considerations is crucial for multidisciplinary teams aiming to produce effective AI tools. The success of these tools after deployment hinges on the interplay between the rigorous decision-making processes that shape an AI system's development and output and stakeholders' capacity to act on the AI's recommendations.

Foundations of AI

Despite the significant post–COVID-19 pandemic surge in research using symptom data and machine learning (ML) for patient screening, data on patient trajectories and epidemiological conditions, although crucial, have remained underused.

Viewpoints and Perspectives in AI

With the rapid development of artificial intelligence (AI), particularly large language models, there is growing interest in adopting AI approaches within academic medical centers (AMCs). However, the vast amounts of data required for AI and the sensitive nature of medical information pose significant challenges to developing high-performing models at individual institutions. Furthermore, recent changes in government funding priorities may result in the decentralization of biomedical data repositories, risking significant barriers to effective data sharing and robust model development. This has generated significant interest in federated learning (FL), which enables collaborative model training without transferring data between institutions, thereby enhancing the protection of proprietary and sensitive information. While FL offers a crucial pathway to multi-institutional AI development that maintains data privacy, it also exposes AMCs to novel governance, security, and operational risks that are not fully addressed by existing procedures. In response, this manuscript provides a perspective grounded both in leading international standards (the NIST AI RMF [National Institute of Standards and Technology Artificial Intelligence Risk Management Framework] and ISO/IEC [International Organization for Standardization/International Electrotechnical Commission] 42001) and in the real-world governance experience of AMC leadership. We present a risk differentiation framework, an FL risk matrix, and a set of essential governance artifacts—each mapped to key institutional challenges and reviewed for alignment with core standards, but offered as pragmatic, illustrative guides rather than prescriptive checklists. Together, these tools represent a novel resource to support AMC security, privacy, and governance leaders with standards-informed, context-sensitive guidance for addressing the evolving risks of FL in biomedical research and clinical environments.


Preprints Open for Peer Review

We are working in partnership with