JMIR AI
A new peer-reviewed journal focused on research and applications for the health artificial intelligence (AI) community.
Editors-in-Chief:
Khaled El Emam, PhD, Canada Research Chair in Medical AI, University of Ottawa; Senior Scientist, Children’s Hospital of Eastern Ontario Research Institute; Professor, School of Epidemiology and Public Health, University of Ottawa, Canada
Bradley Malin, PhD, Accenture Professor of Biomedical Informatics, Biostatistics, and Computer Science; Vice Chair for Research Affairs, Department of Biomedical Informatics; Affiliated Faculty, Center for Biomedical Ethics & Society, Vanderbilt University Medical Center, Nashville, Tennessee, USA
Impact Factor: 2.0 | CiteScore: 2.5
Recent Articles

Large language model (LLM)–based conversational agents have been increasingly used in digital health interventions. However, their specific application to physical activity (PA) and cognitive training—two critical well-being domains—has not been systematically mapped. In fact, these domains share an important need for personalized, adaptive support and conversational engagement, making them relevant targets for examining how LLM-based agents are currently conceptualized and deployed.

Low back pain (LBP) is a leading cause of disability worldwide, affecting people of all ages while showing increasing prevalence among younger demographics. Patients may present with different symptoms and treatment responses despite identical magnetic resonance imaging results, making it difficult to determine whether surgical and medical interventions are appropriate.

High-quality nursing services are essential for improving patient satisfaction and health outcomes. Today, artificial intelligence (AI) applications such as ChatGPT offer potential solutions to enhance patient education and assist nurses in providing more accurate and personalized information. Despite its promising potential in nursing education, concerns regarding information accuracy, privacy, and ethical considerations must be addressed.

Type 2 diabetes mellitus (T2D) is a rapidly growing global health concern requiring innovative treatment methods. Ozempic (semaglutide), a glucagon-like peptide-1 receptor agonist, has demonstrated consistent effectiveness in lowering blood glucose levels, supporting weight loss, and minimizing cardiovascular complications. In parallel, artificial intelligence (AI) complements these efforts by converting raw data from wearable devices, electronic health records, and medical imaging into practical insights for efficient, personalized treatment plans.

Artificial intelligence (AI) models are increasingly being used in medical education. Although models like ChatGPT have previously demonstrated strong performance on United States Medical Licensing Examination (USMLE)–style questions, newer AI tools with enhanced capabilities are now available, necessitating comparative evaluations of their accuracy and reliability across different medical domains and question formats.

Artificial intelligence (AI) tools are being developed in a rapidly evolving technological landscape. The convergence of ethical, technical, and methodological considerations is crucial for multidisciplinary teams aiming to produce effective AI tools. The success of these tools after deployment hinges on the interplay between rigorous decision-making during the AI system’s development and stakeholders’ capacity to act on the AI’s recommendations.

With the rapid development of artificial intelligence (AI), particularly large language models, there is growing interest in adopting AI approaches within academic medical centers (AMCs). However, the vast amounts of data required for AI and the sensitive nature of medical information pose significant challenges to developing high-performing models at individual institutions. Furthermore, recent changes in government funding priorities may result in the decentralization of biomedical data repositories, risking significant barriers to effective data sharing and robust model development. This has generated significant interest in federated learning (FL), which enables collaborative model training without transferring data between institutions, thereby enhancing the protection of proprietary and sensitive information. While FL offers a crucial pathway to multi-institutional AI development while maintaining data privacy, it also exposes AMCs to novel governance, security, and operational risks that are not fully addressed by existing procedures. In response, this manuscript provides a perspective grounded in both leading international standards (the NIST AI RMF [National Institute of Standards and Technology Artificial Intelligence Risk Management Framework] and ISO/IEC 42001 [International Organization for Standardization/International Electrotechnical Commission]) and in the real-world governance experience of AMC leadership. We present a risk differentiation framework, an FL risk matrix, and a set of essential governance artifacts—each mapped to key institutional challenges and reviewed for alignment with core standards but offered as pragmatic, illustrative guides rather than prescriptive checklists. Together, these artifacts represent a novel resource to support AMC security, privacy, and governance leaders with standards-informed, context-sensitive guidance for addressing the evolving risks of FL in biomedical research and clinical environments.
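The federated learning setup the abstract describes—institutions train locally and share only model parameters, never patient data, with a server averaging the updates—can be sketched in a few lines. This is a minimal illustration of federated averaging (FedAvg, the canonical FL scheme), not code from the manuscript; the institution names, toy data, and simple linear model are all illustrative assumptions.

```python
# Minimal FedAvg sketch: each "institution" runs local gradient descent on
# its own data, and a central server averages the returned weights,
# weighted by each site's sample count. Raw data never leaves a site.

def local_update(weights, data, lr=0.1):
    """One local training pass at an institution (toy linear model,
    plain SGD on squared error). `data` is a list of (features, target)."""
    w = list(weights)
    for x, y in data:
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def fedavg(global_weights, institution_datasets, rounds=50):
    """Server loop: broadcast current weights, collect each site's local
    update, and average the updates weighted by sample count."""
    w = list(global_weights)
    for _ in range(rounds):
        updates, sizes = [], []
        for data in institution_datasets:
            updates.append(local_update(w, data))  # only weights are shared
            sizes.append(len(data))
        total = sum(sizes)
        w = [sum(n / total * u[i] for n, u in zip(sizes, updates))
             for i in range(len(w))]
    return w

# Two hypothetical sites whose data both follow y = 2*x0 + 1
# (the constant second feature acts as a bias term).
site_a = [((1.0, 1.0), 3.0), ((2.0, 1.0), 5.0)]
site_b = [((3.0, 1.0), 7.0), ((4.0, 1.0), 9.0)]
w = fedavg([0.0, 0.0], [site_a, site_b])
```

The governance risks the abstract raises sit around this loop—who may join a round, how parameter updates are secured in transit, and how the averaged model is validated—rather than inside it.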