Continuous and bimonthly publication
ISSN (on-line): 1806-3756

Open Access Peer-Reviewed
AUTHORS’ REPLY: Accuracy of ChatGPT in answering asthma-related questions

Bruno Pellozo Cerqueira1, Vinicius Cappellette da Silva Leite1, Carla Gonzaga França1, Fernando Sergio Leitão Filho2, Sônia Maria Faresin2, Ricardo Gassmann Figueiredo3, Andrea Antunes Cetlin4, Lilian Serrasqueiro Ballini Caetano2, José Baddini-Martinez2

We read with great interest the correspondence regarding our recently published article and would sincerely like to thank the authors for their thoughtful and insightful comments. We greatly appreciate the opportunity to discuss the scope and implications of our work further.
 
We recognize that, like any exploratory study, ours has inherent limitations, some of which we addressed in the original manuscript. We intentionally limited the number and wording of the questions to keep the article concise and focused on the main asthma-related issues.
 
Regarding the statistical approach, our team, together with the statistical advisors, selected the content validity coefficient (CVC) to assess agreement among evaluators. We determined that this method was appropriate for the objectives of our study and that it provided a reliable measure of inter-rater agreement.
 
We also agree that the perceptions of medical professionals and laypeople may differ considerably, and we recognize this as the primary limitation of our study. To incorporate evaluations from the general population, it would have been necessary to conduct pre- and post-tests, requiring a methodology distinct from what was proposed. The objective of our study, however, was to assess the perspectives of physicians experienced in managing asthma patients in both private and public outpatient clinics, focusing on what they consider essential for patients to understand about the disease. While we acknowledge that this approach is subjective and limited, we still regard it as valuable data that adds to the discussion.
 
The additional questions raised represent promising directions for future research. The decision to avoid highly specific prompts was made to simulate real-life interactions better; however, studies utilizing personalized prompts may produce different outcomes. Involving real patients with chronic asthma and integrating both expert and patient assessments could establish new and meaningful criteria for evaluating AI-generated medical information. Furthermore, comparative analyses of various large language models, including those developed for scientific or medical applications such as OpenEvidence, constitute an important next step in this field.
We appreciate the authors’ valuable contribution. Their observations highlight significant avenues for further research, and such scientific dialogue enhances the understanding and development of large language model applications in the medical field.

© All rights reserved 2025 - Jornal Brasileiro de Pneumologia