Comparative Analysis of ChatGPT and Google Gemini in Generating Patient Educational Resources on Cardiac Health: A Focus on Exercise-Induced Arrhythmia, Sleep Habits, and Dietary Habits

Nithin Karnan, Sumaiya Fatima, Palwasha Nasir, Lovekumar Vala, Rutva Jani, Nahir Montserrat Moyano

Published 2025 in Cureus

ABSTRACT

Introduction: Patient education is crucial in cardiovascular health, aiding shared decision-making and improving adherence to treatment. Artificial intelligence (AI) tools, including ChatGPT (OpenAI, San Francisco, CA) and Google Gemini (Google LLC, Mountain View, CA), are transforming patient education by providing personalized, round-the-clock access to information, enhancing engagement, and improving health literacy. This paper aimed to compare the responses generated by ChatGPT and Google Gemini in creating patient education guides on "exercise-induced arrhythmia," "sleep habits and cardiac health," and "dietary habits and cardiac health."

Methodology: A comparative observational study was conducted evaluating three AI-generated guides: "exercise-induced arrhythmia," "sleep habits and cardiac health," and "dietary habits and cardiac health," produced by ChatGPT and Google Gemini. Responses were evaluated for word count, sentence count, grade level, ease score, and readability using the Flesch-Kincaid calculator, and for similarity score using the QuillBot (QuillBot, Chicago, IL) plagiarism tool. Reliability was assessed with the modified DISCERN score. Statistical analysis was conducted using R version 4.3.2 (The R Core Team, R Foundation for Statistical Computing, Vienna, Austria).

Results: ChatGPT-generated responses had a higher overall average word count than Google Gemini's; however, the difference was not statistically significant (p = 0.2817). Google Gemini scored higher on ease of understanding, though this difference was also not significant (p = 0.7244). There were no significant differences in sentence count or average words per sentence. ChatGPT tended to produce more complex content for certain topics, whereas Google Gemini's responses were generally easier to read. Similarity scores were higher for ChatGPT across all topics, while reliability scores varied by topic: Google Gemini performed better for exercise-induced arrhythmia and ChatGPT for sleep habits and cardiac health.

Conclusions: The study found no significant difference in ease score, grade score, or reliability between the AI-generated responses for cardiology patient education brochures. Future research should explore AI techniques across various disorders, ensuring up-to-date and reliable public information.
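The readability metrics used in the study can be illustrated with the standard Flesch-Kincaid formulas. The sketch below is a minimal illustration of those published formulas, not the authors' actual analysis code (which used an online calculator and R); the function names and example counts are hypothetical.

```python
def flesch_reading_ease(words, sentences, syllables):
    """Standard Flesch Reading Ease score: higher values indicate easier text
    (scores of 60-70 are generally considered plain English)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Standard Flesch-Kincaid Grade Level: approximate US school grade
    needed to understand the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hypothetical example: a 100-word passage with 5 sentences and 150 syllables
ease = flesch_reading_ease(100, 5, 150)    # ~59.6 (fairly difficult)
grade = flesch_kincaid_grade(100, 5, 150)  # ~9.9 (about 10th grade)
```

Both metrics depend only on average sentence length and average syllables per word, which is why a longer response (as ChatGPT tended to produce) is not automatically harder to read.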
