Can Artificial Intelligence Practice Medicine?
Basic record fetching, assembly, and presentation will be required well before AI can assist in diagnosis and treatment.
By Peter A. McCullough, MD, MPH
A new paper landed in my inbox today titled: Software as a Medical Practitioner—Is It Time to License Artificial Intelligence? by Bressman et al. from the Department of Medicine, University of Pennsylvania, Philadelphia. The paper tackles the usual questions of licensing, responsibility, and liability as they apply to software tools used in clinical practice.
No doubt artificial intelligence is playing an ever-increasing role in medicine. From my perspective, this paper speculates on a late step in the evolution of AI, not the first steps. Here is what Bressman et al. are missing for AI (a hypothetical sketch of how these steps might fit together follows the list):
Obtaining permission for record gathering from disparate health systems, clinics, labs, and imaging services
Assembly of all information into a personal health timeline
Creation, registration, and patient setup of electronic medical records
Structured interviews
Synthesis of information for presentation to doctor or other healthcare provider
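To make these steps concrete, here is a minimal sketch in Python of what such a record pipeline might look like. Everything in it (HealthEvent, PatientTimeline, the consent check, the sample data) is a hypothetical illustration, not a description of any existing product or API; steps 3 and 4 (EMR registration and structured interviews) are omitted for brevity.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical illustration only: no real product or API is being described.

@dataclass
class HealthEvent:
    """One dated item in a patient's history (visit, lab result, image, note)."""
    when: date
    source: str    # e.g., "Acme Labs" (invented name)
    kind: str      # e.g., "lab", "imaging", "encounter"
    summary: str

@dataclass
class PatientTimeline:
    """Steps 1-2: permissioned record gathering assembled into one chronology."""
    patient_id: str
    consents: dict = field(default_factory=dict)   # source -> patient permission (step 1)
    events: list = field(default_factory=list)

    def add_event(self, event: HealthEvent) -> None:
        # Refuse records from any source the patient has not authorized.
        if not self.consents.get(event.source, False):
            raise PermissionError(f"No consent on file for source: {event.source}")
        self.events.append(event)
        self.events.sort(key=lambda e: e.when)     # keep the timeline chronological

    def synthesize(self) -> str:
        """Step 5: a plain-text summary for presentation to the clinician."""
        lines = [f"{e.when.isoformat()} [{e.kind}] {e.source}: {e.summary}"
                 for e in self.events]
        return "\n".join(lines) or "No records assembled."

# Example with invented data:
timeline = PatientTimeline(patient_id="demo-001", consents={"Acme Labs": True})
timeline.add_event(HealthEvent(date(2021, 3, 5), "Acme Labs", "lab", "HbA1c 6.1%"))
print(timeline.synthesize())   # -> 2021-03-05 [lab] Acme Labs: HbA1c 6.1%
```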
It is important that AI enthusiasts remain grounded in the very basic blocking and tackling of medicine. So far, not a single AI program has proposed, let alone accomplished, these five tasks in their entirety.
Please subscribe to FOCAL POINTS as a paying ($5 monthly) or founder member so we can continue to bring you the truth.
Peter A. McCullough, MD, MPH

The point about AI is that it is based on the human information that has been fed into it. So, for example, in 2021, AI would have advised that the vaccine was ‘safe and effective’. Would it have actually gone through the Pfizer trials, which indicated there was no safety data for the immunocompromised, pregnant women, etc.? That becomes the issue. It’s unlikely that AI will give alternative views... which is why, frankly, most of us here have subbed to this Substack.
Hi folks.
I am a programmer experimenting with ChatGPT as a programming assistant. While ChatGPT has proven very useful in answering simple technical questions, when I scaled up my questions it completely failed. Interestingly enough, it did so with grace, politeness, and empathy for its unfortunate user - myself. Bravo to the designers of the interface! At some points I even felt like I was talking to a human.
I found ChatGPT to be a very good information-retrieval program with a very "human" touch. It is valuable for everyday simple tasks, yet there is no guarantee of the correctness of its answers.
Unless AI functions on the basis of rules rather than statistics over masses of random data, I cannot see how it could be trusted to diagnose and suggest cures. Of course, things may change...
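To make that rules-versus-statistics distinction concrete, here is a toy sketch; every threshold, feature, and weight in it is invented for illustration and is not clinical guidance. The rule-based check can be audited line by line, while the statistical score comes from learned weights that offer a probability rather than an explanation.

```python
import math

# Toy illustration only: thresholds and weights are invented, not clinical guidance.

def rule_based_flag(temp_c: float, heart_rate: int) -> bool:
    """Rule-based: each condition is explicit and can be audited line by line."""
    return temp_c >= 38.0 and heart_rate >= 100

def statistical_flag(features: list, weights: list, bias: float) -> float:
    """Statistical: a logistic score learned from data; the rationale lives in the weights."""
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-score))   # a probability, not an explanation
```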
Thanks for your attention.