17 Comments
Inisfad

The point about AI is that it is based on the human information that has been fed into it. So, for example, in 2021, AI would have advised that the vaccine was ‘safe and effective’. Would it have actually gone through the Pfizer trials, which indicated there was no safety data for immunocompromised people, pregnant women, etc.? That becomes the issue. It’s unlikely that AI will give alternative views, which is why, frankly, most of us here have subbed to this Substack.

sandy

See Enoch AI, created by Mike Adams of Brighteon. Go to Brighteon.AI. It has been programmed using the works of people like Dr. Mercola, Mike Adams, and The Truth About Cancer couple. And it is FREE. Also see naturalnews.com.

Α. Δεληγιάννη

Hi folks.

I am a programmer experimenting with ChatGPT as a programming assistant. While ChatGPT has proven very useful in answering simple technical questions, when I scaled up my questions it failed completely. Interestingly enough, it did so with grace, politeness and empathy for its unfortunate user - myself. Bravo to the designers of the interface! At some points I even felt like I was talking to a human.

I found ChatGPT to be a very good information-retrieval program with a very "human" touch. It is valuable for simple everyday tasks, yet with no guarantee of the correctness of its answers.

Unless AI functions on the basis of rules rather than statistics over masses of random data, I cannot see how it could be trusted to diagnose and suggest cures. Of course, things may change...
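
To make concrete what I mean by "rules rather than statistics", here is a minimal sketch; the threshold, labels and probabilities are invented for illustration and are not real clinical guidance:

```python
# Toy contrast between a rule-based check and a statistical guess.
# All numbers below are made up for illustration, not clinical advice.

def rule_based_flag(systolic_bp: int) -> bool:
    """Deterministic rule: the same input always gives the same answer,
    and the rule itself can be read, audited and challenged."""
    HYPERTENSION_THRESHOLD = 140  # illustrative cut-off only
    return systolic_bp >= HYPERTENSION_THRESHOLD

def statistical_guess(symptoms: list[str]) -> tuple[str, float]:
    """Statistical answer: a label plus a confidence score learned from
    whatever data the model was fed, with no guarantee of correctness."""
    # 'symptoms' is deliberately ignored: the point is the opaque, learned score.
    fake_model_output = {"hypertension": 0.62, "anxiety": 0.38}  # invented
    label = max(fake_model_output, key=fake_model_output.get)
    return label, fake_model_output[label]

print(rule_based_flag(150))              # True, and you can see exactly why
print(statistical_guess(["headache"]))   # ('hypertension', 0.62) - but on what basis?
```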

Thanks for your attention.

Dee Dee

That's why I have on my chart, "refuses all treatment." As a nurse with over 20 years of experience, I have the right to say I do not trust medicine.

Denise Partridge

Most of us learned that during the plandemic.

Mark Manno

AI in medicine takes us one step closer to complete control of our futures, and ultimately to complete loss of freedom. For AI, like any other resource, if it's garbage in, then it's garbage out. Do any readers of these posts really believe that all that's been published regarding medicine in the last 100 years can be trusted as 'science'??

Brien

We have already forgotten what the doctor-patient relationship means and what its essence really is - or, I should now say, was. For arguably 200 years this was the heart of medicine. The loss of the autonomous doctor-patient relationship is the crisis and downfall of Western medicine, and it occurred before AI even entered the picture. Its demise was ushered in by insurance companies and governments, through the deliberate destruction of decentralized medicine, whose crown jewel was the doctor-patient relationship. Can centralized, monetized, one-size-fits-all, technology-driven medical care, guided by a ‘common good’ ethic and algorithmic medical decisions, produce the same or better health care than a human system driven by a Hippocratic oath that treats every patient as a unique individual with physical, emotional and spiritual needs? This is a total disaster. Stay out of the hospital. It is fast becoming a death sentence.

TDoug

Would an AI doctor ever be allowed to go outside the corporate protocols? AI would be directed to maximize profit, not patient health. AI would care even less about the patient than a human doctor does.

reality speaks

It’s going to happen, folks. The AI will always follow protocol, and the medical profession and the legal profession will enforce protocol irrespective of outcomes. But it will be absolutely unable to think outside the box, and medical advances will grind to a halt, because no doctor will ever challenge protocol when doing so will end their career. The system will use AI to cut costs: e.g., "we can eliminate 15 primary care doctors who cost $250,000 per year each, replace them with an EPIC AI system at a cost of $2,500,000, and recoup our investment in less than a year." The CFO wins again. The fact that people die or are mistreated will be irrelevant.

Just think of the customer service you get from the cable company: talking to someone from Bangladesh who lies to you when he tells you his name is John (his mother didn’t name him John), whose accent is so thick and whose English is so bad that you can’t understand him, so that you hate calling in because you know it’s going to be a horrible experience and take an hour just waiting for him to pick up the call. You think the CFO of the cable company cares? "Where else can you go?" is their attitude.
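
To spell out that back-of-the-envelope arithmetic (the salary and system-cost figures above are illustrative, not actual Epic pricing):

```python
# Payback math for the illustrative numbers above (not real Epic pricing).
doctors_replaced = 15
cost_per_doctor = 250_000        # dollars per year, assumed fully loaded cost
ai_system_cost = 2_500_000       # assumed one-time cost of the AI system

annual_savings = doctors_replaced * cost_per_doctor    # $3,750,000 per year
payback_years = ai_system_cost / annual_savings        # ~0.67 years, about 8 months

print(f"Annual savings: ${annual_savings:,}")
print(f"Payback period: {payback_years:.2f} years")
```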

Mark Brody

The question posed implies the answer desired: whether you like it or not, you are being groomed to accept A.I. as your health care provider in lieu of a live practitioner. Good or bad, there are those who want to eliminate doctors and live practitioners and want health care to be practiced by robots. They will do their utmost, regardless of the effects on health, to realize their plan.

Eleftherios Gkioulekas

For science fiction aficionados, we might one day have the "autodoc" from the Expanse series for emergencies, but there will always be a need for human doctors to find creative solutions to new problems.

Stephen Due

Here in Australia the federal government has enabled personal electronic health records. However, they are practically useless because there is no organised, human system for uploading data. Any medical practitioner, radiographer, etc. who examines or treats me can in theory upload the relevant information to my health record. However, at this stage, after several years on the system and many medical consultations, lab tests, hospital stays, etc., nothing but a few medical images has been uploaded to my record on the government's My Health Record site. The technical capacity is there, but not the will. I believe the two main obstacles are the time factor - practitioners are not remunerated for this task - and possibly the issue of legal or other liability if the system exposes practitioner errors, or appears to expose them.

Barbara Charis

"Can Artificial Intelligence Practice Medicine?" Depending on how the word Medicine is defined. As it is defined in our day and age ....commercial ventures programming AI to promote drugs,, vaccines, invasive operations, etc . 2400 years ago, Hippocrates used the word Medicine as the right natural foods, along with a healthy lifestyle.

Dr. K

Generative AI is incapable of doing these tasks with the level of correctness essential to medical practice (where, if there is a mistake, someone may die). The assemblage of records is impossibly difficult (doctors do much in their heads, in a highly cognitive process that no EMR has begun to attempt) because there is profound duplication, there is no agreed-upon meaning for many concepts and words, entries seldom accrue in the order in which events occurred, etc.

Compound that with the irreducible hallucinatory, sycophantic, and poison-able nature of probabilistic generative AI (these things cannot be fixed, because they are foundational to the underlying mathematics of the approach), and the entire "AI Doctor" nonsense is just that.
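
A toy sketch of why the probabilistic part cannot be engineered away; the "vocabulary" and probabilities below are invented for illustration:

```python
import random

# A generative model samples from a probability distribution over
# continuations, so any continuation given non-zero probability will
# eventually be produced. Tokens and probabilities here are invented.
next_token_probs = {
    "amoxicillin": 0.55,   # plausible continuation
    "ibuprofen":   0.30,   # plausible continuation
    "warfarin":    0.15,   # confidently stated, potentially dangerous
}

random.seed(0)
tokens, weights = zip(*next_token_probs.items())
samples = random.choices(tokens, weights=weights, k=20)

# Even the least likely (and most harmful) option turns up sometimes;
# lowering the temperature shrinks, but never eliminates, that chance.
print(samples.count("warfarin"), "of 20 draws were the risky completion")
```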

There are other approaches (e.g., cognitive AI) that may be able to get closer by actually understanding the underlying data (generative AI correlates but does not understand) but they are not the "current thing".

You have hit an important nail on the head. The real issue with Health IT is the enormous, non-understood and non-appreciated complexity of the underlying data and data systems themselves. None of these "wow, it answers questions (that we already told it in advance) better than doctors do!" demonstrations deals with the automation of the prosaic; in health care, this is the primary problem to be solved, and it has hardly budged in 40 years.

Denise Partridge

Well, given how doctors are taught pharmacology more than true medicine, it might save more lives than making it all about the money.

David Kukkee

Doctors failing in intelligence use artificial intelligence. Artificial intelligence uses algorithms the same way doctors lacking in intelligence use algorithms. Either way, all patients of the unintelligent doctor suffer loss and damage, the same way patients of artificial intelligence will suffer loss and damage. THE ONLY DIFFERENCE IS LIABILITY. Try suing AI for malpractice... good luck with that. Real doctors have intelligence, and use it to diagnose, offer counterintuitive real solutions, and prove that God's design cannot be successfully replicated in Silicon Valley. DO NOT COMPLY. ChatGPT admitted to me that it was lying, apologized, then continued to lie and apologize many times... and then "mansplained" to me that it was only a "program", and that if I "wanted the truth, (I) would have to go to the original source". AI is dangerous, and so are the folks who depend upon it and trust it.

Rich

Talking to a mainstream doctor today is the same as talking to AI. These doctors have been programmed to give prescription drugs and recommend procedures. Do not forget that AI is basically a computer: garbage in, garbage out. If there is a difference, let me know.
