Before the C19 scamdemic, around 800,000 Americans a year were killed or disabled by “diagnosis errors” – indications are that Artificial Intelligence (AI) applications such as ChatGPT could halve this.
“One of the important reasons for these errors is failure to consider the diagnosis when evaluating the patient.”
Eric Topol published this article on his Substack, updating a report published on the “Science” website.
Toward the eradication of medical diagnostic errors (substack.com)
Toward the eradication of medical diagnostic errors | Science
First off, I am wondering if the term “Artificial Intelligence” is correct – rather, ChatGPT is better described as an algorithm that filters billions of internet entries according to rules (bias) and outputs “reports” that summarize the most popular entries with the most “qualifications”.
Here is the official definition of ChatGPT – other AI bots are going to have similar “functionality”.
“ChatGPT (Chat Generative Pre-trained Transformer) is a chatbot developed by OpenAI and launched on November 30, 2022. Based on a large language model, it enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language.”
Note the absence of the word “Intelligence”, let alone “wisdom” or “knowledge”.
Please read Eric’s article, as it provides insights into the possible future of improved outcomes – regardless of technique – and it is even more significant now, given the need to diagnose the impact on specific parts of our immune system that have been damaged by injections of the C19 mRNA spike venom.
Here are a few snippets to whet your appetite.
On the use of neural networks:
“There are a few ways that artificial intelligence (AI) is emerging to make a difference to diagnostic accuracy. In the era of supervised deep learning with convolutional neural networks trained to interpret medical images, there have been numerous studies that show accuracy may be improved with AI support beyond expert clinicians working on their own”.
“… progress that has been made with transformer models, enabling multimodal inputs, there is expanded potential for generative AI to facilitate medical diagnostic accuracy. That equates to a capability to input all of an individual’s data, including electronic health records with unstructured text, image files, lab results, and more.”
“…unstructured text”? As in V-Safe text fields? Maybe, maybe not!
What are some of the potential diagnostic benefits from these “AI” models?
“A large randomized study of mammography in more than 80,000 women being screened for breast cancer, with or without AI support to radiologists, showed improvement in accuracy with a considerable 44% reduction of screen-reading workload.”
“"A systematic analysis of 33 randomized trials of colonoscopy, with or without real-time AI machine vision, indicated there was more than a 50% reduction in missing polyps and adenomas, and the inspection time added by AI to achieve this enhanced accuracy averaged only 10 s.”
“The LLM was nearly twice as accurate as physicians for accuracy of diagnosis, 59.1 versus 33.6%, respectively. Physicians exhibited improvement when they used a search, and even more so with access to the LLM.” (LLM = Large Language Model.)
“A new preprint report by Google DeepMind researchers took this another step further. Using 20 patient actors to present (by text) 149 cases to 20 primary care physicians, with a randomized design, the LLM (Articulate Medical Intelligence, AMIE) was found to be superior for 24 of 26 outcomes assessed, which included diagnostic accuracy, communication, empathy, and management plan.”
Now, the AI models have biases, but so do clinicians. Perhaps there is a chance that the 800,000 annual deaths and disabilities can be halved – maybe by making the AI models “compete” for solutions, overseen by clinicians.
How much information do these AI models have about the impact on the immune system from injections of C19 mRNA spike venom or on evolving treatment protocols? Maybe this is “politically unacceptable” for those programming the “rules” for the algorithms of AI language models.
More importantly, how does this impact the “one size fits all” vaxx mandates? If the AI language models can diagnose patients ex-post (after the event), why can’t they be used to prevent treatments ex-ante (using risk estimates based on past data)?
I am still wondering which is the most evil aspect of mandates. Is it that no exceptions are allowed to reflect the medical history or propensity for damage of those vulnerable to side effects from the injection of spike venom? Or is it the supplanting of a private conversation between doctor and patient by the world at large (especially by the ignorant and emotional among politicians, the MSM and the regulators)?
Onwards!
Please subscribe or donate via Ko-fi – any amount from 3 bucks upwards. Don’t worry and God Bless, if you can’t or don’t want to. Ko-fi donations here: https://ko-fi.com/peterhalligan - an annual subscription of 100 bucks is one third less than a $3 Ko-fi donation a week!