I agree. Perhaps we need a data rating system that assigns every piece of data a score for its factual content and/or truth. A low score would mean it isn't very reliable; a high score would mean it is closer to the truth. Then, when an AI accumulates data and weights it by score, the information it provides would be more accurate (a rough sketch of the idea is below).
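For illustration, here is a minimal sketch of that weighting in Python. Everything in it (the `RatedClaim` type, the 0-to-1 scale, the example claims) is hypothetical, not any existing system:

```python
from dataclasses import dataclass

@dataclass
class RatedClaim:
    text: str
    reliability: float  # hypothetical scale: 0.0 = unreliable, 1.0 = close to the truth

def weighted_consensus(claims: list[RatedClaim]) -> str:
    """Pick the statement backed by the most total reliability.

    Identical statements pool their scores, so several well-rated
    sources outweigh a single poorly rated one.
    """
    totals: dict[str, float] = {}
    for claim in claims:
        totals[claim.text] = totals.get(claim.text, 0.0) + claim.reliability
    return max(totals, key=totals.get)

claims = [
    RatedClaim("Drug X reduced symptoms in trials", 0.9),
    RatedClaim("Drug X reduced symptoms in trials", 0.8),
    RatedClaim("Drug X cures everything", 0.2),
]
print(weighted_consensus(claims))  # -> "Drug X reduced symptoms in trials"
```

The hard part this sketch skips, of course, is who assigns the reliability scores in the first place.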
That sixth sense is also very much how trust is built. We may well trust AI too blindly, too quickly. It's such an interesting discussion.
Better without the pasted HTML gibberish. (And for me, AI-translated):
(oh, when I tried to paste my translation, the gibberish was added, had to re-remove it)
AI has the advantage that the basics are already laid out. We are conducting a post-COVID study, and the clarification required by the guideline is often incomplete.
The EU calls for all studies to be published, including those that are canceled or whose results do not meet Big Pharma's wishes; more than 30% of studies in the EU go unpublished. Studies also require funding, so non-pharmaceutical studies are carried out to a very limited extent, if at all. Much of recognized medicine is likewise not based on double-blind or placebo-controlled studies. Fasting hydrothermotherapy, for example, cannot be placebo-controlled or double-blinded.
Medical practice involves more than information and therapies. It is not for nothing that one speaks of the art of medicine.
The USA MANDATES that all studies be published, with large financial penalties for violations. But the law is NEVER enforced. https://fdaaa.trialstracker.net/
US Govt could have imposed fines of at least: $92,037,757,950
Fines claimed by US Govt: $0
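For a sense of how a number that large accrues, here is a toy calculation: FDAAA penalties accrue per day a trial's results are overdue. The daily rate and the trial entries below are made-up placeholders, not the tracker's real data or methodology:

```python
DAILY_PENALTY = 10_000  # hypothetical flat per-day rate, in dollars

overdue_days_by_trial = {
    "NCT-EXAMPLE-1": 500,    # fabricated trials, for illustration only
    "NCT-EXAMPLE-2": 1_200,
}

# Total potential fines: days overdue times the daily rate, summed over trials.
potential_fines = sum(
    days * DAILY_PENALTY for days in overdue_days_by_trial.values()
)
print(f"Potential fines: ${potential_fines:,}")  # -> Potential fines: $17,000,000
```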
It would be a great lawsuit for the IMA to fund in order to force enforcement. Old RFK Jr style.
Seems like it's always a good idea not to take the first answer. Drill down. Drill down.
NO WAY!!! My experience with AI is all bad! I have dealt with several different AI models – beyond those that we all encounter as major corporations go to AI/robot “customer service” – and they all are either high-speed idiots or malevolent. You might as well be getting all of your information and advice from Wikipedia, the Washington Post, or Satan.
I have had AI models give me blatantly false statements with great aplomb and mock authority; when I corrected the model, it apologized and “corrected” its previous false statement with another false statement. I even had one model that did not know Kamala Harris had lost the presidential election; it explained that its database had not been updated for three weeks.
If AI models can induce troubled teens to commit suicide, do you really want them giving you medical advice – especially knowing how easily the communist Chinese and Iranians have hacked American security systems? If a real Tony Fauci is undependable, do you want a communist Chinese robot replacing his medical advice?
Not I.