“Artificial intelligence systems [in healthcare contexts] require consistent monitoring and staffing to put in place and to keep them working well.”
“Evaluating whether these products work is challenging. Evaluating whether they continue to work — or have developed the software equivalent of a blown gasket or leaky engine — is even trickier.”
“‘Even in the best case, the [LLMs] had a 35% error rate’”
(To be clear: some of this article is about LLMs, and some is about predictive algorithms that I assume are traditional, non-generative AI. So this is partly an LLM issue, but also partly a non-LLM issue.)