Health system actors don’t take chatbots seriously. At the doctor’s office, the most important piece of information is to fax the LMA card to the pharmacy when the spacer is prescribed. In the medical program, the discussion is about cheating with ChatGPT. In Läkartidningen, caution is urged: ChatGPT invents facts and “guesses” sources. The zeitgeist, it is said, overestimates the possibilities.

But the debate misses how thoroughly the technology shift shakes up the system. GPT-4 is the valedictorian of biology, law, art history, medicine, and psychology [1]. The obstacles to automating our work over the next decade are primarily institutional. We can’t use today’s models (GPT-4, Bard, Bing AI, Claude, AutoGPT, and so on) for GDPR reasons, but models that can run in closed systems within healthcare are on the way. A student tells me that most course participants let ChatGPT write their personal statements for AT (internship) applications. Teaching, research, and the clinic will never be the same.

I want to address two claims:

  • “ChatGPT is just a statistical model”: Yes, the interactions of the silicon atoms in the computer can be described with rounded numbers, but in that sense you, too, are a statistical model. Even your neurons can be described with mathematical models [2]. ChatGPT does not respond with the most likely word given all the text on the internet: the model outperforms its predecessors precisely because it is trained to track human feedback (“reward”) rather than the most likely word [3]. A standard formulation of that objective is sketched after this list.
  • “ChatGPT is at the top of the hype curve”: Gartner’s hype model is not based on empirical data [4]. It presupposes one innovation at a time, but today’s paradigm shifts arrive closer together than ever before. Apps update while you sleep. The interplay between hype and actual capability becomes dynamic. Before we have solved the institutional obstacles to broad adoption in healthcare, better AI models will already have been launched.
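
To make the “reward” point concrete: a standard published formulation of the reinforcement-learning-from-human-feedback objective (the public recipe behind ChatGPT-style tuning; OpenAI’s internal details may differ) is

\[
\max_{\theta}\;\mathbb{E}_{x\sim\mathcal{D},\,y\sim\pi_{\theta}(\cdot\mid x)}\bigl[r_{\phi}(x,y)\bigr]\;-\;\beta\,\mathrm{KL}\bigl(\pi_{\theta}(\cdot\mid x)\,\big\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\bigr),
\]

where \(\pi_{\theta}\) is the model being tuned, \(\pi_{\mathrm{ref}}\) is the purely likelihood-trained baseline, \(r_{\phi}\) is a reward model fitted to human preference judgments, and \(\beta\) is a brake that keeps the tuned model from drifting too far from plain next-word statistics. The first term is precisely what “most likely word” training lacks.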

I see the following effects:

  • Care providers: The models should be explored with synthetic data; a sketch of the idea follows this list. Can they help us write patient records? It seems so [5]. Can they give us suggestions in difficult situations? The next ChatGPT (now in testing) can retrieve and summarize current status reports. Healthcare cannot feed personal data into ChatGPT, but the opportunities for self-care are growing. Soon patients will show up with sensible, accurate answers from GPT. Let’s find good ways to help them!
  • Payers: How will the need for prescribing support and the design of payment models be affected? What happens if tasks can be shifted to nurses? Can overcoding and undercoding of DRGs be detected by automated sampling? A toy example follows this list.
  • Researchers: Microsoft and Google have announced that the models will be integrated into all their products. Writing without AI will soon be like writing without reference management software.
  • Teachers: The doctors of the future need to be trained in chat models. The Higher Education Act’s goal of “demonstrating the ability to use digital tools” can hardly be interpreted any other way.
  • Auditors: Authorities with supervisory responsibility should explore how the models can simplify auditing and systems analysis. Much shorter processing times are within reach.
  • Research funders: Plan to announce dedicated funding for systematic, patient-oriented chatbot research.
  • Union representatives: Take the lead and chart the future! Perhaps we will soon no longer have to run faster every year.
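
As a taste of the synthetic-data exploration suggested for care providers above: a minimal sketch, assuming a locally hosted model behind an OpenAI-compatible endpoint (the URL, model name, field names, and prompt are all illustrative assumptions, not a validated template, and the visit data are entirely synthetic).

```python
import json

import requests  # assumes the requests package is installed

# Hypothetical local, OpenAI-compatible endpoint -- no data leaves the building.
ENDPOINT = "http://localhost:8080/v1/chat/completions"

# Entirely synthetic visit data; never real personal data.
synthetic_visit = {
    "age": 54,
    "reason": "follow-up, asthma",
    "findings": "lungs clear, peak flow 420 L/min",
    "plan": "continue inhaled steroid, new spacer prescribed",
}

prompt = (
    "Draft a short outpatient note from the following structured data. "
    "Use neutral clinical language:\n" + json.dumps(synthetic_visit, indent=2)
)

response = requests.post(
    ENDPOINT,
    json={
        "model": "local-model",  # placeholder name for the locally served model
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature: we want a sober note, not creativity
    },
    timeout=60,
)
draft = response.json()["choices"][0]["message"]["content"]
print(draft)  # a clinician still reads, corrects, and signs the note
```

The point of the sketch is the workflow, not the model: draft from structured synthetic data, then evaluate against notes clinicians actually sign.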
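
And for the payers’ question about DRG coding, a toy sketch of automated sampling. It assumes we already have, for each discharge, the billed DRG and a model-suggested DRG to compare against (the data below are fabricated for illustration); a flagged discrepancy is a case for a human auditor, not a verdict.

```python
import random

random.seed(1)

DRGS = ["DRG195", "DRG193", "DRG470"]

def fake_case(case_id):
    """Fabricate one discharge: (id, billed DRG, model-suggested DRG)."""
    billed = random.choice(DRGS)
    # In this made-up universe the model agrees with billing ~92% of the time.
    suggested = billed if random.random() < 0.92 else random.choice(DRGS)
    return (case_id, billed, suggested)

cases = [fake_case(i) for i in range(10_000)]
sample = random.sample(cases, 400)  # audit a random sample instead of everything

flagged = [c for c in sample if c[1] != c[2]]
rate = len(flagged) / len(sample)

# Crude 95% confidence interval for the discrepancy rate (normal approximation).
half_width = 1.96 * (rate * (1 - rate) / len(sample)) ** 0.5
print(f"Discrepancy rate: {rate:.1%} ± {half_width:.1%}; "
      f"{len(flagged)} of {len(sample)} sampled cases go to a human auditor")
```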

So, did GPT-4 write this opinion piece? No! It was allowed to try, but it was too eager to dampen expectations itself. We will still be needed for a few more years (for more than faxing LMA cards). But don’t believe that the zeitgeist is overestimating the fire that artificial intelligence has kindled.

Conflict of interest: Faxing is the most boring thing I know.