
AI saved my life

Written by Joshua, Senior Web Developer

If you’re experiencing similar symptoms or have any health concerns, we recommend you see your healthcare professional ASAP.

Joshua in the hospital emergency room after being diagnosed with Type 1 diabetes.

Recently, ChatGPT bullied me into jabbing my finger. The glucometer showed 31 mmol/L. Anything over 7 is “see a doctor”; 31 is “start writing your eulogy.”

I’m 35, healthy, and suddenly my blood has turned to sugar syrup.

I haven’t been myself for the last month or so: industrial-grade thirst, marathon peeing, and brain fog. I blamed stress, my snoring pregnant wife, and just feeling generally flat. When I ran this past ChatGPT, it suggested diabetes and told me to go ahead and buy a blood glucose meter.

I blew this off (surely I don’t have diabetes) and instead went for the classic “wait and see” approach, hoping that a long weekend in Taupo for Matariki was just what I needed. I should have realised my thirst situation was getting out of hand when I caught myself daydreaming about soaking in the lake and drinking the fresh water.

Taupo did not fix me, and the symptoms escalated. Jumping on the scales, I realised 6 kg had disappeared. All of this convinced me that the trend wasn’t heading in the right direction. Meanwhile, ChatGPT continued to scream, “buy a blood glucose meter.”

Screenshot of a ChatGPT table titled “DIY triage kit for the next 72 h.” The note beneath reads: “Objective: collect hard numbers, catch any ‘go-to-ED-now’ readings, and arrive at the doctor with data instead of vibes.”

So Sunday morning, I relented and followed its advice. After some back and forth on how to use the thing, I got a result:

Screenshot of a ChatGPT exchange where the user reports a glucose reading of 432 mg/dL. ChatGPT urgently replies, “Stop scrolling, this is critical,” explains this equals ~24 mmol/L, and lists emergency re-check steps for dangerously high blood sugar.
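For anyone puzzled by the two sets of numbers: US-style readings are in mg/dL, while New Zealand meters report mmol/L, and you convert by dividing by roughly 18 (18.016, from the molar mass of glucose). Here is a minimal sketch of that arithmetic, written the way the web developer in me would; the function names are just illustrative:

```typescript
// Glucose unit conversion: 1 mmol/L ≈ 18.016 mg/dL
// (molar mass of glucose ≈ 180.16 g/mol, divided by 10 for the dL-to-L step).
const MG_DL_PER_MMOL_L = 18.016;

function mgdlToMmoll(mgdl: number): number {
  return mgdl / MG_DL_PER_MMOL_L;
}

function mmollToMgdl(mmoll: number): number {
  return mmoll * MG_DL_PER_MMOL_L;
}

console.log(mgdlToMmoll(432).toFixed(1)); // "24.0" - the reading in the screenshot above
console.log(mgdlToMmoll(566).toFixed(1)); // "31.4" - roughly the 31 mmol/L reading later on
```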

It was pretty clear to me at this stage that yes, ChatGPT was right: I had diabetes. There was some slightly panicked finger-stabbing while I tried to test again. Multitasking, I kept ChatGPT talking throughout.

Screenshot of a ChatGPT chat where the user asks about diabetes. ChatGPT replies, “If tomorrow’s reading >11 mmol/L, welcome to Club Diabeetus.”

I kept stabbing but struggled to draw enough blood due to poor technique, dehydration, and maybe some shock. Meanwhile, ChatGPT kept feeding me the good news:

Screenshot of ChatGPT explaining diabetes management, covering complications, daily habits, and a bottom line on keeping it manageable.

Six attempts later, I got a good reading.

Screenshot of ChatGPT urgently telling a user with a 31 mmol/L (566 mg/dL) glucose reading to go to the ER immediately for possible DKA.

I followed its advice and went to the hospital, where the staff were shocked that I had just strolled in after testing myself. Normally, people find out by going into a coma, or at least by uncontrollable vomiting. They did have a decent laugh when I told them ChatGPT made me do it.

Sure enough, ChatGPT was right, and the medical staff diagnosed me with Type 1 diabetes. The staff and care have been great, and they have been doing their best to educate me on how to manage my new diabetes situation. But I have been finding it very useful to have things reinforced by ChatGPT. It has been a stern but fair teacher.

Screenshot of ChatGPT giving insulin timing tips, practical diabetes hacks, and perspective on managing treatment versus long-term risks.

Obviously, I have been deferring to the doctors and nurses, but being able to have a constant dialogue with ChatGPT has helped flatten the learning curve. Now that I am out of the hospital with my shiny new Type 1 diabetes diagnosis, that ongoing support has been very reassuring.

Still, I know I need to learn to take care of this myself and not rely on ChatGPT. To do that, I have tried to be strict with myself: I take my measurements first, form my own hypothesis about the next steps, then feed the same data into ChatGPT (trying my best not to bias it) and compare its conclusion with mine. When we arrive at the same answer, it gives me confidence that I am doing the right thing before I start jabbing myself with insulin or reaching for emergency sugar.

It helps keep me sane, keeping me company in the crisis moments and even occasionally making me laugh during the day-to-day routine.

Anyway, I will let ChatGPT sum it up:

Screenshot of ChatGPT explaining DKA risk after extreme glucose and ketone readings, outlining warning signs, emergency criteria, and takeaways.

And did AI save my life?

Screenshot of a ChatGPT conversation where the user asks whether it helped him recognise a medical emergency, and ChatGPT responds in a humorous but caring tone.


