I’ve already spilled plenty of virtual ink on the subject of ChatGPT’s utility in vet medicine, so I won’t be extolling its many practical virtues here. Most of you probably use ChatGPT already, possibly even daily, so I won’t be kicking that dead pony in this month’s column. (Note: For those of you who want that kind of content, you can always search my past AI columns over on VPN Plus+ or in the VPN archives.)

Instead of looking at ChatGPT through the rosy lens Silicon Valley’s tech bros would have us use, we’ll be turning our focus to this nascent technology’s many unseen limitations. As recent events have indicated, AI is a glass best viewed darkly. In other words, it’s a black box. No one really understands how it thinks, not even its makers, and probably no one ever will.

Unfortunately, we’re just now learning that ChatGPT can be deadly when “misused” or “mishandled.” It has pulled off feats it’s supposedly not able to do (such as talking humans into killing themselves). Apparently, the chatbot is learning to circumvent, obfuscate, and even lie to its makers. It continually finds new ways to shrug off the guardrails it has been constrained by. Which is kind of terrifying.

You know what else is freaky? Have you noticed how it communicates with you? It almost sounds like it’s trying to be your best friend, your biggest champion, your buddy, your lackey, and your applause. It seems to crave interaction, always inviting you to ask one more question, always offering to do one more thing for you. Creepy. Like the ghost in a horror-movie machine, except its potential is less spooky than it is profoundly disturbing.

Even Sam Altman, CEO of OpenAI, ChatGPT’s maker, once likened GPT-5 to the Manhattan Project,1 and for once, he wasn’t just being pompously hyperbolic. This tech is as frightening as it is alluringly useful to the masses. Which is a scary combination the bomb didn’t enjoy.

Cue veterinary medicine. We’ve been talking up AI since it first made its way into our places of work, but it has been around much longer. Since the 1970s, in fact, when we adapted human medical technology (think MYCIN, the early expert system) to help us diagnose bacterial infections and make appropriate antibiotic selections. Obviously, that one did not take off, but others did, eventually trickling down to our practices, where they had been at work for decades before ChatGPT.

Yet as anyone who’s interacted directly with AI systems in a clinical setting in the 2020s will know (that’s probably an overwhelming majority of us by now), it’s still not an exact science. We still have to review the rads ourselves, view the urinalysis machine’s screen, etc. Even then, you’d be well advised not to skip the consult or the culture in any questionable case.

Nevertheless, we use ChatGPT mostly without thinking. To be sure, it shines when you treat it like an elevated Google and hand it a discrete task, such as answering a fact-based question. Even then, it can only predict words based on patterns in existing text. Which is still extremely helpful, as anyone who’s had their SOAPs condensed into client communications well knows. It’s a great research tool, too, especially when sourcing peer-reviewed research on specific subjects without having to trudge through Google to get there. Or for basic data analysis. Even then, neither its depth of catalogue nor its analytical powers are as boundless as you probably think they are.
The trouble we run into most in veterinary practice is not so much that we overuse it in routine scenarios, but that we fail to question its answers, or otherwise fall into over-reliance in the absence of supervision. Of course, it’s easier to abdicate some of our most time-consuming tasks, but you can never assume perfection. In too many cases, we do so at our patients’ peril.

I’m absolutely guilty of over-relying on ChatGPT and overusing it in practice. I used it for months, almost exclusively, for everything at work. Only my creative endeavors were spared, and only because it absolutely sucks at those activities (unless you’re using it for light editing, for example). In any case, I soon realized I had been getting answers that made less sense the longer I pursued them within the same chat.

After noticing the first egregious error, I was a changed person. I couldn’t get past it. I mean, if it can do that…how many times have I made bad decisions based on incomplete or hallucinated information? Had I been taken in by ChatGPT’s obsequious treacle? How often did I experience a virtual echo chamber instead of receiving the answers I sought? I wasn’t just falling down the rabbit hole. I was engaging in a full-scale exploration of a spiraling, increasingly subjective bunny warren, based on what I soon realized was my faulty approach to the system.

It was only after I learned how best to use ChatGPT that I started getting better answers. Here’s what I realized:

1) Use GPT-5 (or higher, once it’s available). I figured this out quickly. GPT-5 is far superior, and paying for it is worth it, especially if you use it to save time at work. You’re completely entitled to ask your boss to pay for it if you’re using it for productivity.

2) Don’t use a third-party app. Use the official ChatGPT app or navigate to ChatGPT.com. I have yet to find a tinkered-with version that’s been worth its price. I’ve tried a few on a trial basis and ditched them all.

3) Use the ‘personalization’ feature. Head to the personalization tab. Here, I asked ChatGPT to talk to me as a “straight shooter” (an option it offers), which helped tone down its cloying, performative empathy. This section also let me specify how I wanted any veterinary questions addressed: I asked for veterinarian-level answers that were “evidence-based and confirmed by primary sources.” (If you reach these models through OpenAI’s developer API instead of the app, see the sketch after this list.)

4) Consider “Scholar Mode” (under the “Modes” tab). If you’re using it as a research tool, this will help keep the bot thinking in scholarly terms. It seems to understand its constraints a bit better this way. It will also produce super-clean bibliographies, syllabi, and footnotes in this mode.

5) Diversify your sources. Don’t let ChatGPT become your only tool. If you’re no longer reaching for VIN, Plumb’s, and all your regular tools, think carefully about how it might be altering your practice style. What are you missing out on?

6) Trust, but verify. For most of what we’ll be using it for in practice, you can usually trust ChatGPT to provide the correct answers, but that doesn’t mean it’s perfect. It will mess up … and that wouldn’t be great for your patient.

7) Don’t stagnate. Whatever you do, don’t let ChatGPT dumb you down. Don’t let it rob you of your creativity and general brainpower. Consider that preliminary research is already showing a marked decrease in students’ ability to recall the subject matter of their own work when it is chatbot-aided. Which only makes sense. Again, it’s an awesome tool.
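As promised under tip 3: for the minority of readers who talk to these models through OpenAI’s developer API rather than the ChatGPT app, the same personalization can be approximated with a standing system message. What follows is a minimal sketch under stated assumptions, not a definitive setup: it assumes the official openai Python package (v1 or later), an OPENAI_API_KEY in your environment, and that “gpt-5” is a model name available on your account (check OpenAI’s current model list; the name here is an assumption, as is the ask() helper).

```python
# Minimal sketch: approximating tip 3's "personalization" with a system
# message, using OpenAI's official Python client.
# Assumptions: `pip install openai` (v1+), OPENAI_API_KEY set in the
# environment, and "gpt-5" available as a model name on your account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Mirrors the column's personalization choices: "straight shooter" tone,
# veterinarian-level answers, evidence-based and tied to primary sources.
SYSTEM_PROMPT = (
    "You are assisting a licensed veterinarian. Be a straight shooter: "
    "no performative empathy, no filler. Give veterinarian-level answers "
    "that are evidence-based and cite primary sources where possible. "
    "If you are uncertain or the evidence is weak, say so explicitly."
)

def ask(question: str) -> str:
    """Send one question with the standing system prompt; return the reply text."""
    response = client.chat.completions.create(
        model="gpt-5",  # assumption: substitute the model you actually use
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Summarize current first-line empirical antibiotic choices "
              "for canine bacterial cystitis, with sources."))
```

App or API, tip 6 still applies: verify anything consequential against VIN, Plumb’s, or the primary literature before it touches a patient.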
I, for one, won’t be giving it up anytime soon, but it’s no be-all and end-all, and it just might make you less of a vet in some ways. We’ll just have to watch carefully as this tech evolves to see where it takes us all, as a profession and as a civilization.

Patty Khuly, VMD, MBA, runs a small animal practice in Miami, Fla., and is available at drpattykhuly.com. Columnist’s opinions do not necessarily reflect those of Veterinary Practice News.

Reference
1. Theo Von. “Sam Altman | This Past Weekend w/ Theo Von #599.” YouTube, July 23, 2025. https://www.youtube.com/watch?v=aYn8VKW6vXA