I wouldn't axe the subject matter experts quite yet.
ChatGPT lays down some solid answers to questions you'd get asked by someone who doesn't really want all the details. It's impressive as a work-in-progress, and a bit scary if your boss sees this and immediately starts thinking about ways to reduce OpEx.
Not surprisingly, when you get it out of its comfort zone it attempts to overcompensate with length, spews jargon all over the place, and cranks the humbleness up to 11 when you call it out on its bullshit. In other words, it's every smarmy know-it-all you've ever met.
Worst, for me, is how it doesn't adjust the way it presents information when it isn't confident. Whether it's summarizing the main ideas of a well-known topic or hurriedly scanning Wikipedia in an attempt to respond quickly, its responses have the same feel. I can't deal with that. Humans interacting with this thing need a hint about how it feels about its own response so we can establish some kind