Tech support knew it was not a good idea. ChatGPT was used to thoroughly explain why that was a bad idea. Are you trying to make other people look dumb because you need to feel smarter than others for some reason? That's gross.
ChatGPT didn't confirm anything! It didn't even output the decoded text. It made a guess that happened to be correct, at greater expense than real forensics and with less confidence.
To use the ChatGPT response without looking like an idiot, the first thing I would have to do is confirm it, because ChatGPT is very good at guessing and absolutely incapable of confirming.
Running base64 -d and looking at the malicious code would be confirming it. Did ChatGPT do that? Nobody knows.
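For what it's worth, confirming it yourself is trivial. A minimal sketch, assuming the encoded blob is saved to a hypothetical file named payload.b64 (decode to a file and inspect it, never pipe it into a shell):

    # decode to a file for inspection only; do not execute or pipe to sh
    base64 -d payload.b64 > decoded.bin
    file decoded.bin     # identify what the decoded content actually is
    less decoded.bin     # read it and see the malicious code for yourself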
If you use one of the agentic CLIs like Claude Code, Codex, or Gemini CLI, they can actually confirm things: they tell you what they are doing and require authorization before running tools like base64.