I always see posts like this and don't understand. I use AI successfully for code every day. Do y'all just not know how to ask the right questions? Is this one of those things where if it isn't 100% effective in every situation it's crap?
I think the point is that the profit motive, along with the massive damage LLMs do to the environment and to intellectual property, isn't worth it just to save some programmers a bit of time on the job.
I once submitted some LLM-generated code to my boss to see if he thought it lived up to company standards (I’m not a dev, so I was genuinely curious). He told me the code would technically work, but it was incredibly messy, ugly, inelegant, and prone to future issues. He said it would have taken him maybe 5 minutes to write it from scratch at the quality he expects from his devs.