It depends very much on whether it violates SOLID. I do strongly encourage my teammates to use new language features when appropriate, but I would never fail a PR over that; I'll just note, "here's a better way to do this in the future."
But if someone is violating the architectural integrity of the application, that's something else. A field that isn't final when it could be, a field that's public when it could be private, a class that technically works but is intended for a different purpose, or code that just adds unnecessary complexity: I won't approve those until they're addressed.
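To make the first two concrete, here's a minimal sketch with invented class and field names (not code from any real review):

```java
import java.util.ArrayList;
import java.util.List;

class OrderQueue {
    // Before review this was "public List<String> pendingIds = new ArrayList<>();"
    // Nothing outside the class reads the field and the reference is never
    // reassigned, so it can be both private and final.
    private final List<String> pendingIds = new ArrayList<>();

    void enqueue(String orderId) {
        pendingIds.add(orderId);
    }

    int size() {
        return pendingIds.size();
    }
}
```

Neither change alters today's behavior; they just stop the next person from reaching in and mutating state they shouldn't.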
That being said, you do have to pick your battles. The current app I'm working on is a bit of a mess architecturally, and sometimes when I work with someone to adopt better standards, it turns out doing it the right way is just too far afield. Just this week I spent an hour trying to help someone get it right before conceding that we should do it quick and dirty for now and address the structural problems more deliberately once the underlying architecture is ready to support that.
Coding is as much art as science. Different people like their code to look different, but at the end of the day, if something is isolated and passes its tests, it's fine for now and can be rewritten later should that become necessary.
My guess is that 25% of their developers use an AI coding assistant. Because as a developer who uses AI almost every day, I can promise you only the most pedestrian code can be written by AI. As autocomplete, it saves me some time typing. But actually writing code from scratch? No way.
Yesterday I asked it to write some multi-threading code for me, and it kept getting things wrong, like initializing a shared database client with the user from whatever request happened to come first, which would mean every single user gets the first user's access rather than their own.
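I can't share the actual code, but the failure mode was in this family (a hedged sketch with invented names; DatabaseClient stands in for whatever per-user client the real code used):

```java
// Stand-in for a client whose credentials are bound to one user.
class DatabaseClient {
    private final String userId;

    private DatabaseClient(String userId) {
        this.userId = userId;
    }

    static DatabaseClient forUser(String userId) {
        return new DatabaseClient(userId);
    }

    String query(String sql) {
        // A real client would run the query with userId's credentials.
        return "rows visible to " + userId;
    }
}

class ReportService {
    // The bug: one shared client, lazily created for whichever user calls
    // first. Every later request silently runs with that user's access.
    private static DatabaseClient sharedClient;

    static String buildReport(String requestUserId) {
        if (sharedClient == null) {
            sharedClient = DatabaseClient.forUser(requestUserId);
        }
        return sharedClient.query("SELECT * FROM reports");
    }

    public static void main(String[] args) {
        System.out.println(buildReport("alice")); // rows visible to alice
        System.out.println(buildReport("bob"));   // still rows visible to alice
    }
}
```

The fix is to resolve a client for each request's user (or key a cache by user) instead of holding on to whichever one was created first.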
Earlier this week I reviewed some code, which I suspect was also written by AI, that did the same thing with the GlobalExceptionHandler. These bugs are insidious in that when you write tests to make sure the code works, the tests will pass just fine.
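Assuming GlobalExceptionHandler is a Spring @ControllerAdvice (I can't show the real one; the field and setter below are invented stand-ins for whatever the actual code stored), the shape of the problem looks like this:

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

@ControllerAdvice
class GlobalExceptionHandler {
    // Bug: a @ControllerAdvice bean is a singleton shared by every request and
    // thread, so per-request state stashed in a field belongs to whichever
    // request wrote it last, not the request whose exception is being handled.
    private String currentUserId;

    void setCurrentUserId(String userId) { // called once per request in the buggy version
        this.currentUserId = userId;
    }

    @ExceptionHandler(Exception.class)
    ResponseEntity<String> handle(Exception ex) {
        return ResponseEntity.status(500)
                .body("Request failed for user " + currentUserId);
    }
}
```

A test that exercises one request at a time never sees anything wrong; the field always holds the right user until two requests overlap.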
You have to have a skilled developer to identify those issues because the code looks good, and just about any test an individual developer will throw at it will pass. That bug would have gone to production if I hadn’t caught it. And that’s on top of code that just uses the wrong class or confuses common methods from two completely different classes.
And I couldn't even get a job at Google when they interviewed me, twice, so their bar is not a low one. You can't tell me 25% of their code is AI-generated. AI is useful, and a time saver, but it's not capable of generating reliable code on its own. Certainly not yet, and I believe not ever with this form of AI (maybe with AGI, if that ever comes about).