But there are different types of temporary. Temporary because the code got updated/upgraded or new and better software got implemented feels fine. It feels like your work was part of the never-ending march of technical progress. Temporary because it gets ripped out in favor of a different, inferior suite hits hard.
If my code gets superseded by someone else’s complete rewrite that is better, then I’m all for it. If my code gets thrown out because we’re switching to a different, inferior system that is completely incompatible with my work, then that just hits like a ton of bricks.
Yeah, it’s not technically impossible to stop web scrapers, but it’s difficult to build a lasting, effective solution. One easy way is to block their user-agent, assuming the scraper identifies itself with one, but that’s trivially circumvented. Another easy and somewhat more effective way is to block the scrapers’ and caching services’ IP addresses, but that turns into a game of whack-a-mole. You could also put content behind a paywall or login and refuse to approve a certain org, but that only works for certain use cases and is also easy to circumvent. If stopping a single org’s scraping is the hill to die on, good luck.
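For a concrete sense of how simple (and how spoofable) the first two approaches are, here’s a minimal sketch using only Python’s stdlib WSGI server. The user-agent substrings and the IP are made-up placeholders, not any real scraper’s fingerprint:

```python
# Minimal sketch: block requests by user-agent substring or IP denylist.
# Any scraper can spoof its user-agent and rotate IPs, which is exactly
# why these checks don't hold up for long.
from wsgiref.simple_server import make_server

BLOCKED_UA_SUBSTRINGS = ("ExampleBot", "scrapy")  # hypothetical identifiers
BLOCKED_IPS = {"203.0.113.7"}                      # placeholder (documentation range)

def app(environ, start_response):
    ua = environ.get("HTTP_USER_AGENT", "")
    ip = environ.get("REMOTE_ADDR", "")
    blocked = ip in BLOCKED_IPS or any(
        s.lower() in ua.lower() for s in BLOCKED_UA_SUBSTRINGS
    )
    if blocked:
        start_response("403 Forbidden", [("Content-Type", "text/plain")])
        return [b"Forbidden\n"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello\n"]

if __name__ == "__main__":
    with make_server("", 8000, app) as server:
        server.serve_forever()
```

The whack-a-mole problem is visible right in the code: both denylists are static, so the moment a scraper changes its user-agent string or moves to a new IP, you’re back to editing the list.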
That said, I’m all for fighting ICE, even if it’s futile. Just slowing them down and frustrating them is useful.