  • Remember Tango Gameworks? The studio that everyone liked, and didn’t have any flops? That was completely laid off?

    He pointed that out as an exception. But, it’s been mostly the AAA studios that produced massive, massive high-budget flops, and then they laid off a bunch of their staff.

    But it’s not developers doing that, it’s publishers and executives. No one writing code is like, “I’ve decided to make live-service schlock”. But they’re the ones losing their jobs, not the dorks who did decide that.

    No, but when developers and the rest of the teams see that it’s “live-service schlock”, they should start polishing their resumes, instead of thinking “well, my job is safe because it’s a large corporation”.

    Why would anybody working on Concord think it’s a good game with a good concept that’s going to succeed? Or Kill the Justice League? Or MultiVersus? You think all of those microtransactions and attempts to chase some unoriginal idea are going to be well-received?

    Just look at it for what it is, and realize it’s going to fail. And then plan accordingly.

    He then turns this into some kind of attack on game journalists, who have been rightfully calling out the game industry layoffs.

    No, look at what they did before they talked about the layoffs. Sure, calling out the layoffs is justified and it’s worth reporting.

    What’s not worth reporting is what Twitter is saying about any of this, and then getting on some soapbox trying to counter it. That just promotes the idea that the general public gives a shit about whatever fight this is, when in reality they don’t even know it exists. He’s literally reading from one of these articles, which goes off on a tangent because a few people on Twitter said something about games being “too woke”, and tries to counter that.

    Fuck Twitter. Stop reporting on Twitter. It’s a shit platform, a tiny, tiny microverse; the actual people out there doing actual things never see any of it. Obviously, nobody looked at a game and thought “oh, well, that’s too woke, so I’m not going to buy it”. They didn’t buy it because it was a shit game with shitty microtransactions.

    And if you check the comments, his fans definitely heard the whistle too.

    I checked the comments. I read the comments on most YouTube videos. I saw nothing of the sort. Most of them are praising him for what he’s saying.

    Ideological soapboxes are very real things that games “journalists” push on a daily basis. It’s manufactured bullshit that gets echoed only because they report on whatever some dude on Twitter said. I don’t know why you would mistake that for some dog whistle.

    “As a customer I’m going to be honest, I just don’t care or feel anything for any of these internal struggles that these companies go through.” (7:10 in the video)

    Right, instead of talking about the discussion as a whole, let’s take some out-of-context quote he said in the video and use that as evidence that he doesn’t care about the industry.

    You didn’t even quote the entire sentence: “…especially when it’s mismanagement to blame.” I guess that bit didn’t fit your narrative?

  • I guess the idea is that the models themselves are not infringing copyright, but the training process DID.

    I’m still not understanding the logic. Here is a copyrighted picture. I can search for it, download it, view it, see it with my own eyeballs. My browser already downloaded the image just so I could see it on the page. I can take that image and edit it in a photo editor. I can do whatever I want with the image on my own computer, as long as I don’t publish it elsewhere on the internet. All of that is legal. None of it infringes on copyright.

    Hell, it could be argued that if I transform the image to a significant degree, I can still publish it under Fair Use. But, that still gets into a gray area for each use case.

    What is not a gray area is what AI training does. They download the image and use it in training, which is like me looking at a picture in a browser. The image isn’t republished, stored in the published model, or represented in any way that could reasonably be reconstructed back into the source image. It just shifts a bunch of weights in the model. It’s mathematically impossible for a 4 GB model to somehow store the many, many terabytes of images on the internet.
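
    Some rough back-of-envelope math on that, assuming a ~4 GB checkpoint and a LAION-5B-scale training set of roughly 5 billion images (ballpark figures for illustration, not exact numbers from anyone’s court filing):

    ```python
    # Ballpark figures: ~4 GiB of weights vs. ~5 billion training images.
    model_bytes = 4 * 1024**3          # ~4 GiB checkpoint on disk
    training_images = 5_000_000_000    # LAION-5B order of magnitude
    avg_image_bytes = 500 * 1024       # assume ~500 KB per source image

    per_image = model_bytes / training_images
    ratio = (training_images * avg_image_bytes) / model_bytes
    print(f"weight bytes per training image: {per_image:.2f}")  # ~0.86 bytes
    print(f"implied 'compression' ratio: {ratio:,.0f}:1")       # ~600,000:1
    ```

    Less than one byte of weights per image; no encoding, lossy or otherwise, recovers a picture from that.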

    Where is the copyright infringement?

    I remember stories about the RIAA suing individuals for many thousands of dollars per mp3 they downloaded. If you applied that logic to OpenAI — maximum fine for every individual work used — it’d instantly bankrupt them. Honestly, I’d love to see it. But I don’t think any copyright holder has the balls to try that against someone who can afford lawyers. They’re just bullies.
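
    For scale, a hedged sketch of that math: US statutory damages top out at $150,000 per willfully infringed work (17 U.S.C. § 504(c)), and nobody outside OpenAI knows how many copyrighted works are actually in the training data, so the count below is purely illustrative:

    ```python
    max_damages_per_work = 150_000         # USD, willful-infringement statutory cap
    works_in_training_set = 5_000_000_000  # hypothetical count, same ballpark as above

    total = max_damages_per_work * works_in_training_set
    print(f"${total:,}")  # $750,000,000,000,000,000 -- about 750 quadrillion dollars
    ```

    That’s several thousand times the entire world’s GDP, which is the “instantly bankrupt them” part.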

    You want to use the same bullshit tactics and unreasonable math that the RIAA used in their court cases?


  • Legislators have to come up with a way to handle how copyright works in conjunction with AI.

    That’s the neat part. It doesn’t.

    Copyright hasn’t worked for the past 100 years. Copyright was born out of a social agreement that the works it protected would enter the public domain in a reasonable time frame. Thanks to Mark Twain and Disney, the limit is now basically forever, or might as well be. Here we are, still arguing about the next Bond film for a book series that started in the fucking 1950s. Or The Lord of the Rings, the genesis of modern fantasy. Or thousands of other things that deserve to be in the public domain already.

    Copyright is a blunt tool that rich people use to bash the poor. Whatever you think copyright is doing to protect your rights or your works, it’s easy enough for them to just keep spending money on lawyers and court cases until you cave. If copyright isn’t working for the public good, then we should abolish it.

    People hate AI because it’s mostly developed and used by the rich as a shitty way to save money and lay off even more people than we already have. But, it doesn’t have to be. All of these LLM projects were built on freely available research. Hell, Stable Diffusion is still something you can just download and use for free, despite Stability AI still trying to wrest back control of the model.
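
    For what it’s worth, “download and use for free” is literal; here’s a minimal sketch using Hugging Face’s diffusers library (the model ID and prompt are just examples):

    ```python
    # pip install diffusers transformers torch
    import torch
    from diffusers import StableDiffusionPipeline

    # Example checkpoint ID; any compatible Stable Diffusion weights work here.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")  # assumes an NVIDIA GPU is available

    image = pipe("a watercolor lighthouse at dusk").images[0]
    image.save("lighthouse.png")
    ```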

    Instead of sticking our fingers in our ears and saying “la la la la, AI doesn’t exist, it must be destroyed/regulated/fined”, we could push this technology to be as open source as possible. I mean, let’s assume we somehow regulate AI so that people have to pay to use copyrighted works for training (as absurd as that is). AI training drops off drastically and stagnates. Countries like China are not going to follow those same rules, and eventually China becomes the technological leader here.

    Or the program works, and people who don’t give a shit about copyright freely allow AI to train on their works. Then you have AI models that had to follow these arcane rules but arrived at the same spot anyway, except only the rich people who can afford the compliance systems get to build them. What the fuck was the point of the regulation, except to make it even more expensive to build?