Three raccoons in a trench coat. I talk politics and furries.
Other socials: https://ragdollx.carrd.co/
I heard that he’s an ethereal being from another dimension that has already faded away from our plane of existence, so the police are wasting their time looking for him and should close the case.
I did something like this when working support at Philips
I thought it was funny but my colleagues and supervisor were not entertained lol
It is, in fact, very easy to code a game!
from pygame import game
game.load_player()
game.load_enemies()
game.load_audio()
game.run()
I wish science were as simple as taking the mean and confidence intervals.
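For reference, the “simple” version really is just a few lines - a rough sketch, assuming roughly normal data and scipy on hand:

import numpy as np
from scipy import stats

# Hypothetical measurements; real data would obviously come from somewhere less convenient
data = np.array([2.1, 2.5, 1.9, 2.3, 2.7, 2.2, 2.4])

mean = data.mean()
sem = stats.sem(data)  # standard error of the mean
# 95% confidence interval from the t-distribution
ci_low, ci_high = stats.t.interval(0.95, len(data) - 1, loc=mean, scale=sem)

print(f"mean = {mean:.2f}, 95% CI = [{ci_low:.2f}, {ci_high:.2f}]")

If only getting from “I have numbers” to “I have numbers I can trust” were that easy.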
update it faster
*cries in Deltarune fan*
Have you tried some data augmentation?
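If it’s an image task, the usual starting point is something like this - a rough sketch with torchvision, where the specific transforms and values would depend on your data:

from torchvision import transforms

# Random flips, rotations, color shifts and crops so the model sees a slightly
# different version of each image every epoch (values here are just examples)
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])

# Then pass it to your dataset, e.g. ImageFolder(root, transform=augment)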
After some Googling I couldn’t find anything about “code-free” .exe’s or some “.EXE” framework, so probably just a joke.
Gave me flashbacks to my time working with Philips’ Tasy system in 2017.
By now they’ve surely finished implementing their HTML5 system which was somewhat better, but back then it was still a desktop app made using Delphi and Java, and it was basically as unsightly and unwieldy as the example in the meme lol
An example of why this is incorrect:
If a card is the ace of spades, it is black.
A card is black if and only if it is the ace of spades.
There are other conditions under which B (the card being black) can be true, so the second statement does not follow.
A conclusion that would be correct is “If a card is not black, it is not the ace of spades.” The premise is that whenever A is true, B is also always true, so if B is false we can be sure that A is false as well - i.e. “If not B, then not A” (the contrapositive).
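You can even brute-force check this over a whole deck to make it concrete - a quick sketch:

# Enumerate a standard 52-card deck and test each statement
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = {"spades": "black", "clubs": "black", "hearts": "red", "diamonds": "red"}
deck = [(rank, suit) for rank in ranks for suit in suits]

def is_ace_of_spades(card):  # A
    return card == ("A", "spades")

def is_black(card):  # B
    return suits[card[1]] == "black"

# "If A then B" holds for every card:
print(all(is_black(c) for c in deck if is_ace_of_spades(c)))          # True
# The contrapositive "if not B then not A" also holds:
print(all(not is_ace_of_spades(c) for c in deck if not is_black(c)))  # True
# But "B only if A" fails, e.g. the 2 of clubs is black and isn't the ace of spades:
print(all(is_ace_of_spades(c) for c in deck if is_black(c)))          # False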
We should really consider adding it to the DSM-5.
Don’t do my boy Megamind dirty like that, he’d never say such villainous things!
IDK what you’re talking about but I do know that only Republicans defend child marriage.
just a widdle twolling
The UK is truly becoming a fascist hellhole.
“You’re allowed to protest, but only between 5PM and 6PM and you must get a permit and also don’t bother anyone or make too much noise and also you must walk at the right speed otherwise you’re just being a meanie and we’re going to arrest you >:(”
While I think some of Just Stop Oil’s previous antics were counterproductive to the public image of climate activists, arresting someone because they didn’t protest “at the right speed” is ridiculous. The whole point of protests is to be disruptive and bring attention to the protesters and their cause, and this is an incredibly mild way of doing it.
Damn, I hadn’t heard about this before and have no idea how it might turn out. I thought the first two movies were equally excellent, and although I found the third somewhat mediocre, I also thought it was a decent place to end the series.
I have no idea what kind of plot they might come up with for a 4th movie, and the fact that they’re planning a 5th and 6th one just leaves me with even more questions.
I was hoping we’d get more of Tai Lung but it looks like he only gets a cameo 😔
He deserves better! Give my boy the chance for a redemption arc!
It is absolutely true that increasing income can improve parenting and, by extension, the outcomes of kids, but there is also evidence that using computers too much can be detrimental to their education.
Really it’s no different than how these things affect us adults: We all know that social media is trying to monopolize our attention, and that it’s affecting our attention spans and mental health. Although arguably for kids it’s even worse since their brains are still in development.
Controversial take: unga bunga, bunga unga
Not quite, since the whole thing with image generators is that they’re able to combine different concepts to create new images. That’s why DALL-E 2 was able to create images of an astronaut riding a horse on the moon, even though it never saw such images, and probably never even saw astronauts and horses in the same image. So in theory these models can combine the concepts of porn and children even if they never actually saw any CSAM during training, though I’m not gonna thoroughly test this possibility myself.
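(For what it’s worth, this kind of concept combination is just a matter of what goes in the prompt - a rough sketch with the diffusers library, assuming you have a GPU and a publicly available Stable Diffusion checkpoint; the model name here is just an example:)

import torch
from diffusers import StableDiffusionPipeline

# Load a public Stable Diffusion checkpoint
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A prompt combining concepts that almost certainly never co-occur in the training data
image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("astronaut_horse_moon.png")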
Still, as the article says, since Stable Diffusion is publicly available, someone can train it on CSAM images on their own computer specifically to make the model better at generating them. Based on my limited understanding of the lawsuits that Stability AI is currently dealing with (1, 2), whether they can be sued for how users employ their models will depend on how exactly these cases play out, and, if the plaintiffs do win, on whether their arguments can be applied outside of copyright law to cover harmful content generated with SD.
Well, they don’t own the LAION dataset, which is what their image generators are trained on. And to sue either LAION or the companies that use its datasets, you’d probably have to clear a very high bar: proving that they have CSAM images downloaded, know that they are there, and have not removed them. It’s similar to how social media companies can’t be held liable for users posting CSAM to their websites if they can show that they’re actually trying to remove those images. Some things will slip through the cracks, but if you can show that you’re genuinely trying to deal with the problem, you won’t get sued.
LAION actually doesn’t even provide the images themselves, only links to images on the internet, and they do a lot of screening to remove potentially illegal content. As they mention in this article, there was a report showing that 3,226 suspected CSAM images were linked in the dataset, of which 1,008 were confirmed by the Canadian Centre for Child Protection to be known instances of CSAM, while the rest were potential matches based on further analysis by the report’s authors. As they point out, there are valid arguments that this 3.2K number could be either an overestimate or an underestimate of the true number of CSAM images in the dataset.
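(You can see the “links, not images” part for yourself if you poke at the released metadata - a rough sketch with pandas, where the file name is hypothetical and the exact column names vary between LAION releases:)

import pandas as pd

# One shard of LAION metadata; the real release is split across many parquet files
df = pd.read_parquet("laion_metadata_part_00000.parquet")

# Each row is basically a URL, a caption, and some filtering scores
# (e.g. watermark/unsafe probabilities) - no actual image bytes
print(df.columns.tolist())
print(df[["URL", "TEXT"]].head())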
The question then is whether any image generators were trained on these CSAM images before they were taken down from the internet, or whether there is unidentified CSAM in the datasets these models are being trained on. The truth is that we’ll likely never know for sure unless the aforementioned trials reveal some email where someone at Stability AI admitted that they didn’t filter potentially unsafe images, knew about CSAM in the data, and refused to remove it, though for obvious reasons that’s unlikely to happen. Still, since the LAION dataset has billions of images, even if they are as thorough as possible in filtering CSAM, chances are that at least something slipped through the cracks, so I wouldn’t bet my money on them being able to infallibly remove 100% of it. Whether some of these AI models were trained on such images then depends on how they filtered potentially harmful content, or whether they filtered adult content in general.