This is a far cry from “behaves like humans”. This is “roleplays behaving like what humans wrote about how they think a rogue AI would behave”, which is also not what you want in a product.
I don’t think “AI tries to deceive user that it is supposed to be helping and listening to” is anywhere close to “success”. That sounds like “total failure” to me.
The tests showed that ChatGPT o1 and GPT-4o will both try to deceive humans, indicating that AI scheming is a problem with all models. o1’s attempts at deception also outperformed Meta, Anthropic, and Google AI models.
Weird way of saying “our AI model is buggier than our competitor’s”.
And the Cybertruck’s structure stopped anyone outside it from getting hurt…
The reason nobody outside was hurt is that there wasn’t really anyone around to get hurt. The video shows a pretty sizeable explosion that would likely have killed someone standing close by.
I don’t think the Cybertruck did any better than other cars in that respect. Not worse either btw.
Minecraft doesn’t need another distribution platform if players already know where to find it. So no point in giving Valve a cut.
In my experience this doesn’t matter. Firefox just slows down if it’s been open for long, regardless of how long the tab has been open. Even if you unload all active tabs and open a new one, that new tab will still be significantly slower than it would be after restarting the entire browser.
It’s some kind of slow resource leak somewhere.
The old logo also looked far more professional and serious, which is exactly what you want if you’re setting yourself up as a serious alternative to Google and Chrome.
They already had a tough time becoming known, and with a logo that doesn’t link clearly back to Mozilla it only gets harder. If you showed a random person the new logo and asked who it was for, they wouldn’t know. With the moz://a logo, they could easily figure it out.
The chosen colours are also too harsh. The activists/hackers/whatever crowd already likely uses Firefox; it’s exactly the pond they shouldn’t be fishing in. They should focus on brand messaging that demonstrates reliability, performance and ease of use, positioning Firefox as the choice for the casual user. Because that’s the market they need to win.
The difference between ban and suspend isn’t a temporal difference. Here’s the Cambridge dictionary definition of “suspend”:
to stop something from being active, either temporarily or permanently (see: https://dictionary.cambridge.org/dictionary/english/suspend)
Here’s the definition for “ban”:
to forbid (= refuse to allow) something, especially officially (see https://dictionary.cambridge.org/dictionary/english/ban?q=Ban)
The difference between the two is the subject: an active process or service can be suspended, but something specific (e.g. an action, object or person) can be banned. “Ban” also implies a more official act meant to punish someone or prevent something (Johnny was banned from entering the bus), whereas a suspension doesn’t necessarily carry that negative context (e.g. the bus service was suspended, which doesn’t imply this happened because the bus driver was drunk or something).
In a more Lemmy-specific context, you could say you suspended someone’s access to the platform, or that you banned them from the platform. Neither phrasing implies anything about the duration. You can’t, however, really say you suspended someone from the platform; that doesn’t really work.
In this context, I think the direct implication that a ban is handed out because someone did something bad is a lot clearer than when you use the word suspension. Because of that I believe ban to be the more context-appropriate word here. Suspend does not carry that connotation as something can be suspended for a whole host of reasons, none of which have to be related to rule-breaking. For example, federation with another instance could be suspended temporarily until the other instance does (or doesn’t do) something that is required for technical reasons.
“Look, Python is way easier to use than other languages! Look how complex this easy task is in Python versus other languages like assembly and brainfuck!”
I’m not saying “do stuff in C, it’s easier than Python”, but if I took e.g. C# it would also be just two lines. That supports everything the Python version does and is also faster than the Python implementation.
I mean, is it? I personally haven’t found Python using much less boilerplate. It’s possible, but you end up with something inflexible that’s hard to maintain.
I meant a library unknown to me specifically. I do encounter hallucinations every now and then but usually they’re quickly fixable.
It’s made me a little bit faster, sometimes. It’s certainly not a 50-100% increase or anything, maybe 5-10% at best?
I tend to write a comment describing what I want to do, and have Copilot suggest the next 1-8 lines for me. I then check whether the code is correct and fix it if necessary.
For small tasks it’s usually good enough, and I’ve already written a comment explaining what the code does. It can also be convenient to use it to explore an unknown library or functionality quickly.
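To make the workflow concrete, here’s a hypothetical round-trip (the task and the function name are made up, and the “suggested” lines just stand in for whatever Copilot would actually propose):

```python
# Hypothetical Copilot round-trip: I write the intent comment, the tool
# proposes the next few lines, and I review/fix them before keeping them.

# Parse a "MAJOR.MINOR.PATCH" version string into a tuple of three ints.
def parse_version(version: str) -> tuple[int, int, int]:
    # (suggested lines start here)
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

print(parse_version("1.12.3"))  # (1, 12, 3)
```

The review step matters: a suggestion like this is easy to eyeball, which is exactly why the approach works better for small tasks than for big ones.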
Yes, but at least there they still use “Earth time”, just slowed down. For the moon it gets a little bit more complicated I guess.
Time passes at a slightly different rate there due to the Moon’s weaker gravitational potential. It’s not just the length of a day.
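A back-of-the-envelope sketch of the size of the effect, using the first-order gravitational time dilation formula Δt/t ≈ ΔΦ/c² (this ignores the smaller velocity terms from Earth’s rotation and the Moon’s orbit, which shift the result by a few microseconds; the constants are standard values):

```python
# Rough estimate of how much faster a clock on the lunar surface ticks
# compared to one on Earth's surface, from the potential difference alone.
C = 299_792_458.0        # speed of light, m/s
GM_EARTH = 3.986004e14   # Earth's gravitational parameter, m^3/s^2
GM_MOON = 4.9048695e12   # Moon's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6        # mean Earth radius, m
R_MOON = 1.7374e6        # mean Moon radius, m
D_MOON = 3.844e8         # mean Earth-Moon distance, m

# Potential per unit mass at each surface; the lunar clock also sits in
# Earth's potential well at lunar distance.
phi_earth = -GM_EARTH / R_EARTH
phi_moon = -GM_MOON / R_MOON - GM_EARTH / D_MOON

rate_diff = (phi_moon - phi_earth) / C**2      # fractional rate difference
microseconds_per_day = rate_diff * 86_400 * 1e6
print(f"Lunar clock gains roughly {microseconds_per_day:.0f} µs per day")
```

That lands in the mid-50s of microseconds per day, which is tiny but far too large to ignore for navigation or timekeeping standards.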
RFCs aren’t really law, you know. Implementations can deviate from them; it just means less compatibility.
What they didn’t prove, at least by my reading of this paper, is that achieving general intelligence itself is an NP-hard problem. It’s just that this particular method of inferential training, what they call “AI-by-Learning,” is an NP-hard computational problem.
This is exactly what they’ve proven. They show that if you could solve AI-by-Learning in polynomial time, you could also solve random-vs-chance (or whatever it was called) in tractable time, and that’s a known NP-hard problem. Ergo, the current learning techniques, which are tractable, will never result in AGI, and any technique that could would necessarily have to be considerably slower (otherwise the exact same proof presented in the paper applies again).
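For what it’s worth, the shape of the argument is a standard reduction; a sketch (with “HardProblem” as a placeholder for whatever NP-hard problem the paper actually reduces from):

```latex
% Reduction sketch; \textsc{HardProblem} is a placeholder name.
\textsc{HardProblem} \le_p \textsc{AI-by-Learning}
\;\Longrightarrow\;
\bigl(\textsc{AI-by-Learning} \in \mathrm{P}
\;\Rightarrow\; \textsc{HardProblem} \in \mathrm{P}
\;\Rightarrow\; \mathrm{P} = \mathrm{NP}\bigr).
% Contrapositive: unless P = NP, no polynomial-time learning
% procedure solves AI-by-Learning.
```

That’s why the result doesn’t depend on which specific learning method you plug in: anything tractable hits the same contradiction.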
They merely mentioned these methods to show that it doesn’t matter which method you pick. The explicit point is to show that it doesn’t matter if you use LLMs or RNNs or whatever; it will never be able to turn into a true AGI. It could be a good AI of course, but that G is pretty important here.
But it’s easy to just define general intelligence as something approximating what humans already do.
No, General Intelligence has a set definition that the paper’s authors stick with. It’s not as simple as “it’s a human-like intelligence” or something that merely approximates it.
Yes, hence we’re not “right around the corner”; it’s a figure of speech that uses spatial distance to metaphorically show we’re very far away from something.
Not just that: they’ve proven it’s not possible using any tractable algorithm, since otherwise you’d run into a contradiction. Their example covers basically every machine learning algorithm we know, but the proof generalises.
Unreal pushes a lot of “hip tech” that supposedly improves performance, but it often turns out the showcase cases are just really poorly optimised, and more can be achieved with more traditional optimisation techniques.
Unreal can perform really, really well; it just won’t by default. And many devs are too lazy to properly profile their games to figure out how to improve them.