That is why I use just int main(){...}
without arguments instead.
The point of it being open is that people can remove any censorship built into it.
The particular AI model this article is talking about is actually openly published for anyone to freely use or modify (fine-tune). There is a barrier in that it requires several hundred gigs of RAM to run, but it is public.
Now, if only the article explained how that killing was related to TikTok. The only relevant thing I saw was,
had its roots in a confrontation on social media.
It says “social media”, not “TikTok”, though.
Wary reader, learn from my cautionary tale
I’m not sure what to learn exactly. I don’t get what went wrong or why, just that the files got deleted somehow…
Yes, almost like they have intentionally waited until Trump’s election.
And they all have different commit messages:
“switched arse to bottom to create a more uplifting vibe”
“took arse out and put bottom in to keep my language warm and friendly”
“thought bottom would sound a lot nicer than arse, so I used it”
And so on…
Type in "Is Kamala Harris a good Democratic candidate"
…and any good search engine will find results containing keywords such as “Kamala Harris”, “Democratic”, “candidate”, and “good”.
[…] you might ask if she’s a “bad” Democratic candidate instead
In that case, of course the search engine will find results containing keywords such as “Kamala Harris”, “Democratic”, “candidate”, and “bad”.
So the whole premise that “Fundamentally, that’s an identical question” is just bullshit when it comes to searching. Obviously, when you put in the keyword “good”, you’ll find articles containing “good”, and if you put in the keyword “bad”, you’ll find articles containing “bad” instead.
Google will find things that match the keywords that you put in. So does DuckDuckGo, Qwant, Yahoo, whatever. That is what a good search engine is supposed to do.
I can assure you, when search engines stop doing that, and instead try to give “balanced” results, according to whatever opaque criteria for “balanced” their company comes up with, that will be the real problem.
I don’t like Google, and only use Google when other search engines fail. But this article is BS.
In TikTok or Instagram Reels, you don’t follow people you like. You just watch stuff happening.
That’s actually the whole point of TikTok, what made it different when it started. An app for short videos where you follow people you like is more of a Snapchat competitor, not TikTok.
If we wait for AI to be advanced enough to solve the problem and don’t do anything in the meantime, when the time finally comes, the AI will (then, rightfully) determine that there’s only one way to solve it…
It’s not an article about LLMs not using dialects. In fact, they have learned said dialects and will use them if asked.
What they did was ask the LLM to suggest adjectives associated with sentences, and it would associate more aggressive or negative adjectives with African dialect.
Seems like it's not a bias in the AI models themselves, but rather a reflection of the source material.
All (racial) bias in AI models is actually a reflection of the training data, not of the modelling.
And who hasn’t contributed any code to this particular repo (according to GitHub insights).
15 hours for what period of time? The article mentions they’d refill in two days…
I like the idea, but I really hate that they’ve hardcoded the provider.
I see an access violation there…
Bluesky users will be able to opt into experiences that aren’t run by the company
Yea, no, the biggest server not showing federated content by default is just pseudo-federation: being able to say you have it, while not really doing it.
Not for international (non-English) results.
Too bad that’s based on macros. A full preprocessor could require that all keywords and names in each scope form a prefix code, and then allow us to freely concatenate them.