

Thanks! I’ll take a look.
https://github.com/fabien0102/ts-to-zod looks promising. Thanks!
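In case it helps anyone else: as I understand it, ts-to-zod takes existing TypeScript types and generates matching Zod schemas, so the hand-written types stay the single source of truth. Rough sketch of the idea (file names and the exact generated output here are illustrative, not copied from the project docs):

```ts
// src/types.ts — hand-written types you already have
export interface User {
  id: number;
  name: string;
  email?: string;
}

// src/types.zod.ts — roughly what the generator emits
import { z } from "zod";

export const userSchema = z.object({
  id: z.number(),
  name: z.string(),
  email: z.string().optional(),
});
```

Then you run something like `npx ts-to-zod src/types.ts src/types.zod.ts` and import `userSchema` wherever you need runtime validation.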
My machine is not a workhorse. I got it second-hand. It has around 8 GB of RAM and an 80 GB HDD I found in a laptop.
But it’s enough to work as a testbed, so it’s fine with me.
I’ve finally powered on a 15-year-old machine to run a bot I’ve been writing. The thing is slow as dirt and stuck behind a flaky power line network, but it’s working. I got to write my first systemd service definition, which is kind of cool.
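If anyone wants a starting point, a bare-bones service definition looks roughly like this (the unit name, user, and paths are made up here, not my actual setup):

```ini
# /etc/systemd/system/mybot.service (hypothetical name and paths)
[Unit]
Description=Chat bot
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=bot
WorkingDirectory=/opt/mybot
ExecStart=/usr/bin/python3 /opt/mybot/bot.py
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

After that it’s `systemctl daemon-reload`, `systemctl enable --now mybot.service`, and `journalctl -u mybot` to watch the logs.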
just one little drop
Works for me!
No, that’s awesome! It’s an impressive level of opsec. 👍
Edit: By “no”, I mean there’s nothing wrong with doing that. Like I said: awesome.
I just noticed your account age and name. Did you create this account just to ask the question?
Eh. If you’re worried about people thinking it’s a weird or offensive name, just pick another name. You don’t have to use it.
As an aside: I wouldn’t broadcast a variant of my name as a network name or mention it on social media. I don’t think anything bad is likely to happen, but it’s just good practice.
they didn’t grow up in the 80s 😞
I got my kids to watch The Goonies and they didn’t get into it.
Make a new thing for a new generation of kids to love.
Phew. Glad that bug got papered over 9999 years ago, so we won’t have to deal with it.
Screenshots are also disabled.
The responsibility to prevent AI / SEO is … in moderation and users verifying / certifying the quality of web-rings.
The proposal should include mechanisms to support moderation and user feedback. That flavour of crowdsourcing is difficult because users, search engine maintainers, and web ring participants may be malicious.
What if they don’t want that multi-step process?
Hard problem, but what are a few steps toward avoiding AI spam in results?
There’s nothing inherent to this proposal that avoids spam or SEO. You describe it as a “moderation issue” and then mark it as out of scope.
If avoiding AI spam or SEO sites is a feature of this proposal, then it should be addressed directly.
no one can index exponential amounts of data, nevermind the predatory SEO and AI.
Google is. The NSA almost undoubtedly is. A bunch of other governments are. AI companies probably are. Meta probably is.
When a user wants to search something up, they first search for a topic in web-rings, and then they select that web-ring.
From a usability perspective, that doesn’t feel great. How does a user find the first web ring search engine? What if they don’t want that multi-step process? How do users avoid predatory web rings that are trying to sell them stuff? How does this compete with existing search?
Instead of returning a random number, what if we make the program guess?