• 0 Posts
• 188 Comments
Joined 2 years ago · Cake day: June 11th, 2023


  • > What if they don’t want that multi-step process?

    > Hard thing, but what are a few steps to avoiding AI spam results?

    There’s nothing inherent to this proposal that avoids spam or SEO. You describe it as a “moderation issue” and then mark it as out of scope.

    If avoiding AI spam or SEO sites is a feature of this proposal, then it should be addressed directly.


  • sbv@sh.itjust.works to Programming@programming.dev · A Better Federated Search · 6 days ago

    > no one can index exponential amounts of data, nevermind the predatory SEO and AI.

    Google is. The NSA almost undoubtedly is. A bunch of other governments are. AI companies probably are. Meta probably is.

    > When a user wants to search something up, they first search for a topic in web-rings, and then they select that web-ring.

    From a usability perspective, that doesn’t feel great. How does a user find the first web-ring search engine? What if they don’t want that multi-step process? How do users avoid predatory web-rings that are trying to sell them stuff? How does this compete with existing search?
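
    To make that concrete, here’s a rough sketch of the two-step flow as quoted, in Rust. Every name below (WebRing, find_rings, search_ring) is hypothetical, invented for the example; none of it comes from the actual proposal:

    ```rust
    // Hypothetical types for the proposed two-step flow.
    // Nothing here is taken from the proposal itself.
    struct WebRing {
        name: String,
        endpoint: String,
    }

    struct SearchResult {
        title: String,
        url: String,
    }

    // Step 1: the user searches a directory for web-rings matching a topic.
    fn find_rings<'a>(directory: &'a [WebRing], topic: &str) -> Vec<&'a WebRing> {
        directory
            .iter()
            .filter(|ring| ring.name.to_lowercase().contains(&topic.to_lowercase()))
            .collect()
    }

    // Step 2: the user picks one ring and runs their real query inside it.
    fn search_ring(ring: &WebRing, query: &str) -> Vec<SearchResult> {
        // A real implementation would query ring.endpoint over the network.
        println!("searching {} for '{}'", ring.endpoint, query);
        Vec::new()
    }

    fn main() {
        let directory = vec![
            WebRing { name: "rust programming".into(), endpoint: "https://example.org/rust".into() },
            WebRing { name: "gardening".into(), endpoint: "https://example.org/garden".into() },
        ];

        // The user must finish step 1 and choose a ring before they can
        // run the query they actually care about.
        let candidates = find_rings(&directory, "rust");
        if let Some(ring) = candidates.first() {
            let _results = search_ring(ring, "federated search");
        }
    }
    ```

    Even in this toy version, the query the user actually cares about can’t run until they’ve completed the directory lookup and picked a ring.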


  • > I thought those were for only when shit is seriously wrong and execution can’t continue in the current state.

    That’s how it starts. Nice and simple. Everyone understands.

    Until

    > some resource was in a bad state

    and you decide you want to recover from that situation, but you don’t want to refactor all your code.

    Suddenly, catching exceptions and rerunning seems like a good idea. With that normalized, you wonder what else you can recover from.

    Then you head down the rabbit hole of recovering from different things at different times with different types of exception.

    Then it turns into confusing flow control.

    The whole `Result<ReturnValue, Error>` thing from Rust is a nice alternative.
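
    A minimal sketch of what that looks like (ResourceError and load_resource are illustrative names made up for the example): the error becomes an ordinary value, and recovery is explicit pattern matching at the call site instead of a catch block somewhere up the stack.

    ```rust
    use std::fmt;

    // An illustrative error type: each recoverable situation is a named variant.
    #[derive(Debug)]
    enum ResourceError {
        BadState, // the "resource was in a bad state" case
        NotFound,
    }

    impl fmt::Display for ResourceError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            match self {
                ResourceError::BadState => write!(f, "resource was in a bad state"),
                ResourceError::NotFound => write!(f, "resource not found"),
            }
        }
    }

    // A made-up loader: id 0 simulates the bad-state failure.
    fn load_resource(id: u32) -> Result<String, ResourceError> {
        match id {
            0 => Err(ResourceError::BadState),
            1 => Err(ResourceError::NotFound),
            _ => Ok(format!("resource {id}")),
        }
    }

    fn main() {
        // Recovery is visible right here: retry once on BadState,
        // give up on anything else. No hidden control flow.
        let resource = match load_resource(0) {
            Ok(r) => r,
            Err(ResourceError::BadState) => {
                // the explicit, local equivalent of "catch and rerun"
                load_resource(2).expect("retry failed")
            }
            Err(e) => panic!("unrecoverable: {e}"),
        };
        println!("got {resource}");
    }
    ```

    The compiler makes the caller acknowledge the error, and which situations get recovered from is spelled out where they happen, so it stays readable instead of turning into flow control scattered across catch handlers.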