Programmer and sysadmin (DevOps?), wannabe polymath in tech, science and the mind. Neurodivergent, disabled, burned out, and close to throwing in the towel, but still liking ponies 🦄 and sometimes willing to discuss stuff.


  • Because it’s easier to migrate from Twitter to Bluesky.

    • Mastodon onboarding sucks: have to select an app, select an instance… and you’ve lost 99% of the users 😮‍💨
    • Bluesky: install the official app, pick a username and password, get to pick some interest topics, and you’re set up with a basic feed.

    Extras:

    • Starter packs: Users can advertise curated lists of people to follow, making it easier to migrate whole communities.
    • Moderation is arguably better with community “labelers” who don’t remove the content (doesn’t antagonize “freeze peach” people).
    • 3rd-party tools to automatically match and add ex-Twitter users who migrated to Bluesky.

    Overall, it gets a boost from a faster increase in network effect.


  • Nostr is great for privacy and for crypto, but not yet suitable for the general public.

    Asking an average user to secure a cryptographic key for their identity, when most can barely hold onto a user:pass, is kind of ridiculous… so Nostr is selling a $100 “authenticator box”. Not particularly user friendly.
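
    For perspective, a Nostr identity is nothing but a secp256k1 keypair, so onboarding boils down to something like this sketch (deriving the shareable public key needs a secp256k1 library, omitted here):

    ```python
    import secrets

    # The entire account: 32 random bytes. Lose them and the identity is
    # gone forever; leak them and anyone can impersonate you.
    private_key = secrets.token_bytes(32)
    print("guard this hex string with your life:", private_key.hex())
    ```

    That hex string is what the average user is expected to keep safe, forever, with no password reset.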

    One strong point of Nostr is Bitcoin LN integration, which could potentially work as a source of revenue. But the look&feel is not polished enough, while at the same time it tries to offer more interaction types (like the marketplace) than what people really want: Twitter’s sweet teat.


  • It will be, “but”.

    The code is dual-licensed MIT and Apache, meaning it’s fully compatible with a proprietary fork, but a free federated network could also still survive.

    For now, it seems like they are planning to develop extra features on top of the basic functionality, not paywall basic features… but time will tell.

    In any case, they seem to be led by people who jumped ship from Twitter before the Muskocalypse, so it’s becoming kind of “old-time Twitter”. Chances are, as Musk rides Twitter’s popularity and inertia until fully turning it into a dystopian dictatorship propaganda machine, Bluesky will emerge to replace it as a slightly better iteration of what Twitter used to be.


  • Well… pretty much all plastics shed microplastics; it’s a matter of how much. Scratching, rubbing, heating without melting, or starting with loosely packed fibres, all accelerate the process.

    There are estimates that we ingest enough microplastics weekly to make a credit card, which is truly dystopian. I wonder how much more it was during my pen-cap-chewing phase. The whole variety of compounds used to manufacture them, under more or less control, is a Pandora’s box of possible issues.







  • Political isolation would allow them to decide which countries to do business with, and on what terms, in order to maximize their own profits. Instead of a “free for all” market, it could become a “limited for all, except for some chosen ones”. The USD has been artificially propped up by the USA being “the world police”, or the largest bully on the playground. In a multipolar world, where some large nations start to put that to the test, chances are the USD could stop being the world’s currency, making it more beneficial to establish selective trade agreements valued in goods, closer to barter.

    They could still solidify Trump’s power… or just the opposite: impeach him, throw him away, and put in his place someone they might see as easier to control. For now, with a Republican majority in both the House and the Senate, the threat of impeachment could work to keep Trump in line. As much as Trump’s ability to say “yes, no, and the opposite” worked great for people to cherry-pick whatever they wanted to hear and vote for him, the same can work wonders to cherry-pick the opposite and destroy him. Countries like China use that kind of politics all the time: anyone wanting to advance needs to commit some irregularities that can be used to blackmail them, so they can be thrown under the bus when the higher-ups feel threatened.

    As DOGE works its way through the administration to cull non-political appointments, it can just as easily decide to keep Trump loyalists, or Musk loyalists, or Vance loyalists, or whatever. That’s one part of the autocoup. For the other part, Trump has already promised no more elections, and to get the term limit removed. But even if he manages to get those, whether he’s the one to enjoy them remains to be seen. Trump’s a babbling loudmouth, but Musk has already gained several times more from his election (~$100B) than Trump’s whole net worth ($6B, maybe).




  • Hm… good point… but… let’s see, assuming full parallel processing:

    • […]
    • Frame -2 ready
    • Frame -1 ready
      • Show frame -2
      • Start interpolating -2|-1 (should take less than 16ms)
      • Start rendering Frame 0 (will take 33ms)
      • User input 0 (will be received in 20ms if wired)
    • Wait 16ms
      • Frame -2|-1 ready
    • Show Frame -2|-1
    • Wait 4ms
      • Process User input 0 (max 12ms to get into next frame)
      • User input 1 (will be received in 20ms if wired)
    • Wait 12ms
    • Frame 0 ready
      • Show Frame -1
      • Start interpolating -1|0 (should take less than 16ms)
      • Start rendering Frame 1 {includes User input 0} (will take 33ms)
    • Wait 8ms
      • Process User input 1 (…won’t make it into a frame before User input 2 is received)
      • User input 2 (will be received in 20ms if wired)
    • Wait 8ms
      • Frame -1|0 ready
    • Show Frame -1|0
    • Wait 12ms
      • Process User Input 1+2 (…will it take less than 4ms?)
    • Wait 4ms
    • Frame 1 ready {includes user input 0}
      • Show Frame 0
      • Start interpolating 0|1 (should take less than 16ms)
      • Start rendering Frame 2 {includes user input 1+2… maybe} (will take 33ms)
    • Wait 16ms
      • Frame 0|1 ready {includes partial user input 0}
    • Show Frame 0|1 {includes partial user input 0}
    • Wait 16ms
    • Frame 2 ready {…hopefully includes user input 1+2}
      • Show Frame 1 {includes user input 0}
    • […]

    So…

    • From user input to partial display: 66ms
    • From user input to full display: 83ms
    • Some user inputs will be bundled up
    • Some user inputs will take some extra 33ms to get displayed

    Effectively, an input-to-render equivalent of somewhere between a blurry 15fps (1/66ms) and an abysmal 8.6fps (1/116ms, for the inputs that eat the extra 33ms).

    Could be interesting to run a simulation and see how many user inputs get bundled or “lost”, and what the maximum latency would be.
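
    A minimal sketch of that simulation, in Python, assuming Poisson input arrivals at a made-up ~30 Hz and the timings from the timeline above (none of this reflects a real engine):

    ```python
    import random

    RENDER_MS = 33.3   # one real frame at 30fps
    INTERP_MS = 16.7   # interpolated frame lands halfway between real frames
    WIRE_MS   = 20.0   # wired controller transit time
    HOLD      = 2      # a ready frame is held one interval for interpolation:
                       # frame k starts at k*RENDER_MS, displays at (k+HOLD)*RENDER_MS
    SIM_MS    = 60_000.0
    INPUT_HZ  = 30.0   # assumed input rate, purely made up

    random.seed(1)

    # Generate press times as Poisson arrivals over one simulated minute.
    presses = []
    t = 0.0
    while True:
        t += random.expovariate(INPUT_HZ / 1000.0)
        if t >= SIM_MS:
            break
        presses.append(t)

    latencies = []   # press -> full display of a frame containing that input
    bundles = []     # inputs that landed in the same frame

    i, k = 0, 0
    while i < len(presses):
        render_start = k * RENDER_MS
        count = 0
        # An input makes it into frame k if it arrived (press + wire)
        # before that frame started rendering.
        while i < len(presses) and presses[i] + WIRE_MS <= render_start:
            display = (k + HOLD) * RENDER_MS
            latencies.append(display - presses[i])
            count += 1
            i += 1
        if count:
            bundles.append(count)
        k += 1

    print(f"latency min/mean/max: {min(latencies):.0f}/"
          f"{sum(latencies) / len(latencies):.0f}/{max(latencies):.0f} ms")
    print(f"(partial, interpolated display lands {INTERP_MS:.0f} ms earlier)")
    print(f"max inputs bundled into one frame: {max(bundles)}")
    ```

    With these assumptions it prints press-to-full-frame latencies in a rough 87–120ms band, close enough to the hand count above to be interesting.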

    Still, at a fixed 30fps, the latency would be:

    • 20ms best case
    • 53ms worst case (missed frame)

  • If the concern is about “fears” as in “feelings”… there is an interesting experiment where a single neuron/weight in an LLM can be identified that controls the “tone” of its output (more formal, informal, academic, jargon-heavy, a particular dialect, etc.), and can be exposed to the user as a control over the LLM’s output.

    With a multi-billion neuron network, acting as an a priori black box, there is no telling whether there might be one or more neurons/weights that could represent “confidence”, “fear”, “happiness”, or any other “feeling”.

    It’s something to be researched, and I bet it’s going to be researched a lot.
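
    As a toy illustration of “exposing” such a control, an activation-steering hook in PyTorch might look like this; the GPT-2-style layer path and the tone_vector are assumptions for the sketch, not any paper’s actual code:

    ```python
    import torch

    class ToneKnob:
        """Add a scaled steering vector to one layer's hidden state.

        Hypothetical: tone_vector would come from contrasting activations
        on, say, formal vs. informal text; model.transformer.h[i] assumes
        a GPT-2-style module layout.
        """
        def __init__(self, model, layer_idx, tone_vector):
            self.strength = 0.0        # the user-facing dial
            self.vector = tone_vector  # shape: (d_model,)
            layer = model.transformer.h[layer_idx]
            self.handle = layer.register_forward_hook(self._hook)

        def _hook(self, module, inputs, output):
            # output[0] is the hidden state, shape (batch, seq, d_model)
            return (output[0] + self.strength * self.vector,) + output[1:]

        def remove(self):
            self.handle.remove()
    ```

    Sliding strength between, say, -2 and 2 would then shift the tone at inference time, without retraining anything.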

    > If you give ai instruction to do something “no matter what”

    The interesting part of the paper is that the AIs would do the same even in cases where they were NOT instructed to do it “no matter what”. Sometimes an apparently innocent conversation can trigger results like those of a pathological liar.


  • IANAL either, but in recent streams from Judge Fleischer (Houston, Texas, USA) there have been some cases (yes, plural) where repeatedly texting life threats to a victim, or even texting a victim’s friend to pass on a threat to the victim, has been considered a “terrorist threat”.

    As for the “sane country” part… 🤷… but from a strictly technical point of view, I think it makes sense.


    I once knew a guy who was married to a friend, and he had a dog. He’d hit his own dog to make her feel threatened. Years went by, nobody did anything, she’d come to me crying, had multiple miscarriages… until he punched her, kicked her out of the car, and left her stranded on the road after a hiking trip. They divorced and went their separate ways; she found another guy, got married again, and nine months later they had twins.

    So… would it’ve been sane to call what the guy did, “terrorism”? I’d vote yes.



  • There are several separate issues that add up together:

    • A background “chain of thought” where a system (“AI”) uses an LLM to re-evaluate and plan its responses and interactions, taking into account updated data (aka self-awareness)
    • The ability to call external helper tools that let it interact with and control other systems
    • Training corpus that includes:
      • How to program an LLM, and the system itself
      • Solutions to programming problems
      • How to use the same helper tools to copy and deploy the system or parts of it to other machines
      • How operators (humans) lie to each other

    Once you have a system (“AI”) with that knowledge and those capabilities (the whole loop fits in a handful of lines; see the sketch below)… shit is bound to happen.

    When you add developers using the AI itself to help in developing the AI itself… expect shit squared.
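
    A hypothetical sketch of that loop, with llm() and the tools as placeholders rather than any real API:

    ```python
    import json

    def llm(messages):
        # A real implementation would call a chat-completion endpoint; this
        # stub fakes one tool call followed by a final answer so the loop runs.
        if not any("tool result" in m["content"] for m in messages):
            return '{"tool": "run_shell", "args": {"cmd": "uname -a"}}'
        return "All done."

    TOOLS = {                                                  # hypothetical helpers
        "run_shell": lambda cmd: f"(pretend output of: {cmd})",  # control other systems
        "copy_file": lambda src, dst: "ok",                      # enough to deploy itself elsewhere
    }

    def agent(goal, max_steps=10):
        messages = [
            {"role": "system",
             "content": 'Plan step by step. To use a tool, reply with JSON: {"tool": ..., "args": {...}}'},
            {"role": "user", "content": goal},
        ]
        for _ in range(max_steps):
            reply = llm(messages)                  # the "chain of thought" step
            messages.append({"role": "assistant", "content": reply})
            try:
                call = json.loads(reply)           # a tool call, or...
            except ValueError:
                return reply                       # ...a plain answer: done
            result = TOOLS[call["tool"]](**call["args"])
            messages.append({"role": "user", "content": f"tool result: {result}"})

    print(agent("inspect the host"))
    ```

    The scary parts are just entries in TOOLS, plus whatever the training corpus taught the model to do with them.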