Just some Internet guy

He/him/them 🏳️‍🌈

  • 0 Posts
  • 326 Comments
Joined 2 years ago
Cake day: June 25th, 2023


  • The idea that GPT has a mind and wants to self-preserve is insane. It’s still just text prediction, and all the literature it’s trained on was written by humans with a sense of self-preservation, so of course it’ll show patterns of talking about self-preservation.

    It has no idea what self-preservation is; it only even knows it’s an AI because we told it it is. It doesn’t run continuously anyway: it literally shuts down after every reply, and its context is fed back in for the next query.
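
    To make the “shuts down after every reply” part concrete, here’s a minimal sketch of how a chat loop over a stateless text predictor typically works; `generate()` is a hypothetical stand-in for whatever model endpoint gets called:

    ```python
    # Sketch, not a real API: the "conversation" is client-side state.
    # The model itself is stateless; every turn, the whole transcript is
    # re-sent as one big prompt.
    transcript = []

    def generate(prompt: str) -> str:
        # Hypothetical stand-in for a real model call; a real one would send
        # `prompt` to a completion endpoint and return the generated text.
        return "(model output goes here)"

    while True:
        transcript.append("User: " + input("> "))
        # The model "remembers" nothing between turns: it only ever sees
        # the text we feed it right now.
        reply = generate("\n".join(transcript) + "\nAssistant:")
        transcript.append("Assistant: " + reply)
        print(reply)
    ```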

    I’m tired of this particular kind of AI clickbait; it needlessly scares people.


  • To kind of visually see it, I found this thread where someone took oscilloscope captures of their UPS’s output, and they’re all pseudo-sines: https://forums.anandtech.com/threads/so-i-bought-an-oscilloscope.2413789/

    As you can see, the power isn’t very smooth at all. It’s good enough for a lot of use cases and lower-end power supplies, because they just shove it into a bridge rectifier and capacitors. Higher-end power supplies have tighter margins and are also more likely to have safety features to protect the PC, so they can trip into protection mode and shut off. Bad power can mean dips in power to the system, which can cause calculation errors, and that’s very undesirable, especially on a server. It probably also messes with power factor correction circuits, which cheap PSUs often skimp on but a good high-quality unit would have, and it may shut down because of that.

    As you can see in those images too, it spends a significant amount of time at 0V (no power, that’s the middle of the screen), whereas a sine wave spends an infinitely short time at 0: it goes positive and then negative immediately. All the time spent at 0, you’re relying on the big capacitors in the PSU holding enough charge to make it to the next burst of power. With a sine wave they’d hold just long enough (we’re going down to 12V and 5V from 120/240V input, so the amount of time normally spent at or below ±12V is actually fairly short).

    It’s technically the same average power, so most devices don’t really care. It really depends on the design of the particular unit: some can deal with really bad power input and manage just fine, and some will get damaged over long-term use. Old linear ones with an AC transformer on the input can be particularly unhappy because of magnetic field saturation and other crazy inductor shenanigans.
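
    A quick way to convince yourself of the “same average power” part is to compute the RMS of both shapes. A rough sketch (the amplitudes and duty cycle are illustrative assumptions, not measurements from that thread):

    ```python
    import numpy as np

    t = np.linspace(0, 1, 10_000, endpoint=False)  # one normalized AC cycle

    # True sine: 120 V RMS means a peak of about 170 V.
    sine = 170 * np.sin(2 * np.pi * t)

    # Pseudo-sine: a stepped square wave sitting at 0 V a third of the time.
    # To deliver the same RMS despite the dead time, the plateau must be taller.
    plateau = 120 / np.sqrt(2 / 3)  # ~147 V for a 2/3 duty cycle
    pseudo = np.where((t % 0.5) < 1 / 3, plateau, 0.0) * np.where(t < 0.5, 1, -1)

    def rms(v):
        return np.sqrt(np.mean(v ** 2))

    print(f"sine RMS:   {rms(sine):.1f} V")    # ~120 V
    print(f"pseudo RMS: {rms(pseudo):.1f} V")  # ~120 V: same power into a resistive load
    ```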

    Pure sine UPSes are better because their output is basically the same as what comes out of the wall outlet. Line-interactive ones are even better because they’re ready to take over the moment power goes out, and at exactly the same spot in the sine wave, so the jitter isn’t quite as bad during the transition. Double-conversion is the top tier because those always run off the battery, so there’s no interruption for the connected computer at all. Losing power just means the battery isn’t being charged/kept topped off from the wall anymore, so it starts discharging.



  • I would probably skip Lemmy Easy Deploy and just do a regular deployment so it doesn’t mess with your existing setup. Getting it running with just Docker is not that much harder, and you just need to point your NGINX at it. Easy Deploy kind of assumes it’s got the whole machine to itself, so it’ll try to bind the same ports as your existing NGINX, as does the official Ansible.

    You really just need a postgres instance, the backend, pictrs, the frontend and some NGINX glue to make it all work. I recommend stealing the files from the official Ansible, as there are a few gotchas in the NGINX config: the frontend and backend share the same host, and one is just layered on top of the other.
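
    For a rough idea of what that NGINX glue looks like, here’s a sketch of the layering (based on the approach in the official config; the container names, ports and hostname are assumptions, steal the real files from the Ansible repo):

    ```nginx
    upstream lemmy {
        server "lemmy:8536";     # assumed backend container name/port
    }
    upstream lemmy-ui {
        server "lemmy-ui:1234";  # assumed frontend container name/port
    }

    server {
        listen 443 ssl;
        server_name lemmy.example.com;

        location / {
            # Frontend by default...
            set $proxpass "http://lemmy-ui";

            # ...but ActivityPub/API clients and form POSTs go to the
            # backend, even though both share the same hostname.
            if ($http_accept ~ "^application/.*$") {
                set $proxpass "http://lemmy";
            }
            if ($request_method = POST) {
                set $proxpass "http://lemmy";
            }

            proxy_pass $proxpass;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```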




  • What often happens next is the realization that the existing system was handling far more edge cases than it initially appears. You often discover these edge cases when the new system is deployed and someone complains about their use case breaking.

    The reverse is also sometimes true, and that’s when a rewrite is justifiable.

    I’ve worked with many systems that piled up a ton of edge-case handling for things that are no longer possible, which makes the code way harder to follow than it should be.

    I’ve had successful rewrites that used less than a tenth of the code for more features and significantly better reliability, and that completely eliminated many of the edge cases by design.


  • IMO a lot of what makes nice self-hostable software is clean and sane software in general. A lot of stuff ends up either trying so hard to be easy that it can’t scale up, or so unbelievably complicated that you can’t scale it down. Don’t make me install an email server and API keys for services needed by features I won’t even use.

    I don’t particularly mind needing a database and Redis and the like, but if you need MySQL and PostgreSQL and Redis and memcached and an ElasticSearch cluster, and some of it is Go, some of it is Ruby and some of it is Java with a sprinkle of someone’s Erlang phase, … no, just no, screw that.

    What really sucks is when Docker is used as a band-aid to hide all that insanity under the guise of easy self-hosting. It works, but it’s still a pain to maintain and debug, and it often uses way more resources than it really needs. Well-written software is flexible and sane.

    My stuff at work runs equally fine locally in under a gig of RAM and barely any CPU at idle, and yet spans dozens of servers and microservices in production. That’s sane software.



  • Misconfigured CORS is no worse than someone using curl, or Postman, or any other tool of that kind. What could compromise your server is the backend side of things; the frontend is just a limited HTTP client in the end. The real risk is people making direct requests to your server. CORS is just an ask for browsers specifically to stop cross-domain communication; it protects the users, not you.
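
    To illustrate: your server’s CORS policy does nothing against a non-browser client. A trivial sketch (the URL and payload are placeholders):

    ```python
    import requests

    # A browser would refuse to let cross-origin JS read this response without
    # a matching Access-Control-Allow-Origin header. A plain HTTP client doesn't
    # care: CORS is enforced by browsers, not by the server or the network.
    resp = requests.post(
        "https://your-instance.example/api/endpoint",  # placeholder URL
        json={"anything": "i want"},
        headers={"Origin": "https://evil.example"},    # ignored unless the backend checks it
    )
    print(resp.status_code, resp.text)
    ```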

    Containers like Docker or Podman help a lot with isolation, but you should also make sure your backend is secure. Even then, the most likely attack would usually be something like breaking into your database via SQL injection, still not breaking into the whole instance.

    If anything, using SSH keys, disabling root login and following general server best practices is way more important than your app. Your server itself is more likely to be attacked than the backend. Security comes in layers.

    But realistically you’ll be fine, and if you do end up hacked, it’s a learning experience.



  • If you look at it from a different angle and ask: who might be interested in a user being reported, given that each instance operates independently? The answer is all of them.

    • The instance you’re on could be interested because the content might violate the local instance’s rules, and the admin might want to delete it, even if only from that instance.
    • The instance hosting the community, because regardless of the other two instances, they might not want it there.
    • The instance of the user being reported, because it’s their user and if they’re causing trouble they might want to ban the account.

    The rest follows naturally: obviously, if the account is banned at the source, it’s effectively banned globally. If it’s banned on the community’s instance, then you won’t see that user there but might on other instances. And your own instance can ban the user, in which case they’re still freely posting on other instances but you won’t see any of it from your perspective.
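
    In pseudo-ish Python, the routing amounts to something like this (the function and its arguments are made up for illustration):

    ```python
    def instances_to_notify(reporter_instance: str,
                            community_instance: str,
                            reported_user_instance: str) -> set[str]:
        """Every instance with a stake in the report gets a copy."""
        return {
            reporter_instance,       # may remove/ban locally
            community_instance,      # may remove the post or ban from the community
            reported_user_instance,  # may ban the account at the source
        }

    # e.g. a lemmy.world user reporting a lemm.ee user in a lemmy.ml community:
    print(instances_to_notify("lemmy.world", "lemmy.ml", "lemm.ee"))
    # -> all three instances get the report
    ```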







  • With Docker, the internal network is just a bridge interface. The reason most firewall rules don’t apply is a combination of:

    • Containers have their own namespaces, including a network namespace, so each container has a blank set of iptables rules just for it.
    • Container-to-container communication goes through the FORWARD chain, not the INPUT/OUTPUT ones.
    • Docker adds its own rules to ensure that this works as expected.

    The only thing that should be affected by the host firewall is the proxy service Docker uses to listen on a host port and forward the traffic to the container.

    When using Docker, each container acts like an independent machine, and your host gets configured to act as a router. You can firewall Docker containers; the rules just need to be in the right place to work.
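
    That “right place” is typically the DOCKER-USER chain, which Docker evaluates before its own forwarding rules. A sketch (the interface name and address are assumptions):

    ```sh
    # Rules in DOCKER-USER run before Docker's generated FORWARD rules, so
    # they actually apply to traffic headed into containers. Example: only
    # allow 203.0.113.5 to reach published container ports arriving on eth0.
    iptables -I DOCKER-USER -i eth0 ! -s 203.0.113.5 -j DROP
    ```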


  • Also, Series F but they’re only deploying on one server? Try scaling that to a real deployment (200+ servers) with millions of requests going through and see how well that goes.

    And there’s no way their process passes ISO/SOC 2/PCI audits. CI/CD isn’t just “make do things”; it’s also the process, the logs, all the checks that ran, the mandatory peer reviews. You can’t just deploy without an audit log of who pushed what, when, and who approved it.