I currently have a bit over 2400 tabs open, and it has been roughly a month since I last restarted Firefox because it got too laggy. It is becoming an issue again.
Some years ago, when I was still using more Google services (like an account for calling out from my PBX), I had each service assigned to its own Google account to limit the impact of Google doing something crazy to one of them.
Apart from the Play Store, YouTube Red is now the only service left - and that’s about to go, as they’ve now made it too expensive, especially taking into account that they enshittified it so much that we’ve blocked it on the TV, and “ad-free on TV” was the main use case there…
If you can afford it, see if Eaton has a smaller tower UPS suitable for you.
About 20 years ago I wrote a script that converts pictures to HTML tables. Back then RAM was a severe constraint for this, and even on more powerful hardware browsers tended to just crash on larger pictures.
I checked it again a few years later, and things looked way better. I guess with CSS it’d be rather trivial nowadays to do the same with a short video by just cycling through showing/hiding the table for each frame.
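A minimal sketch of the idea in Python - not the original script, so the function name, cell size and styling are all made up for illustration:

```python
# Sketch: render an image as an HTML table, one <td> per pixel.
# Downscaling keeps the table small enough that browsers don't choke.
from PIL import Image

def image_to_html_table(path: str, max_width: int = 80) -> str:
    img = Image.open(path).convert("RGB")
    ratio = max_width / img.width
    img = img.resize((max_width, max(1, round(img.height * ratio))))

    rows = []
    for y in range(img.height):
        cells = []
        for x in range(img.width):
            r, g, b = img.getpixel((x, y))
            # Fixed-size cells, colored via background - no <img> anywhere.
            cells.append(
                f'<td style="width:2px;height:2px;'
                f'background:#{r:02x}{g:02x}{b:02x}"></td>'
            )
        rows.append("<tr>" + "".join(cells) + "</tr>")

    return ('<table cellspacing="0" cellpadding="0" '
            'style="border-collapse:collapse">' + "".join(rows) + "</table>")

if __name__ == "__main__":
    print(image_to_html_table("picture.png"))
```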
At the time of sending the mail I need the metadata - so offering an SMTP server implementation which keeps this in memory while forwarding is not hard. You’d lose a persistent spool in case of delivery errors - but we were already building relays 30 years ago that keep the client connection open while trying to deliver the mail, so errors get reported directly to the client; that also isn’t an excuse.
For IMAP: if you don’t do server-side searching or similar, it’ll work with fully encrypted mails.
They will have access to metadata - otherwise they wouldn’t be able to operate as an email service. That’s sufficient to implement those protocols.
The client then would have to bring their own crypto, and you’d probably want the SMTP server to reject mails delivered unencrypted (though their FAQ says you can send unencrypted mails).
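Roughly what “bring your own crypto” could look like on the client side - a sketch only, with a placeholder hostname, and using an inline armored body where a real client would use PGP/MIME:

```python
# Sketch: encrypt the body with GnuPG before handing the mail to the
# provider's SMTP server, so only the metadata stays readable there.
# Assumes the recipient's public key is already in the local keyring.
import smtplib
import subprocess
from email.message import EmailMessage

def send_encrypted(body: str, sender: str, rcpt: str, subject: str) -> None:
    ciphertext = subprocess.run(
        ["gpg", "--encrypt", "--armor", "-r", rcpt],
        input=body.encode(), capture_output=True, check=True,
    ).stdout.decode()

    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = sender, rcpt, subject
    msg.set_content(ciphertext)

    # smtp.example.org is a placeholder; add s.login(...) if needed.
    with smtplib.SMTP("smtp.example.org", 587) as s:
        s.starttls()
        s.send_message(msg)
```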
The reason they claim they can’t is probably that they want to keep full control over what users are doing - in which case I agree: fuck them, don’t use services like that.
Don’t have links anymore, but a few months ago I came across some startup trying to sell AI that watches your production environment and automatically optimizes queries for you.
It is just a matter of time until we see the first large AI-induced data loss.
Did pretty much the same with a new server recently - spent ages debugging why it didn’t find the SAS disks. Turns out disks like to have power connected, and no amount of debugging at the software level will help you with that.
I was referring to work setups with the overengineering - if I had a cent for every time I had to argue with somebody at work not to make things more complex than we actually need, I’d have retired a long time ago.
Unless you are gunning for a job in infrastructure you don’t need to go into kubernetes or terraform or anything like that,
Even then, knowing when not to use k8s or similar tools is often more valuable than deep knowledge of them - a lot of the setups where I see k8s used don’t have the uptime requirements to warrant the complexity. If I have something that just needs to be up during working hours, and I have reliable monitoring plus the ability to re-deploy it via ansible within 10 minutes if it goes poof, then putting a few additional layers that can blow up in between maybe isn’t the best idea.
Everything is deployed via ansible - including nameservices. So I already have a description of my infra in ansible, and the rest is just a matter of writing scripts to pull it into a more readable form, and maybe adding a few comment labels that also get extracted for easily forgettable admin URLs.
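A rough sketch of such an extraction script - the inventory path and the admin_url host variable are assumptions about one possible layout, not anything ansible mandates:

```python
# Sketch: pull a readable host overview out of a YAML ansible inventory.
import yaml

with open("inventory/hosts.yml") as f:
    inventory = yaml.safe_load(f)

# Common inventory layout: all -> children -> group -> hosts -> vars.
for group, data in inventory["all"]["children"].items():
    print(f"## {group}")
    for host, hostvars in (data.get("hosts") or {}).items():
        # admin_url is a made-up label; any per-host variable works.
        url = (hostvars or {}).get("admin_url", "")
        print(f"- {host} {url}".rstrip())
```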
Shitty companies did it like that back then - and shitty companies still don’t properly utilize the easy tools they have available for controlled deployment nowadays. So nothing really changed, just that the number of people (and with that, the number of morons) skyrocketed.
I had automated builds out of CVS with deployment to staging, and the option to deploy to production after tests, over 15 years ago.
Nowadays I manage my private stuff with the ansible scripts I develop for work - so my own stuff is mostly a development environment for work, and therefore doesn’t need to be done on private time.
A lot of the Zen-based APUs don’t support ECC. The next question is whether it supports registered or unregistered modules - everything up to Threadripper is unregistered (though I think some of the Pro parts are registered), while Epycs are registered.
That makes a huge difference in how much RAM you can add, and how much you pay for it.
It has been a while since I touched ssmtp, so take what I’m saying with a grain of salt.
The problem with ssmtp and related tools when I was testing them was their behaviour in error conditions: due to the lack of any kind of spool they don’t fail very gracefully, and if the sending software doesn’t expect that and implement a spool itself (which it typically has no reason to, as pretty much the only situation where something like sendmail would fail is one where it also couldn’t write a spool), this can very easily lead to lost mails.
I already had a working SMTP client capable of fishing mails out of a Maildir at that point, so I ended up just writing a simple sendmail program that throws whatever it receives into a Maildir, plus a cronjob to forward it from there. This might be the most minimalistic setup for reliably sending out mail (and I’m using it on all my computers behind Emacs to do so) - but it is badly documented, so if you care about reliability postfix might be a better choice, and if you don’t, just go with ssmtp or similar. Or if you do want to dig into that, message me and I’ll help make things more user friendly.
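A minimal sketch of the two pieces - spool path and smarthost are placeholders, not my actual setup:

```python
#!/usr/bin/env python3
# Piece 1 (sketch): a sendmail stand-in that just drops whatever it
# receives on stdin into a Maildir acting as the outgoing spool.
import sys
import mailbox

spool = mailbox.Maildir("/var/spool/outgoing", create=True)
spool.add(sys.stdin.buffer.read())
```

```python
#!/usr/bin/env python3
# Piece 2 (sketch): cronjob forwarding the spool through a smarthost.
# On any error the mail simply stays in the Maildir for the next run.
import mailbox
import smtplib

spool = mailbox.Maildir("/var/spool/outgoing")
with smtplib.SMTP("smarthost.example.org", 587) as s:
    s.starttls()
    # s.login("user", "password")  # if the smarthost wants auth
    for key in list(spool.keys()):
        s.send_message(spool[key])
        spool.remove(key)  # drop the mail only after delivery succeeded
```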
A problem of this bubble is that it is making AI synonymous with LLM - and when it goes down it will burn other, more sensible forms of AI with it.
It surely is a bubble - though probably a bit different from many other bubbles.
I think OpenAI made the right call (for them) to commercialize when they did - as that pretty much was their only chance to do so. Things have moved fast over the last 1.5 years - what used to take a decade in tech has happened within months: OpenAI is the dinosaur company grandfathered in, while for about a year already it has been more sensible for anybody wanting to do something with LLMs to self-host one of the more open language models (or buy hosting capacity, but bring your own data), and possibly adjust or re-train it.
As a company owner I’ve been getting a ridiculous amount of spam for a year already from all kinds of companies building products on top of the OpenAI stack, or trying to sell training or conferences. All those companies will be left with nothing once the slower users realize the technology has moved on. It’s like somebody trying to build all their product offerings on the VMware stack nowadays.
If you as a company want to offer something around AI right now, the safest option is probably offering hosting - or, if you want to be more hands-on, adjustment of open models. Both of those are still very risky, and many will go bust in the years to come - but they’re not as suicidal as building on top of a closed dinosaur.
All my software can be configured using dedicated configuration files (.c)
Because it does JBOD if the controller supports it. Pretty much none of the controllers you’ll find in consumer hardware support that.
That’d break git repos where files with the same name but different case exist.