Wrapping my head around reverse proxies was a game changer for me. I could finally host things that are useful outside my LAN. I use Nginx-Proxy-Manager, which makes the config simple for lazy people like me.
Wow. W. Bush was president (or Obama, depending on the month).
Edit: yep, W. Bush. Oct 6th 2008, so Obama hadn’t even been elected yet.
I’m glad you asked because I’ve sort of been meaning to look into that.
I have 4 8TB drives that have ~64,000 hours (7.3 years) powered on.
I have 2 10TB drives that have ~51,000 hours (5.8 years) powered on.
I have 2 8TB drives that have ~16,800 hours (1.9 years) powered on.
Those 8 drives make up my ZFS pool. Eventually I want to ditch them all and create a new pool with fewer drives. I’m finding that 45TB is overkill, even when storing lots of media. The most data I’ve had is 20TB and it was a bit overwhelming to keep track of it all, even with the *arrs doing the work.
To rebuild it with 4 x 16TB drives, I’d have half as many drives, reducing power consumption. It’d cost about $1300. With double parity I’d have 27TB usable. That’s the downside to larger drives: double parity costs more.
To rebuild it with 2 x 24TB drives, I’d have 1/4 as many drives, reducing power consumption even more. It’d cost about $960. I would only have single parity with that setup, and only 21TB usable.
Increasing to 3 x 24TB drives, the cost goes to $1437 with the only benefit being double parity. Increasing to 4 x 24TB gives double parity, 41TB, and costs almost $2k. That would be overkill.
Eventually I’ll have to decide which road to go down. I think I’d be comfortable with single parity, so 2 very large drives might be my next move, since my price per kWh is really high, around $.33.
Edit: one last option, and a really good one, is to keep the 10TB drives, ditch all of the 8TB drives, and add 2 more 10TB drives. That would only cost $400 and leave me with 4 x 10TB drives. Double parity would give me 17TB. I’ll have to keep an eye on things to make sure it doesn’t get full of junk, but I have a pretty good handle on that sort of thing now.
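For anyone curious how I’m eyeballing these options, here’s a rough back-of-the-envelope sketch. It just does (drives - parity) × size and converts vendor TB to TiB; real ZFS pools lose a bit more to padding and reserved space, which is why my numbers above come out slightly lower. The prices are the rough ones I quoted, not anything official.

```python
# Rough sketch for comparing pool rebuild options.
# Usable space ~ (drives - parity) * drive size, converted to TiB.
# Actual ZFS usable space will be a bit lower (padding, slop space, etc.).

TB_TO_TIB = 1000**4 / 1024**4  # ~0.909

def usable_tib(drives: int, size_tb: float, parity: int) -> float:
    """Approximate usable capacity of a raidz/mirror vdev in TiB."""
    return (drives - parity) * size_tb * TB_TO_TIB

options = [
    ("4 x 16TB, double parity", 4, 16, 2, 1300),
    ("2 x 24TB, single parity", 2, 24, 1, 960),
    ("3 x 24TB, double parity", 3, 24, 2, 1437),
    ("4 x 24TB, double parity", 4, 24, 2, 2000),
    ("4 x 10TB, double parity", 4, 10, 2, 400),
]

for name, n, size, parity, cost in options:
    print(f"{name}: ~{usable_tib(n, size, parity):.0f} TiB usable, ~${cost}")
```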
This has some limitations if I remember correctly. It doesn’t use PostgreSQL, and I don’t think you can use Collabora or whatever, so editing documents in your browser won’t work.
It’s quite possible that I’m wrong about that.
This prompted me to try using it again. The pointer is moving around slow, then fast, then way too fast. It’s difficult to get it to land on what I want. Is that the point?
There’s an add-on and an integration, yeah.
Oh interesting. How fast things change. I’ve only been using Frigate for around a year and I’m already behind the times.
The Home Assistant mobile client? Or is there a Frigate app, too? I have the Frigate webpage bookmarked and used that. It’s also available in the HA front end, but I prefer using Frigate directly.
Frigate for software. Add a Coral to your computer (they come in M.2, Mini PCIe, even USB) to handle the object detection. Configuration is slightly complex, but the documentation is very good.
I’m using a couple of Amcrest cameras which I have on a VLAN that can’t access the internet, so no spying from the manufacturer.
I also added a hard drive specifically for the recordings. It stores many days’ worth of footage, and Frigate handles deleting old footage to make room for new. I figure that hard drive will probably fail sooner than my other drives, which is why I got one just for that.
Immich. Come for the photo backup, stay for everything else because it’s awesome.
9 spinning disks and a couple of SSDs - right around 190 watts, but that also includes my router and 3 PoE WiFi APs. PoE consumption is reported as 20 watts, and the router should use about 10 watts, so I think the server is about 160 watts.
Electricity here is pretty expensive, about $.33 per kWh, so by my math I’m spending $38/month on this stuff. If I didn’t have lots of digital media it’d be worth it to get a VPS probably. $38/month is still cheaper than Netflix, HBO, and all the other junk I’d have to subscribe to.
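The math behind that $38, assuming the server really does draw a steady ~160 watts around the clock:

```python
# Back-of-the-envelope monthly cost, assuming ~160 W continuous at $0.33/kWh.

watts = 160           # estimated server draw (190 W total minus router and APs)
price_per_kwh = 0.33  # my electricity rate
hours_per_month = 24 * 30

kwh = watts / 1000 * hours_per_month
print(f"{kwh:.0f} kWh/month -> ${kwh * price_per_kwh:.2f}/month")
# ~115 kWh/month -> ~$38/month
```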
Ah, yeah, NYC is definitely full of shared surfaces. I recommend taking a shower before you go home, multiple times, maybe once per day. And also when you get home. (unless by “home” you mean the place you’re staying).
NYC is a great place to visit. Have fun.
That’s why I’ve always said using Optical/Toslink etc. is a mistake. Sending music with light just means you’ll hear shade in your music.
Series is more accurate.
If I remember correctly, Proxmox recommends running Docker in virtual machines instead of LXC containers. I sort of gave up on LXC containers for what I do, which is run stuff in Docker and use my server as a NAS with ZFS storage.
LXC containers are unprivileged by default, so the user IDs don’t match the conventional pattern (1000 is the main user, etc.). For a file sharing system this was a pain in the butt, because every file ended up being owned by some crazy user ID. There are ways around it, which I used for some time, but moving to virtual machines has been super smooth.
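To illustrate the “crazy user ID” thing: assuming the default Proxmox setup, unprivileged containers shift UIDs by 100000, so a file created by UID 1000 inside the container shows up on the host owned by UID 101000. A minimal sketch of that mapping:

```python
# Why files written from an unprivileged LXC look like they're owned by a
# "crazy" user ID on the host. Assumes the default 100000 UID shift that
# Proxmox uses for unprivileged containers (custom idmaps change this).

UID_OFFSET = 100000

def host_uid(container_uid: int, offset: int = UID_OFFSET) -> int:
    """Translate a UID inside the container to the UID seen on the host."""
    return container_uid + offset

for uid in (0, 1000, 1001):
    print(f"container UID {uid} -> host UID {host_uid(uid)}")
# container UID 1000 -> host UID 101000, which is what ends up owning
# your files on a bind-mounted share.
```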
They also don’t recommend running Docker on bare metal (Proxmox is Debian, after all). I don’t know the reasons why, but I tend to agree, simply for backups. My VMs get automatically backed up on a schedule, and those backups automatically get sent to Backblaze B2 on a schedule.
It just comes down to personal preference.
We run ours using the Docker “method”, but I sort of wish we had gone the Ansible route. What we have works, but the documentation isn’t up to snuff. To do things in Docker (without Ansible) you basically still have to reference the Ansible repo and use their lemmy.hjson and their Docker Compose file, but there are lots of environment variables that you have to change yourself instead of Ansible doing it.
I do enjoy just using my normal workflow, which is using Dockge/Portainer as much as possible, but it’s a bit of work trying to figure out what Lemmy wants.
I just checked and we have that turned on, too.
We don’t get a lot of applications. A couple per week, maybe.
It’s called Lemmy-Safety or Fedi-Safety depending on where you look.
One thing to note, I wasn’t able to get it running on a VPS because it requires some sort of GPU.
A lemmy instance, a wiki, and a couple of other website type things, yes.
Publicly facing things are pretty limited, but it’s still super handy inside the LAN with Adguard Home doing DNS rewrites to point it to the reverse proxy.
I appreciate what you’re saying, though. A lot of people get in trouble by having things like Radarr etc. open to the internet through their reverse proxy.