• 0 Posts
  • 24 Comments
Joined 2 years ago
Cake day: July 1st, 2023

  • I’m far from an expert, sorry, but my experience so far is so good (literally wizard-configured in Proxmox, set and forget), even through a single disk loss. Performance for VM disks was great.

    I can’t see why regular file storage would be any different.

    I have 3 disks, one on each host, with ceph keeping 2 copies (tolerant to 1 disk loss) distributed across them. That’s practically what I think you’re after.

    I’m not sure about seeing the file system while all the hosts are offline, but if you’ve got any one system with a valid copy online, you should be able to see it. I do. But my emphasis is generally on getting the host back online.

    I’m not 100% sure what you’re trying to do, but a mix of ceph as remote storage plus something like Syncthing on an endpoint to send stuff to it might work? Syncthing might just work without ceph.

    I also run ZFS on an 8-disk NAS that’s my primary storage, with shares for my Docker containers to send stuff to and a media server to get it off. That’s just TrueNAS Scale. That way it handles data similarly. ZFS is also very good, but until Scale came out it wasn’t really possible to have the “add a compute node to expand your storage pool” model, which is how I want my VM hosts. Scaling ZFS out that way looks way harder than ceph.

    Not sure if any of that is helpful for your case, but I recommend trying something if you’ve got spare hardware, seeing how it goes on dummy data, then blowing it away and trying something else. See how it acts when you take a machine offline. When you know what you want, do a final blow-away and implement it the way you learned works best.
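    As a rough sketch, a pool like the one described above (2 copies spread over 3 single-disk hosts) can also be set up from the ceph CLI; the pool name and placement group count here are purely illustrative, not from the post:

    ```
    # Sketch only: a replicated pool holding 2 copies of each object,
    # matching the "tolerant to 1 disk loss" layout described above.
    # "vmstore" and the PG count of 128 are illustrative choices.
    ceph osd pool create vmstore 128 replicated
    ceph osd pool set vmstore size 2       # keep two copies of everything
    ceph osd pool set vmstore min_size 1   # keep serving I/O with one copy left
    ```

    Worth noting that size=2/min_size=1 trades safety for capacity; the Proxmox wizard defaults to size 3, min_size 2, which is the safer choice if the disks allow it.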


  • 3x Intel NUC 6th gen i5 (2 cores), 32GB RAM. Proxmox cluster with ceph.

    I just ignored the limitation and tried with a single 32GB SODIMM once (out of a laptop) and it worked fine, but went back to 2x 16GB DIMMs since the real limit was still the 2-core CPU. Lol.

    Been running that cluster for 7 or so years now, since I bought them new.

    I’m suggesting you can get away with running off shit-tier hardware, since three nodes gives redundancy and enough performance. I’ve run entire proofs of concept for clients off them: dual domain controllers and a full RDS deployment (gateway, broker, session hosts, FSLogix, etc.), back when MS had only just bought that tech. Meanwhile my home “arr” stack just chugs along in Docker containers. Even my OPNsense router is virtual, running on them. Just get a proper managed switch and bring the internet in on a VLAN to the guest VM on a separate virtual NIC.

    Point is, it’s still capable today.
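    For reference, the VLAN-into-a-guest setup mentioned above usually looks something like this on the Proxmox side; this is a sketch assuming a VLAN-aware bridge, with the physical NIC name and VLAN tag purely illustrative:

    ```
    # /etc/network/interfaces on the Proxmox host (sketch).
    # "eno1" is an assumed NIC name; the WAN arrives tagged from the
    # managed switch and is handed to the OPNsense guest as its own vNIC.
    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
    ```

    The OPNsense VM then gets a second virtual NIC on vmbr0 with the WAN VLAN tag set on that NIC, keeping the internet off the management network.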



  • This sounds unbelievable, like the slow turning of a ship to avoid an iceberg. It’s an unbelievably light sentence, showcasing the country’s lack of interest in protecting women’s rights while declaring the intent to do so in the ruling.

    If my partner were attacked, lost her hearing, and had to attend court multiple times to defend her right to safety, and the perpetrator got 3 years? I’d be furious.

    I know she’d be devastated. The times she has felt unsafe already leave such a big impact, let alone a realised attack.

    Anyway. I do hope it’s just a positive sign, and that all it will take is a bit more time. I want to believe it’s positive. But it’s wild to compare what I’d like to believe are obvious human rights (not being attacked to the point of disability, unprovoked) with having to trust the justice system, after the fact, to punish and (theoretically) deter.

    Anyway, long rant. Processing it, because I probably believed Korea was better than that. Not all the humans, just at least the culture and the law.


  • I think you probably don’t realise that what you hate is standards and certifications. No IT person wants yet another system generating more calls and complexity. But here is ISO, or a cyber insurance policy, or NIST, or the ACSC, asking for minimums with checklists, and a cyber review answering them with controls.

    Crazy that there’s so little understanding of why it’s there that you just think it’s the “IT guy” wanting those things.


  • https://johnmjennings.com/an-important-lesson-from-bullet-holes-in-planes/

    The responses need to include, at least equally, the non-Firefox people who no longer care to answer a poll about a product they don’t use. Why? Only current users are going to answer the poll, not the people with the cuts and pain that forced them back to Chrome or Safari. Asking survivors how to reinforce survival doesn’t solve for why so many people off-board Firefox.

    Frankly, you should ask people like my 60-70yo parents why Chrome and not Firefox. You’ll learn more from that than from the collected responses of people who loudly have preferences but at the end of the day would stay either way. My parents tried Firefox, but then left it. Although they only tried it at their son’s insistence.

    PS: I agree with the poll. I don’t want a chat bot either. If I did, I’d install a plugin of my own choosing that integrates one. Given the availability, privacy, and ease of LM Studio, I’d rather leave it in its own place outside the browser and network. I don’t know how those like my parents would feel about a bot that can probably answer their questions. I also doubt they care. Maybe it would help them ask questions they’re too embarrassed to ask friends and family; usually how-to questions they’ve asked dozens of times. But that’s super dangerous.








  • Fundamentally, the alternative proposes that you remain the sole owner of your privacy, at the cost of sharing with advertisers that you have, say, 6 generic topics you’re interested in. Like motorsports. Just that, alongside the millions or billions of others browsing. Ad tracking currently knows everything about everyone, and then works out whether motorsports is an effective ad for you individually, based on their profile of you.

    For me personally, I’m fine under the current system. For my family, though, they’re just using phones and tablets with their default browser, blissfully unaware that there’s no privacy. Then their data gets leaked out.

    I know it’s an extreme kind of case, but domestic abuse victims are always my first thought as a counter to “well, I’ve got nothing to hide”. Those people, if they’re unsure about privacy, will err on the side of caution. They stay trapped.

    In conclusion, I’d rather move the needle forward for those who are at risk: those for whom installing anti-tracking plugins would create further risk, where installing odd browsers makes them a target. We can find perfection later. Make the web safer now.

    Plenty of people could justifiably take the opposite stance. But even just for my grandparents: they shouldn’t be tracked the way they are. They’re prime candidates for scams, and giving away privacy is one data leak away from a successful scam.

    Kind of off topic to what you said I realise. :)


  • One rich company trying to claim money from the other rich companies using its software. The ROI on enforcing these licenses will come only from those that really could have afforded to pay, and if they couldn’t, they shouldn’t have built on the framework. Let them duke it out. I have zero empathy for either side.

    The hopeful flip side is that, with a “budget” for the license, a company can consider using it to weigh up open source contributions and expertise, allowing those projects to have experts who have an income. Even if it’s only a few companies that then hire for the role of porting over and contributing back needed features, more of that helps everyone.

    The same happened in security: there used to be no budget for it, it was a cost centre. But then insurance providers wouldn’t provide cyber insurance without minimum standards being met (after they lost billions), and now companies suddenly have a budget. Security is thriving.

    When companies value something, because they need to weigh the opportunity cost, they’ll find the money.


  • Hold them all to account, no single points of failure. Make them all responsible.

    When talking about VS Code especially, those users aren’t your mum and dad. They’re technology professionals or enthusiasts.

    With respect to vendors (Microsoft especially), for too long they have lived off the expectation that it’s always the end user’s or publisher’s responsibility, not theirs, when they’re offering a brokering service (a store or whatever). They’ve tried using words like ‘custodian’ for the service to further distance themselves from responsibility and fault.

    Vendors of routers, firewalls, and other network-connected IoT in the consumer space are now being legislatively forced to adhere to bare-minimum responsible practices, such as ‘push to change’ configuration updates, automated security firmware updates, and the long-awaited mandatory random password with reset on first configuration (no more admin/admin).

    It’s clear this burden will cost those providers. Good. Just as we should take a stance against polluters freely polluting, so too should we make providers take responsibility for reasonable security defaults instead of making the world less secure.

    That then makes it all the more the user’s responsibility when they do something insecure, since security should be the default by design. Going outside those bounds is at your own risk.

    Right now it’s a wild west, and telling what is and isn’t secure is a roll of the dice, since it’s just users telling users that they think it’s fine. Are you supposed to just trust a publisher? What if they act in bad faith? That problem needs solving. Once an app/plugin/device has millions of people using it, its reputation is publicly seen as OK, even if completely undeserved.

    Hmm rant over. I got a bit worked up.



  • The messaging around this so far doesn’t lead me to want to follow the fork in production. As a sysadmin, I’m not rushing out to swap my reverse proxy.

    I’m speculating, but the problem seems to be that the developer was only continuing to develop on the condition that they retained control over nginx decision-making.

    So currently, as a user of nginx, it looks like the CVE registration is protecting me through open communication. From the security side, a researcher probably needs that CVE for the find to count as a bug bounty.

    From the developer’s perspective, F5 broke the pact of decision control resting with the developer. But for me, I would rather it be registered so I’m informed, even if I know my configuration doesn’t use it.

    Again, I’m assuming a lot here. But I agree with F5. That feature, even in beta, could be in someone’s dev or test environment. That’s enough reason to want to know.

    Edit: Long term, I don’t know where I’ll land. Personally I’d rather side with the developer, except I need to trust that the solution is open not just in source, but in communication. It’s a weird situation.