• 1 Post
  • 114 Comments
Joined 2 years ago
Cake day: June 20th, 2023

  • What? I’m not privy to Red Hat/IBM/Google’s internal processes, but they are all massive FOSS contributors, and I assume at least some of them use Agile internally. The Linux kernel is mostly corpo-backed nowadays.

    The development cycle of FOSS is highly compatible with Agile processes, especially as you tend towards the Linux Kernel style of contributing where every patch is expected to be small and atomic. A scrum team can 100% set as a Sprint Goal “implement and submit patches for XYZ in kernel”.

    Also agile ≠ scrum. If you’re managing a small GitHub project by sorting issues by votes and working on the top result, then congratulations, you’re following an ad-hoc agile process.

    I think what you’re actually mad at is corporate structures. They systematically breed misaligned incentives proportional to the structure’s size, and the top-down hierarchy means you can’t just fork a project when disagreements lead to dead ends. This will be true whether you’re doing waterfall or scrum.


  • I was there Gandalf…

    Before that date their algorithm was soft-capped at around 5k upvotes. If a post was extremely, massively popular it might climb to a bit over 10k, but that was exceptional. There was clearly a logarithmic scaling effect that kicked in after a few thousand upvotes. Not entirely sure why; perhaps to prevent the super-popular stuff from ballooning in some kind of horrible feedback loop.
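    A toy sketch of the kind of soft-cap being described (the knee and scale numbers are invented for illustration; the real algorithm was never public):

    ```python
    import math

    def displayed_score(raw_votes: int, knee: int = 5000, scale: int = 1500) -> int:
        # Linear up to a knee, then logarithmic compression beyond it.
        # Both parameters are made up; they just reproduce the shape
        # where ~everything lands near 5k and only outliers pass 10k.
        if raw_votes <= knee:
            return raw_votes
        return round(knee + scale * math.log(raw_votes / knee))
    ```

    Under this model a post needs wildly more raw votes to display much past the knee, which matches the “soft-locked around 5k” behaviour.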

    The change was to uncap the vote counts. One day posts just kept climbing well beyond the 5k mark. Now what they also did was recalculate old posts in order not to fuck up the /top rankings. Kinda. Took a while and I’m not sure they got to every post.

    I don’t know or care if reddit does vote manipulation, but this ain’t proof, and I don’t see how it’s unbelievable that a website with tens of millions of MAU would occasionally have a post with 100k+ upvotes.


  • Whether it’s 48 or 52% is an immaterial difference. Every other American who voted, voted for Trump. The rest don’t seem to care either way. He has very broad popular assent and is about as popular as Harris, give or take a margin of error.

    Everyone is laser-focused on the EC because it makes all the difference for the practicalities, but if one is to make a broad judgement of whether Trump won fair and square, the answer is “yeah, mostly”. Further proof is the fact that the House is probably going to be his as well.

    Americans now bear the collective responsibility for the horrors of the next 4(+?) years. Do not make the mistake of blaming the popular will of outright fascism on institutional failures, because institutions didn’t force half of Americans to vote for the fascist, again.



  • Or just :set mouse=a if your terminal emulator was updated in the past decade. gVim has nothing to offer anymore, except that it bundles its own weird terminal emulator that doesn’t inherit any of the fonts, themes, settings or shortcuts of one’s default terminal. Blegh.

    Also if you’re not going to leverage Vim’s main feature and just want to click around on stuff, just install VSCod(e|ium), which is genuinely amazingly good.


  • I wasn’t very old then but the main thing was RAM. Fuckers in Microsoft sales/marketing made 1 GB the minimum requirement for OEMs to install Vista.

    So guess what? Every OEM installed Vista with 1 GB of RAM and a 5400 RPM hard drive (the “standard” config for XP, which is what most of those SKUs were meant to target). That hard drive would inevitably spend its short life thrashing, because if you opened IE it would immediately start swapping. Even worse with OEM bloat, but even a clean Vista install would swap real bad under light web browsing.

    It was utterly unusable. Like, everything would be unbearably slow and all you could do was (slowly) open task manager and say “yep, literally nothing running, all nonessential programs killed, only got two tabs open, still swapping like it’s the sex party of the century”.

    “Fixing” those hellspawns by adding a spare DDR2 stick is a big part of how I learned to fix computer hardware. All ya had to do was chuck 30 € of RAM in there and suddenly Vista went from actually unusable to buttery smooth.

    By the time the OEMs wised up to Microsoft’s bullshit, Seven was around the corner so everyone thought Seven “fixed” the performance issues. It didn’t, it’s just that 2 GB of RAM had become the bare minimum standard by then.

    EDIT: Just installed a Vista VM because I ain’t got nothing better to do at 2 am apparently. Not connected to the internet, didn’t install a thing, got all of 12 processes listed by task manager, and it already uses 500 MB of RAM. Aero didn’t even enable as I didn’t configure graphics acceleration.




  • Unrelated to the article itself but I initially clicked on mobile and was presented with this clearly GDPR-violating prompt:

    Tracking consent prompt with only an "Accept all" button

    Where’s the button to reject tracking? It doesn’t exist.

    For reference this is the correct prompt on admiral’s own website:

    Tracking consent prompt with a "Reject all" button next to "Accept all"

    First time I’ve seen a GDPR violation this brazen. While writing this comment I finally figured out how to reject consent (clicking on “Purposes” and manually deselecting each purpose).

    I double-checked with remote debugging; the button is not just hidden with CSS, it’s missing entirely:

    HTML source showing no reject all button
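    For illustration, here’s roughly what that check boils down to (the markup below is a stand-in; the real prompt’s classes and wording differ):

    ```python
    import re

    def has_reject_control(html: str) -> bool:
        # Look for a button or link whose visible text suggests refusing consent.
        pattern = re.compile(
            r'<(?:button|a)\b[^>]*>[^<]*(?:reject|decline|refuse)[^<]*</(?:button|a)>',
            re.IGNORECASE,
        )
        return bool(pattern.search(html))

    # Hypothetical prompts mimicking the two screenshots above:
    broken = '<div class="cmp"><button>Accept all</button></div>'
    compliant = '<div class="cmp"><button>Reject all</button><button>Accept all</button></div>'
    ```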

    For some reason I don’t get a consent prompt at all on my desktop, even with a brand-new Firefox profile – perhaps because of my user-agent?

    Anyways I felt motivated today so I’ve sent an email to their Data Protection Officer and set a reminder for next month in case they ghost me.


  • Yeah, as I expected, you’re projecting right-wing talking points onto what I said and answering those instead of anything I actually said – or at the very least meant.

    I just do not think that, in a frictionless vacuum, one can completely dismiss the idea that there can be some, however microscopic and inconsequential downsides to immigration (through no individual fault in the vast majority of the population).

    Do consider that, at the very least, if Europe hypothetically did away with border checks entirely and strove for massive immigration, the ensuing brain drain would wreak havoc on the Global South (even worse than right now, kinda like what happened within the EU with the former Eastern Bloc). Regardless of the exact mechanism, mass migration has long-lasting sociocultural impacts, and to say these are only positive is pure globalist ideology.


  • You gloss over the part where, even with the best intentions imaginable, European immigration would have killed 90% of Native Americans with its new pathogens. No matter which way you slice it, that is a scenario where European culture becomes the dominant culture, though it would certainly be nice not to have overt genocide and oppression sprinkled on top.

    (Of course that’s not the case right now and the great replacement theory is a fascist invention, if that needs saying)

    Also be careful not to infantilise immigrants. There is a marginal but highly visible issue happening, for example, where Saudi Arabia is funding Wahhabi (i.e. highly orthodox) mosques and imams in Europe, which, combined with depressed socioeconomic opportunities, fuels religious antagonism/radicalism, particularly amongst vulnerable teenage second-generation immigrants. Is it an existential threat to European hegemony, or something Europe is incapable of absorbing? Certainly not. That doesn’t mean it’s an issue we have to refuse to acknowledge in the name of our own leftist orthodoxy.



  • You’re describing proper incident response but I fail to see what that has to do with the status page. They have core metrics that they could display on that status page without a human being involved.

    IMO a customer-friendly status page would automatically display elevated error rates as “suspected outage” or whatever. Then management can add more detail and/or say “confirmed outage”. In fact that’s how the reddit status page works (or at least used to work), it even shows little graphs with error rates and processing backlogs.
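    A minimal sketch of that idea – derive a status automatically from an error-rate metric, and let humans only escalate or annotate it (thresholds and labels invented for illustration):

    ```python
    def derive_status(error_rate: float, confirmed_by_human: bool = False) -> str:
        # Automated tiers kick in without anyone touching the status page;
        # a human can only escalate to "confirmed outage" or add detail.
        if confirmed_by_human:
            return "confirmed outage"
        if error_rate >= 0.25:
            return "suspected outage"
        if error_rate >= 0.05:
            return "degraded performance"
        return "operational"
    ```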

    There are reasons why these automated systems don’t exist, but none of these reasons align with user interests.



  • I looked into it after this year’s massive price hike… There’s no meaningful alternative. We’re on the FOSS version of GitLab now (GitLab-CE), but the lack of code ownership / multiple reviewers / etc. is a real pain and poses problems with accountability.

    Honestly there are not that many features in GitLab EE that are truly necessary in a corporate environment, so a GitLab-CE fork may be able to set itself apart by providing those. To me there are two hurdles:

    • Legal uncertainties (do we need a clean-room implementation to make sure GitLab Inc. doesn’t sue for re-implementing the EE-only features into a GitLab fork?)
    • The enormous complexity of the GitLab codebase will make any fork, to put it mildly, a major PITA to maintain. 2,264 people work for GitLab, FFS (with hundreds in dev/ops); it’s indecent.

    Honestly I think I’d be happy if forgejo supported gitlab-runner, that seems like a much more reasonable ask given the clean interface between runner and server. Maybe I should experiment with that…


  • All of this has already been implemented for over a hundred years for other trades. Us software people have generally escaped this conversation, but I think we’ll have to have it at some point. It doesn’t have to be heavy-handed government regulation; a self-governed trades association may well aim to set the bar for licensing requirements and industry standards. This doesn’t make it illegal to write code however you want, but it does set higher quality expectations and slightly lowers the bar for proving negligence on a company’s part.

    There should be an ISO-whateverthefuck or DIN-thisorother that every developer would know to point to when the software deployment process looks as bad as CrowdStrike’s. Instead we’re happy to shrug and move on when management doesn’t even understand what CI is or why it should get prioritized. In other trades the follow-up for management would be a CYA email that clearly outlines the risk and the standards noncompliance, and sets a line in the sand liability-wise. That doesn’t sound particularly outlandish to me.


  • But a company that hires carpenters to build a roof will be held liable if that roof collapses on the first snow storm. Plumbers and electricians must be accredited AFAIK, have the final word on what is good enough by their standards, and signing off on shoddy work exposes them to criminal negligence lawsuits.

    Some software truly has no stakes (e.g. a free mp3 converter), but even boring office productivity tools can be more critical than my colleagues sometimes seem to think. Sure, we work on boring office productivity tools, but hospitals buy those tools and unreliable software means measurably worse health outcomes for the patients.

    Engineers signing off on all software is an extreme end of the spectrum, but there are a whole lot of options between that and the current free-for-all, where customers have no way to know whether the product they’re buying follows industry-standard practices or whether the deployment process is “Dave receives a USB stick from Paula and connects to the FTP server using a 15-year-old version of FileZilla and a post-it note with the credentials”.


  • Oh I was talking in the context of my specialty, software engineering. The main difference between an engineer and an operator is that one designs processes while the other executes on those processes. Negligence/malice aside the operator is never to blame.

    If the dev is “the guy who presses the ‘go live’ button” then he’s an operator. But what is generally being discussed is all the engineering (or lack thereof) around that “go live” button.

    As a software engineer I get queasy when it is conceivable that a noncritical component reaches production without the build artifact being thoroughly tested (with CI tests AND real usage in lower environments).
    The fact that CrowdStrike even had a button that could push a DOA update to such a highly critical component points to their processes being so far outside industry standards that no software engineer would have signed off on anything… if software engineers actually had the same accountability as civil engineers. If a bridge gets built outside the specifications of the civil engineer who signed off on the plans, and that bridge crumbles, someone is getting their tits sued off. Yet there is no equivalent accountability in software engineering (except perhaps in super safety-critical stuff like automotive/medical/aerospace/defense applications, and even there I think we’d be surprised).
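    The kind of gate I mean can be sketched in a few lines (field names and policy are invented for illustration; real pipelines encode this in their CI/CD tooling):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Artifact:
        ci_passed: bool            # all automated tests green
        staging_soak_hours: float  # real usage time in lower environments

    def may_go_live(a: Artifact, min_soak_hours: float = 24.0) -> bool:
        # The "go live" button should simply refuse to work for artifacts
        # that haven't cleared both bars.
        return a.ci_passed and a.staging_soak_hours >= min_soak_hours
    ```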


  • I strongly believe in no-blame mindsets, but “blame” is not the same as “consequences”, and a lack of consequences is definitely the biggest driver of corporate apathy. Every incident should trigger a review of systemic and process failures, but in my experience corporate leadership either sucks at this, does not care, or will bury suggestions that involve spending man-hours on a complex solution if the problem lies in that “low likelihood, big impact” corner. Because odds are that when the problem happens (again), they’ll be able to sweep it under the rug (again), or will have moved on to greener pastures.

    What the author of the article suggests is actually a potential fix: if developers (in a broad sense of the word, including POs and such) were accountable (both responsible and empowered), then they would have the power to say no to shortsighted management decisions (and/or deflect the blame in a way that would actually stick to whoever went against an engineer’s recommendation).