  • A family of software development processes for teams, which focuses on cycles of quickly building and delivering small blocks of program functionality (often just a single program feature - say: “search customers by last name” - or even just part of a feature) to end-users, so as to get quick feedback from those users, which is then used to determine what should be done in subsequent cycles.

    When done properly it addresses the issues of older software development processes (such as the Waterfall process) in situations where the users don’t really have a detailed vision of what the software needs to do for them (which is the most usual situation, unless the software just helps automate their present way of doing things), or where what they need the software to do changes frequently (i.e. they already use the software but frequently need new features or tweaks to existing ones).

    In my own career of over two decades I’ve only ever seen it done properly maybe once or twice. The problem is that “doing Agile” became fashionable at a certain point maybe a decade ago, and pretty much a requirement to have on one’s CV as a programmer, so you end up with lots of teams mindlessly “doing Agile”: adopting some of the practices (say, the stand-up meeting or pair programming) without including other practices and elements of the process (and without adjusting them for their local situation), thus not achieving what the process is meant to achieve. Essentially they don’t understand it as a software development process which is more adequate for some situations and less for others, what it is actually supposed to achieve, and how.

    (The most frequently skipped parts are those around the participation of the end-users of the software: evaluating what was done in the last cycle, determining new features and feature tweaks for the next cycle, and prioritizing them. The funny bit is that these are core to making Agile deliver its greatest benefits as a software development process, so basically most teams aren’t doing the very part of Agile that makes it deliver superior results to most other methods.)

    It doesn’t help that to really and fully get the purpose of Agile and how it achieves it, you generally need to be at the level of experience where you’re looking at the actual process of making software (the kind of people with at least a decade of experience and titles like Software Architect), who, given how ageist a lot of the Industry is, are pretty rare. So Agile usually ends up being done by “kids” in a monkey-sees-monkey-does way, without it being understood as a process, hence why it has by now, unsurprisingly, gotten a bit of a bad name (as with everything, the right tool should be used for the right job).


  • They’re supposed to work as an adaptor/buffer/filter between the technical side and the non-technical stakeholders (customers, middle/upper management) and to do some level of organising.

    In my 2 and a half decades of experience (a lot of it as a freelancer, so I worked in a lot of companies of all sizes in a couple of countries), most aren’t at all good at it, and very few are very good at it.

    Some are so bad that they actually amplify uncertainty and disorganisation by, every time they talk to a customer or higher up, totally changing the team’s direction and priorities.

    Mind you, all positions have good professionals and bad professionals; the problem with project management is that a bad professional can screw up a lot of work for a lot of people, whilst the damage done by, for example, a single bad programmer tends to be much more contained and mainly impacts the programmer him or herself (so that person is very much incentivised to improve).


  • Halfway into saving the World it turns out you need some data that’s not even being collected, something that nobody had figured out because nobody analysed the problem properly beforehand, and now you have to take a totally different approach because that can’t be done in time.

    Also, the version of a library being included by some dependency of a library you included to do something stupidly simple is different from the version of the same library being included by some dependency of a totally different library somebody else included to do something else that’s just as stupidly simple, and neither you nor that somebody else wants to be the one to rewrite their part of the code.
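
    To make that concrete, here’s a tiny sketch (in Python, with made-up package names) of that “diamond dependency” situation: two libraries, each pulled in for something trivial, pinning the same transitive dependency to incompatible versions.

    ```python
    # Hypothetical dependency tree: both "easy_csv" and "tiny_http" were added to
    # do something stupidly simple, and each drags in "libcommon" at a different pin.
    DEPENDENCIES = {
        "my_app": ["easy_csv", "tiny_http"],
        "easy_csv": ["libcommon==1.2"],
        "tiny_http": ["libcommon==2.0"],
        "libcommon==1.2": [],
        "libcommon==2.0": [],
    }

    def find_conflicts(root: str) -> dict[str, set[str]]:
        """Walk the tree and report any package required at more than one version."""
        pinned: dict[str, set[str]] = {}
        stack = [root]
        while stack:
            for dep in DEPENDENCIES.get(stack.pop(), []):
                name, _, version = dep.partition("==")
                if version:
                    pinned.setdefault(name, set()).add(version)
                stack.append(dep)
        return {name: versions for name, versions in pinned.items() if len(versions) > 1}

    print(find_conflicts("my_app"))  # -> {'libcommon': {'1.2', '2.0'}}
    ```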



  • It eliminates the dependency on specific distributions problem and, maybe more importantly, it solves the dependency on specific distribution versions problem (i.e. it works fine now but might not work at all later on the very same distribution, because some libraries are missing or the default configuration is different).

    For example, one of the games I have in my GOG library is over 10 years old and has a native Linux binary, which won’t work in a modern Debian-based distro by default because some of the libraries it requires aren’t installed (meanwhile, the Windows binary works just fine with Wine). It would be kinda deluded to expect the devs to keep on updating the native Linux build (or even the Windows one) for over a decade, whilst if it had been released as a Docker app that would not be a problem (something like the sketch at the end of this comment).

    So yeah, stuff like Docker does have a reasonable justification when it comes to isolating from external dependencies which the application devs have no control over, especially when it comes to future-proofing your app: the Docker API itself needs to remain backwards compatible, but there is no requirement that Linux distros be backwards compatible (something which would be much harder to guarantee).

    Mind you, Docker and the like are a bit of a hack to solve a systemic (cultural, even) problem in software development, which is that devs don’t really do proper dependency management and just throw everything and the kitchen sink in terms of external libraries (which then depend on external libraries, which in turn depend on more external libraries) into the simplest of apps. That’s a broader software development culture problem: most present day developers only ever learned the “find some library that does what you need and add it to the list of dependencies of your build tool” way of programming.

    I would love it if we solved what’s essentially the core Technical Architecture problem in present day software development practices, but I have no idea how we can do so, hence the “hack” of things like Docker, of pretty much including the whole runtime environment (funnily enough, a variant of the old way of building your apps statically with every dependency) to work around it.
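
    As a rough illustration of that “ship the whole runtime environment” idea (not my actual setup - the image name, paths and binary here are made up, and it assumes the Docker daemon plus the `docker` Python SDK are installed), this is roughly what checking that old native Linux game inside a frozen userland could look like:

    ```python
    import docker  # the docker-py SDK (pip install docker)

    client = docker.from_env()

    # Run inside an image that still ships the libraries the old binary was built
    # against, instead of depending on whatever today's distro happens to provide.
    logs = client.containers.run(
        image="debian:8",  # hypothetical: a frozen userland from the game's era
        command="ldd ./game",  # check that all the old shared libraries resolve
        volumes={"/home/me/old-game": {"bind": "/game", "mode": "ro"}},
        working_dir="/game",
        remove=True,
    )
    print(logs.decode())
    ```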



  • Look for a processor for the same socket that supports more RAM and make sure the Motherboard can handle it - maybe you’re lucky and it’s not a limit of that architecture.

    If that won’t work, break up your self-hosting needs into multiple machines and add another second hand or cheap machine to the pile.

    I’ve worked in designing computer systems to handle tons of data and requests, and often the only reasonable solution is to break up the load and throw more machines at it. For example, when serving millions of requests on a website, just put a load balancer in front of it that assigns user sessions (and their associated requests) to multiple machines: the load balancer pretty much just routes requests by user session whilst the heavy processing is done by the machines behind it, in such a way that you can expand the whole thing by simply adding more machines (there’s a sketch of the idea at the end of this comment).

    In a self-hosting scenario I suspect you’ll have a lot of margin for expansion by splitting services into multiple hosts and using stuff like network shared drives in the background for shared data, before you have to fully upgrade a host machine because you hit that architecture’s maximum memory.

    Granted, if a single service whose load can’t be broken down so that you can run it as a cluster needs more memory than you can put in any of your machines, then you’re stuck having to get a new machine. But even then, by splitting services you can get a machine with a newer architecture that can handle more memory but is still cheap (such as a cheap mini-PC) and just move that memory-heavy service to it, whilst leaving CPU intensive services on the old but more powerful machine.
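
    Here’s a minimal sketch of that session-based routing idea (Python, with made-up host names - a real setup would use an off-the-shelf load balancer and consistent hashing, this just shows the principle):

    ```python
    import hashlib

    # Hypothetical pool of self-hosted backends; add machines here to grow capacity.
    BACKENDS = ["mini-pc-1:8080", "old-tower:8080", "raspberry-pi:8080"]

    def backend_for(session_id: str) -> str:
        """Route a user session to a backend by hashing the session id.

        The same session always lands on the same machine, so the "load balancer"
        only does cheap routing whilst the heavy work is spread across the pool.
        """
        digest = hashlib.sha256(session_id.encode()).digest()
        return BACKENDS[int.from_bytes(digest[:8], "big") % len(BACKENDS)]

    print(backend_for("user-42-session"))  # always the same backend for this session
    ```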


  • Which is why, if the objective was just to cool down the Earth (and ignoring that solar panels replace other sources of electricity that warm up the Earth more), just painting the ground white would work better than solar panels: white paint is more reflective, so it increases the amount of sunlight that gets reflected back to space, whilst solar panels not only capture some of it as electricity (which will ultimately end up transformed into heat somewhere) but also absorb some of it, transforming it directly into heat (i.e. they warm up a bit).
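
    A rough back-of-the-envelope version of that, with purely illustrative numbers (white paint albedo around 0.8; a typical panel reflecting around 0.1 of the light and converting around 0.2 into electricity), where $\rho$ is the reflected fraction, $\eta$ the fraction turned into electricity and $a$ the fraction absorbed as heat at the surface:

    $$
    1 = \rho + \eta + a
    \;\Rightarrow\;
    a_{\text{paint}} \approx 1 - 0.8 - 0 = 0.2,
    \qquad
    a_{\text{panel}} \approx 1 - 0.1 - 0.2 = 0.7
    $$

    So per square metre, the painted surface sends far more of the incoming sunlight straight back towards space, and even the panel’s 0.2 of electricity ends up as heat somewhere once it’s used.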


  • I have a cheap N100 mini-PC running Lubuntu and Kodi, plus a wireless remote, as my TV box, and use my TV as a dumb screen.

    Mind you, you can do it even more easily with LibreELEC instead of Lubuntu and more cheaply with one of its supported cheap SBCs plus a box instead of a mini PC.

    That said, even the simplest solution is beyond the ability of most people to set up, and once you go up to the next level of ease of setup - a dedicated Android TV Box - you’re hit with enshittification (at the very least preconfigured apps like Netflix with matching buttons on your remote), even if you avoid the big brands.

    Things are really bad nowadays unless you’re a well informed tech expert with the patience to dive into those things when you’re home.



  • I use a pretty basic one (with an N100 processor and Intel integrated graphics) as a TV box + home server combo and it’s excellent for that.

    It’s totally unsuitable for gaming unless we’re talking about stuff running in DOSEmu or similar and even then I’m using it with a wireless remote rather than a keyboard + mouse, which isn’t exactly suitable for PC gaming.

    Mind you, there are configurations with dedicated graphics, but they’re about 4x the price of the one I got (which cost me about €120) and at that point you’re starting to enter the same domain as small form factor desktop PCs built around standard motherboards, which are probably better for PC gaming simply because you can upgrade just about anything in those, whilst hardware upgradeability of mini PCs is limited to only some things (like SSD and RAM).


  • Clearly my point about this being like Junior Devs thinking they know better than the “lusers”, whilst not knowing enough to understand the limits of their own knowledge, hit the mark and hurt.

    It’s hilarious that you think a background in game making (by the way, love that hypocrisy of yours of criticizing me for pointing out my background whilst you often do exactly the same on your posts) qualifies you to understand things like the error rates in the time and amplitude domains inherent to the sampling and quantization process which is Analog-to-Digital conversion “FAR” better than a Digital Systems Electronics Engineering Degree - you are literally the User from the point of view of a Digital Systems EE.

    Then the mention of Physics too was just delicious, because I also have part of a Physics degree that I took before changing to EE halfway through, so I studied it at Uni level just about long enough to go all the way to Quantum Mechanics, which is a teensy weensy bit more advanced than just “energy” (and then, funnily enough, a great deal of EE was also about “energy”).

    Oh, and by the way, if you think others will Shut The Fuck Up just because you tell them to, you’re in for a big disappointment.


  • But people do stop believing money has value, or more specifically, their trust in the value of money can go down - you can see all over History, in plenty of places, that people’s trust in the value of money can break down.

    As somebody pointed out, if one person has all the money and nobody else has any, money has no value, so it’s logical to expect that between where we are now and that imaginary extreme point there is some balance in the distribution of wealth at which most people lose trust in the value of money and the “wealth” anchored merely on that value stops being deemed wealth.

    (That said, the wealthy generally move their wealth into property - as the saying goes, “Buy land: they ain’t making any more of it” - but even that is backed by people’s belief in and society’s enforcement of property laws, and the mega-wealthy wouldn’t be so if they had to actually defend by themselves their “rights” over all that they own: the limits to wealth, when anchored to concrete physical things that the “owners” have to defend, are far, far lower than the current limits on wealth based on nation-backed tokens of value and ownership.)


  • And further on point 2, the limit would be determined by all that people can produce as well as, on the minus side, the costs of keeping those people alive and producing.

    As it so happens, people will produce more under better conditions, so spending the least amount possible on keeping those people alive doesn’t yield maximum profit - there is a sweet spot somewhere on the curve where the people’s productivity minus the costs of keeping them productive is at a peak - i.e. profit is maximum - and that’s not at the point where the people producing things are merely surviving.

    Capitalism really is just a way for the elites to get society to that sweet spot of the curve. Under Capitalism people are more productive than in overtly autocratic systems (or, even further, outright slavery) where less is spent on people, they get less education and they have less freedom to (from the point of view of the elites) waste their time doing what they want rather than produce. And because people in a Capitalist society live a bit better, are a bit less unhappy and have something to lose, unlike in the outright autocratic systems, they produce more for the elites and there is less risk of rebellions, so it all adds up to more profit for the elites.

    As you might have noticed by now, optimizing for the sweet spot of “productivity minus costs with the riff-raff” isn’t the same as optimizing for the greatest good for the greatest number (the basic principle of the Left) since most people by a huge margin are the “riff-raff”, not the elites.


  • Nice content-free slogan.

    I’m not a Sound Engineer, I’m an Electronics Engineer - we’re the ones who had to find the right balance between fidelity, bit error rates, data rates and even circuit price when designing the digital audio sampling systems that capture from the analog world the digital data which the Sound Engineers use to work their magic: so I’m quite familiar with the limits of analog to digital conversion and that’s what I’m pointing out.

    As it so happens, I also took Compression and Cryptography in my degree and am quite familiar with where the term “lossless” comes from, especially since I took that elective at the time when the first lossy compression algorithms were starting to come out (specifically the transform-based lossy encoding used in JPEG and MPEG), so people had to start talking about “lossless” compression algorithms to refer to the kind of algorithms that until then had just been called compression algorithms. Until then there were no lossy compression algorithms, since the idea of losing anything when compressing data was considered crazy - until it turned out you could do it and save tons of space for stuff like images and audio because of the limitations of human senses. Essentially, for things meant to be received by human senses, if you could deceive those senses then the loss was acceptable, whilst for data in general, losing anything in compression was unacceptable.

    My expertise is even higher up the Tech stack than the people who, to me, sound like Junior Devs making fun of lusers for using technical terms to mean something else, even while the Junior Devs themselves have yet to learn enough to understand the scope of usage and the full implications of those technical terms (or the simple reality that non-Techies don’t have the same interpretation of technical terms as domain experts and instead interpret those things by analogy).


  • A PNG is indeed an imperfect representation of reality. Are you claiming that the losslessness in the data domain of the compression algorithm in a PNG means its contents are a perfect representation of reality?!

    (Funnily enough, the imperfections in the data contained in a PNG are noticeable to some, and the lower the “sampling rate” - i.e. the number of pixels and bits per pixel - the easier they are to spot, same as with audio.)

    As I’ve been trying to explain in my last posts, a non-Techie “audiophile” claiming FLAC is not lossless isn’t likely to be talking about its technical characteristics in the data domain (i.e. that the data that comes out of a FLAC file is exactly the same as what went in) but about its contents not sounding the same as the original performance (or, most likely, as a recording made via an entirely analog pathway, such as on an LP).

    Is it really that hard to grasp the concept that the word “lossless” means different things for a Technical person with a background in digital audio processing and for a non-Technical person who simply compares the results of a fully analog recording and reproduction pathway with those of a digital one which includes a FLAC file, and spots the differences?

    This feels like me trying to explain to Junior Developers that the Users are indeed right and so are the Developers - they’re just reading different meanings into the same word, and no, you can’t expect non-Techie people to know the ins and outs of Technical terms, and no, they’re not lusers because of it. Maybe the “audiophile” was indeed wrong and hence “Confidently Incorrect”, but maybe he was just using lossless in the broader sense of “nothing lost” like a normal person does, whilst the other one was using the technical meaning (no data loss), so they were talking past each other - that snippet is too short to make a call on it.

    So yeah, I stand by my point that this is the kind of Dunning-Kruger shit junior techies put out before they learn that most people don’t have the very same strictly defined technical terms in their minds as the junior techies do.


  • They’re deemed “lossless” because there are no data losses - the word actually comes from the broader domain of data handling, specifically Compression, where for certain things - like images, audio and video - there are compression algorithms that lose some information (lossy) and those which don’t (lossless), for example JPEG vs PNG.

    However, data integrity is not at all what your average “audiophile” would be talking about when they say there are audio losses, so when commenting on what a non-techie “audiophile” wrote, people here used that “losslessness” from the data domain to make claims in a context which is broader than merely the area where the problem of data integrity applies, and where it’s insufficient to disprove the claims of said “audiophile”.
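
    A tiny Python illustration of what “lossless” means in that data-domain sense (zlib standing in for FLAC/PNG-style compression; the “lossy” step is a deliberately crude stand-in for what perceptual codecs do):

    ```python
    import zlib

    data = bytes(range(256)) * 4  # some arbitrary "original" data

    # Lossless: compress, decompress, and you get exactly the original bytes back.
    packed = zlib.compress(data, level=9)
    assert zlib.decompress(packed) == data  # holds for any input, by design

    # Lossy, in miniature: throw away the low 4 bits of every byte ("quantization"),
    # the way lossy codecs discard detail that human senses are unlikely to miss.
    lossy = bytes(b & 0xF0 for b in data)
    assert lossy != data  # the discarded detail is gone for good
    ```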


  • My point being that, unlike the misunderstanding (or maybe just mis-explanation) of many here, even a digital audio format which is technically named “lossless” still has losses compared to the analog original, and there is no way around it (you can reduce the losses with a higher sampling rate and more bits per sample, but never eliminate them, because the conversion to digital is a quantization of an infinite precision input).

    “Losslessness” in a digital audio stream is about the integrity of the digital data itself, not about the digital audio stream being a perfect reproduction of the original soundwaves. With my mobile phone I can produce at home a 16 bit PCM @ 44.1 kHz (same quality as a CD) recording of the ambient sounds, and if I store it as an uncompressed raw PCM file (or a WAV file, which is the same data plus some headers for ease of use) it’s technically deemed “lossless”, whilst being a shit reproduction of the ambient sounds at my place, because the capture process distorted the signal (shitty small microphone) and lost information (the quantization by the ADC in the mobile phone, even if it’s a good one, which is doubtful).

    So maybe, just maybe, some “audiophiles” do notice the difference. I don’t really know for sure, but I certainly won’t dismiss their point about the imperfect results of the end-to-end process with the argument that, because after digitalization the digital audio data has been kept in a lossless format like FLAC or even raw PCM, the whole thing is lossless.

    One of my backgrounds is Digital Systems in Electronics Engineering, which means I also got to learn (way back in the days of CDs) how the whole process works end to end and why, so most of the comments here claiming that the full end-to-end audio capture and reproduction process (which is what a non-techie “audiophile” would be commenting on) is not lossy because the digital audio data handling is “lossless” just sound to me like the Dunning-Kruger Effect in action.

    People here are being confidently incorrect about the confident incorrectness of some guy on the Internet, which is pretty ironic.

    PS: Note that with high enough sampling rates and bits per sample you can make it so precise that the quantization error is smaller than the actual noise in the original analog input, which is de facto equivalent to no losses in the amplitude domain, and so far into the high frequencies in the time domain that no human could possibly hear the difference. If the resulting data is then stored in a lossless format you could claim that the end-to-end process is lossless (well, ish - the capture of the audio into an analog signal itself has distortions and introduces errors, as does the reproduction at the other end), but that’s something quite different from claiming that merely because the audio data is stored in a “lossless” format it yields a signal as good as the original.
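
    To make the quantization point concrete, here’s a small sketch (plain Python, a 1 kHz sine standing in for the “analog” input) showing that even a “lossless” 16 bit PCM pipeline starts from samples that aren’t exactly the original values:

    ```python
    import math

    SAMPLE_RATE = 44_100  # CD sampling rate, in samples per second
    FULL_SCALE = 2 ** 15  # signed 16-bit samples span -32768..32767

    def quantize(x: float) -> int:
        """Map an 'analog' value in [-1.0, 1.0] to a signed 16-bit sample."""
        return max(-FULL_SCALE, min(FULL_SCALE - 1, round(x * (FULL_SCALE - 1))))

    # Sample 10 ms of a 1 kHz sine wave and measure the worst round-trip error.
    worst_error = 0.0
    for n in range(SAMPLE_RATE // 100):
        analog = math.sin(2 * math.pi * 1_000 * n / SAMPLE_RATE)
        reconstructed = quantize(analog) / (FULL_SCALE - 1)
        worst_error = max(worst_error, abs(analog - reconstructed))

    # The error is tiny (about half a quantization step, roughly 1/65536 of full
    # scale) but it exists before any "lossless" storage format enters the picture.
    print(f"worst quantization error: {worst_error:.2e}")
    ```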


  • Strictly speaking, as soon as an analog signal is quantized into digital samples there is loss, both in the amplitude domain (a value of infinite precision is turned into a value that must fit in a specific number of bits, hence of finite precision) and in the time domain (digitalization samples the analog input at specific time intervals, whilst the analog input itself is a continuous wave).

    That said, whether that is noticeable if the sampling rate and bits per sample are high enough is a whole different thing.

    Ultra high frequency sounds might be missing or mangled at a 44.1 kHz sampling rate (a pretty standard one, used in CDs), but that should only be noticeable to people who can hear sounds above 22.05 kHz (who are rare, since people usually only hear sounds up to around 20 kHz, and the older the person the worse it gets), and maybe a sharp ear can spot the error in sampling at 24 bits, even though it’s minuscule (1/2^24 of the sampling range, assuming the sampling has a linear distribution of values), but it’s quite unlikely.

    That said, some kinds of trickery and processing used to make “more sound” (in the sense of how most people perceive sound quality, rather than strictly measured in Physics terms) fit in fewer bits or fewer samples per second in a way that most people don’t notice might still be noticeable to some people.

    Remember that most of what we use now is anchored in work done way back when every byte counted, so a lot of the choices were dictated by things like “fit an LP as unencoded audio files - quite literally plain PCM, same as in WAV files - in the available data space of a CD”, so it’s not going to be ultra high quality, fit for the people at the upper end of human sound perception.

    All this to say that FLAC encoded audio files do have losses versus the analog original, not because of the encoding itself but because Analog to Digital conversion is by its very nature a process where precision is lost, even if done without any extra audio or data handling processes that might distort the audio samples even further; plus, generally, the whole thing is done at sampling rates and data precisions fit for the average human rather than for people at the upper end of the sound perception range.
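
    For reference, the two standard textbook formulas behind those numbers (the usual idealized approximations, assuming uniform quantization and a full-scale sine input):

    $$
    f_{\text{max}} = \frac{f_s}{2} = \frac{44.1\,\text{kHz}}{2} = 22.05\,\text{kHz}
    $$
    $$
    \text{SNR}_q \approx 6.02\,N + 1.76\ \text{dB}
    \;\Rightarrow\;
    \approx 98\ \text{dB at } N = 16,
    \quad
    \approx 146\ \text{dB at } N = 24
    $$

    Both are well past what typical human hearing can resolve, which is why in practice the audible differences tend to come from microphones, ADC front-ends and speakers rather than from the “lossless” storage step itself.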