  • A few months ago, we had a question about what would happen if necromancy was possible and an undead was called as a court witness. I gave a rather fun-to-write, tongue-in-cheek answer, which might be germane to your question too. Here’s just a snippet:

    So now we come back to zombies. Would a jury be able to set aside their shock, horror, and awe about a zombie in court so that they could focus on being the finder of fact? If a zombie says they’re an eye-witness to a mugging, would their lack of actual eyeballs confuse the jury? Even more confusing would be a zombie testifying as an expert witness. Does their subject-matter expertise need to be recent? What if the case needs an expert on 17th Century Parisian fashion and the undead is from that era and worked in haute couture? Are there no living fashion historians who could provide similar expert opinions?



  • While I get your point that Python is often not the most appropriate language for writing certain parts of an OS, I have to object to the supposed necessity of C – in particular, to the bolded claim that an OS not written in C is still going to have C involved.

    Such an OS could instead write its non-native parts in assembly. And while C was intentionally designed to be similar to assembly, it is not synonymous with assembly. OS authors can and do write assembly when even the C language cannot do what they need, and I gave an example of this in my comment.

    The primacy of C is not universal, and has a strong dependency on the CPU architecture. Indeed, there’s a history of building machines which are intended for a specific high-level language, with Lisp Machines being one of the most complex – since Lisp still has to be compiled down to some sort of hardware instructions. A modern example would be Java, which defines the programming language as well as the ISA and byte code: embedded Java processors were built, and thus there would have been zero need for C apart from legacy convenience.


  • As it happens, this is strikingly similar to an interview question I sometimes ask: what parts of a multitasking OS cannot be written wholly in C. As one might expect, the question is intentionally open-ended so as to query a candidate’s understanding of the capabilities and limitations of the C language. Your question asks about Python, but I posit that some OS requirement which a low-level language like C cannot accomplish would be equally intractable for Python.

    Cutting straight to the chase, C is insufficient for initializing the stack pointer. Sure, C itself might not technically require a working stack, but a multitasking operating system written in C must have a stack by the time it starts running user code. Most OSes will do that initialization much earlier, so that their own startup functions can utilize the stack.

    This is normally done by the bootloader code, which is typically written in assembly, runs when the CPU is taken out of reset, and then jumps into the OS’s C code. The C functions will allocate local variables on the stack, and everything will work just fine, even rewriting the stack pointer using intrinsics to cause a context switch (although this code is often – but not always – written in assembly too).
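
    To make that concrete, here’s a minimal sketch of what such a boot stub might look like, written as GNU C with inline ARM assembly. Everything here is invented for illustration – the `_reset` symbol, the RAM address, and the `kmain` entry function are assumptions, not any particular OS’s code:

    ```c
    /* Hypothetical reset stub: the stack pointer is set in assembly,
     * because no portable C statement can do it. */
    void kmain(void);  /* the OS's first C function (assumed name) */

    __attribute__((naked, noreturn))
    void _reset(void) {
        __asm__ volatile (
            "ldr r0, =0x20002000 \n" /* top of RAM on this imagined chip */
            "mov sp, r0          \n" /* now there is a stack...          */
            "bl  kmain           \n" /* ...so C code can run             */
            "b   .               \n" /* hang if kmain ever returns       */
        );
    }
    ```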

    The crux of the issue is that the initial value of the stack pointer cannot be set using C code. Some hardware like the Cortex M0 family will initialize the stack pointer register by copying the value from 0x00 in program memory, but that doesn’t change the fact that C cannot set the stack pointer on its own, because invoking a C function may require a working stack in the first place.
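
    To illustrate that Cortex M0 mechanism: the vector table can itself be written in C, precisely because the hardware – not any instruction – loads the stack pointer from word zero. A rough sketch, where the section name and `_stack_top` (assumed to come from the linker script) are illustrative:

    ```c
    /* Sketch of a Cortex-M vector table defined in C. The hardware reads
     * word 0 as the initial stack pointer and word 1 as the reset handler,
     * so no code ever has to set SP by hand. */
    extern unsigned int _stack_top;  /* assumed linker-script symbol */
    void Reset_Handler(void);

    __attribute__((section(".isr_vector"), used))
    static void (* const vector_table[])(void) = {
        (void (*)(void))&_stack_top, /* 0x00: initial stack pointer value  */
        Reset_Handler,               /* 0x04: first instruction to execute */
    };
    ```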

    In Python, I think it would be much the same: how could Python itself initialize the stack pointer necessary to start running Python code? You would need a hardware mechanism like the Cortex M0’s to overcome the same problem.

    The reason the Cortex M0 added that feature is precisely so that developers are never forced to write assembly for that architecture. They can if they want to, but the architecture was designed to be developed with C exclusively, including interrupt handlers.

    If you have hardware that natively executes Python bytecode, then your OS could work. But for x86 platforms or most other targets, I don’t think an all-Python, no-assembly OS is possible.



  • I like this answer. The only thing I would add is that when the fan blades are all stalled, it might then seem that drag and energy consumption should drop, since there’s not much air moving. But in a cruel twist (fan pun intended) of aerodynamics, the useless spinning of stalled fan blades still causes parasitic drag. So not only does the fan not move air, it also consumes more energy than spinning a solid disk of the same moment of inertia.

    When the engine fails for certain single-propeller aircraft, there’s sometimes a mechanism to lock the propeller to make it stop rotating, since it would otherwise “windmill” in the air and waste the precious kinetic energy that’s keeping the plane aloft. Or so I’m told.



  • I guess your nephew can start studying to become a network engineer now lol

    In all seriousness, a 16-port managed switch exposes enough complexity to develop a detailed understanding of Ethernet and Layer 2 concepts, without having to commit to learning illogical CLI commands just to achieve basic functionality. 16 ports is also enough to wire up a non-trivial network, with ports to spare for exercising loop detection/protection or STP, yet doesn’t consume a lot of electricity.

    I would pair that switch with a copy of The All-New Switch Book, 2nd Edition to go over the networking theory. Yes, that book is a bit dated but networking fundamentals have not changed that much in 15 years. Plus, it can be found cheap, or on the high seas. It’s certainly not something to read cover-to-cover, since you can skip anything about ATM networks.

    Then again, I think students might just simulate switch behaviors and topologies in something like GNS3, so no hardware needed at all.


  • The other comments correctly mention aspects like managing terrain and the width of railroads vs roadways. What I want to highlight is the development of road building methods at around the same time that metal-on-metal rail developed.

    The 1800s were a wild time. Some clever folks figured out that they could put a contemporary steam engine – invented in the early 1700s and until then used only for stationary applications in lieu of water power – onto a wagonway. Wagonways are basically wooden or metal guides/flanges that let a horse-drawn wagon be pulled along while staying perfectly centered on the path.

    Up until this point in history, the construction of graded, flattened surfaces for moving goods hadn’t changed very much compared to what the Romans were doing with their roads. That is, a road had to be dug down and some soil removed, then backfilled with coarse material (usually large stones), and then a layer of smaller stones to approximate a smooth surface. The innovations the Romans introduced included a keen eye for drainage – freeze/thaw cycles destroy roads – and surveying methods (also used to build things like aqueducts and canals). And concrete, of course.

    But even the best-built roads of that era were still prone to rutting, where each passing wagon slowly wears a groove into the road. Wooden wagons wider or narrower than the groove would suffer poor performance or outright break down. The wagonways sought to solve that issue by: 1) forcing all wagons to fit within the fixed guides on the sides, and 2) concentrating the grooves to exactly within the guides. The modern steel-on-steel railway takes this idea to its logical end.

    An adhesion railway seeks to be all-weather, heavy-duty, and efficient. Like Roman roads before it, all railways (except maybe on-street tramways) need the soil excavated and built up, with the roadbed usually sitting higher than the surrounding land. By being so compact and building upward, a railway also minimizes the width of the earthworks. This sturdy base provides a strong foundation to support heavy loads, preventing the steel rails from sinking or “rutting”. And finally, putting the wheel atop the rail makes for low-friction operation. Early wooden plateways sort-of did this, but they didn’t manage curves the way modern rails do.

    All the while, instead of trying to support heavy wagons, another clever person sought to reinvent road building outright, postulating that if a surface could just spread out the load from light/medium traffic, then the soil beneath could be used as-is, saving a lot of earthworks. A gravel surface would meet these criteria, but gravel is not all-weather and can develop rutting. The key innovation was the use of a binder (basically glue), such as tar, to hold the surface together. This sealing process meant the surface wouldn’t shift underneath traffic, which neatly avoided the issue of dust, made the surface impermeable to water, and reduced road maintenance. So famous is this surfacing process that the inventor’s name can still be found in the word for airport runway surfacing, despite runways always being excavated down to a significant depth.

    So on one hand, rail technology developed to avoid all the pitfalls of 1700s roads. On the other hand, road surfacing developed to allow light/medium traffic roads to be economically paved for all-weather conditions. Both developments led to increased speed and efficiency in their domain, and networks of both would be built out.

    Rail networks made it possible to develop the “streetcar suburbs” around major historical cities in the late 1800s. But by the same token, cheap road surfacing made it possible to build 1950s American suburbs, with wide, pedestrian-hostile streets sprawling in serpentine patterns. The fact that sealed roads are impermeable to water has also substantially contributed to water pollution, due to increased rain runoff rather than absorption into the underlying soil.



  • I once read a theory on an electricians’ forum about how the USA electrical code’s mandated maximum distance between adjacent outlets on a wall – coupled with the typical bedroom layout and home builders trying to be as cheap as possible – led to only a single outlet being placed directly in the middle of the longest wall. This is also the most logical position for a bed, so the theory goes that the bed pressing against the outlet over time was a contributing factor in electrical-related house fires.

    I cannot find where I read that originally, and certainly the granularity of nationally-reported fire data is not sufficient to prove the theory. And while the electrical code’s distance requirements haven’t changed, newer homes tend to have enough outlets that the only one isn’t behind the bed.


  • I’m not trying to be ignorant, I’m just curious.

    I think you’re in the right community! Don’t let anyone tell you to shy away from asking curious questions. (well, unless the question is also bigoted, illegal, baiting, sealioning, or otherwise disingenuous)

    I’m not an electrician in any jurisdiction, but one answer for why two 2-meter (~6 ft) extension cords in series are inadvisable compared to a single 4-meter cord is that it’s not an apples-to-apples comparison. Longer cords necessarily have to be built differently from shorter cords, not only because of electrical codes (eg the NEC in the USA) or product safety specs (eg UL, CSA) but also to be well-designed for their expected use. There’s also the human aspect, which all good designs must account for.

    Here in the USA, common extension cord lengths are ~2 m (6 ft), ~7.5 m (25 ft), ~15 m (50 ft), and ~30 m (100 ft). The common wire gauges used for those cords might be 18 AWG (~1 mm^2), 16 AWG (~1.5 mm^2), 14 AWG (~2 mm^2), and 12 AWG (~3.5 mm^2). I’ve intentionally rounded the metric units so they’re more analogous to common wire gauges outside the USA. Finally, the insulation used can be anything from “thin, indoor only” to “heavy, abrasion and sunlight resistant”. And while the USA technically has a boat-load of AC connectors, the grand majority will use the standard 2-pin or 3-pin 120 V connector, formally known as NEMA 1-15 and NEMA 5-15 respectively. What this means is that chaining extension cords is both possible and somewhat common. The problem is one of mismatched designs.

    From a cursory search on the website of a major USA home improvement store, the smallest wire gauge used for a 100 ft cable is 16 AWG. The largest is 10 AWG (nb: smaller numbers mean bigger wire). That thinner cable is marketed for outdoor use. The thicker cable indicates its use “indoor/outdoor” and for heavy-duty applications. It is also branded with a major power-tool company, which would be appropriate as power tools often draw high current.

    Whereas looking at 6 ft extension cords, most are 16 AWG, though a few use 18 AWG (thinner than 16) or 14 AWG (thicker). But I could not find any cables thicker than that, certainly nothing using 10 AWG (~6 mm^2). Even the “heavy duty” cables of this length used only 16 AWG wire.

    Because electrical resistance adds in series, and because Ohm’s Law governs the voltage lost along a cord, insufficiently large conductors can cause voltage issues for high-current appliances. USA-spec appliances generally require 120 Volts +/- 10%, with utilities aiming to provide 120 Volts +/- 5% at the outlet. This means a “sufficient” power cord should not drop more than 6 volts, give or take. Of course, a high-current appliance will also cause a larger voltage drop than a low-current device, so we only consider the former case.

    For a machine that draws 12 Amps attached to a 100 ft extension cord made of 18 AWG wire, the voltage drop would be about 15 volts. This is bad for the machine, which now sees a far lower voltage than expected. Had the cord been made of 12 AWG wire, the drop would be an acceptable 3 to 4 volts.
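
    If you want to check my math, here’s a quick sketch using copper resistance values from a standard AWG table, and remembering that the current flows out and back, so the conductor length is double the cord length:

    ```c
    /* Sketch: voltage drop over an extension cord, using copper
     * resistance per 1000 ft from a standard AWG table. */
    #include <stdio.h>

    int main(void) {
        double r18 = 6.39;       /* 18 AWG copper, ohms per 1000 ft */
        double r12 = 1.59;       /* 12 AWG copper, ohms per 1000 ft */
        double cord_ft = 100.0;  /* one-way cord length */
        double amps = 12.0;      /* appliance draw */

        /* Current flows out and back, so use twice the cord length. */
        double loop_kft = 2.0 * cord_ft / 1000.0;
        printf("18 AWG: %.1f V drop\n", amps * r18 * loop_kft); /* ~15.3 V */
        printf("12 AWG: %.1f V drop\n", amps * r12 * loop_kft); /* ~3.8 V */
        return 0;
    }
    ```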

    So if you’re operating construction tools, it would be a terrible idea to use three random 6-ft cables, and you should instead use a single 25-ft cable. Even though it’s longer than you need, the fact is that most 25 ft cables use thicker conductors, which reduces the voltage drop overall.

    But there’s also that pesky human factor. Sure, more connections means more plugs that could come loose, but the really pressing issue with daisy-chained cords is when people do it indoors, because they only have light-duty 6 ft cables handy. And for that Christmas tree, they attach three cables together to run beneath the hallway rug.

    This is essentially the worst-case scenario: using thin conductor cords, with thin insulation, underneath very flammable household surfaces, which are also trodden upon by foot traffic. Every step on that cord weakens the insulation and fatigues the conductors. Over time, the conductor becomes thinner where it’s being fatigued, and this increases the voltage drop. An unfortunate result of a voltage drop is that it generates heat. For a cable which is uniformly thin, this heat is spread over the whole length. But for localized conductor damage, the heat is pin-point… directly under a flammable rug.

    In the USA, some 3300 house fires each year start from an extension cord. Because these cords are not within the walls, they are usually beyond the reach of often-strict building/electrical codes, something that’s been critiqued by a prominent YouTuber. The US CPSC even goes so far as to create memes promoting the message that space heaters – a common, high-current appliance – should not be used with extension cords or power strips.

    [Image: CPSC meme about space heaters]

    Of course, from an electrical perspective, even a chain of ten dinky extension cords would have no problem powering a single LED night light. But it’s reasonable to ask: 1) is this just asking to be struck down by fate, 2) are there better alternatives like thicker/longer cords, and 3) why isn’t there an outlet where you need it?

    (There’s also a scenario where too long or thin of an extension cord can cause a circuit breaker to fail to trip during a short circuit, but it’s fairly esoteric and this post is quite long now)

    In short, the blanket recommendation to avoid daisy-chaining cords exists to avoid the nasty and sometimes fatal results when it goes wrong, even if it might not always play out that way. There’s almost always something safer that can be done than daisy-chaining.


  • IANAL, and lawsuits almost always end up being very fact-intensive, which means the specifics of the case often make the difference. So it’ll depend. But broadly speaking, if there isn’t a particular law – eg the ADA – that specifically assigns liability, then the most typical claim someone would try to make is a theory of negligence. That is, a failure of the laundromat to behave with a reasonable degree of care.

    In the absence of signage or disclaimers or waivers (like in some amusement park rides), the jury will have to assess whether this laundromat’s environment suggested some heightened sense of security (eg security cameras, even fake ones) or that management implied or leaned into marketing that made it sound like clothes wouldn’t be stolen there. But a typical coin-op laundromat has people going in and out at all times of day, so it’s not reasonable to think it’s akin to Fort Knox, even without a sign indicating that management disclaims liability for clothes theft.

    As for posting that sign, it won’t change the general lack of liability on the laundromat in a case where someone snatches clothing. But the equation is different if, say, a patron asked a staff member to watch their laundry for 5 minutes as they make a phone call, and that staff member agreed but then went out for a smoke, resulting in an opportunistic thief stealing the $80 bras from the dryer. Here, the laundromat would carry liability, because although they don’t normally watch the clothes, they agreed to do it this once and did it so badly that the clothes were stolen. That’s negligence, despite the sign.

    That said, posting a warning sign is generally encouraged, since a core principle of liability is that avoiding harms is always preferable to litigating after they’ve already happened. So if the sign causes patrons to stay near their clothes in the machine, then some amount of theft has been outright avoided. For this reason, courts will seldom punish a business for having an overzealous sign, unless the sign is materially false or the sign itself causes a hazard (eg a loose “Gusty Winds” highway warning sign that falls over in a light breeze, injuring a middle school student).

    But to muddy the waters some more, another core principle of liability is that liability should fall upon the person whose changed behavior would prevent future harms. For stolen clothes, it’s quite clear that the thief should be liable for the value of the stolen bras. If a court instead holds the laundromat liable, that creates a perverse incentive: rather than spending money on more/better washers, the laundromat must spend it on cameras and private security, raising the cost of the laundry machines – in addition to absolving the thief of civil liability. All for something which would be more cheaply solved by patrons just watching their laundry, or perhaps installing hasps on the machines so patrons can bring their own locks.

    On the flip side, denying liability means the patron has lost the value of their clothes. Perhaps they now have to spend more on “clothes insurance”, which only serves to benefit an insurance company rather than affording more bras. Adjudicating liability – in any legal system – is a thankless job and there are never perfect answers to the delicate balancing act. Life is messy, and even the best civil tribunals struggle to make sense in all of the turbulent circumstances.

    TL;DR: it depends




  • You’re going to have to clarify what jurisdiction, since USA law is going to be vastly different from EU law in the realms of product, medical-device, and public-accommodations liability.

    But if we did examine the USA, then we can find some generalized rules. For product liability – the responsibility of manufacturers and distributors of a tangible object – strict liability will lie when a product has an inherent defect (meaning it didn’t become defective after the initial sale) and this defect causes some sort of injury. Although this criterion doesn’t depend on the frequency of injuries, if a product is accumulating a body count, that’s usually a good sign that there’s a defect. Causality is also important to establish, as well as any mitigations that may have existed. On this front, a manufacturer might argue that the warnings in the instruction manual specifically advised against diving headlong into a 30 cm deep swimming pool. And although warning consumers not to do something may be somewhat effective at discharging liability, warnings alone do not prevent someone from trying a lawsuit anyway; the popular wisdom that the “pages of warnings” in manuals are written by lawyers is only partly true, since most manufacturers prefer repeat business from customers who are still alive.

    Medical product liability is similar, but slightly different because medical products are built for a specific purpose, yet a doctor can instruct a patient to use one differently if medically appropriate. If a product is not used as instructed by the manufacturer, the manufacturer is usually off the hook, but the doctor might be liable for medical malpractice. Maybe. Doctor liability in the USA is framed within a “duty of care”, meaning that the doctor takes on a responsibility to act with a reasonable degree of skill and competency. The “standard of care” idea is related, in that it sets the floor for what is reasonable for all doctors. It is, for example, grossly negligent for a drunk doctor to examine a patient. Harms from such negligence can be litigated through a malpractice suit. But this doesn’t mean all harm is actionable. A successful appendectomy that results in blood sepsis is always going to be a possibility, even with the best infection controls in place. If all the staff discharged their duties within their training, then negligence does not attach. Also, malpractice is not something which can be waived, because even if a patient doesn’t sue, a doctor’s medical license can be suspended. The risks of a surgery, on the other hand, can be described in detail to a patient for informed consent.

    Finally, public accommodations law sets the floor for how public and private businesses conduct themselves if they provide goods or services to the general public. Very prominently in this realm are accessibility requirements, which are rules that assure the disabled will not have undue burdens that able-bodied people wouldn’t face. The Americans with Disabilities Act (ADA) provides for very stiff fines for non-compliance, and because its objective was to set the standard, there is no provision for a “fix it ticket” approach for enforcement. That is to say, the ADA does not allow business owners to wait until a wheelchair user makes a complaint; they must follow the standard from day 1.

    No doubt there is abuse of the liability laws – there’s nothing more American than filing “ambitious” lawsuits – and this is just a brief (and uncited, “from the hip”) summary of possible areas of law that might answer your question. But I hope it gives you an idea of why a warning or sticker or sign might incur liability. Or at the very least, an unexpected lawsuit from left field.



  • To start, I’m assuming you’re talking about low-cost index funds tracking the S&P500. All of the “actively managed” funds tracking an index are, IMO, farces designed to extract money for the fund managers rather than delivering value to the (index fund) share holders. A passively-managed index fund is a fairly boring (and cheap) operation to run, primarily buying and selling shares to keep the same proportions as the tracked index, be it the popular S&P500, the CRSP Total US Market index, or any other imaginable index. The low cost appears in the very low expense ratio, some measured in single-digit hundredths of 1 percent (eg 0.04% for VTSAX).
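
    To put that expense-ratio difference in perspective, here’s a toy calculation; the 7% annual return and the 1% active-fund fee are purely assumptions for illustration:

    ```c
    /* Toy sketch of fee drag (assumed numbers): $10,000 growing at an
     * assumed 7%/yr for 30 years, under two different expense ratios. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double principal = 10000.0;
        double growth = 0.07;    /* assumed annual market return */
        double cheap  = 0.0004;  /* 0.04% expense ratio (eg VTSAX) */
        double pricey = 0.0100;  /* 1% fee, typical of active management */

        printf("0.04%% fund after 30 yr: $%.0f\n",
               principal * pow(1.0 + growth - cheap, 30));   /* ~$75,000 */
        printf("1.00%% fund after 30 yr: $%.0f\n",
               principal * pow(1.0 + growth - pricey, 30));  /* ~$57,000 */
        return 0;
    }
    ```

    Same market, same deposits – the only difference is who keeps the roughly $18,000 gap.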

    As for whether an index fund tracking American large-cap stocks is a “sure fire” investment, absolutely not. Any investment needs to be viewed in terms of its appropriateness, such as being properly diversified (within one’s abilities) and the timescale must match one’s financial objectives. The conventional adage is that everyone would like to win the lottery, but when pressed for a more specific answer, most would say that they just want to live without worrying about finding an income. That is to say, they’re just looking for “enough”.

    Practical financial advice aims to sustainably achieve “enough”, usually framed in terms of retirement but quite frankly, the process works for all sorts of goals, such as saving for higher education for oneself or a child, buying a car, building a marriage dowry, or planning to support aging parents. What’s distinct with these scenarios are: the amount needed, and the time remaining to achieve that amount.

    For a mid-20s newly-employed knowledge worker (eg mechanical engineer), they have about 40 years until retirement age. Time is a very valuable asset, because time can overcome short-term problems like economic recessions or high interest rates. Even if a recession strikes just prior to turning 65, the nest egg will have grown with 40 years of dividends prior to the recession taking a small haircut. Alternatively, starting one’s career in a recession means post-recovery investments will bolster the savings.

    The large-cap index funds (like S&P500) are high risk, high reward. For someone with a long time horizon and a good savings rate, like a young professional, large-cap makes a lot of sense. But holding only large-cap would be wholly inappropriate for a retired octogenarian who just needs to draw a steady income to pay their living expenses. After all, having already gotten so far in life, the meaning of “enough” changed from “high growth of the nest egg” to “drawing down the nest egg”. So this retired person would probably have gradually swapped out most of their index funds for things like bonds, which pay less in dividends but are steady even through recessions and bad times. But they might still keep a small portion in large-cap, in case they live longer than expected.

    For a longer discussion about investing according to one’s definition of “enough”, I would recommend reading some pages from the Bogleheads community, like this one: https://www.bogleheads.org/wiki/Bogleheads®_investment_philosophy


  • I suspect that PG&E’s smart meters might: 1) support an infrared pulse through an LED on the top of the meter, and 2) use a fairly-open protocol for uploading their meter data to the utility, which can be picked up using a Software Defined Radio (SDR).

    Open Energy Monitor has a write-up about using the pulse output, where each pulse means a quantity of energy was delivered (eg 1 Watt-hour). So counting 1000 of such pulses would be 1 kWh, and that would be a way to track your energy consumption for any timescale.

    What it won’t do is provide instantaneous power (ie kW drawn at this very moment), because the energy must accumulate to the threshold before a pulse is sent. For example, a 9 Watt LED bulb that is powered on would only cause a new pulse every 6.7 minutes. But for larger loads, the indication would be very quick; a 5000 W dryer would emit a new pulse after no more than 0.72 seconds.
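
    As a sanity check on those numbers, here’s a quick sketch converting between pulse timing and power, assuming the common 1 Wh-per-pulse meter constant (the real constant is printed on the meter face):

    ```c
    /* Sketch: converting between IR pulse timing and power, assuming
     * each pulse marks 1 Wh of delivered energy. */
    #include <stdio.h>

    int main(void) {
        double wh_per_pulse = 1.0;  /* assumed meter constant */

        /* Power from a measured gap between pulses (1 Wh = 3600 J). */
        double gap_s = 0.72;
        printf("%.2f s between pulses -> %.0f W\n",
               gap_s, wh_per_pulse * 3600.0 / gap_s);        /* ~5000 W */

        /* Expected gap for a known small load, eg a 9 W LED bulb. */
        double watts = 9.0;
        printf("%.0f W load -> one pulse every %.1f min\n",
               watts, wh_per_pulse * 3600.0 / watts / 60.0); /* ~6.7 min */
        return 0;
    }
    ```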

    The other option is decoding the wireless protocol, which people have done using FOSS software. An RTL-SDR receiver is not very expensive, is very popular, and can also be used for other purposes besides monitoring the electric meter. Insofar as USA law is concerned, unencrypted transmissions are fair game to receive and decode. This method also has a wealth of other useful info in the data stream, such as instantaneous wattage in addition to the counter registers.