Every site has a dev environment, some are lucky enough to have a separate production one.
Again, show your work. What traits are those and how did you come up with the list? And why do those traits only exist in specific types of societies?
To counter such a claim: the first nation to reach space was Nazi Germany. The USSR also did a fine job of conquering the vacuum of space under Khrushchev. And, no sane person would look at Khrushchev’s USSR and describe it as a system which “maximize[s] the potential of all individuals”. In a more modern context, China is doing a pretty good job in space exploration, having landed a rover on the moon and built their own space station. And, at the risk of provoking the wrath of the tankies, China isn’t exactly a free and open nation.
So again, I’m just not seeing a basis for such a claim. And the example we do have, human history, seems to disagree with it.
> In fact, any civilization capable of long distance space travel would have to overcome such idiocy and maximize the potential of all individuals, regardless of the wealth they were born into.
I’d be curious what you base this statement on. Historically, the societies which did the most long distance travel and exploration were the opposite of this. Spain and Portugal were absolute monarchies, with well defined feudal systems which exploited anyone outside the noble class. Yet, their efforts to “explore” and dominate the Americas were incredibly successful. The UK’s greatest era of exploration and expansion was a direct offshoot of Mercantilism, with the East India Trading Company being both the primary actor and beneficiary. US Westward expansion was predicated on theft, war and genocide. Though, as a counter-point, the modern US system does a better job of providing opportunity to most people (with some notable problems) than it used to. And the US has been a hotbed of advancement in the last century.
In modern times, space exploration was originally driven by the desire to find new and interesting ways to kill other people. And it’s only been recently that peaceful sharing of information has been normalized. Even there, the cutting edge of space exploration seems to be back in the hands of mercantilist forces. I mean, I love me some SpaceX, “let’s catch a rocket” shenanigans. But, we also shouldn’t pretend that SpaceX is anything other than a for-profit corporation under a leadership which would be happy to harvest organs from people for a profit.
I know it’s popular to think that space exploration must be a Star Trek style “space communism”. But, this doesn’t really align with the examples we have from history. And while that is certainly a human-centric way to look at the problem, it’s also the only real world example we have to look at. Everything else is just philosophers sitting around, passing a bong and saying, “man, what if…” It can be a useful exercise to think about other possibilities. But, I’d tend to focus more effort on what we have evidence for, than made up ideas.
I’m sure there are several out there. But, when I was starting out, I didn’t see one and just rolled my own. The process was general enough that I’ve been able to mostly just replace the Steam App ID of the game in the Dockerfile and have it work well for other games. It doesn’t do anything fancy like automatic updating; but, it works and doesn’t need anything special.
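For anyone curious, a minimal sketch of that “swap the app ID” approach (not my actual image): it assumes the community `steamcmd/steamcmd` base image, and uses 896660, the Valheim dedicated server app ID, as the example. The entrypoint script name varies per game, so adjust to taste.

```dockerfile
# Sketch: install a dedicated game server at build time via steamcmd.
# Change APP_ID (and the entrypoint) to reuse for another game.
FROM steamcmd/steamcmd:ubuntu

ENV APP_ID=896660

# Anonymous login works for most dedicated server apps.
RUN steamcmd +force_install_dir /opt/server \
             +login anonymous \
             +app_update ${APP_ID} validate \
             +quit

WORKDIR /opt/server
# Each game ships its own start script; this is Valheim's.
ENTRYPOINT ["./valheim_server.x86_64"]
```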
I see containers as having a couple of advantages:
That all said, if an application does not have an official container image, the added complexity of creating and maintaining your own image can be a significant downside. One of my use cases for containers is running game servers (e.g. Valheim). There isn’t an official image; so, I had to roll my own. The effort to set this up isn’t zero and, when trying to sort out an image for a new game, it does take me a while before I can start playing. And those images need to be updated when a new version of the game releases. Technically, you can update a running container in a lot of cases; but, I usually end up rebuilding it at some point anyway.
I’d also note that careful use of VMs and snapshots can replicate or mitigate most of the advantages I listed. I’ve done both (decade and a half as a sysadmin). But, part of that “careful use” usually meant spinning up a new VM for each application. Putting multiple applications on the same OS install was usually asking for trouble. Eventually, one of the applications would get borked and having the flexibility to just nuke the whole install saved a lot of time and effort. Going with containers removed the need to nuke the OS along with the application to get a similar effect.
At the end of the day, though, it’s your box; you do what you are most comfortable with and want to support. If that’s a monolithic install, then go for it. While I, or others, might find containers a better answer for us, maybe it isn’t for you.
I’m not going to defend everything the TSA does. And they do have a lot of problems. But, the lines at the checkpoint are the result of trade-offs in security. For all things security related, it’s about managing risk. You will never eliminate risk, so you need to pick and choose where to apply controls to reduce the worst risks and accept some risk in other areas.
Think about the possible outcomes from terrorist attacks on airports. There are several possible scenarios:

1. An attacker with a firearm opens fire on the crowd in the unsecured area of the airport.
2. An attacker detonates a bomb in the unsecured, crowded area of the airport.
3. An attacker smuggles a bomb onto an airplane and detonates it in flight.
4. Attackers hijack an airplane for hostages or ransom.
5. Attackers hijack an airplane and use it as a weapon against a ground target.
We could probably come up with other cases, but I think this covers the bulk of it. So, let’s dive into managing these risks. What are the effects of such attacks, if successful?
Looking at Case 1, how many people are likely to be killed? Well, that depends on the police response time and the effectiveness of the attacker’s weapon. But, based on other mass casualty events, this probably falls into the range of 10-30 people. It could move outside this range, but this is pretty typical of such situations. To pick a number in the middle, we’ll say the expected loss for such an attack is around 20.
With Case 2, again there is variability. But, it’s also something we have analogs for and may be able to put a range of casualties on. The Boston Marathon bombing in 2013 killed 3. The attack on Kabul Airport in 2021 during the US evacuation killed over 180, though that also reportedly included gunmen attacking after the explosion. Let’s put the loss rate around 50 for a single bomb, assuming a very packed area and a very effective bomb.
For Case 3, the numbers are a bit easier to get a handle on. Typical airliners carry anywhere from 100-200 passengers. The 737 MAX 8-200 is designed for 200, while the Airbus A220-100 carries around 100 passengers. We’ll pin the loss rate here at 150, as attackers are likely to target larger aircraft for this sort of attack.
Case 4 is basically Case 3, but with the potential for a loss of only money. For that reason, I’m going to drop this case; but, I wanted to mention it to avoid the “well akshuly” crowd, since hijacking for ransom is a historic problem.
That leaves Case 5. And it’s Case 4’s situation, plus some number of people on the ground. Certainly, not every such use of an airplane as a weapon will be as successful as the attack on 9/11. And that also involved multiple successful attacks. But, let’s assume that such attacks will hit populated buildings and cause significant damage. We’ll pin the expected loss at 200. This is 150 for the airplane and 50 on the ground, somewhat equivalent to Case 2 with a bomb in a crowded area.
Ok, so we have expected losses; now let’s talk about how often we expect such attacks to happen. And yes, this is a rough guess. But, since terrorists are unlikely to publish their plans, it’s the best we can do. We also face a difficulty in that these are still (thankfully) pretty rare events. And trying to extrapolate from a small set of data points is always a fraught exercise. So, feel free to quibble over these numbers, but I don’t think any numbers which fall into a reasonable range will change things much.
Case 1 - This attack has a pretty low barrier to entry. If a person can be found to perform the attack, arming them isn’t terribly hard. So, let’s assume we get 2 of these attacks a year. I don’t think we’re actually getting that, but our goal is just to get into the right ballpark.
Case 2 - This attack takes a touch more work, bomb making isn’t that hard, but making a really effective one isn’t easy either. This type of attack does have the advantage that it doesn’t always require the attacker to die in the process. So, it might be easier to find someone willing to engage in such an attack. Let’s call this 1 per year.
Case 3 - This also requires a bomb, but it may not need to be quite as big to be effective. Granted, modern aircraft can be amazingly resilient (see Aloha Flight 243). This attack also results in the attacker dying, so that can be a bit harder to source. So, let’s say this happens once every other year, or 1/2 per year.
Case 5 - So, no bomb this time, but you have to have an attacker not only willing to die in the process, but also go through enough flight training to fly the aircraft to its target. And you need the training itself. Plus, the attacker needs to get a weapon onto the aircraft. And since they need to overpower 100-200 people who might just take exception to the hijacking, you probably need multiple attackers willing to die in the attack. This is a pretty high bar to clear; so, let’s say that these attacks happen at a rate of 1 every 5 years.
Ok, so let’s consider our Annualized Loss Expectancy (ALE) with what we have:
Case | Loss Expectancy | Frequency | ALE |
---|---|---|---|
1 | 20 | 2 | 40 |
2 | 50 | 1 | 50 |
3 | 150 | 0.5 | 75 |
5 | 200 | 0.2 | 40 |
Total | - | - | 205 |
Alright, so let’s start talking about controls we can use to mitigate these attacks. By raw numbers, the thing we should care about most is Case 3, as that has the highest ALE. So, what can we do about bombs on airplanes? Making them more resilient seems like a good start, but if we could do that, the military would have done it long ago. So, really the goal is to keep bombs out of airplanes. And that’s going to mean some sort of screening. We could just say “no carry on, period” and move the problem to the cargo hold. This would reduce the frequency of Case 3 and Case 5, as it would be much harder to get a bomb or weapon onto an airplane without a bag to hide them in. But, travelers are not likely to give up all carry-on bags. So, that really leaves us with searching bags and controlled checkpoints to do it. Of course, as has been noted, this would likely mean that Cases 1 and 2 become deadlier. Let’s put some numbers to it. Let’s say that checkpoints reduce the frequency of Cases 3 and 5 by a factor of 4 and increase the Loss Expectancy of Cases 1 and 2 by a factor of 1.5.
Case | Loss Expectancy | Frequency | ALE |
---|---|---|---|
1 | 30 | 2 | 60 |
2 | 75 | 1 | 75 |
3 | 150 | 0.125 | 18.75 |
5 | 200 | 0.05 | 10 |
Total | - | - | 163.75 |
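The arithmetic behind both tables is just “loss per event times events per year, summed.” As a quick sketch in Python, using the rough guesses above and nothing more:

```python
# Annualized Loss Expectancy: ALE = expected loss per event * events per year.
# These numbers are the back-of-the-envelope estimates from the text.

baseline = {
    1: (20, 2),      # case: (loss expectancy, attacks per year)
    2: (50, 1),
    3: (150, 0.5),
    5: (200, 0.2),
}

def ale(cases):
    """Sum loss * frequency across all attack cases."""
    return sum(loss * freq for loss, freq in cases.values())

# Model checkpoints: cases 3 and 5 become 4x less frequent,
# while cases 1 and 2 become 1.5x deadlier.
with_checkpoints = {
    case: (loss * 1.5 if case in (1, 2) else loss,
           freq / 4 if case in (3, 5) else freq)
    for case, (loss, freq) in baseline.items()
}

print(ale(baseline))          # 205.0
print(ale(with_checkpoints))  # 163.75
```

The point of writing it this way is that the control’s effect is just a transform on the table, so you can swap in different multipliers and immediately see how the totals move.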
And we could push the numbers around for the effect of the checkpoints. And we could look at other controls, or controls in combination. But, this is the sort of risk analysis which would need to be done to make such decisions. And, ideally, the numbers chosen would be done with a bit more care than my rectal extraction method. Can I say that anyone at the TSA/DHS/etc. did this sort of analysis? No, but I suspect there has been some work on it. And it probably does lead to the conclusion that the expected loss is lower for airports with checkpoints than airports without. Though, that doesn’t excuse the TSA’s abysmal track record in covert screening tests.
My list of items I look for:
As for that hackernews response, I’d categorically disagree with most of it.
> An app, self-contained, (essentially) a single file with minimal dependencies.
Ya…no. Complex stuff is complex. And a lot of good stuff is complex. My main, self-hosted app is NextCloud. Trying to run that as some monolithic app would be brain-dead stupid. Just for the sake of maintainability, it is going to need to be a fairly sprawling list of files and folders. And it’s going to be dependent on some sort of web server software. And that is a very good place to NOT roll your own. Good web server software is hard, secure web server software is damn near impossible. Let the large projects (Apache/Nginx) handle that bit for you.
> Not something so complex that it requires docker.
“Requires docker” may be a bit much. But, there is a reason people like to containerize stuff, it avoids a lot of problems. And supporting whatever random setup people have just sucks. I can understand just putting a project out as a container and telling people to fuck off with their magical snowflake setup. There is a reason flatpak is gaining popularity.
Honestly, I see docker as a way to reduce complexity in my setup. I don’t have to worry about dependencies or having the right version of some library on my OS. I don’t worry about different apps needing different versions of the same library. I don’t need to maintain different virtual python environments for different apps. The containers “just work”. Hell, I regularly dockerize dedicated game servers just for my wife and me to play on.
> Not something that requires you to install a separate database.
Oh goodie, let’s all create our own database formats and re-learn the lessons of the '90s about how hard databases actually are! No really, fuck off with that noise. If your app needs a small database backend, maybe try SQLite. But, some things just need a real database. And as with web servers, rolling your own is usually a bad plan.
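To show how low the bar is here: SQLite ships in Python’s standard library, so “a real database” can be a single import with no separate server to install. A toy sketch:

```python
# SQLite needs no server process: the whole database lives in one file
# (or in memory), but you still get real SQL and transactions.
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for persistence
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
# Parameter substitution, so you aren't re-learning the '90s SQL
# injection lessons either.
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.commit()

rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)  # [('alice',)]
```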
> Not something that depends on redis and other external services.
Again, sometimes you just need to have certain functionality and there is no point re-inventing the wheel every time. Breaking those discrete things out into other microservices can make sense. Sure, this means you are now beholden to everything that other service does; but, your app will never be an island. You are always going to be using libraries that other people wrote. Just try to avoid too much sprawl. Every dependency you spin up means your users are now maintaining an extra application. And you should probably build a bit of checking into your app to ensure that those dependencies are in sync. It really sucks to upgrade a service and have it fail, only to discover that one of its dependencies needed to be upgraded manually first, and now the whole thing is corrupt and needs to be restored from backup. Yes, users should read the release notes; they never do.
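One way to build that “are my dependencies in sync?” check is to refuse to start when a backing service reports a version outside the range this release supports. A sketch; the version numbers and function name here are invented for illustration:

```python
# Fail fast at startup if a dependency's schema version is out of range,
# rather than corrupting data halfway through an upgrade.

SUPPORTED_SCHEMA = range(4, 7)  # this release supports schema v4-v6

def check_schema(reported_version: int) -> None:
    """Raise if the backing store's schema isn't one we can talk to."""
    if reported_version not in SUPPORTED_SCHEMA:
        raise RuntimeError(
            f"Database schema v{reported_version} is not supported "
            f"(need v{SUPPORTED_SCHEMA.start}-v{SUPPORTED_SCHEMA.stop - 1}); "
            "run migrations before starting the app."
        )

check_schema(5)  # in range: starts normally
# check_schema(7) would raise with a clear message instead
```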
The corollary here is to be careful about setting your users up for a supply chain attack. Every dependency or external library you add is one more place for your application to be attacked. And just because the actual vulnerability is in SomeCoolLib.js, it’s still your app getting hacked. You chose that library, you’re now beholden to everything it gets wrong.
At the end of it all, I’d say the best app to write is the one you are interested in writing. The internet is littered with lots of good intentions and interesting starts. There is a lot less software which is actually feature complete and useful. If you lose interest, because you are so busy trying to please a whole bunch of idiots on the other side of the internet, you will never actually release anything. You do you, and fuck all the haters. If what you put out is interesting and useful, us users will show up and figure out how to use it. We’ll also bitch and moan, no matter how great your app is. It’s what users do. Do listen, feedback is useful. But, also remember that opinions are like assholes: everyone has one, and most of them stink.
For a similar story, which isn’t an urban legend: my mother used to be the main resource for an archeological information center in the US Southwest. When work crews dug up a body, she’d get a call from the coroner to ask, “is it yours or mine?” While both are going to want to know the cause of death, the coroner isn’t going to open a criminal case for a Native American burial.
Someone is trying to re-create the virus from Snow Crash
I’d argue that the main reason you see more anime is the target audience.
Western animation is usually aimed at young children. For as much as I may have loved Disney’s Gummi Bears as a young child (decades later and I can still hear the theme song in my head), it’s now pretty painful to watch. Some shows have aged pretty well and some newer shows aren’t quite so bad. But, the target audience still seems to be younger children for much of it. There are exceptions, and several of those are pretty well known. For example, The Simpsons and Futurama are both popular animated shows, and both are not aimed at children.
Anime, by contrast is often aimed at teenagers. This means that it’s part of the audience’s formative years. People form bonds with the shows and carry some of those bonds into adulthood. And while the writing often falls into cringe inducing melodrama, there’s enough of it that is passable fun, usually simple hero stories. The shows can be like a comfy blanket that doesn’t insult the audience’s intelligence too much.
I’d also note that anime’s appeal goes back further than the 2000’s. My own introduction was Robotech, back in the 80’s. While it was a bastardized mashup of Macross and a couple of other shows, with some pretty awful writing (not that Macross’s writing is going to win awards any time soon), it was certainly a step above what most western studios were putting on for Saturday Morning cartoons. And that created a lifelong soft spot for anime. Heck, my desktop background is currently a Veritech Fighter. I still love the idea of Robotech, even if I only watch it in my memory through very heavily rose-tinted glasses. And I imagine I’m not alone. The show may be different, but I suspect a lot of folks graduated from Disney and Hanna-Barbera cartoons to some type of anime as they got older, and that anime stuck with them.
The fact that the OS is replaceable sealed the deal for me.
And the default OS isn’t locked down and doesn’t try to prevent you from doing other stuff with it. What you want to do isn’t in the Steam interface? Switch over to desktop mode and you have full access to the underlying OS.
My only complaint with the Steam Deck is that I find using the touchpad on the right side for long gaming sessions hurts my hands. I 3D printed some grips which help; but, I think my hands just don’t like the orientation. Still love my deck though.
It probably comes down to the difficulty of transport. We have a local fruit in the Eastern US, the Pawpaw. It’s a fantastic fruit and has a history of cultivation in the area. But, it does not transport well and has to be eaten pretty quickly after it ripens. So, it’s not a wide commercial success.
Have you considered just beige boxing a server yourself? My home server is a mini-ITX board from Asus running a Core i5, 32GB of RAM and a stack of SATA HDDs all stuffed in a smaller case. Nothing fancy, just hardware picked to fulfill my needs.
Limiting yourself to bespoke systems means limiting yourself to what someone else wanted to build. The main downside to building it yourself is ensuring hardware compatibility with the OS/software you want to run. If you are willing to take that on, you can tailor your server to just what you want.
I do agree with what you are saying, but for a complete beginner, and a very general overview, I didn’t want to complicate things too much. I personally run my own stuff in containers and am behind CG-NAT (it’s why I gave it a mention).
That said, if you really wanted to give the new user that advice, go for it. Rather than just nitpick and do the “but actshuly” bit, start adding that info and point out how the person should do it and what to consider. Build, instead of just tearing down.
No, but you are the target of bots scanning for known exploits. The time between an exploit being announced and threat actors adding it to commodity bot kits is incredibly short these days. I work in Incident Response and seeing `wp-content` in the URL of an attack is nearly a daily occurrence. Sure, for whatever random software you have running on your normal PC, it’s probably less of an issue. Once you open a system up to the internet and constant scanning and attack by commodity malware, falling out of date quickly opens your system to exploit.
Short answer: yes, you can self-host on any computer connected to your network.
Longer answer:
You can, but this is probably not the best way to go about things. The first thing to consider is what you are actually hosting. If you are talking about a website, this means that you are running some sort of web server software 24x7 on your main PC. This will be eating up resources (CPU cycles, RAM) which you may want to dedicate to other processes (e.g. gaming). Also, anything you do on that PC may have a negative impact on the server software you are hosting. Reboot and your server software is now offline. Install something new and you might have a conflict bringing your server software down. Lastly, if your website ever gets hacked, then your main PC also just got hacked, and your life may really suck. This is why you often see things like Raspberry Pis being used for self-hosting. It moves the server software on to separate hardware which can be updated/maintained outside a PC which is used for other purposes. And it gives any attacker on that box one more step to cross before owning your main PC. Granted, it’s a small step, but the goal there is to slow them down as much as possible.
That said, the process is generally straightforward. Though, there will be some variations depending on what you are hosting (e.g. webserver, nextcloud, plex, etc.). And, your ISP can throw a massive monkey wrench in the whole thing if they use CG-NAT. I would also warn you that, once you have a presence on the internet, you will need to consider the security implications of whatever it is you are hosting. With the most important security recommendation being “install your updates”. And not just OS updates, but keeping all software up to date. And, if you host WordPress, you need to stay on top of plugin and theme updates as well. In short, if it’s running on your system, it needs to stay up to date.
The process generally looks something like:

1. Install and configure the server software (and its dependencies) on the system which will host it.
2. Give that system a static IP (or a DHCP reservation) on your local network.
3. Configure port forwarding on your router to send the service’s ports (e.g. 80/443 for a web server) to that system.
4. Test that the service is reachable from outside your network.
Optionally, you may want to consider using a Dynamic DNS service (DDNS) (e.g. noip.com) to make reaching your server easier. But, this is technically optional, if you’re willing to just use an IP address and manually update things on the fly.
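If you just want to prove the plumbing works before committing to real server software, Python’s built-in web server makes a fine throwaway test target. A sketch (it binds an ephemeral local port here for the self-test; for actual hosting you’d bind a fixed port, forward it on the router, and use Apache/Nginx or the app’s own server):

```python
# Spin up the stdlib static file server in a background thread and
# confirm it answers HTTP requests.
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Port 0 asks the OS for any free port; a real setup would pin one
# (e.g. 8080) so the router knows where to forward traffic.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Sanity check from the same machine; from outside your network,
# you'd hit your public IP (or DDNS name) instead.
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/").status
print(status)  # 200
server.shutdown()
```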
Good luck, and in case I didn’t mention it, install your updates.
No, because your vote won’t encourage investment in flipping the State. I agree that the current duopoly sucks. I was an ardent Bernie supporter and would very much like viable third parties. But, the DNC isn’t going to be looking at those third party votes. They need to believe that the Democrats have a chance of winning before they will invest in a State. If all they see are protest votes, then they won’t see a viable path to them winning and they will continue to ignore the State.
Ya, it sucks, but we really do need to just keep holding our nose and pulling the lever for the Democrat in the general election.
If you are in a deep red state, it will seem that your vote won’t matter. Because it mostly won’t. However, the way States vote changes over time. The closer the vote totals in a State, the more likely the National Democratic Party is to invest resources into building up and promoting candidates in those States. That sort of thing can shift the needle, if slowly. Keep in mind that California voted Republican from '68 to '88 but shifted over time.
It sucks to vote and feel like you’re just pissing in the wind. But, each vote moves the needle just a bit more and maybe, eventually, things will swing.
I have to believe the actual poll and report aren’t as glaringly stupid as that headline. If you ask nearly anyone, “do you want peace?”, they are going to respond with “yes.” The devil is always in the details though. Ask them, “should the war in Ukraine be ended by the Ukrainian Government capitulating to all Russian demands to secure an immediate peace?” And, you might find a lot of folks are suddenly less peaceful. This reminds me of the old saw:
There’s lies, damned lies, and then there is statistics.
With a crafted question and a bit of p-hacking you can get a lot of results you want out of people.
And, it would appear that there is more coming in 2025.