I’m a huge fan of Caddy and I wish more people would try it. The utter simplicity of the config file is breathtaking when you compare it with Apache or Nginx. Stuff that takes twenty or thirty lines in other webservers becomes just one in Caddy.
Well, thanks to your guidance I was able to get my own server up and running. Converting the reverse proxy to Caddy was very easy, but then everything involving Caddy is stupidly easy. That also removed all the steps involving certs.
I’m going to try leaving out the subdomain for the S3 storage. Notesnook doesn’t seem to require it in the setup, whereas the other four addresses are specifically requested, and I feel like it would be better for security to not have Minio directly accessible over the web.
I also really want to try attaching their web app to this. They don’t seem to have their own docker release for it though, unless I missed something.
Hi, thank you so much for posting this. It’s a much better tutorial than the one provided by the Notesnook devs.
With that being said, I think it would be really helpful to have a bit more of a breakdown of what these individual components are doing and why. For example, what is the actual purpose of strapping a Monograph server onto this stack? Is that needed for the core Notesnook server to work, or is it optional? Does it have to be accessible over the web or could we leave that as a local access only component? Same questions for the S3 storage. Similarly, it would be good to get a better understanding of what the relationship is between the identity server and the main server. Why do both those components have to be web accessible at different subdomains?
This sort of information is especially helpful to anyone trying to adapt your process; for example, if they’re using a different reverse proxy, or if they wanted to swap in a different storage back-end.
Anyway, thanks again for all the time you put into this, it is really helpful.
Idk if there’s something like LineageOS for AndroidTV, that would be great.
Agreed, I would love this.
As others have suggested, OSMC is OK, but personally I prefer having Android so that I can use SmarttubeNext and access native apps for stuff like Jellyfin, Dropout, Nebula, etc. For years I played with various Linux options, but in the end I ditched it all for an Nvidia Shield and I couldn’t be happier with the results.
Your specific questions have already been answered elsewhere in this thread, but I just want to add my usual plea to not use Portainer.
I’ve spent a lot of time with Portainer, both in my homelab and at work, and in both environments I eventually replaced it with Dockge, which is far superior, both for experienced users and newbies.
Basically, the problem with Portainer is that it wants you to be in an exclusive relationship with it. For example, if you create containers from the command line like you described, Portainer only has very limited control over them. Dockge, on the other hand, is very comfortable switching back and forth between command line and UI. In Portainer, when you do create your compose files from the UI, it then becomes very difficult to interact with them from the command line. Dockge doesn’t give a shit, and keeps all the files in an easy location you choose.
Dockge will also do what you described in 5): take a docker run command and turn it into a compose file. And it gives you much better feedback when you screw up. All in all, it’s just a better experience.
Ooh, I will be giving this a go!
I mean, for anything where you’re willing to trust the container provider not to push breaking changes, you can just run Watchtower and have it automatically update. That’s how most of my stuff runs.
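For anyone who hasn’t tried it, Watchtower itself is just one more container with the Docker socket mounted. A minimal compose sketch (the cleanup setting is optional and just an example):

```yaml
services:
  watchtower:
    image: containrrr/watchtower:latest
    volumes:
      # Watchtower needs the Docker socket to inspect and update containers
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      # optional: remove old images after updating
      - WATCHTOWER_CLEANUP=true
    restart: unless-stopped
```

By default it periodically checks every running container’s image tag for a newer version and recreates the container with the same settings.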
There’s no good answer to that because it depends entirely on what you’re running. In a magical world where every open source project always uses the latest versions of everything while also maintaining extensive backwards compatibility, it would never be a problem. And I would finally get my unicorn and rainbows would cure cancer.
In practice, containers provide a layer of insurance that it just makes no sense to go without.
Personally, I always like to use containers when possible. Keep in mind that unlike virts, containers have very minimal overhead. So there really is no practical cost to using them, and they provide better (though not perfect) security and some amount of sandboxing for every application.
Containers mean that you never have to worry about whether your VM is running the right versions of certain libraries. You never have to be afraid of breaking your setup by running a software update. They’re simpler, more robust and more reliable. There are almost no practical arguments against using them.
And if you’re running multiple services the advantages only multiply because now you no longer have to worry about running a bespoke environment for each service just to avoid conflicts.
Yeah, my own experience of switching to containers was certainly frustrating at first because I was so used to doing things the old way, but once it clicked I couldn’t believe how much easier it made things. I used to block out several days for the trial and error it would take getting some new service to work properly. Now I’ll have stuff up and running in 5 minutes. It’s insane.
While I understand the frustration of feeling like you’re being forced to adopt a particular process rather than being allowed to control your setup the way you see fit, the rapid proliferation of containers happened because they really do offer astonishing advantages over traditional methods of software development.
I’ll add here that the “docker stats” command allows you to easily see what kind of resources your containers are using. (“docker top” also exists, but that one lists the processes running inside a container.)
If you prefer a UI, Dozzle runs as a container, is super lightweight, requires basically no setup, and makes it very easy to see your docker resource usage.
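To show what I mean by “basically no setup,” here’s roughly what a Dozzle deployment looks like as a compose file (the host port is arbitrary):

```yaml
services:
  dozzle:
    image: amir20/dozzle:latest
    volumes:
      # read-only socket access is enough for viewing logs and stats
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - "8080:8080"
```

Bring it up and the UI is on port 8080; no config files, no database.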
Correct on both counts, although it is possible to set limits that will prevent a single container using all your system’s resources.
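As a sketch, per-container limits in a compose file look something like this (service name and numbers are just placeholders):

```yaml
services:
  myapp:
    image: myapp:latest
    # cap this container at 1.5 CPUs and 512 MB of RAM
    cpus: "1.5"
    mem_limit: 512m
```

With those set, a runaway process inside the container gets throttled or OOM-killed instead of taking the whole host down with it.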
Correct me if I’m wrong, but I don’t think pipx lets you just put a shebang at the top of a script that automatically installs all the required dependencies the first time you run it?
What I really like about this, unless I’m missing something, is that it basically lets you create Python scripts that run in exactly the same way as shell scripts. I work with a lot of people who have pretty good basic Linux knowledge, but are completely at a loss when it comes to Python-specific stuff. Being able to send them a script that they can just +x and run sounds like a huge hassle saver.
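For reference, the mechanism behind this is PEP 723 inline script metadata: a runner like uv reads the commented dependency block and installs what’s needed on first run. A minimal sketch (the shebang assumes uv is on the PATH; I’ve left the dependency list empty and used only the stdlib so the body runs anywhere):

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.9"
# dependencies = []  # third-party packages would be listed here, e.g. ["requests"]
# ///

# Plain stdlib body so this sketch runs anywhere; a real script would
# import the packages declared in the dependencies list above.
import json


def summarize(data: dict) -> str:
    """Return a compact, key-sorted JSON summary of the input dict."""
    return json.dumps(data, sort_keys=True)


if __name__ == "__main__":
    print(summarize({"tool": "uv", "pep": 723}))
```

After a `chmod +x`, anyone can run it like a shell script and the runner handles the Python environment behind the scenes.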
The practical limit to the number of containers you can run on one system is in the high hundreds or low thousands, depending on how you configure some things, and on your available hardware. It’s certainly more than you’ll ever use unless you get into some auto-scaling swarm config stuff.
The issue is more about resource limits, and access to shared resources. I’d start by trying to figure out if there are certain specific containers that don’t play well together. Bring your setup online slowly, one container at a time, and take note of when things start to get funky. Then start testing combinations of those specific containers. See if there’s one you can remove from the mix that suddenly makes things more stable.
“Angry” was the charitable read. Your conveyed tone, intentional or not, was that of someone who was either talking down to their interlocutor, or frustrated that they felt they weren’t being understood. I picked “angry” because if your intention was to talk down to me, that comes off so much worse for you.
Regardless, my previous point stands. I have asked a number of questions that you have answered in only the most minimal fashion possible. That is not the behaviour of someone who is genuinely trying to engage in a learning process. You’re not actually making the effort, presumably because you want me to make it all for you, for free. That’s a pretty shitty way to behave, and it’s a bad way to get help with anything.
And you’re solving this by getting angry at the person trying to help you?
Learning is a process that you engage in. It’s not a thing that’s done to you. You can’t learn anything if you’re not willing to be a productive part of that process.
I get that you’re frustrated. Learning is often frustrating. But you’re only going to magnify your frustration by turning it on other people.
Seconding this, I really can’t see the point of encryption on local-only connections. Are you really worried about someone hacking your WiFi?
Anyway, if you do want to do a reverse proxy, I’ll make my usual recommendation of Caddy instead. It handles certificates for you, using Let’s Encrypt, so there’s no need to add exceptions in your browser. And reverse proxying with Caddy is literally a one line config.
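For anyone who hasn’t seen it, a reverse proxy entry in a Caddyfile really is this short (the domain and port here are just placeholders):

```
notes.example.com {
	reverse_proxy localhost:5264
}
```

Point the DNS at your server and Caddy will obtain and renew the certificate for that domain on its own.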