

Superior you say? Super!
Super Productivity is pretty good.
You can also sync between your phone, desktop, etc. using different sync options, including Dropbox, WebDAV, and local file.
Wow, that’s so messed up: I didn’t know HP did that… I think it might just be a matter of time before others follow suit.
Sounds very Wireshark-worthy!
That would be cool.
Here’s my new setup that might not work for everyone, but I’d recommend thinking about it if you’re able to.
Network printers are blocked from the Internet by my router. They have static IP addresses allocated (permanent DHCP leases) for convenience.
I have some Canon laser printers. I don’t want to install Canon software across my devices, so I set up a CUPS print server (LXC container) where I installed the software.
I set up and shared the printers (local network only) and made them discoverable.
I use the CUPS web GUI over an SSH tunnel if I need to check on job queues or do maintenance/admin tasks (I don’t usually have to).
Clients immediately find the printers on the server, no driver required.
As a bonus, I set the margins to 0 in the CUPS PPD on the server, so I can print without margins when desired (Canon enforces fixed minimum margins otherwise).
The one caveat is that the Canon drivers don’t work on Raspberry Pi (ARM), so while I have a to-do to get around that with a virtualization layer, you’ll need a separate Intel/AMD machine for the print server if your printer’s drivers don’t support ARM.
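Not part of the setup per se, but if you want to sanity-check the shared queues from a client machine without opening the web GUI, a quick pycups sketch like this works (the `printserver` hostname is a placeholder for your container’s address):

```python
# Hedged sketch: list printers and queued jobs on a remote CUPS server via pycups.
# "printserver" is a placeholder; point it at your CUPS container's hostname/IP.
import cups

cups.setServer("printserver")
conn = cups.Connection()

# Printers the server shares, with a human-readable description and status.
for name, attrs in conn.getPrinters().items():
    print(name, "-", attrs.get("printer-info", ""), "-", attrs.get("printer-state-message", ""))

# Any jobs currently sitting in the queues (roughly what the web GUI's Jobs page shows).
for job_id, job in conn.getJobs().items():
    print("job", job_id, job)
```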
If you want persistent messages, use a messaging app like another poster suggested. KDE Connect should work, but it doesn’t work with my setup for some reason.
If you just need transient messages and lightweight sending, which is more my use case, use PairDrop.
On Android there are Snapdrop and PairDrop apps on F-Droid; on desktop, use the PairDrop website.
You can also just use the website instead of the app on the phone.
Sending over LAN is local - it doesn’t go outside your own network.
If devices are on the same WiFi, no pairing is required.
You can also send across networks by pairing.
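And if you ever just need a quick one-off transfer on the same network with no app at all, the same LAN-local idea can be had with Python’s standard library (this is not how PairDrop works internally - it uses WebRTC - just the same “stays on your network” principle):

```python
# Hedged sketch: share the current directory over the LAN, nothing more.
# Reachable from other devices on your WiFi; your router still keeps it
# off the wider Internet, same as the PairDrop case above.
import http.server
import socketserver

PORT = 8000  # any free port

with socketserver.TCPServer(("0.0.0.0", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
    print(f"Serving on port {PORT}; open http://<this-machine's-LAN-IP>:{PORT} on the other device")
    httpd.serve_forever()
```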
Splunk is already very expensive, to be honest, with their policy of charging based on indexed logs as opposed to used logs (the ones actually hit by searches), plus the necessity of indexing a lot of logs “in case something breaks”. Bit of hearsay there: while I don’t work for the team that manages indexing, I have had quite a few conversations with our internal team.
I was surprised we were moving from Splunk to a lesser-known proprietary competitor (we tried and gave up on Elasticsearch years ago). Splunk is much more powerful for power users, but the alternative cost 7-10 times less, and unfortunately most users didn’t use enough of Splunk’s power-user functionality to justify it over the competitor.
Being a power user with lots of dashboards, my team still uses Splunk for now, and I’m having background conversations to make sure we don’t lose it. I think Cisco would lose out if they jacked up prices; perhaps they’d instead use Splunk as an additional value-add for their infrastructure offerings?
Here’s a slightly more detailed description of my debugging experience over the years (it implicitly includes that of many coworkers too, many of whom I’ve walked through the stages).
As someone who has done a lot of debugging and has also written many log analysis tools, it’s not an either/or; they complement each other.
I’ve seen logs dismissed a lot in these threads recently, and while I love the debugger (I’d boast that I know very few people who can play with gdb like I can), logging is an art, and just as essential.
The beginner printf thing is an inefficient learning stage that people get past early in their careers after learning the debugger, but eventually they need to relearn the art of proper logging too, and understand how to use both tools (logging and debugging).
There’s a stage when you love prints.
Then you discover debuggers, and you realize they are much more powerful. (For those of you who haven’t used gdb enough: you can script it to iterate STL (or any other) containers, and test your fixes without writing any code yet.)
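To give a flavor of what I mean by scripting gdb, here’s a rough sketch using its built-in Python API; the `_M_impl` field names are libstdc++ internals and vary across versions, so treat it as illustrative rather than copy-paste:

```python
# Hedged sketch of gdb's Python API: a custom command that walks a std::vector.
# Load inside gdb with:  (gdb) source dump_vec.py
# Then use it as:        (gdb) dump-vec my_vector
# The _M_impl/_M_start/_M_finish names are libstdc++ internals (version-dependent).
import gdb

class DumpVec(gdb.Command):
    """dump-vec VAR: print each element of a std::vector."""
    def __init__(self):
        super().__init__("dump-vec", gdb.COMMAND_USER)

    def invoke(self, arg, from_tty):
        vec = gdb.parse_and_eval(arg)
        start = vec["_M_impl"]["_M_start"]    # pointer to the first element
        finish = vec["_M_impl"]["_M_finish"]  # one past the last element
        for i in range(int(finish - start)):
            gdb.write("[%d] = %s\n" % (i, str((start + i).dereference())))

DumpVec()  # registering the command is a side effect of construction
```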
And then, once your (and everyone else’s) code has been in production a while and some random client reports a bug that only happened for a few hard-to-trace events, guess what?
Logs are your best friend. You use them to get the scope of the problem and the region of the problem (much easier if you have indexing tools like Splunk, though grep/awk/sort/uniq also work). You also get the input parameters and output results, and often spot the root cause without needing to spin up a debugger. Saves a lot of time for everyone.
If you can’t, you replicate; that often takes a bit of time, but at least your logs give you better odds of using the right parameters. Then you spin up the debugger (the heavy guns) when all else fails.
The debugger takes more time, and production systems have a lot of reported issues that turn out to be working as designed, plus a lot of upstream/downstream issues that logs will help you with much faster.
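To make the log-scoping step concrete, here’s the kind of thing I mean in place of a grep | sort | uniq -c pipeline (the log file name, format, and error signature are all made up for illustration):

```python
# Hedged sketch: scope a production issue from logs, grep/sort/uniq style.
# "app.log", the client= field, and "ERR-1234" are hypothetical placeholders.
import re
from collections import Counter

SIGNATURE = "ERR-1234"                  # hypothetical error code we're scoping
client_re = re.compile(r"client=(\w+)")

counts = Counter()
with open("app.log") as f:
    for line in f:
        if SIGNATURE in line and (m := client_re.search(line)):
            counts[m.group(1)] += 1

# Which clients are hit, and how hard - the "scope and region" of the problem.
for client, n in counts.most_common():
    print(f"{n:6d}  {client}")
```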
This is the caveat for me for now.
I’ve got decent RAM on an i9, but my graphics card, which is what matters here, isn’t up to par.