Computing – b3n.org (https://b3n.org) -- Where Computers Meet Conviction -- Coram Deo

Proxmox Backup Strategy | to S3
Sat, 17 Jan 2026 -- https://b3n.org/proxmox-backup-strategy-to-s3/

My Proxmox backup strategy has been to back up all the VMs to an onsite Proxmox Backup Server (PBS), then sync that backup daily to a remote PBS server. This works well, but it requires maintaining a remote server.

However…

Proxmox Backup Server (PBS) added Sync to S3 object storage as a Tech Preview with their latest 4.0. So I’ve been experimenting with this… and if it continues to test well, my backup strategy will be to backup Proxmox VMs to my local PBS server, then replicate those backups offsite to AWS S3.

AWS S3 Costs

For reference, my VM backups contain about 400,000 objects totaling 500GB, and about 100GB of that changes frequently. AWS S3 Standard is expensive, but I can use S3 Intelligent-Tiering to automatically move infrequently accessed objects to cheaper storage. So while the first 6 months will cost more, this is going to be a decade-long backup solution. Based on what I’ve seen so far, my estimated monthly costs are:

S3 Archive Access Storage: 301GB × $0.0036/GB/mo = $1.08
S3 Infrequent Access Storage: 152GB × $0.0125/GB/mo = $1.90
S3 Frequent Access Storage: 82GB × $0.023/GB/mo = $1.89
S3 Standard (Object Overhead): 9GB × $0.023/GB/mo = $0.21
Intelligent-Tiering Automation Fee: 400,000 objects × $0.0000025 = $1.00

Total monthly cost = $6.08.

If I ever had to restore, I estimate the egress fees would be about $45. …not horrible for something I’ll need to do rarely. Likely, never.
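For the curious, the math behind these estimates can be reproduced with a quick back-of-the-envelope script. The tier sizes and object count are mine from above; the per-GB rates were AWS's published Intelligent-Tiering prices at the time of writing, and the $0.09/GB egress rate for the restore estimate is an assumption based on standard S3 data-transfer-out pricing (verify against current AWS pricing):

```python
# Rough monthly cost estimate for S3 Intelligent-Tiering.
# Rates are assumptions from published pricing at time of writing, not quotes.
tiers = {
    "archive_access":    (301, 0.0036),   # GB, $/GB/month
    "infrequent_access": (152, 0.0125),
    "frequent_access":   (82,  0.023),
    "standard_overhead": (9,   0.023),
}
objects = 400_000
monitoring_fee = 0.0000025  # $ per object per month (automation fee)

storage = sum(gb * rate for gb, rate in tiers.values())
monthly = storage + objects * monitoring_fee
print(f"Estimated monthly cost: ${monthly:.2f}")   # ~$6.08

# Worst-case restore: download everything (~500GB) at an assumed $0.09/GB egress.
egress = 500 * 0.09
print(f"Estimated restore egress: ${egress:.2f}")  # ~$45
```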

S3 Deep Archive

Another way to drop the cost further is to allow the objects to lifecycle into S3 Glacier Deep Archive, which would bring it down to $5.27.

Borg Invasion

For me, the 48-hour bulk retrieval delay is acceptable. If I’m restoring from S3 we’re in a catastrophic scenario (e.g. fire, multiple drive failure, or Borg invasion) where I’d have to order new hardware to restore to anyway. And we’re still looking at the same ~$45 in egress fees, because retrieving objects that Intelligent-Tiering moved into Deep Archive is free.

AWS S3 Bucket Setup

To set this up in AWS, create an S3 bucket like normal. Since PBS is not capable of uploading to the Intelligent-Tiering storage class directly, create a lifecycle rule to transition all objects to Intelligent-Tiering on Day 1. I also enabled versioning, with noncurrent versions kept for 7 days after objects are deleted or overwritten.

Then set up the archive configuration (optional). I set mine to transition to the Archive Access tier after 90 days, and the Deep Archive Access tier after 180 days.
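As a sketch, here's what those two pieces of configuration look like as boto3-style Python dictionaries. The rule and config IDs are made up, the day values mirror the ones above, and you'd apply them with boto3's `put_bucket_lifecycle_configuration` and `put_bucket_intelligent_tiering_configuration` (or save them as JSON for the AWS CLI):

```python
import json

# Lifecycle rule: transition every object to Intelligent-Tiering as soon as
# possible (Days=0 applies at the next lifecycle run), and expire noncurrent
# (deleted/overwritten) versions after 7 days. Versioning must be enabled.
lifecycle = {
    "Rules": [
        {
            "ID": "to-intelligent-tiering",   # hypothetical rule name
            "Status": "Enabled",
            "Filter": {},                     # whole bucket
            "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 7},
        }
    ]
}

# Intelligent-Tiering archive configuration: Archive Access after 90 days
# without access, Deep Archive Access after 180 days.
archive_config = {
    "Id": "archive-tiers",                    # hypothetical config name
    "Status": "Enabled",
    "Tierings": [
        {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
        {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
    ],
}

print(json.dumps(lifecycle, indent=2))
```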

If you should ever need to restore, note that Proxmox is not expecting objects to be in Deep Archive, so it’s best to restore (thaw) them first, which you can do with s3cmd. The thaw should be free since these objects were moved into Deep Archive by Intelligent-Tiering:

s3cmd restore --recursive --restore-priority=bulk s3://yourpvebackupstor/

After about 48 hours the files will all be available to restore from.

Proxmox Backup Server Setup

Add an S3 Endpoint, using the endpoint format:

{{bucket}}.s3.{{region}}.amazonaws.com

Add an S3 Datastore

Select the “AWS PBS Backup” datastore (I called it s3-aws in the screenshot below), and add a Pull Sync Job that pulls your local Proxmox Backup datastore (“pbsbackup”) into S3.

I should note that this is still a tech preview, so use at your own risk. However, most of the issues I’ve read about seem to be related to an earlier release of PBS, or to not using a tier-1 cloud provider like AWS, Google Cloud, or Azure. One provider in particular that I think would be good for this use case is Google Cloud: their Archive tier is $0.0012/GB. That’s a little more expensive than Azure’s and AWS’s $0.00099, but with Google, restores from the archive class are instantaneous.

Proxmox Backup Server is the best open-source VM backup solution I’ve used. It is a robust and comprehensive solution, but easy to set up. Having an S3 target as a datastore makes the solution even simpler.

Ziply 5Gbps Fiber in Sandpoint Idaho
Sat, 24 May 2025 -- https://b3n.org/ziply-5gbps-fiber-in-sandpoint-idaho/

After years of watching Ziply install Fiber all over North Idaho and being on the waiting list, Ziply Fiber (referral link) called. Fiber is available at my house. I already have 1Gbps symmetrical with Ting, but Ziply offers faster, cheaper, and a larger variety of plans. In Sandpoint they offer 100 Mbps, 300 Mbps, 1 Gbps, 2 Gbps, 5 Gbps, 10 Gbps, and 50 Gbps–the plans range from $20 to $900/month. Needless to say I got the 50…. just kidding. But I did think about it.

All plans are symmetrical with no data caps.

I decided to get 5Gbps ($80) because it’s cheaper than Ting ($89) and probably pretty close to what the UDM SE firewall/router can handle with IPS (Intrusion Prevention System) turned on. Ziply’s Broadband Nutrition Label says it is 5568 Mbps down and 5567 Mbps up (I have no idea why those are the numbers). Here are the results from the UDM SE’s automatic speed tests…

Speed test results showing download and upload speeds for May 2025, with values ranging from 3.56 Gbps to 4.71 Gbps for download and 5.08 Gbps to 5.27 Gbps for upload.

From the VMs on my Proxmox server, I couldn’t get SpeedTest to saturate 5Gbps (using the CLI version), but I was able to do it with iperf3.

Screenshot of iperf3 results showing network performance metrics and transfer rates over 10 seconds. Bitrate is 4.79 Gbps.

Getting Ziply installed was a bit chaotic. I had the install date moved up, then a no-show on the new date. The next day Ziply sent me an email telling me they’d need to move the date out because they hadn’t run fiber from the street yet… minutes later someone called to tell me they were at my house right now. I was about to drive home when I found out they had my address mixed up–they were installing it for someone else.

Finally, the dogs alerted me to a visitor. Then the doorbell rang. It was a guy wearing a Ziply Fiber hat. But it turned out to be a sales guy trying to sell Ziply Fiber. πŸ€·β€β™‚οΈ Sorry dogs. False alarm.

The day before my install date, I got an email saying they couldn’t get a hold of me–but I had no missed calls. Who knows when the installer will come?

Two dogs looking out the window waiting for the Ziply Fiber installer.
Scout & Blaze waiting for the Ziply installer to show up day after day…

(Now, I’m not sure this is entirely Ziply’s problem; lots of people are overworked, so it’s just how things work with most contractors in North Idaho. I have the same sort of scheduling issue when trying to get any work done on the house.)

Nokia ONT device mounted on a wall with multiple indicator lights illuminated.

When the installer, John, did come (amazingly, on the install date!) it went smoothly–he said I was the first 5 Gigabit customer he had setup. At first he thought he couldn’t do the install because there wasn’t enough fiber from the street to get to my house–but I told him I wanted it in the garage (where the server rack is), and that’s right where the fiber came in anyway.

The Nokia ONT (Optical Network Terminal) he installed works for 5Gbps plans and below. Fiber comes in, and an RJ45 port goes out to your router, negotiating at 1, 2.5, or 10 gigabit (I’m assuming it can do 5 but didn’t try it).

Nokia ONT with fiber, ethernet, and power connected.

He got the Ziply side up and I knew how to take it from there.

UniFi UDM SE ports

I already had Ting on port 9 (the 2.5 GbE port), so I put Ziply on Port 10 (10GbE SFP+ port). John said he couldn’t run Fiber from the ONT device into the SFP+ port of the router (I think that’s a possibility if you get the more expensive 10 or 50Gbps plan). Ziply runs fiber from the street to the ONT device, then converts that to 10Gbps ethernet. The UDM SE’s fastest ethernet port is 2.5 gigabit, so I had to get a 10Gtek SFP+ to RJ45 Transceiver (Amazon) to use the SFP+ port.

Unifi UDM SE with ethernet and SFP connected

Since the UDM SE can do dual-WAN, I thought I’d do some side-by-side comparisons of Ziply (5Gbps) and Ting (1Gbps). This is very unscientific; I only had one house to test from, so your results may vary. It’s not quite apples-to-apples from a speed perspective, but from a cost perspective they are about the same price: Ziply 5Gbps is $80/month for the first 12 months (with the first month free), then goes to $105/month, and Ting is $89/month–so over 2 years they come out about even.

Latency for Ziply showing very consistent 11ms.
Ziply Latency at 11ms, very consistent
Ting latency ranges from 10 to 24ms
Ting Latency from 10-24ms (a bit of jitter which may impact VOIP calls)

The UDM SE continuously tracks latency to Microsoft, Google, and Cloudflare. Ziply is a very consistent (little to no jitter) 11ms to Google and Cloudflare and 14-16ms to Microsoft. Ting is around 9-11ms to Cloudflare and Microsoft (with a bit more jitter) and fairly high latency to Google at 30-35ms. I watched both through various network loads from just a few Kbps to several hundred Mbps and they stayed pretty consistent.

Ziply Fiber latency screenshot showing  Microsoft at 14ms, Google at 11ms, and Cloudflare at 11ms.

Ting Fiber latency screenshot showing  Microsoft at 10ms, Google at 33ms, and Cloudflare at 10ms.

Global Ping ICMP Latency

If you’ve never tried Globalping, it’s a way to benchmark your network using probes all over the world. It’s one of my favorite tools to test TTFB and load times in different regions for my blog. You can do ping, http, https, http/2, mtr, traceroute, and DNS tests, and you can target certain regions, countries, states, or cities. For this first round I used Globalping’s http/2 test; it requires negotiating an SSL connection and downloading a test page, so it tests latency and upload speed all at once. The West coast was more or less the same, but we start to see faster routes with Ziply to the Midwest and eastern United States. I would expect this since Ziply has their own 400Gbps link between Seattle and Chicago. Ziply was able to stay mostly under (and usually well under) 200ms while Ting was in the 400-600ms range for the eastern US. This is for a full GET request to a static page.

I should note that I repeated all these tests several times to make sure the results weren’t just an anomaly.

Globalping US http/2 test

(In all these screenshots, Ziply is the first image, Ting the second).

Globalping HTTP test screenshot showing green for Ziply across nodes in the US.
Globalping HTTP test screenshot showing green for Ting across nodes in the US but a few yellows in the Midwest and Eastern United States.

International is more or less a wash… Ting appears to have a slightly faster route to Asia while Ziply routes are better to Europe.

Globalping latency test screenshot

Globalping latency test screenshot

And for a latency (Ping) test we see a very slight advantage to Ziply the further East we get…

Globalping HTTP test screenshot showing solid green for Ziply across nodes in the US with a few yellows.
Globalping HTTP test screenshot showing solid green for Ting across nodes in the US with a few yellows and a couple oranges.

Now, for the most part latency won’t impact you. I suppose FPS (First Person Shooter) gamers might care, and it’s always good to shave a few milliseconds off of VOIP calls–if you’re using Mumble or FaceTime this will help a lot. If you’re using MS Teams there’s so much of a delay you won’t notice the gains. But if you’re browsing the web most content should be coming from a CDN which would have a local POP (Point of Presence), or if you’re hosting a webserver (like I am), you’d be pushing that content out across the world using a CDN like Cloudflare anyway–so latency from Western to Eastern US, and even globally shouldn’t be much of an issue.

I did not include a screenshot for a worldwide latency ping test because the two were so close I couldn’t discern a difference between them.

For the plans, I think Ziply is a better value than Ting: the 1Gbps Ziply plan is $50 (initially) while the same from Ting is $89. That said, Ting is fantastic. They came in and replaced Northland Cable and Frontier DSL, who had both done very little innovation in Sandpoint… but then Ting stopped improving (I’m guessing when Tucows sold them off). Ting Fiber also offers decent bundling, allowing you to get their Verizon Wireless MVNO (Ting Mobile) plan with unlimited talk/text/data for $10/month. If you’ve got a family and everyone’s a heavy data user, it’s hard to pass that up.

Dual WAN Options in the UDM SE

You can set one WAN as the primary and the other as backup using Failover Only (incoming connections will still work on both even in Failover mode) or you can Load Balance and even pick a percentage of how you want to utilize each ISP (e.g. 15% to Ting and 85% to Ziply Fiber to essentially get 6Gbps). On your outbound routing if you want certain traffic to only use one ISP that can be accomplished with policy based routing on various parameters such as the source VLAN, device, IP range, destination website, destination country, etc.

IPv6

One thing that Ting has been lacking is IPv6. Well, Ziply doesn’t have it either… this actually does matter to me, because when I deploy an AWS server I have to assign it an IPv4 address (which costs extra; IPv6 is free) just to ssh into it (I know I could set up tunnels or a VPN… but this is just simpler). It looks like Ziply may start rolling out IPv6 (Reddit), whereas I haven’t seen any indication that Ting has started on this (except for static IP customers in select areas).

Reliability

I have no idea how reliable Ziply Fiber will be–I’ll try to remember to update this section in a year. I’m hoping it will be as robust as Ting. So far in the 5 years we’ve had Ting Fiber I can recall one outage that was fixed pretty quickly after someone cut the line during construction.

On the Necessity of 5Gbps

Is 5Gbps faster than 1Gbps? Do I notice any difference? None whatsoever.

I honestly can’t tell the difference between 5Gbps on Ziply Fiber vs 1Gbps on Ting. I’m pretty certain I could go down to 300Mbps and not notice anything. I’ve looked at the bandwidth utilization in worst-case scenarios: I’m on a Teams video call at work, Kris is on a video call with family, a YouTube video is streaming, and I’m downloading some ISOs–I rarely see it spike even to 300Mbps, and usually it stays well below 100Mbps. When you’re on gigabit, the bottleneck is almost always on the other side. I think 100Mbps would be fine 99% of the time, but the 300Mbps plan is the sweet spot of price, noticeable performance, and plenty of headroom.

Now, I’m also limited by wireless. I have a U7 Pro (Amazon) access point that could theoretically saturate 5Gbps–but its uplink port is only 2.5GbE… and the PoE ports on my UDM SE are limited to 1GbE… so other than my Proxmox VMs, which are hooked up to the router using 10GbE fiber, nothing is going to push 5Gbps.

The only area I’ve seen an improvement so far is cloud backup speeds to/from AWS S3.

Next

I’ve had a few ideas I’ve wanted to try where 1Gbps may not cut it, and a few others where dual-WAN would make things more reliable. With Ziply at 5Gbps I may drop to the cheapest Ting plan and use it as a backup. Or once the promotion ends I may drop Ziply to the 1Gbps plan. Having the option of 50Gbps opens up some scenarios I’ll have to think about.

TrueNAS Backup Strategy
Sat, 22 Mar 2025 -- https://b3n.org/truenas-backup-strategy/

I spent some time over Christmas break simplifying and reducing the cost of our cloud backups. I wrote about the 7 Backup Principles in the MacOS Backup Strategy post and the same applies here.

My TrueNAS server consists primarily of SMB shares–videos, documents, files, old computer archives–and a WebDAV share which I use for DEVONthink (DT).

TrueNAS Backup Strategy

The rule of 2…

Meme of Yoda - Always two there are, no more, no less.

3-2-1 is popular: 3 copies of your data, 2 different types of storage, and 1 copy offsite. This is a good practice, but I think exactly what the 2 refers to is ambiguous to a lot of people, so here’s how I implement it in the cloud era.

Two Backup Technologies

TrueNAS Scale has several ways to synchronize data to the cloud. With any backup, I think it’s wise to use two distinct technologies. There are so many scenarios where backups can become corrupted, or you find out the backup program excluded a certain folder.

Two Offsite Destinations

It’s also a good idea to backup data to at least two distinct offsite destinations. Last year Google accidentally deleted the cloud account of a Pension fund (Ars) and in another case Scaleway lost object storage (Reddit). Even AWS S3 has lost data (Quora).

Spock quote - once you have eliminated the impossible whatever remains, however improbable, must be the truth.

Two Solutions

For TrueNAS backup solutions–I use 2.

  • TrueCloud Sync (Restic). This is new to TrueNAS, but it’s powered by Restic, which has been around a while. It only works with a StorJ target.
  • Cloud Sync (Rclone). This supports backing up to various cloud providers (such as S3, B2, etc.). You can think of it as the rsync equivalent for object storage.

There are two other backup methods worth considering that I don’t currently use:

  • Rsync (Rsync). This supports synchronizing files to an Rsync server. Rsync has been around a while and is probably one of the most reliable and tested file synchronization programs.
  • Replication Tasks (ZFS Replication). These can back up both ZVOLs (block storage) and datasets. If you are using ZVOL block storage (iSCSI or Virtual Machines on TrueNAS), replication tasks are the best way to back them up. I don’t currently use block storage so I’m not concerned about this, but if I did, I’d use ZFS replication.

Cloud Sync to AWS S3 Intelligent Tiering

Screenshot of cloud sync schedule

Cloud Sync is a robust synchronization option. It is based on Rclone, which has been around a while and is the de-facto standard for object storage synchronization. Rclone was never meant to be a backup tool, but you can combine Rclone’s sync with S3 versioning to make a backup solution.

Since all Cloud Sync does is copy/sync files, this method is dead-simple–there is very little that can go wrong and no database to corrupt.

I back up to the AWS S3 Intelligent-Tiering class. On the TrueNAS side, simply set the storage class to “Intelligent Tiering”. On AWS, I enabled versioning when creating the bucket, and I set up an Intelligent-Tiering archive configuration so that after 90 days objects are moved to the Archive Access tier, and after 180 days they move to Deep Archive. I also set up a Lifecycle Policy to remove noncurrent versions after 180 days.

With Intelligent-Tiering, while initially expensive at $0.023/GB, objects that don’t change eventually drop to the $0.00099/GB rate when they lifecycle into the S3 Deep Archive tier. Most of the data on our TrueNAS unit doesn’t change frequently, so it is charged the cheaper Glacier Deep Archive rate.

A few more thoughts & advantages of AWS:

A lot of people try to backup to the nearest AWS region–I decided to backup to a region far away in the Eastern US–just on the off-chance there’s a large geographic disaster.

Also, my hat’s off to AWS on performance. I have also used Backblaze B2, StorJ, and others, and AWS outperforms all of them–it can nearly max out my gigabit connection. If you want fast backups use AWS (I am guessing GCP and Azure would offer the same performance).

Also, if you have a lot of data and slow internet, one option to upload or restore faster is using AWS Snowball (they will send you physical storage) to seed your backup or restore. It’s probably not needed for most people with modern internet speeds–but it’s a nice option to have if your internet is slow (or you are trying to restore without internet).

Cost is a main complaint of AWS, and I partially agree, but it depends on the scenario. I think for most situations if you’re using intelligent tiering and enabling lifecycle into the S3 Deep Archive storage class, unless all your data is hot, the cost should end up cheaper than most other options over the long-run. You may have a high restore cost in the year of disaster–but with years of lower costs on the Deep Archive tier–you probably will still come out ahead. That said, if most of your data is frequently changing StorJ and B2 are likely going to be cheaper.

TrueCloud Sync to StorJ

Screenshot of TrueCloud Backup Tasks schedule

The secondary backup I use is TrueCloud Sync. TrueCloud (Restic) is fairly new to TrueNAS, but the underlying technology, Restic, has been around a while. One problem with Cloud Sync/Rclone cloud backups is the per-object overhead, which can drive up storage and transaction costs. Restic reduces per-object costs by chunking small files together, and it also handles versioning. I think Restic is an excellent tool for backing up a lot of small files. It does not, however, support object lock, so it’s probably not as robust against some threats. TrueCloud Sync will back up your .zfs snapshot folders by default, so I suggest adding an exclude for “.zfs”.
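To see why chunking matters, here's an illustrative comparison of per-object fees for the same 500GB stored as many small objects versus packed into larger chunks. The ~16MiB figure is Restic's default pack-size target, and the fee shown is S3 Intelligent-Tiering's per-object monitoring charge (StorJ's per-segment fee behaves similarly); the object counts are illustrative, not measured:

```python
# Per-object monthly fees: many small objects vs. Restic-style ~16MiB packs.
DATA_GB = 500
FEE_PER_OBJECT = 0.0000025      # $/object/month (S3 IT monitoring charge)

small_objects = 400_000                 # roughly one object per file
packed_objects = DATA_GB * 1024 // 16   # same data packed into ~16MiB chunks

print(f"unpacked: ${small_objects * FEE_PER_OBJECT:.2f}/month")   # ~$1.00
print(f"packed:   ${packed_objects * FEE_PER_OBJECT:.2f}/month")  # ~$0.08
```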

If you’re familiar with Restic, you know that you can point multiple clients at the same Restic repository. Don’t do that. You do not want to do this on TrueNAS because the GUI has no way to filter by client. Instead, backup each TrueNAS server, and even each TrueCloud backup job to its own repository (which is determined by the remote folder in the TrueNAS GUI).

TrueNAS will only allow you to back up to StorJ, and only to StorJ buckets created through TrueNAS or created via the iX Systems link from here (TrueNAS). This is annoying since you have to give the TrueNAS API key wide access. But you can mitigate this by creating a dedicated StorJ Project for TrueNAS backups. Or generate a new key for TrueNAS with limited permissions after initial bucket creation.

Cloud Providers

I always back up to two cloud providers, primarily AWS S3 and StorJ. My overall tech strategy for cloud is AWS first, though I don’t think it matters which one–Azure and Google are also good, with similar pricing to AWS. Backblaze B2 is another great option and cheaper if you have a lot of changing data. AWS S3 is the most widely used storage solution and the only one comprehensive enough that all of my backup software works with it (there is value in having all your backups in one place). It has been around the longest, so I’m confident most of the bugs and edge cases have been worked out. I use StorJ as the second mainly because I have a lot of tokens from running a StorJ node, I like the idea of decentralized storage, and it’s the only provider that TrueNAS supports with TrueCloud Sync. Also, in both cases I can prepay. I think this is important for cloud backups so that if some unanticipated event occurs, you have plenty of time before the account goes dormant. I probably have 5 years’ worth of StorJ tokens, and I can pre-pay AWS S3 several years out.

Backup Frequency

You can backup as often as every minute–if you’re willing to pay those transaction costs. I generally classify my information into “data” (almost everything current) and “archive” (I archive old projects and files about once a year) so for data I back it up daily and for archives I back them up weekly. I’m not concerned about losing archived files before the weekly backup runs again because I’m always moving files from “data” to “archive” so it would take awhile for those files to age out of the “data” backups.

Encryption and Key Management

For encryption it’s easy to have two layers. One is the cloud provider’s default managed encryption keys, using AWS KMS and StorJ encryption keys. The other is that TrueNAS can encrypt the filenames and files before they are uploaded. Key management and distribution is essential–you don’t want your house to burn down with the only copy of your encryption keys! You’ll want to back up your cloud credentials/keys (if applicable) and TrueNAS encryption keys and store those in a couple of offsite locations.

TrueNAS Backups

I do think TrueNAS is missing a decent backup solution for ZVOLs (block storage), which would cover VMs and iSCSI drives. You can back up to another TrueNAS server using ZFS replication, and of course you can script something yourself, but it would be nice if TrueNAS had a GUI solution to back up ZVOLs to object storage.

Overall, I think TrueNAS offers excellent backup capabilities for a NAS. You have a number of robust solutions to select from: rsync, rclone, restic, and ZFS replication. This is all packaged into a GUI/dashboard and alerting system, so you get a notification whenever backups fail. It’s also reliable–I’ve been running backups like this for a long time and only recall a couple of times that a backup failed because the internet went out (and it picked up the next day).

MacOS Backup Strategy
Sat, 25 Jan 2025 -- https://b3n.org/macos-backup-strategy/

I simplified our backup strategy for our MacBooks. Here’s where I landed:

Backup strategy showing a MacBook backing up to TrueNAS, AWS S3, and StorJ.

7 Backup Principles

The most comprehensive yet essential list I’ve come across is the seven characteristics of a backup plan created by Ross Williams:

  1. Coverage. A backup should be comprehensive. I try to err on the side of backing up everything and create exclusion lists.
  2. Frequency. A backup should be done often–this means it must be automated.
  3. Separation. Physical separation of at least one backup to protect against local, regional catastrophes (house fire, hurricane, etc.). I use cloud backups for this. But make sure you have your cloud login and encryption keys stored separately as well! You don’t want a house fire to wipe out your only copy of your cloud login info.
  4. History. The ability to pick a date and do a point-in-time restore. This is important to prevent your only backup being the latest version of a file that you corrupted!
  5. Testing. On World Backup Day (March 31st), I always do a spot check by restoring a few files. And when getting a new computer, I restore from backup (a great way to test).
  6. Security. Backups must be encrypted. But be careful you don’t forget the password. If relying on encryption keys, make sure those are distributed as broadly as your backups!
  7. Integrity. Cold or immutable versioned backups are a must have. It’s the only way to recover after long-unnoticed data corruption.

I’m using two backup solutions:

  1. Time Machine (to TrueNAS) for local backups.
  2. Arq Backup (to AWS S3 and StorJ) for offsite backups.

Time Machine Backup to TrueNAS (local)

The MacBooks primarily back up with Apple Time Machine to my NAS. I have a local TrueNAS server with a special Time Machine backup share via SMB. All our Macs back up to Time Machine hourly, and those backups get pruned to daily and weekly as they age out. I set a 2TB quota for each Mac’s backup so that it doesn’t grow infinitely. TrueNAS automatically creates a ZFS snapshot upon SMB disconnect (at the completion of a backup). This ensures we have clean, immutable snapshots of the backups.

Arq Backup (cloud) to AWS S3 and StorJ

One problem I’ve had with cloud backups in the past is the restore speed. But I’ve not found that to be a problem recently. With gigabit internet, restoring over the internet is as fast as the LAN.

πŸ§ͺ If one of Kris & Eli’s home school science experiments blew up the house, destroying the MacBooks and the TrueNAS server, and we somehow survived; maybe I’d want a copy of our insurance policy. I could run over to BestBuy, get a new Mac, go over to a friend’s house with gigabit internet, and download the few files I needed in minutes. Or I could do a complete restore in a few hours.

Arq Backup is designed to perform cloud backups. A family license covers 5 computers.

Arq supports object locks on S3, B2, and StorJ, which means it can make cloud backups immutable. It also chunks small files together which helps reduce cloud storage costs.

Cloud Backup Providers

I chose to backup to two cloud locations: AWS and StorJ.

AWS S3 Glacier graphic

AWS S3 Glacier Deep Archive storage class costs $0.00099/GB/month. The Arq backup data set for my MacBook is 370GB (this includes all my documents, Library, Photos, videos, etc.), so the cost to back it up is $0.37/month. It may be a little closer to $0.42 with the transaction costs.

Retrieval costs: A lot of people mention the retrieval delay and high AWS restore fees and egress fees. But it would only cost around $34.00 to do a bulk (48 hours) restore and download that out of AWS. That’s assuming I needed to restore everything. Chances are most if not all of the data would be available in iCloud. To restore all 3 of our laptops would be around $100. Most flat-fee cloud backup services cost more than that annually.
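That ~$34 figure is consistent with published rates. As a sketch, assuming Glacier Deep Archive bulk retrieval at $0.0025/GB and internet egress at $0.09/GB (both were the posted AWS rates at the time of writing; verify current pricing before relying on them):

```python
backup_gb = 370          # Arq data set for one MacBook
BULK_RETRIEVAL = 0.0025  # $/GB, Deep Archive bulk restore (assumed rate)
EGRESS = 0.09            # $/GB, S3 data transfer out (assumed rate)

cost = backup_gb * (BULK_RETRIEVAL + EGRESS)
print(f"Full restore of one MacBook: ~${cost:.2f}")  # ~$34
```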

StorJ is a distributed storage network with nodes all over the world; anyone can run a node. When you upload an object to StorJ it is segmented and split into 88 pieces–only 29 are needed to rebuild the file. StorJ runs $0.004/GB monthly plus a per-segment fee of $0.0000088. Download is $0.007/GB, making it ideal if you need to restore frequently. Arq does a good job of chunking up small files to reduce the number of segments. I mostly use StorJ because I’ve been running a node, so I have a lot of StorJ tokens. It’s like trading storage with other node operators.
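Plugging my numbers into StorJ's pricing gives a similar sanity check. I'm assuming segments close to StorJ's 64MiB maximum segment size, which large chunked uploads approach; real segment counts will be higher with smaller files:

```python
import math

backup_gb = 370
STORAGE = 0.004          # $/GB/month
SEGMENT_FEE = 0.0000088  # $/segment/month
SEGMENT_MIB = 64         # StorJ max segment size (best-case assumption)

segments = math.ceil(backup_gb * 1024 / SEGMENT_MIB)
monthly = backup_gb * STORAGE + segments * SEGMENT_FEE
restore = backup_gb * 0.007  # $/GB download

print(f"~{segments} segments, ~${monthly:.2f}/month, full restore ~${restore:.2f}")
```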

World map showing StorJ node locations (there are a lot).

iCloud (not a backup)

iCloud is a sync and file-sharing service. I don’t consider it a backup because of its limited version history–it’s better thought of as a sync service. I’ve also noticed it excludes some folders, like Videos and some application library data, from syncing, so I’m not confident it would have everything. But it can still be considered an extra partial copy of your data for some DR (Disaster Recovery) purposes.

Appendix A: MacOS System Restore from Encrypted Time Machine Backups

I use Encrypted Time Machine backups. For restoring individual files you can just use it like normal. Go into Time Machine mode, pick a point in time, and restore the file. Doing a system restore was tricky but here are the steps that worked for me:

Photo of Mac screen transferring information.  Showing 15% progress.
  1. Create account. Boot computer, create a temporary account (you don’t need to setup iCloud since it’ll be wiped out).
  2. Migration Assistant. I found if I used the Migration Assistant, it would try and fail to mount the backup with “mount failed” after entering credentials. In Finder, browse to the TrueNAS share and open up the Time Machine backup. Enter the encryption password when prompted and wait until it mounts. Then run the Migration Assistant. Ignore the TrueNAS server this time. You may need to wait a few minutes, but the mounted Time Machine backup will appear. Select that one.
  3. Select the latest point in time to restore from.
  4. My backup (which was a nearly full 512GB drive with over a million files) took about 12 hours to fully restore over a gigabit wireless connection. The restore is slow. It doesn’t saturate a gigabit. But you can check the TrueNAS network graphs to see that data is transferring. On one of the MacBooks, it got stuck 8 hours in and just hung. I had to start over but it worked the second time.

Appendix B: My Arq Backup Exclusion List

I added exclusions to Arq’s default wildcard exclusion list in case anyone finds it useful. These are things I don’t need backed up: DEVONthink 3 databases (already synced to my TrueNAS server, which is backed up, so each Mac doesn’t need to back them up), cache, tmp, and temp folders, Logos data, the trash, etc.

.DocumentRevisions-V100
.MobileBackups
.MobileBackups.trash
.Spotlight-V100
.TemporaryItems
.Trash
.Trashes
.dbfseventsd
.dropbox
.dropbox.cache
.fseventsd
.hotfiles.btree
.vol
Backups.backupdb
Cache
Caches
DerivedData
node_modules
*/iTunes/iTunes Media/Downloads
*/iTunes/iTunes Media/Podcasts
*/iTunes/Album Artwork
*/iTunes/Previous iTunes Libraries
*/Library/Application Support/CrashReporter
*/Library/Application Support/Dropbox
*/Library/Application Support/Google
*/Library/Application Support/MobileSync/Backup
*/Library/Application Support/com.apple.LaunchServicesTemplateApp.dv
*/Library/Biome
*/Library/Caches
*/Library/Containers/com.apple.mail/Data/Library/Mail Downloads
*/Library/Containers/com.apple.mail/Data/DataVaults
*/Library/Developer
*/Library/Google/GoogleSoftwareUpdate
*/Library/Metadata/CoreSpotlight
*/Library/Mirrors
*/Library/PubSub/Database
*/Library/PubSub/Downloads
*/Library/PubSub/Feeds
*/Library/Safari/Favicon Cache
*/Library/Safari/Icons.db
*/Library/Safari/Touch Icons Cache
*/Library/Safari/WebpageIcons.db
*/Library/Safari/HistoryIndex.sk
*/Library/VoiceTrigger/SAT
*/MailData/AvailableFeeds
*/MailData/BackingStoreUpdateJournal
*/MailData/Envelope Index
*/MailData/Envelope Index-journal
*/MailData/Envelope Index-shm
*/MailData/Envelope Index-wal
tmp
temp
*/Library/Weather
Cache.db
.DS_Store
Library/Application Support/Logos4
com.apple.milod/milo.db-wal
*/Library/Mail/V10/MailData/recentSearches.plist
*/Library/Application Support/Logos4
*/Data/com.apple.milod
*/Library/Assistant
*/Library/Group Containers/group.com.apple.siri.referenceResolution
*/Library/Group Containers/group.com.apple.AppleSpell
*/Library/Group Containers/group.com.apple.replicatord
*/Library/Group Containers/group.com.apple.tips
*/Library/Group Containers/group.com.apple.siri.remembers
*/Library/Group Containers/group.com.apple.spotlight
*/Library/Group Containers/group.com.apple.siri.sirisuggestions
*/Library/Group Containers/group.com.apple.chronod
*/Library/Group Containers/group.com.apple.feedbacklogger
*/Library/Group Containers/group.com.apple.tipsnext
*/Library/com.apple.icloud.searchpartyd
*/Library/Containers/com.apple.news.widget
*/Library/Containers/com.apple.lighthouse.*
*/Library/Containers/com.apple.Safari
*/Library/Containers/com.apple.stocks
*/Library/Containers/com.apple.stocks.widget
*/Library/Containers/com.apple.iCloudDriveCore.telemetry-disk-checker
*.dtBase2
*/Library/Suggestions
*/Library/DuetExpertCenter
*/Library/Saved Application State
*/Library/News
*/Library/Application Support/DEVONthink 3
*/Library/IntelligencePlatform
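For the curious, here’s a rough illustration of how a wildcard exclusion list like the one above gets applied. This is not Arq’s actual matching code, just the general idea: patterns containing a slash are matched against the path and its ancestors, while bare names match any single path component.

```python
import fnmatch

# Illustrative only -- not Arq's matching logic. Slash patterns are checked
# against the path and each ancestor; bare names match any path component.
def is_excluded(path, patterns):
    parts = path.strip("/").split("/")
    for i in range(1, len(parts) + 1):
        ancestor = "/" + "/".join(parts[:i])
        name = parts[i - 1]
        for pat in patterns:
            target = ancestor if "/" in pat else name
            if fnmatch.fnmatch(target, pat):
                return True
    return False

patterns = [".Trash", "node_modules", "*/Library/Caches"]
print(is_excluded("/Users/ben/Library/Caches/com.foo/x", patterns))  # True
print(is_excluded("/Users/ben/Documents/report.pdf", patterns))      # False
```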

The post MacOS Backup Strategy appeared first on b3n.org.

Akismet to Turnstile https://b3n.org/akismet-to-turnstile/ https://b3n.org/akismet-to-turnstile/#respond Sat, 12 Oct 2024 19:11:00 +0000 https://b3n.org/?p=122574 I’ve always allowed comments on this blog, and even allow people to disagree. I rarely moderate comments except when they’re inappropriate. But one of the issues I have to deal with is comment spam. I moderated the comments by hand for well over a decade but it gradually turned into hours of work each week. ... Read more

The post Akismet to Turnstile appeared first on b3n.org.

I’ve always allowed comments on this blog, and even allow people to disagree. I rarely moderate comments except when they’re inappropriate. But one of the issues I have to deal with is comment spam. I moderated the comments by hand for well over a decade but it gradually turned into hours of work each week.

So, I installed Akismet. It costs $120/year to auto-filter spam–well worth it. But recently I went over the limit–this site gets 14,000 spam checks per month. That puts me on the Enterprise plan, bringing the cost to at least $2,400/year. But I’m just one guy running a personal blog!

I have absolutely avoided captchas. I can’t imagine inflicting on visitors the pain of trying to identify the letters, find all the bicycles in rotating images, or solve a puzzle. Captchas have a real cost to humanity:

Wasting human capital is evil. I don’t want to be responsible for wasting 500 human years a day.

So I switched to Cloudflare Turnstile. It takes a different approach: instead of filtering spam, it filters bots. Since most spammers are bots, this works pretty well. First off, we have to ask when I actually care whether a visitor is a human or a bot. I don’t care if a bot reads the content–that can be good. For the purposes of spam, I care about stopping bots from entering comments. So there’s no degradation of experience at all for either bots or humans as long as they’re not leaving a comment–bots are welcome.

But as soon as you want to leave a comment, that’s a different story:

Scene from Star Wars Cantina where Droids were not allowed

Turnstile will run a few tests to see if you’re a human. If Turnstile can determine that behind the scenes, it’s not even going to make you solve a puzzle. No human capital wasted. You will see nothing at all and can just leave a comment. But if your IP address or behavior looks somewhat suspicious (more likely if you’re using a VPN or TOR), it will display a checkbox. Click the checkbox to prove you’re a human–a very minimal cost to leave a comment.

After using Turnstile for a few months–it works really well and I’m going to keep it. Now, I do think Akismet is better: Akismet blocks spam, Turnstile merely blocks bots. It so happens that most spam is from bots, but not all, and Turnstile has let some non-automated spam through–2 spam comments held in moderation out of 30,000. That’s a 0.0067% failure rate, and I’m pretty sure they were left by a human. But if the price is manually deleting 2 spam comments every couple of months, I’ll keep the $2,400!


My Printer Bought $610 of Toner https://b3n.org/my-printer-bought-610-of-toner/ https://b3n.org/my-printer-bought-610-of-toner/#respond Sat, 21 Sep 2024 14:48:26 +0000 https://b3n.org/?p=122342 Did you know printers can order their toner? September of 2023; my Brother MFC-L3770CDW printer (b3n.org) kept printing awful black and magenta streaks across all the pages. I wasted reams of paper trying to fix it to no avail. It was time to replace it. I was just going to get another one, but there ... Read more

The post My Printer Bought $610 of Toner appeared first on b3n.org.

Did you know printers can order their toner?

In September of 2023, my Brother MFC-L3770CDW printer (b3n.org) kept printing awful black and magenta streaks across every page. I wasted reams of paper trying to fix it, to no avail. It was time to replace it. I was just going to get another one, but there was a supply shortage of Brother laser printers, so I replaced it with a Canon ImageCLASS MF753Cdw (Amazon).

Auto-Replenishment

One of the options with Canon is to turn on auto-replenishment–whenever the toner runs low, the printer automatically orders more by itself. I don’t like to shop, so this is a no-brainer.

I couldn’t find any documentation on the re-ordering logic, but I can tell you the timing of Canon’s auto-replenishment isn’t intelligent. Despite my printing several reams a day, the printer waited until I had ~500 pages of black toner remaining before ordering more, so I ran completely out and had to wait on FedEx to continue my print job. It’s too dumb to look at usage patterns. As far as I can tell, a cron job runs at 6 am, and if the printer sees that less than 500 estimated pages are left on any toner, it orders more. This probably works for most people.
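Sketched in code, my inference of the rule looks like this (the 500-page threshold and the daily check are my guesses from observation, not Canon documentation):

```python
# Inferred re-order rule, not Canon's documented logic: a daily check
# that orders any cartridge whose estimated pages drop below 500.
REORDER_THRESHOLD_PAGES = 500

def cartridges_to_order(estimated_pages_left):
    """estimated_pages_left: dict of color -> printer's page estimate."""
    return [color for color, pages in estimated_pages_left.items()
            if pages < REORDER_THRESHOLD_PAGES]

print(cartridges_to_order(
    {"black": 2100, "cyan": 480, "magenta": 495, "yellow": 510}
))  # -> ['cyan', 'magenta']
```

A smarter version would divide the remaining pages by the recent daily print rate and order when the days of supply fall below the shipping time.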

Canon charged $173 for a 7,600-page black cartridge, which works out to 2.3 cents per page. And that was last year.

Fast forward 12 months to 2024, I make some coffee β˜•οΈ, check my email, and see an order confirmation from Canon. Yikes!

Order confirmation showing a total of $610.85 for 3 color Canon Toner Cartridges

I checked my printer settings, and each color cartridge had about 500 pages remaining. Initially, I thought these were priced too high and I could get them much cheaper off Amazon or Best Buy, but because of the discounts, the order from Canon came in at a lower price.

It looks like color prints cost 11.1 cents per page (I originally thought it was 3.7 cents per page, but it’s 3.7 cents per page per color, and a color print uses all three). That’s still a lot cheaper than having prints done at Staples. At our current rate of color printing (522 color pages per year), I’m hoping it will be a decade before the printer auto-orders more!
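The per-page arithmetic from those numbers, for anyone checking my math:

```python
# Cost-per-page arithmetic from the figures above
black_cents_per_page = 173 / 7600 * 100   # $173 black cartridge rated for 7,600 pages
color_cents_per_page = 3.7 * 3            # 3.7 cents per color, three color toners
annual_color_cost = 522 * color_cents_per_page / 100  # our yearly color volume

print(round(black_cents_per_page, 1))  # -> 2.3
print(round(color_cents_per_page, 1))  # -> 11.1
print(round(annual_color_cost, 2))     # -> 57.94
```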

OEM vs 3rd Party Toner

While it’s more expensive, I’ve changed my strategy from cheap 3rd-party toner to OEM toner. On the Brother, I used highly rated 3rd-party toner–but it seems to me the aftermarket toner manufacturers make a quality product, earn a high user rating, and then switch to lower-quality components once they have market share. Once their reputation is tarnished, they start over with a new brand name. I noticed Amazon stopped selling the top aftermarket Canon toner cartridges between the time I started and finished writing this post. I’m pretty sure bad toner and excessive toner spill are what killed my Brother printer. It’s just not worth the hassle to save a few dollars. And of course, if you run into any problems, support will just blame your toner.

My Thoughts on Auto-Replenishment Services

I have no problem if machines autonomously order things for me, it’s one less thing to do πŸ€–. I would love it if my Canon would order paper as well! It already counts each page printed so it has the information, I think auto-paper ordering is a huge miss by printer manufacturers.

I’d like it if all of my appliances replenished themselves! But I also think designers should consider these 5 laws of automated replenishment.

The 5 Laws of Auto-Replenishment

  1. It must be for the benefit of the customer. Don’t purposefully make products for the sake of selling consumables. A computer mouse does not naturally have consumables so shouldn’t need auto-replenishment to refill the laser juice or a monthly subscription based on the number of clicks. But for a laser printer, it makes sense that toner is a consumable. βœ…
  2. The pricing must be the same or better than market rates. Don’t gouge the customer. If Amazon sells it for $200, you need to, on average, match or beat it. I checked multiple stores, including Amazon; Canon charged less than I could have bought genuine toner for anywhere else. βœ…
  3. The customer must have the opportunity to cancel the order before it ships. I did have the option to cancel the Canon order before it shipped. βœ…
  4. The system must not over-order. It shouldn’t order blindly on an annual or monthly schedule creating needless stockpiles like Amazon subscribe and save. The Canon order was created based on the estimated toner remaining. βœ…
  5. The system must not be tied to the Borg collective (b3n.org). If the internet is lost (looking at you, HP!) or the company goes out of business, the product must continue to function autonomously and be able to use 3rd-party supplies. The Canon printer continues to function with or without auto-replenishment and with or without internet, and there is a genuine-toner-only setting (which was disabled by default on mine). βœ…
The Borg

When Auto-Refill services do and don’t make sense

While I will use Canon’s auto-replenishment service, HP Instant Ink is an example of a service I would not use. It would be fine if it charged you something like 5 cents a page and billed you at the end of each month. But with HP, you pay a monthly fee to print up to a certain number of pages a month (which you constantly have to manage if you under- or over-print), they gouge you on overages, and if your internet goes down, the printer can’t check whether you have a subscription, so it won’t print at all! HP Instant Ink is the perfect example of what not to do.

I much prefer Canon’s straightforward auto-replenishment model of no monthly fees. The printer just orders toner when it needs it.


TrueNAS vs Proxmox Homelab https://b3n.org/truenas-vs-proxmox/ https://b3n.org/truenas-vs-proxmox/#comments Sat, 24 Aug 2024 14:18:58 +0000 https://b3n.org/?p=122446 TrueNAS Scale and Proxmox VE are my two favorite appliances in my homelab. Overall, TrueNAS is primarily a NAS/SAN appliance, it excels at storage. It is the most widely deployed storage platform in the world. Proxmox on the other hand is a virtualization environment; so it is excellent at VM and software-defined networking. I agree ... Read more

The post TrueNAS vs Proxmox Homelab appeared first on b3n.org.

TrueNAS Scale and Proxmox VE are my two favorite appliances in my homelab. Overall, TrueNAS is primarily a NAS/SAN appliance: it excels at storage, and it is the most widely deployed storage platform in the world. Proxmox, on the other hand, is a virtualization environment, so it is excellent at VMs and software-defined networking. I agree with T-Bone’s assessment that TrueNAS is great at storage, but if you want versatility, Proxmox is a better option.

I consider TrueNAS a Network Appliance – it’s not just a NAS, but provides a platform for utilities, services, and apps (via KVM, Kubernetes, and Docker) on your network. On the other hand, I consider Proxmox an Infrastructure Appliance–it provides computing, networking, and storage. You’re going to have to build out your services and applications on top of the VMs and Containers it provides.

TrueNAS Scale – The Swiss Army Knife of the LAN. TrueNAS is capable of offering a lot of network services beyond a NAS; it’s the first thing I’d put on my network–even if it’s not needed for storage. All the random things you need–Cloudflare tunnels, WireGuard, a TFTP server, an S3 server, SMB, iSCSI, a DDNS updater, Emby, Plex, etc.–can be configured and deployed from the GUI in seconds, because TrueNAS is a full-featured appliance.

Proxmox VE – Robust Infrastructure Platform – Proxmox is a virtualization platform providing KVM and LXC containers. You’ll have to deploy your applications on top of those servers. It is more work to build out a network on Proxmox since it doesn’t provide anything at the service or application level.

Rather than start from a feature list, I’m going to start from a few of my homelab services and consider how I’d implement them on each platform. This will be relevant if you’re running a homelab since you probably run similar services. Then I’ll finish up with my thoughts on the feature capabilities of each platform.

TrueNAS vs Proxmox image

Homelab Service Requirements – TrueNAS vs Proxmox

Samba Share

I have several terabytes of SMB shares on my network:

  1. There’s Emby media.
  2. A Data share with lots of misc files and folders.
  3. My ScanSnap and Canon Scanner scan files to SMB.
  4. A few TBs of archived files and backups from the last 3 decades.
  5. Time Machine backups for our Macs.
Samba Logo
Samba Configuration in TrueNAS

Samba – TrueNAS is built for this. Just create the ZFS datasets and Samba shares–all of it is configurable in the GUI. To configure backups, you use Data Protection and set up a Cloud Sync task pointed at any S3-compatible cloud provider (Amazon S3, Backblaze B2, iDrive e2, etc.). You get full ZFS snapshot integration in Windows File Explorer, so users can self-restore files or folders from a previous snapshot with a right-click into the version-history menu. You can expose the .zfs folder so Mac and Linux clients have access to snapshots. Even advanced SMB shares like Time Machine backups are easy to set up. You can also deploy a WORM (write once, read many) share, which is great for archiving data that you want to become immutable. All of this can be deployed in a matter of seconds.

Samba – Proxmox is better considered as infrastructure; it has zero NAS capability. In the Proxmox philosophy, a NAS is something you run separately from Proxmox, or on top of the PVE (Proxmox Virtual Environment) infrastructure–not on the Proxmox host itself. The Proxmox way is separation of duties. Now, since Proxmox is just Debian, you could just apt-get install samba. That’s of course not a best practice, but you could do it. A slightly less bad idea is to create a ZFS dataset on Proxmox, install an LXC container with bind-mounts from the host, and run Samba in the container. This will work, but for configuration and backups you’re on your own. And I can’t stress backups enough–if you put SMB on the host, Proxmox won’t back it up, and you’ll likely forget about it until you need to restore your data and realize the scripts you put in place have been failing unnoticed for the last eleven months.

You could install a VM with Ubuntu or Debian and install a Samba server there. In a production environment, separation of duties is essential. You could also install TrueNAS as a VM. I wouldn’t even bother with the complexity of passing drives to it as I did with my FreeNAS VMware setup–just provide it storage from Proxmox and use the TrueNAS VM as a NAS. Let Proxmox manage the disks and TrueNAS manage SMB, NFS, and iSCSI shares.

On running Ubuntu/Debian or TrueNAS under a VM on Proxmox–I think TrueNAS has an advantage for GUI configuration and alerting.

Score: TrueNAS is a better NAS. TrueNAS 1, Proxmox 0.

Emby Server

I run an Emby server (similar to Plex) which mounts SMB storage to serve video files to our TV.

Emby Logo


Emby on TrueNAS. TrueNAS has a built-in Emby Application powered by Docker, just install it and you’re good. It’ll create a local dataset for your Emby data–which you can also share via SMB right from TrueNAS to add/remove media.

TrueNAS Appliances Screen (showing list of appliances)

Emby on Proxmox. The Proxmox model would be to run the Emby server in a VM or LXC container.

Score: Simpler & faster setup with TrueNAS. TrueNAS 1, Proxmox 0.

Windows VM

Windows 11 Logo

Windows VM – this would be a KVM VM on either platform.

Windows VM – TrueNAS. A KVM VM. This will work, but getting to the console is a bit tricky, requiring the use of VNC–and I’m not sure it’s that secure.

Windows VM – Proxmox. A KVM VM with a full range of options and nice, secure web-based console access.

Score: Both will work, but Proxmox is a better hypervisor for things like this. TrueNAS 0, Proxmox 1.

Minecraft Servers

I run a few Java Minecraft servers.

Minecraft Logo

Minecraft – TrueNAS. There is a built-in Minecraft app based on Docker, but this is where things get tricky. Here I have a Java service exposed to the world. I might be a little too cautious, but this seems risky to run in Docker. I know containerization has come a long way–but a kernel compromise or crash from inside a Docker container is a compromise or crash of the TrueNAS host kernel. And while I could put the application itself on a VLAN in my DMZ, the TrueNAS host (sharing the same kernel) is on my LAN. I think for a service like Minecraft it’s more secure to run it in KVM. So, even though TrueNAS can run Minecraft as an app, I would use a VM. But if I were just running Minecraft for my LAN, a TrueNAS Docker application would be an option. Also, I’ve found in general that any GUI that tries to configure a Minecraft server is more difficult than the CLI. For me (because I’m already familiar with the command line), it’s simpler to manage Minecraft in a VM than to figure out how to finagle a GUI into writing the ‘server.properties’ file just right.
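For what it’s worth, ‘server.properties’ is just newline-separated key=value pairs, which is why hand-editing beats wrestling a GUI. A minimal sketch of generating one (the settings shown are made-up examples, not my real config):

```python
# 'server.properties' is a flat key=value file -- this just writes one out.
def write_server_properties(path, settings):
    with open(path, "w") as f:
        for key, value in settings.items():
            f.write(f"{key}={value}\n")

write_server_properties("server.properties", {
    "motd": "Family Minecraft Server",   # example values only
    "max-players": 10,
    "white-list": "true",
    "server-port": 25565,
})
```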

Minecraft – Proxmox. Would also run it in a VM. Working with both systems Proxmox has better networking tools, and virtual networking switches which make it easier to isolate my DMZ from my LAN. That said I think I could get it all working in a TrueNAS VM.

Score: TrueNAS gives you both options. TrueNAS 1, Proxmox 0.

Other Services…

StorJ – same as Minecraft: TrueNAS has a built-in StorJ app, but again, for the sake of security, I’d run it in a VM on TrueNAS (and it would be a VM on Proxmox).

Factorio – doesn’t seem to have a TrueNAS app, so it looks like a VM on either platform (or a custom Docker app on TrueNAS).

Factorio Screenshot

Plausible (Website Analytics). I run this in a Docker container. Could be run in a VM on Proxmox, or directly as a Docker App on TrueNAS.

Cyberpanel + OpenLitespeed + PHP + MySQL Webserver. This would run in a VM on either platform.

TrueNAS Scale vs Proxmox VE Features

Moving from services to features.

Hypervisor

Proxmox outshines TrueNAS with more options as a hypervisor. While both use KVM behind the scenes, the hypervisor options Proxmox offers are unmatched–offering more storage options (Ceph or ZFS), machine settings, and better web VM/container console access. If you set up multiple Proxmox servers you can migrate VMs live.

Proxmox Screen showing list of VMs

Containerization

Containers allow the host to share its kernel with an isolated container environment. If I were running only LAN services I’d probably use containers a lot more. I love the density you can get with containers, but I’m not yet comfortable with the level of security. Proxmox also has some limitations with LXC containers–you can’t live-migrate them (which makes sense because they share the host kernel), so migrating from host to host requires downtime.

LXC Container Logo

Proxmox supports LXC containers. Containers can be deployed instantly, and you essentially get your distribution of choice (I prefer Ubuntu, but you can just as easily deploy Debian or most other Linux distributions). It’s like a VM, but it shares the kernel with the host and is a lot more efficient–you’re not emulating a machine. You will notice that just as Proxmox has no NAS capability, it also has no Docker management capability. I think the lack of a GUI for Docker management is a huge missed opportunity for Proxmox. But again, you have to realize that Proxmox is infrastructure. You run Docker in a VM or LXC container on top of Proxmox, not on Proxmox directly.

TrueNAS supports Apps, which are iX- or community-maintained configuration templates for Docker images that can be set up and deployed from the GUI. If they don’t have the app you’re looking for, you can deploy any normal Docker image using the “Custom App” option.

TrueNAS screen showing list of apps you can install

TrueNAS: 1/2, Proxmox 1/2.

Storage

Proxmox supports ZFS or Ceph. It can also utilize network storage from a NAS such as NFS (it can even be served from a TrueNAS server). TrueNAS is focused on ZFS, but high-availability storage is only available with Enterprise appliances.

TrueNAS: 0, Proxmox: 1

High Availability

The Emby server must not go down. TrueNAS Scale High Availability is limited to running on iX Systems hardware with an Enterprise license, and I believe it is limited to storage (I don’t think it has VM failover?). Proxmox has high availability at both the storage and hypervisor levels. You can also live-migrate VMs from one host to another in a cluster.

TrueNAS: 0, Proxmox 1.

Performance

Proxmox screen showing performance charts

In my experience, both are going to be fast enough so I wouldn’t pick one or the other for performance reasons. But if you are interested in performance take a look at Robert Patridge’s Proxmox vs TrueNAS performance tests (Tech Addressed) where Proxmox smokes TrueNAS.

TrueNAS 0, Proxmox 1.

Networking

Proxmox has robust SDN (Software Defined Networking) tools and virtual switches (including Open vSwitch if you need advanced networking). TrueNAS is a little more difficult to work with–especially when using Docker applications. I agree with Tim Kye’s networking assessment (T++) of the platforms–sometimes I feel like I’m fighting TrueNAS because of certain decisions iX made, while Proxmox just works.

Proxmox screen showing networking

TrueNAS 0, Proxmox 1.

Backups

TrueNAS Backups – Backing up TrueNAS is very easy for ZFS datasets–you can back up at the file level to cloud storage (Amazon S3, Backblaze B2, iDrive e2, or any S3-compatible storage), to another ZFS host using storage replication, or via Rsync. TrueNAS lacks ZVOL (block device) backup capability except via ZFS replication, so for a VM or iSCSI device you need 3rd-party backup software to get file-level backups. If you do things the TrueNAS way and run all your services using the built-in Docker-based applications, this isn’t a problem because all the configuration is stored on ZFS datasets (not ZVOLs). You simply restore your TrueNAS config file, restore all the datasets from Cloud Sync, point the applications at them, and you’re good to go (in theory–sometimes permissions…)

TrueNAS screen showing Cloud Sync Task

S3 Compatible Object storage for backing up ZFS datasets is inexpensive. Backblaze B2 is $6/TB, and iDrive e2 is $5/TB–but you can pre-pay for e2 to bring the price to around $3.50/TB.

However, if you have any VMs or use iSCSI, you’re still going to need block storage to back up ZVOLs via ZFS replication. You can use rsync.net which starts around $12/TB/month, or set up a VM with block storage formatted as ZFS (see block storage options under Proxmox Backups in the next section below).
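At the rates quoted above, a year of keeping 1TB offsite works out like this (straight arithmetic on the advertised prices, which of course may change):

```python
# Per-TB/month rates as quoted in the text; annualized for comparison
rates_per_tb_month = {
    "Backblaze B2": 6.00,
    "iDrive e2": 5.00,
    "iDrive e2 (pre-paid)": 3.50,
    "rsync.net (block/ZFS)": 12.00,
}

for provider, rate in rates_per_tb_month.items():
    print(f"{provider}: ${rate * 12:.2f}/TB/year")
```

The spread is wide: the same terabyte runs $42/year on pre-paid e2 but $144/year on block storage, which is why I only use block storage for the ZVOLs that actually need it.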

Proxmox Backups. The best backup solution for Proxmox is the Proxmox Backup Server (PBS). You’ll need a local PBS server (which can run in a VM–just make sure to exclude it from backup jobs so you don’t get into an infinite loop) and one remote.

Proxmox screen showing backup configuration

A good approach is to find a VPS provider that offers block storage, install Proxmox Backup Server there, and replicate your backups from your local PBS to the remote one. BuyVM and SmartHost offer VPS with block storage in the $5/TB range–with either service, you can upload the PBS ISO or provision a KVM VPS with Debian and install PBS on top of it.

One other advantage of Proxmox Backup Server is that backups are deduplicated. When backing up multiple VMs, shared data is deduplicated, and backup versions only take up the space of the delta.
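A toy sketch of why deduplicated backups are cheap (PBS’s real chunking and chunk size differ–real chunks are on the order of megabytes–this just shows the idea of storing each chunk once, keyed by its hash):

```python
import hashlib

# Toy model of deduplicated backup storage: cut each snapshot into
# fixed-size chunks, hash them, and store each unique chunk only once.
# (4-byte chunks keep the demo readable; real systems use ~MB chunks.)
def stored_bytes(backups, chunk_size=4):
    seen, total = set(), 0
    for snapshot in backups:
        for i in range(0, len(snapshot), chunk_size):
            chunk = snapshot[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in seen:
                seen.add(digest)
                total += len(chunk)
    return total

# Two snapshots that share their first chunk: 12 bytes stored, not 16
print(stored_bytes([b"aaaabbbb", b"aaaacccc"]))  # -> 12
```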

Data Encryption

TrueNAS Encryption. TrueNAS can encrypt an entire pool (from the creation wizard). This is useful when sending failed disks back for warranty service. When replicating a pool, the encryption is preserved, so the destination server has zero ability to see your data (make sure you keep a backup of your encryption keys offsite).

TrueNAS screen showing Encryption configuration

Backup Encryption: TrueNAS Cloud Sync can encrypt files and filenames when syncing to an S3-like cloud service. I should note there is an option in TrueNAS to back up the configuration file–this includes all the encryption keys, so it’s useful to back it up periodically and store it in a password manager.

Proxmox Encryption. Proxmox does not support pool-level encryption from the GUI, but you can easily do it yourself using LUKS, prompting for a key on boot or pulling it from the TPM.

Proxmox screen showing backup encryption settings

Backup Encryption: When backing up, Proxmox can encrypt the data in such a way that the destination server (a remote Proxmox Backup Server) can’t read the data at all without the key.

TrueNAS 1, Proxmox 0. (no GUI for LUKS in Proxmox)

Disaster Recovery

I have done test disaster recoveries on both systems. I prefer a DR with Proxmox because you can restore high-priority VMs first, then the rest; I just find the process a lot simpler. PBS also allows you to restore individual files from backups of Linux VMs. I’ve found TrueNAS a bit slower at restoring (especially from ZFS replication), and from Cloud Sync the permissions aren’t restored (one reason to virtualize your NAS is that if you restore it at the block level there’s no chance you’ll miss permissions).

TrueNAS may have an advantage in that you can restore specific files if you only need a couple of files after a disaster, but Proxmox Backup Server understands the Linux filesystem, so it allows you to restore specific files from a VM as well.

That said, with both systems, I’ve been able to restore from scratch with success.

TrueNAS 1, Proxmox 1.

Storage Management

TrueNAS excels here when it comes to ZFS management. ZFS can be managed fully from the GUI in TrueNAS, including scheduling scrubs and replacing failed drives; in Proxmox you will need to drop into the command line. This isn’t a huge deal for me since I know the CLI well. Proxmox may have a better GUI for Ceph storage management, but I haven’t used it.

TrueNAS 1, Proxmox 0.

Licensing

The single greatest cause of downtime is licensing failure. However, both Proxmox and TrueNAS are open source, so you won’t have any downtime caused by licensing issues, and won’t need to waste time going back and forth with sales or negotiating contracts.

TrueNAS licensing. The TrueNAS Scale edition is free and open source, requiring no keys. Generally speaking, if you’re sticking with open-source technology, the Scale edition is all you need. It’s when you start needing integration with proprietary systems such as vCenter, Veeam, or Citrix that you’ll pay. High-availability clustering and support require Enterprise. For any business, you should get the Enterprise edition from iX Systems.

Proxmox licensing. Proxmox VE is free and open source. However, you won’t get updates between releases unless you enable the Enterprise or No-Subscription repository (Proxmox). The lowest-tier Enterprise subscription is a reasonable €110/year per CPU socket. For a business or anything important you should pay for Enterprise, but for a poor man’s homelab you have two options:

Option 1: Only upgrade twice a year on the releases. You might be able to get away without interim updates at all. Proxmox VE is released twice a year, and having a consistent twice-a-year schedule to update your homelab is not unreasonable. Since Proxmox is based on Debian, you’re still getting distribution OS and kernel updates. The only updates you’re not getting are for Proxmox itself.

Option 2: Enable the No-Subscription repository. You’ll get faster updates, but it’s not intended for production. If I were running LXC containers with services exposed to the internet, I’d want to do this (or just pay for Enterprise) to get security fixes fast.

And of course, there is a nag if you don’t have an enterprise license. You can find scripts to disable the nag, but still: there’s a nag, and there’s no way to disable it in the GUI.
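For what it’s worth, switching to the No-Subscription repository is just an apt sources change. A minimal sketch for Proxmox VE 8 on Debian bookworm (adjust the suite name for your release; the file paths are the Proxmox defaults):

```shell
# Comment out the enterprise repos (they return 401 without a subscription key)
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/ceph.list   # if present

# Add the no-subscription repo
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list

apt update && apt full-upgrade
```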

TrueNAS 1, Proxmox 1.

Monitoring

The other day, as I walked into my garage, I saw red lights on several of my hard drives. The hard drives were too hot! I should have been using Stefano’s fork of my Supermicro fan speed control script, which kicks up the fan RPMs when drive temps go high.

ZFS Storage Server

Proxmox didn’t alert me at all. Proxmox has some basic monitoring, like SMART failures, drive failures, or a drive corrupting data. But TrueNAS takes it a step further and is proactive about hardware monitoring.

Portion of screen showing TrueNAS alerts

TrueNAS servers alert me to high drive temperatures, and in general, I’d say the alerting is more proactive and comprehensive than Proxmox’s. Sometimes you have to dig to find problems in Proxmox, but in TrueNAS you can check the alert notifications in the top-right of the GUI. Some examples of things you may get alerts for: a drive failure, high CPU temperature, high HDD temperature, a failed Cloud Sync, storage running low, failed ZFS replication, an available IPMI firmware update, an SSL certificate about to expire, a crashed or failed service, etc. From a hardware and software monitoring and alerting perspective, TrueNAS is a lot more capable.

I think Proxmox is probably better suited to environments where it’s monitored by something like Zabbix. TrueNAS can be monitored with Zabbix too, but it doesn’t need it.

TrueNAS 1, Proxmox 0.

GUI Completeness

Proxmox lets you do the initial setup and most things through the GUI–probably 95%. But when you have to replace a failed drive in your pool or perform an upgrade, you’ll have to go to the CLI. I’m comfortable with this, but some people aren’t. TrueNAS, by contrast, can pretty much be run without ever using the CLI.
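For example, replacing a failed drive in a ZFS pool from the Proxmox shell is only a handful of commands. A rough sketch; the pool name `tank` and the device names here are hypothetical, so check `zpool status` for your real ones:

```shell
zpool status tank                       # identify the FAULTED/DEGRADED disk
zpool offline tank sdd                  # take the failing disk offline (if still attached)
# ...physically swap the drive, then resilver onto the replacement:
zpool replace tank sdd /dev/disk/by-id/ata-NEW_DISK_SERIAL
zpool status tank                       # watch resilver progress
```

Referencing the new disk by `/dev/disk/by-id/` rather than `sdX` keeps the pool stable if device letters shuffle on reboot.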

Fleet Management

TrueNAS Scale fleets can be managed with TrueCommand (free for managing systems with up to 50 drives total).

With Proxmox, each node can manage the entire cluster without the need for a central management service, with no limits in the free version. But Proxmox can’t centrally manage a cluster of clusters (yet).

Platform Stability & Innovation

One area where Proxmox excels is stability. Their changes are deliberate and well thought out, iterating slowly on their roadmap. The upgrade process is documented and thorough, and I’ve never had anything break. But since I’ve been on TrueNAS, they’ve switched the OS from FreeBSD to Linux, from Kubernetes to Docker, and from KVM to Incus. In general, I’ve found that with TrueNAS it’s best to stay on the Mission Critical or Conservative track unless there’s a new feature I want sooner. On the other hand, TrueNAS innovates faster, so you’ll see newer technologies and features sooner, along with more things you can do in the GUI. Proxmox is simple and stable, while TrueNAS is broad (the Swiss Army knife of the network) and innovative.

TrueNAS vs Proxmox Comparison Table

| Feature | TrueNAS Scale | Proxmox VE |
| --- | --- | --- |
| NAS / SAN (SMB, NFS, iSCSI shares) | ✅ | ❌ |
| Applications (Emby, Minecraft, S3 server, TFTP server, DDNS updater, Tailscale, VPN, etc.) | ✅ | ❌ |
| HA clustering | ❌ Enterprise only | ✅ |
| VMs | ✅ | ✅ |
| Advanced VM management | ❌ | ✅ |
| LXC containers | ❌ | ✅ |
| Docker containers | ✅ | ❌ |
| Storage – ZFS | ✅ | ✅ |
| Storage – Ceph | ❌ | ✅ |
| Storage management | ✅ | ❌ |
| Networking | ✅ | ✅ |
| Advanced networking | ❌ | ✅ |
| ZFS encryption | ✅ | ❌ |
| Backup datasets | ✅ | N/A |
| Backup ZVOLs | ✅ | N/A |
| Backup VMs & containers | ✅ | ✅ |
| Deduplicated backups | ❌ | ✅ |
| Zero-knowledge encrypted backups | ✅ | ✅ |
| Basic monitoring/alerting | ✅ | ✅ |
| Comprehensive monitoring/alerting | ✅ | ❌ |
| Audit logs | ✅ | ❌ |
| Hardware management & monitoring | ✅ | ❌ |
| Fleet / cluster management | ✅ | ✅ |

TrueNAS Scale vs Proxmox VE Comparison

How Should Each Platform Improve?

In my opinion, the number 1 and 2 priorities for improving each platform would be these items:

  • TrueNAS needs to add better KVM management.
  • TrueNAS needs to add better networking (maybe Open vSwitch?).
  • Proxmox needs to add Docker management alongside VM and LXC management.
  • Proxmox needs to add GUI management of ZFS storage (replacing failed drives and such).

Survivability

One area that pertains to home labs is survivability in the event of your death. ☠️

Would your wife be able to get help maintaining or replacing what your system does? I think you’re more likely to find help with TrueNAS than Proxmox–even someone slightly technical could easily see from a few minutes of going through the menu options what services and applications are being used. Because TrueNAS is fairly opinionated and configured via GUI, things are done consistently–anyone who uses TrueNAS will feel right at home. Proxmox on the other hand lets you do whatever you want–you may have RedHat, Ubuntu, Windows, or Docker images running in a VM–who knows what insanity someone helping your family will discover? In both cases, make good notes.

What do I run in my homelab?

Honestly, I would be happy with either solution; I’m grateful that both iX and Proxmox provide a free solution. I have configured both systems for organizations.

What prompted me to write this was that I had two physical servers: one running TrueNAS Scale and one running Proxmox VE. However, I decided to simplify and pare it down to one system to make more room in my server rack. This forced me to think it through. Ultimately I decided Proxmox was better for most of the virtualization needs in my homelab. Part of my decision is that I already have a lot built on Proxmox, and it would take time I don’t have to re-implement it on TrueNAS for what I would consider a lateral (no real benefits) move.

But I like the NAS offering of TrueNAS and use a lot of its out-of-the-box features. So I run Proxmox on bare metal as my core infrastructure, with a TrueNAS VM under Proxmox acting as my NAS. In a small environment, sitting under the hypervisor seems like exactly where the NAS should be.

For me, a rough (emphasis on the rough) rule of thumb for Proxmox vs TrueNAS is this: if I’m primarily providing NAS and a few services for a home or office, I’ll likely deploy TrueNAS. If I’m deploying in a complex environment or planning to host applications to the world, then I’d probably use Proxmox.

The post TrueNAS vs Proxmox Homelab appeared first on b3n.org.

How to Install Minecraft Server on Ubuntu 24.04 https://b3n.org/minecraft-server-ubuntu-2404/ https://b3n.org/minecraft-server-ubuntu-2404/#comments Sat, 29 Jun 2024 03:35:26 +0000 https://b3n.org/?p=122103 How to install a Minecraft server (Java Edition) on Ubuntu 24.04 LTS. Yesterday, Eli told me 100 random people were on our Minecraft server. Incidentally I set the server limit to 100. One of the players told him it was advertised on a server list. That’s odd because this is a private Minecraft server and ... Read more

The post How to Install Minecraft Server on Ubuntu 24.04 appeared first on b3n.org.

How to install a Minecraft server (Java Edition) on Ubuntu 24.04 LTS.

Design decisions:

  1. As simple as possible
  2. Use the shell instead of a web interface. Web GUIs seem to create more maintenance and problems than just using a CLI.
  3. Abstract Java and Minecraft server installation and updating with snap.
  4. Use crontab for auto-start (systemd would be the proper way to do this, but this is fast and minimizes complexity).
  5. Self hosted. This gets very intermittent use so I don’t want to pay hundreds of dollars a month for a hosted solution, but when it’s used it needs some CPU. I tried to use usage based cloud instances but found them too sluggish, too expensive, or not near our region (high latency). I found 4 cores and 6GB memory in a Proxmox VM does well.
  6. Keep everything vanilla. No mods means fewer things that can break and not having to do builds to upgrade. If we run into issues I may switch to SpigotMC or such but currently it seems stable.
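If you’d rather do the auto-start “properly” with systemd instead of cron (design decision #4), a minimal unit would look something like the sketch below. The user and script path match this guide, but the unit file itself is my own assumption, not something the snap provides:

```ini
# /etc/systemd/system/minecraft.service (hypothetical unit, not shipped by the snap)
[Unit]
Description=Minecraft server (mc-server-installer)
After=network-online.target

[Service]
User=minecraft
WorkingDirectory=/home/minecraft
Environment=TERM=xterm
ExecStart=/home/minecraft/start_minecraft.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl daemon-reload && systemctl enable --now minecraft` would take the place of an @reboot cron entry, and you get restart-on-crash for free.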

This setup uses James Tigert’s mc-server-installer snap to install Minecraft server, an expect script (your grandfather’s RPA) to interact with the startup menu, and cron to start the service on system boot.

Install mc-server-installer and expect

$ sudo su
# snap install mc-server-installer
# apt install expect
# adduser minecraft
# su minecraft
$ cd ~
$ mc-server-installer
------------------------------------------

         MINECRAFT SERVER INSTALLER
                   MENU              

------------------------------------------

ATTENTION: Latest available version: 1.21

Select from the following options: 

1) Download latest (v1.21) server.jar
2) Agree to the EULA
3) Edit the server.properties file
4) Run MC server with max 2GB of RAM
5) Run MC server with max 4GB of RAM
6) Run MC server with max 6GB of RAM
7) Run MC server with max 8GB of RAM
8) Run MC server with max 16GB of RAM
9) View README
10) Back up your world
11) Run custom RAM settings
12) Run custom jar file and RAM settings
13) Quit

Choice: 

Choose option 1 to download the server.jar; re-run and choose 2 to agree to the EULA; re-run and choose 3 to edit server.properties (if needed); then re-run and choose an option depending on your memory requirements. At this point, verify you are able to connect to the Minecraft server. If it’s working, press CTRL+C to kill it.

If you want to modify the server.properties and world files directly, look under /home/minecraft/snap/mc-server-installer/current

Create an Expect Script to Interact with the Menu

$ vim /home/minecraft/start_minecraft.sh

On line 9, I’m sending menu option “6”, but you can change it depending on your memory configuration.

#!/usr/bin/expect -f

set timeout -1
log_file /home/minecraft/minecraft.log
log_user 1
spawn mc-server-installer
expect {
    "Choice: " {
        send "6\r"
        exp_continue
    }
    timeout {
        puts "Error: Timeout"
        exit 1
    }
    eof {
        puts "Error: EOF"
        exit 1
    }
}

Set executable bit…

$ chmod 755 /home/minecraft/start_minecraft.sh

I suggest running the script once to make sure it works…

$ ./start_minecraft.sh

(CTRL+C to kill it).

Add cron job to autostart Minecraft server on reboot

$ crontab -e

Add the following entry to crontab…

@reboot sleep 10 && export TERM=xterm && /home/minecraft/start_minecraft.sh

Reboot the Ubuntu server.
Tail the log to make sure it starts…

$ sudo su minecraft
$ tail -F ~/minecraft.log

And now you can connect to the Minecraft server.

After moving the world and all the other files over, the whitelist just worked out of the box on our new Minecraft servers.

22182 Notes. Evernote to Apple Notes (Fail) to DEVONthink. https://b3n.org/22182-notes-evernote-to-apple-notes-to-devonthink/ https://b3n.org/22182-notes-evernote-to-apple-notes-to-devonthink/#comments Sat, 12 Aug 2023 16:11:09 +0000 https://b3n.org/?p=117060 I’m looking for a scan of a blue note, and I simply can’t find it in Evernote. Evernote versions 4 and 5 were the prime of Evernote. Kris and I used it extensively as our document management system, document knowledge management, and notetaking application. It was a great way to organize things. Every note, every ... Read more

The post 22182 Notes. Evernote to Apple Notes (Fail) to DEVONthink. appeared first on b3n.org.

I’m looking for a scan of a blue note, and I simply can’t find it in Evernote.

Evernote versions 4 and 5 were the prime of Evernote. Kris and I used it extensively as our document management system, document knowledge management, and notetaking application. It was a great way to organize things. Every note, every important piece of paper, we scanned into Evernote.

I’ve been a huge fan of Evernote, but over the last 6 years the product has become unusable. I suffered through it, but I really want to find my blue note. So, I moved everything to Apple Notes. Apple Notes couldn’t handle my 22182 notes without running out of storage, so I moved everything to DEVONthink (but left a few thousand with Apple Notes) and that seems to be working.

DEVONthink

If Evernote had simply stopped making their program worse, and instead focused on making version 4 or 5 robust, it would still be the most loved notetaking application today.

Application Tip #1. Don’t destroy yourself.

All the ways Evernote Destroyed Themselves since version 4 and 5

  1. Removed the Atlas feature. Now we can’t visually see where we created notes (I know where I scanned that blue note so if I had this feature I could zoom in on the location).
  2. Moving from native C# and Objective-C apps to Electron. Electron is the worst possible way to write a program. The one thing that made Evernote successful was that they released native desktop apps when everyone else was releasing webapps. They took their biggest competitive advantage and shredded it. I do not like Electron.

Application Tip #2. You may think rewriting your app in Electron is a good idea. You are wrong.

  1. Performance on desktop apps and mobile apps got significantly worse (because of Electron). 20,000 notes is not that big, and when I have to wait 3 minutes for the app to respond while trying to take a quick note on the go, the moment is gone and lost.
  2. Removed the thumbnail view–if I’m looking for that rectangle-shaped blue document I could easily find it via thumbnail. Snippets are not thumbnails.
  3. Capped exporting notes to 100 at a time (used to be unlimited).
  4. Can’t select more than 100 notes at once (used to be unlimited).

  5. Removed the local copy of the remote database. So… combined with the 100-note export cap, just how are we supposed to back up Evernote in under 600 clicks?
  6. Removed all of the OS integration. This prevents people from printing directly to Evernote or doing manipulation via AppleScript.
  7. Removed API access.

  8. Added a chat – why does a notetaking app need chat?
  9. Used RC4 (I am not making this up) for inside note “encryption”.
  10. Never implemented Encrypted Notebooks.
  11. Never allowed versioned notes to persist across notebooks.

  12. Removed offline cache. Evernote now takes forever to load.
  13. And finally… after making all those bad changes, I just got a notice that my subscription is increasing from $70/person/year to $170/person/year. We’re already pressured by inflation; it’s hard to justify paying a huge increase like that. Evernote argues they’ve added new features to justify the cost, but there have been zero new features that I want, and many of the features I did want have been taken away!

Evernote to Apple Notes Migration (Fail)

Apple Notes iOS

So, I attempted to migrate all our notes to Apple Notes. The new Evernote client only lets you export 100 notes at a time. I’m not going to manually create 220 export files, so I downloaded an old unsupported version of Evernote which lets you export everything at once. I created an export file per notebook and imported each into Apple Notes. The notes all imported perfectly, which surprised me. I imported about 4,000 notes per day and then let Apple Notes catch up.

After importing, my laptop and phone got hot from processing all of those notes. For each set of 4,000 notes, my laptop, which normally lasts several days, ran out of battery within a few hours, and I had to charge my phone (which normally lasts 2 days) multiple times per day. After a week or so, and a few reboots, Apple Notes finally settled down and it was looking good.

Application Tip #3. Not every single note needs to exist on the mobile device.

But I ran into one problem with my final batch of notes to import: Apple Notes is not storage efficient. My 30GB Evernote database became 60GB and my iPhone ran out of space. Apple Notes claimed it was taking up 418GB (I don’t think that’s right). My 64GB iPad had no hope at all, and my MacBook also started running out of space. I left my phone in this state for several days to see if it would clear… but it never could sync all the data.

Apple Notes out of space on iPhone

One of the big problems with Apple Notes is there is no way to exclude certain folders from syncing to your mobile devices. If I could have brought in a few thousand notes to the mobile devices, that would have been enough to have access on the phone and the rest would have been fine on the MacBook.

Rule #1 for Apple Notes. Always buy the devices with a lot of storage capacity.

I think Apple Notes could have handled my 22,000 notes if I had more storage. But I would have to upgrade 2 MacBooks, 2 iPads, and 2 iPhones to the 1TB models, and that starts to get a little expensive.

That said, it has good OCR, and you can scan a document using your phone. There is not a good way to scan from my ScanSnap into Apple Notes, but I could have automated it. There is also not really a good clipper.

DEVONthink

Ultimately I decided to leave only 2,000 notes in Apple Notes and move the rest to DEVONthink. DEVONthink Pro is $200 for macOS (a 2-device license, which is perfect for me and Kris) and $50 for the mobile app (family sharing). This is actually cheaper than Evernote, considering DEVONthink is a perpetual license. I expect to pay for a discounted upgrade every 5-7 years or so. It’s certainly going to be less than $170/person/year.

DEVONthink

I had run across DEVONthink nearly a decade ago, and on taking a second look I saw the product has improved significantly. I also like to support local businesses; DEVONthink is located in Coeur d’Alene, Idaho.

Import Process

DEVONthink

Application Tip #4. If people keep having to download the legacy version of your app to do basic things, like export their data, you are doing something wrong.

The import process is to install the Legacy version of Evernote and let it fully sync to your computer, then DEVONthink will simply bring in every note perfectly. Every piece of metadata is preserved. Geotagging, source url, created/modified dates, etc. I found no mistakes at all.

It took about half a day for DEVONthink to process all the new notes. It was indexing, running OCR, generating thumbnails, probably doing some AI stuff, etc. But once complete, it is fast.

A few notes on DEVONthink

  • DEVONthink Pro comes bundled with ABBYY FineReader’s OCR, so I set it up to automatically OCR every PDF that comes in.
  • Sync: supports iCloud (CloudKit), Dropbox, CloudMe, or WebDAV. I started with CloudKit but found you can’t share it with your wife, so I ended up setting up sync over WebDAV on our TrueNAS server.
  • Mobile devices can search the entire database (including OCRed documents) without pulling the whole thing down. You can sync the entire database if you want; I set mine to keep the last 100 opened notes on the device so it doesn’t take up much space.
  • For backups and versioning, the entire database is stored locally on macOS, so it is still backed up to iCloud and Time Machine. It’s also easy to back up via TrueNAS’s Cloud Sync tools and version using ZFS snapshots.
  • DEVONthink databases are actually available as a filesystem in Finder and indexed in Spotlight.
  • Tags on the MacOS filesystem are available in DEVONthink, and tags in DEVONthink are available to MacOS. So if you tag a note in DT it shows up in a finder search for that tag.
  • DEVONthink has its own database, but you can also add folders from the macOS filesystem (so files act like they’re in DEVONthink without moving them), which lets me access my filesystem from DEVONthink as well. This is actually incredibly convenient with AI.
  • AI Classification. Any new document coming in can be automatically filed. The AI learns your filing method and is pretty accurate. In fact, it files things better than I do because I sometimes forget about the folders. The AI Classification can also be used to file documents on the filesystem.
  • AI finds similar documents. Open any document and the DEVONthink AI finds similar or related documents (even if no keywords are shared). I’m impressed at how good it is.
  • Annotations not as good as Evernote. In Evernote, the PDF was always embedded in a note. You can create a new note and link it, but that’s a bit tedious. DEVONthink annotations work but are not as visible as Evernote notes.
  • No automatic versioning. DEVONthink has no document versioning, and certain actions cannot be undone. That said, since it’s all stored in Time Machine, you can get to previous versions, but it’s not as good as Evernote’s versioning, which is well integrated and easy to restore from.
  • It is fast compared to Evernote or Apple Notes. DEVONthink is instant when creating notes.
  • There’s no mobile document scan. You can take a photo with the camera, but that’s not very good for documents. You can do a scan from iOS Files and add that to DEVONthink, but that’s cumbersome compared to Evernote’s or Apple Notes’ document scan, which uses the camera and then processes the image as a document.
  • Multiple database support–so you can separate personal and work, or different major projects. I ended up creating one for me and Kris, and one for work, and one that just references my filesystem documents.
  • Automatic geotagging works perfectly. It also imported all the geotagging from Evernote, and now I have an Atlas view so I can zoom in on a location and see the notes I created there.
DEVONthink Atlas
  • The notes themselves are not as good as Apple Notes or Evernote. You can create a note, but some features I’d expect, like checklists, don’t exist in DEVONthink. Ultimately I decided to use Apple Notes for most notes (especially if they have action items) and DEVONthink for documents and more reference-type notes (no action items).
  • You can import Apple Notes into DEVONthink which makes it a great way to archive them.
  • Can import / archive emails from Apple Mail or Outlook
  • Extremely fast. I can load all 22,000 notes and scroll through them with no lag.

Application Tip #5. Always pre-generate all the thumbnails to allow for fast scrolling

  • Encryption. Data is encrypted in cloud storage: all data is encrypted with a key before being uploaded to the WebDAV server. Additionally, the DEVONthink databases can be encrypted locally with a key, so if for some reason you can’t enable FileVault and Advanced Data Protection, you can still fully encrypt the database.
  • Onboard PDF editing. It is so nice to be able to edit a PDF (rotate or flip a page, re-order pages, delete a blank page, etc.). This is a feature lacking in Evernote.
  • All of the features are local. There is no cloud service you are relying on. Even AI processing is all done using Apple’s onboard processors. It puts Evernote’s model of being entirely cloud-based with no encryption to shame.
  • One other drawback to DEVONthink is that it’s not really meant to be a multi-user application. You can share it with a handful of users… but if I had more than 5 users, I’d be looking at something else with better revisioning/recovery and per-note or per-folder access controls.
  • The Evernote clipper is bar-none the best web clipper. DEVONthink’s web clipper isn’t terrible, but it could be a lot better.
  • DEVONthink is fairly complex. I’d say it has a steeper learning curve than Evernote, but in the long-run it will save you time.

Overall it is much better than Evernote. But I’d like to see five features added:

  1. Add checklists to the notes
  2. A better way to do annotations or embed PDFs into notes like what Evernote does. Grouping things together is not the same.
  3. Add support for iCloud CloudKit database sharing with multiple users.
  4. Better Clipper.
  5. Better camera document scanner.

But those are fairly minor. I mean, just scrolling through the thumbnails (that Evernote took away) is like flipping through a file folder. I’m a visual person. I don’t always know what I named a note or what keywords to search for. But I remember the shape and color, so it’s nice to quickly scan through my notes visually.

And look! There’s the blue note I was looking for.

DEVONthink notes

Engineering WordPress for 10,000 Visitors per Second https://b3n.org/engineering-wordpress-for-10000-visitors-per-second/ https://b3n.org/engineering-wordpress-for-10000-visitors-per-second/#respond Sat, 17 Jun 2023 15:09:09 +0000 https://b3n.org/?p=114970 Here’s how I configured my WordPress server to handle huge traffic spikes. It’s easier than you think. For those who have seen smoke coming from your server in the garage, you know. But for those who haven’t, here’s a bit of history as I remember it: In the 1990s Slashdot used to link to interesting ... Read more

The post Engineering WordPress for 10,000 Visitors per Second appeared first on b3n.org.

Here’s how I configured my WordPress server to handle huge traffic spikes. It’s easier than you think.

For those who have seen smoke coming from your server in the garage, you know.

But for those who haven’t, here’s a bit of history as I remember it:

In the 1990s Slashdot used to link to interesting websites and blogs. If you wrote about using Linux as a NAT router for your entire University, assembled a huge aircraft carrier out of Legos, built a nuclear reactor in your basement, or made a roller coaster in your backyard; you’d end up on Slashdot.

Slashdotted. The problem is Slashdot became so popular and drove so much traffic to small websites, it started crashing sites just by linking to them! It was similar to a denial of service (DoS) attack. This is the “Slashdot Effect“.

Twenty years later, a number of sites have been known to generate enough traffic to take a site down. Drudge Report, Reddit, Twitter, etc. This is notoriously known as the “Internet Hug of Death“.

There are plenty of hosting providers that will charge $2,000/month to handle this kind of load. But I’m a bit thrifty; it’s simple and inexpensive to engineer for this kind of traffic.

Small traffic spike from Hacker News

Here are the four steps I took to handle traffic spikes:

Step 1. Get a fast physical server

Although I think step 4 alone would let you get away with a Pi, it doesn’t hurt to have a fast server. I have this site configured in a VM with 4 cores on a modern Xeon CPU and 8GB memory, which seems to be plenty, if not overkill. The physical host has 28 cores and 512GB memory, so I can vertically scale quite a bit if needed. Very little traffic actually hits the server because I use Cloudflare, but I like it to be able to handle the traffic just in case Cloudflare has problems. I run the server on Ubuntu 20.04 LTS under Proxmox.

Step 2. Use an efficient web server

I learned the hard way that Apache runs out of CPU and memory when handling large concurrent loads. NGINX or OpenLiteSpeed are much better at serving a large number of simultaneous requests. I use OpenLiteSpeed, because it integrates well with the LiteSpeed Cache WordPress plugin. I believe LiteSpeed Cache is the most comprehensive WordPress caching system that doesn’t cost a dime.

Step 3. Page caching.

Use a page cache like WP-Rocket, FlyingPress, or if you’re cheap like me, LiteSpeed Cache to reduce load. This turns dynamic pages generated from PHP into pre-generated static pages ready to be served.

Now, just these three steps are enough to handle a front-page hit on Reddit, Slashdot, Twitter or Hacker News. Such sites can send around 200 visitors per second (or about 12,000 visitors per minute at peak). But it’s better to overbuild than regret it later, which brings us to step 4…

A general rule of thumb is to overengineer websites by 100x.

Step 4. Edge Caching

A final layer to ensure survival is to use a proxy CDN such as Cloudflare, QUIC.cloud, or BunnyCDN. These take the load off your origin server and serve cached dynamic content from edge locations. I use Cloudflare. Cloudflare has so many locations that you’re within 50ms of most of the world’s population.

I configured Cloudflare to do page caching, and use Cache Rules instead of Page Rules following the CF Cache Rules Implementation Guide (I don’t use Cloudflare Super Cache, but their guide works fantastically with LiteSpeed Cache). This lets you cache dynamic content while making sure not to cache data for logged-in users.
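As a sketch of what such a rule can look like, a Cache Rule expression that makes everything cache-eligible except logged-in users and the admin area might be written with Cloudflare’s rule fields roughly like this (the hostname is this site’s; tune the cookie list to your own plugins):

```
(http.host eq "b3n.org") and not (
    http.cookie contains "wordpress_logged_in"
    or http.cookie contains "wp-postpass"
    or starts_with(http.request.uri.path, "/wp-admin")
)
```

with the rule’s cache eligibility set to “Eligible for cache.” Anything matching the cookie or path checks falls through to the origin uncached.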

A warning about CDNs: I’ve tried to use CDNs to optimize and host images in the past, but CDNs seem to have problems delivering images under heavy load. So I host images myself and use ShortPixel’s optimizer to pre-generate and store multiple optimized copies of each image. This seems more reliable for my scenario. Cloudflare still caches the images; they’re just not generated on the fly.

Cache Reserve. I enabled an additional layer, Cloudflare’s Cache Reserve, which saves items evicted from the edge cache into R2 storage as a third caching layer. This is pretty inexpensive–my monthly bill ends up being a little over $5 for this service–but it takes a huge load off my origin server.

As a result, I see a 95% cache hit ratio. And if it misses, it’s just going to hit the LiteSpeed cache, and if that misses, Memcached–so there’s minimal load on the server.

There are actually quite a number of caching layers in play here:

  1. The Browser cache (if the visitor has recently been to my site)
  2. Cloudflare T2 – nearest POP (Point of Presence) location to visitor
  3. Cloudflare T1 – nearest POP (Point of Presence) location to my server
  4. Cloudflare R2 – Reserve cache
  5. LS Cache – LiteSpeed cache on my server
  6. Memcached – Memcached (for database queries)
  7. ZFS L2 ARC – Level 2 Cache on SSDs on my server
  8. ZFS ARC – Level 1 Cache in memory on my server

If all those caching layers are empty, it goes to spinning rust.

Browser --> CF T2 --> CF T1 --> CF R2 --> origin --> LSCache --> Memcached --> ZFS L2ARC --> ZFS ARC --> spinning rust.

Load Testing

loader.io lets you test a website with a load of 10,000 clients per second. I tested my setup, and you can see the load was handled gracefully with a 15ms response time.

The egress rate is 10.93Gbps (ten times faster than my fiber connection).

I could probably optimize it a little more, but this already qualifies for 100x overengineered, and we’re way past the 80/20 rule. Good enough.
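If you want a rough command-line sanity check before running a big loader.io test, ApacheBench or hey can generate a modest load. A sketch (point it at your own site, ideally a cached URL; the numbers here are far gentler than a sustained 10,000 clients per second):

```shell
# 10,000 total requests, 100 concurrent, against the homepage
ab -n 10000 -c 100 https://example.com/

# or with hey: sustain 100 concurrent connections for 30 seconds
hey -z 30s -c 100 https://example.com/
```

Watch the failed-request count and the response-time percentiles; if those stay flat as you raise concurrency, the caching layers are doing their job.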

To handle a hug of death, you’ll want:

  1. Beefy Hardware
  2. Modern Webserver
  3. Page Caching
  4. Edge Caching

Ecclesiastes 1:7 ESV –
All streams run to the sea,
but the sea is not full;
to the place where the streams flow,
there they flow again.
