<![CDATA[Neil Panchal]]>https://neil.computer/https://neil.computer/favicon.pngNeil Panchalhttps://neil.computer/Ghost 5.130Sat, 14 Mar 2026 02:53:41 GMT60<![CDATA[How to Build a Minimal ZFS NAS without Synology, QNAP, TrueNAS]]>https://neil.computer/notes/how-to-setup-minimal-zfs-nas-without-truenas/696743775a67c705af8f3546Sat, 24 Aug 2024 06:10:12 GMTHow to Build a Minimal ZFS NAS without Synology, QNAP, TrueNAS
ZFS System by Oracle
How to Build a Minimal ZFS NAS without Synology, QNAP, TrueNAS

If you need a basic NAS and don't care about GUI features, it is surprisingly simple to set up a ZFS dataset and share it over the network using Samba.

Scope:

Scope & Requirements
Raid Level RAIDZ1 (1 Drive Redundancy)
Operating System Debian 12 Bookworm
Encryption None
ZFS Implementation OpenZFS, zfs-2.1.1
CPU 4 Cores, Xeon Server CPU can be had for cheap
RAM ECC RDIMM RAM 16 GB
Storage 4x4TB NVMe SSD
Backups Not covered, use ZFS Backup Scheduler
Skills Basic familiarity with Linux
Skill Level Beginner/Easy

I am using this article to document the process for my future self; feel free to adapt it to your needs. The problem with TrueNAS is that it is a full-featured, supposedly enterprise-grade software suite. While it may be simple to set up (I've never tried), I just don't need any of the bells and whistles it offers. It's a mismatch between what I need and what it offers, not something inherently wrong with TrueNAS. There is also something to be said for a system you know everything about, and for not having to rely on yet another thing.

ZFS's best feature that's never explained or written anywhere

A ZFS filesystem is self-contained. If your OS is suddenly nuked, simply take all the disks to another machine or install a new OS, install ZFS, run zpool import, and get your data back. This freedom is underrated and not well understood. It is also not explained anywhere.

It's worth emphasizing: all configuration and details about ZFS are stored on the disks themselves. If you've set up a RAIDZ2 (RAID 6) with 6 disks, those disks are self-contained. Move them to a new machine with the ZFS tools installed and simply run zpool import. Boom, they'll show up as RAIDZ2. This is an amazing property: no matter what happens to the host OS, the machine, etc., as long as the disks are not damaged, your data is fine.
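As a sketch of what that recovery looks like on a replacement machine (pool name taken from this article; the exact ZFS package name varies by distro):

```shell
# With the ZFS tools installed, scan all attached disks for importable pools.
# With no arguments, this only lists what it finds (e.g. "s16z1"), it imports nothing.
zpool import

# Import the pool by name; -f is only needed if the old host never ran `zpool export`.
zpool import -f s16z1

# The RAIDZ layout, datasets, mountpoints, and properties all come back as configured:
zpool status s16z1
zfs list -r s16z1
```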

Step 1. Locate and Organize Disks

List all disks on a Linux machine using the lsblk -d -o TRAN,NAME,TYPE,MODEL,SERIAL,SIZE command.

[root@sys ~]# lsblk -d -o TRAN,NAME,TYPE,MODEL,SERIAL,SIZE
TRAN   NAME     TYPE MODEL                        SERIAL           SIZE
       sda      disk Virtual disk                                   40G
nvme   nvme5n1  disk Samsung SSD 990 PRO 4TB      XXXXXXXXXXXXXXX  3.6T
nvme   nvme2n1  disk Samsung SSD 990 PRO 4TB      XXXXXXXXXXXXXXX  3.6T
nvme   nvme6n1  disk Samsung SSD 990 PRO 4TB      XXXXXXXXXXXXXXX  3.6T
nvme   nvme3n1  disk Samsung SSD 990 PRO 4TB      XXXXXXXXXXXXXXX  3.6T

These are brand new NVMe drives from Samsung so they should be completely unallocated.

The disks are also mapped to stable IDs; run ls -lh /dev/disk/by-id:

[root@sys ~]# ls -lh /dev/disk/by-id
total 0
lrwxrwxrwx. 1 root root 10 May 10 09:34 dm-name-rhel-root -> ../../dm-0
lrwxrwxrwx. 1 root root 10 May 10 09:34 dm-name-rhel-swap -> ../../dm-1
lrwxrwxrwx. 1 root root 10 May 10 09:34 lvm-pv-uuid-DGBpev-Na0C-tY20-YY6E-tpL3-epNA-8Ts3Y0 -> ../../sda3
lrwxrwxrwx. 1 root root 13 May 10 09:34 nvme-Samsung_SSD_990_PRO_4TB_XXXXXXXXXXXXXXX -> ../../nvme2n1
lrwxrwxrwx. 1 root root 13 May 10 09:34 nvme-Samsung_SSD_990_PRO_4TB_XXXXXXXXXXXXXXX -> ../../nvme5n1
lrwxrwxrwx. 1 root root 13 May 10 09:34 nvme-Samsung_SSD_990_PRO_4TB_XXXXXXXXXXXXXXX -> ../../nvme3n1
lrwxrwxrwx. 1 root root 13 May 10 09:34 nvme-Samsung_SSD_990_PRO_4TB_XXXXXXXXXXXXXXX -> ../../nvme6n1

Notice that they are symlinked to their disk names in /dev/.

We can create a /etc/zfs/vdev_id.conf that maps an alias to these IDs:

[root@sys ~]# vim /etc/zfs/vdev_id.conf

# Add these lines in the vdev_id.conf file
alias nvme0 /dev/disk/by-id/nvme-Samsung_SSD_990_PRO_4TB_XXXXXXXXXXXXXXX
alias nvme1 /dev/disk/by-id/nvme-Samsung_SSD_990_PRO_4TB_XXXXXXXXXXXXXXX
alias nvme2 /dev/disk/by-id/nvme-Samsung_SSD_990_PRO_4TB_XXXXXXXXXXXXXXX
alias nvme3 /dev/disk/by-id/nvme-Samsung_SSD_990_PRO_4TB_XXXXXXXXXXXXXXX

Run udevadm trigger to set the alias (or you can reboot the machine). We can verify that the aliases have been mapped by running ls -lh /dev/disk/by-vdev:

lrwxrwxrwx.  1 root root  13 May 10 10:28 nvme0 -> ../../nvme2n1
lrwxrwxrwx.  1 root root  13 May 10 10:28 nvme1 -> ../../nvme5n1
lrwxrwxrwx.  1 root root  13 May 10 10:28 nvme2 -> ../../nvme3n1
lrwxrwxrwx.  1 root root  13 May 10 10:28 nvme3 -> ../../nvme6n1

Alias mapping is completely optional; if you'd like, you can use the full ID of the disk, such as /dev/disk/by-id/nvme-eui.002538414143c248, when creating the zpool as we will do in the next section. Aliases just make it nicer. However, please don't use the kernel device names (/dev/nvme1, /dev/nvme2, ...) as their ordering is not guaranteed, especially if you mount a new drive in the system. Creating a vdev_id.conf ties the serial number of each drive to its alias.

Remember when we discussed that there is no configuration needed on the host OS? /etc/zfs/vdev_id.conf is not necessary and is only used for convenience when creating a zpool. If your OS gets nuked and you lose vdev_id.conf, it won't matter at all.

Step 2. Create ZPOOL

For this tutorial, I am creating a RAIDZ1 (RAID 5) zpool. That means one drive of redundancy in case of failure. It's up to you if you'd like additional redundancy; RAIDZ2 (RAID 6) would certainly be more resilient.

First, we need to install ZFS on the Linux machine. Please refer to the OpenZFS documentation on how to install it. It's usually as straightforward as, in my case, dnf install zfs on RHEL 9.

I recommend setting the ashift=12 option when creating the zpool, as this is your last chance to do so. Most disks report a 512-byte sector size to the OS for backwards-compatibility reasons, but modern NVMe drives such as the Samsung 990 Pro use a physical sector size of 4 KB or larger. ashift=12 corresponds to a 4 KB sector size, which can substantially improve performance.
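You can check what a drive actually reports before creating the pool; ashift is simply the base-2 logarithm of the sector size (the device name below is illustrative):

```shell
# Logical vs. physical sector size as reported by the kernel:
lsblk -d -o NAME,LOG-SEC,PHY-SEC /dev/nvme2n1

# ashift = log2(sector size): 4096-byte sectors -> ashift=12.
sector=4096
ashift=0
n=$sector
while [ "$n" -gt 1 ]; do
    n=$((n / 2))
    ashift=$((ashift + 1))
done
echo "ashift=$ashift"    # ashift=12 for 4 KB sectors
```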

[root@sys ~]# ls /dev/disk/by-vdev
nvme0 nvme1 nvme2 nvme3
[root@sys ~]# zpool create -o ashift=12 s16z1 raidz1 nvme0 nvme1 nvme2 nvme3
[root@sys ~]# zpool status s16z1
  pool: s16z1
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	s16z1       ONLINE       0     0     0
	  raidz1-0  ONLINE       0     0     0
	    nvme0   ONLINE       0     0     0
	    nvme1   ONLINE       0     0     0
	    nvme2   ONLINE       0     0     0
	    nvme3   ONLINE       0     0     0

errors: No known data errors
[root@sys ~]#

Perfect! I chose s16z1 as the name because it describes the size and the type of RAID. Most tutorials will use tank as the name, as it relates to pool. Corny; I reject this.

We're not done yet. A zpool is a disk abstraction, and zfs is the file system; when we ran zpool create, it also created a root ZFS file system along with the pool.

List all properties of the zfs file system by running: zfs get all s16z1. To check whether our zpool is properly configured with ashift=12, we can run:

[root@sys ~]# zdb | grep ashift

ashift: 12

Before we share the file system, configure compression (optional) as well as the default mount point.

[root@sys ~]# zfs set mountpoint=/mnt/s16z1 s16z1
[root@sys ~]# zfs set compression=lz4 s16z1

Next, let's create a couple of zfs datasets under s16z1 root dataset. We will share them using Samba in the next section.

[root@sys ~]# zfs create s16z1/docs
[root@sys ~]# zfs create s16z1/backups

docs for documents, backups for Time Machine backups. You can create as many datasets as you'd like; try to keep them at the top level. If you're wondering what the difference is between a regular filesystem folder and a dataset—a ZFS dataset is way more than just a folder. You can manage a zillion properties of a dataset, encrypt it, send and replicate it, take snapshots, etc.—essentially, the entire ZFS feature set. Therefore, it is a good idea to create individual datasets for large categories of your files. docs or backups is a good abstraction level for a dataset. If you want to send just docs to another remote server as a backup, you can do that without sending the whole s16z1 root dataset.
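The snapshot-and-send workflow mentioned above can be sketched like this (the remote hostname and target dataset are hypothetical):

```shell
# Timestamped snapshot of just the docs dataset; snapshots are cheap and instant.
snap="s16z1/docs@$(date +%Y-%m-%d)"
zfs snapshot "$snap"
zfs list -t snapshot -r s16z1/docs

# Replicate only docs to another box, without touching the rest of the pool.
# (-u on the receive side keeps the replica unmounted.)
zfs send "$snap" | ssh backup.example.com zfs recv -u pool2/docs
```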

We will discuss sharing s16z1/docs as a general-purpose share, as well as creating a proper Time Machine target (for Apple systems) by sharing s16z1/backups with special properties.

Step 3. Share disk on the network

We'll use Samba for this; the choice of file-sharing system is orthogonal to ZFS. ZFS doesn't care, as long as the dataset is mounted on the host system.

Install Samba:

[root@sys ~]# apt install samba

Create a UNIX user specifically for samba, we'll call it john:

[root@sys ~]# useradd -m john

and create a UNIX password for john:

[root@sys ~]# passwd john

Next, associate the UNIX user john with Samba by creating a Samba password for john. This password will then be used by hosts connecting to the share as SYS\john.

[root@sys ~]# smbpasswd -a john

john is also added to the Samba user group. You can verify john as an SMB user by running:

[root@sys ~]# pdbedit -L -v john

Unix username:        john
NT username:
Account Flags:        [U          ]
User SID:             S-1-5-21-880039843-1994218806-4034623300-1001
Primary Group SID:    S-1-5-21-880039843-1994218806-4034623300-513
Full Name:            john
Home Directory:       \\SYS\john
HomeDir Drive:
Logon Script:
Profile Path:         \\SYS\john\profile
Domain:               SYS
Account desc:
Workstations:
Munged dial:
Logon time:           0
Logoff time:          Wed, 06 Feb 2036 08:06:39 MST
Kickoff time:         Wed, 06 Feb 2036 08:06:39 MST
Password last set:    Fri, 23 Aug 2024 23:46:08 MST
Password can change:  Fri, 23 Aug 2024 23:46:08 MST
Password must change: never
Last bad password   : 0
Bad password count  : 0
Logon hours         : FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

To delete the john SMB user: pdbedit -x -u john

Now that the UNIX user and the Samba user are set up, configuring the Samba service is extremely straightforward.

Edit /etc/samba/smb.conf (delete everything and replace with the following):

[docs]
   path = /mnt/s16z1/docs
   browseable = yes
   read only = no
   guest ok = no
   valid users = john
   create mask = 0755
[backups]
   path = /mnt/s16z1/backups
   read only = no
   guest ok = no
   inherit acls = yes
   spotlight = yes
   fruit:aapl = yes
   fruit:time machine = yes
   vfs objects = catia fruit streams_xattr
   valid users = john
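
After editing the config, it's worth validating it and reloading the daemons so the new shares go live (the service names below are the Debian ones; they can differ by distro):

```shell
# testparm parses smb.conf and reports any syntax errors; -s skips the interactive prompt.
testparm -s

# Restart the daemons so the [docs] and [backups] shares are picked up.
systemctl restart smbd nmbd
```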

Test it out on macOS by mounting it with cmd+K in Finder app and using the following url format: smb://10.0.0.6/docs and smb://10.0.0.6/backups where 10.0.0.6 is the IP of the server sharing the SMB share.

To test it on Debian or similar systems, install smbclient (apt install smbclient) and run:

[root@sys ~]#  smbclient -U john //10.0.0.6/docs -c 'ls'

Mount smb://10.0.0.6/backups and it will show up as a Time Machine share on macOS. Once mounted, start Time Machine backups by adding the share in macOS > Settings > General > Time Machine.
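One optional safeguard: Time Machine will happily grow until the pool is full, so you may want to cap the backups dataset with a ZFS quota (the 1 TB figure is just an example):

```shell
zfs set quota=1T s16z1/backups   # cap the dataset at 1 TB
zfs get quota s16z1/backups      # verify the property took effect
```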

That's all for now. I plan to write another article expanding on encryption and the very powerful ZFS dataset replication features.

Cover image credit to Oracle ZFS systems. God, it is beautiful: https://docs.oracle.com/cd/E78901_01/html/E78910/gqtmb.html

]]>
<![CDATA[Chart of Accounts for Startups and SaaS Companies]]>


]]>
https://neil.computer/notes/chart-of-accounts-for-startups-and-saas-companies/696743775a67c705af8f3545Fri, 15 Sep 2023 23:55:59 GMTChart of Accounts for Startups and SaaS CompaniesChart of Accounts for Startups and SaaS Companies

Accounting is fundamental to starting a business. You need to have a basic understanding of accounting principles and essential bookkeeping. I had to learn it. There was no choice. For filing taxes, your CPA is going to ask you for an Income Statement (also known as P/L statement). If you're doing taxes by yourself, then the Income Statement serves as a crucial report for filing taxes.

There are many resources out there to get started with the bookkeeping process, but I had to spend a lot of time organizing a chart of accounts. You need to organize this upfront, as it will be a huge mess to change later.

I searched for a high-quality chart of accounts, but everything I found was either A) a disorganized mess, B) too broad, with 500+ accounts for every possible business, C) organized but without chart-of-account numbers, or D) meant for massive C-Corps. You won't be able to ChatGPT your way out of it, trust me.

Account numbers are a must. I am generally obsessed with naming things right, and it has always served me well. When you're communicating with your CPA, you can refer to accounts precisely by number.

Here is mine. It's meant for a software-based startup with a single-member LLC (a passthrough tax entity). If you have more than one member in the LLC, just duplicate the "Owner's" accounts for each member and rename "Owner" to "Member". If it is an S-Corp, change "Owner" to "Shareholders". There are a few differences in terminology (e.g. Distributions vs. Draw), but you'll figure that out later. Accounts can be renamed at any time; just make sure the structure and type of each account is correct. I use it in QuickBooks Desktop (not sold anymore), but there is no reason it can't be used in any accounting software.

Account # Account Description Type
1010 Stripe Account Bank
1020 Checking Account #1 Bank
1030 Checking Account #2 Bank
1040 Savings Account #1 Bank
1100 Clearing Account Bank
1200 Inventory Asset Other Current Asset
1410 Brokerage Account #1 Other Current Asset
1411 Cash Other Current Asset
1412 Investments Other Current Asset
1413 Unrealized Gains and Losses Other Current Asset
1490 ACH and Wire Clearing Other Current Asset
1500 Computers and Equipment Fixed Asset
1510 Computers Fixed Asset
1520 Computers Acc Depreciation Fixed Asset
1600 Furniture and Fixtures Fixed Asset
1610 Furniture Fixed Asset
1620 Furniture Acc Depreciation Fixed Asset
2000 Accounts Payable Accounts Payable
2100 Credit Card #1 Credit Card
2600 Due to Owner Short Term Liability
2610 Expense Reimbursement Short Term Liability
3000 Opening Balance Equity Equity
3100 Owner's Capital Equity
3110 Owner's Investments Equity
3120 Owner's Asset Contributions Equity
3200 Members Equity Equity
3300 Owner's Distributions Equity
3310 Owner's Withdrawals Equity
3320 Personal Expenses Equity
3400 Unrealized Gains and Losses Equity
4000 Revenue Income
4010 Non-Recurring Sales Income
4020 Subscription Sales Income
4030 Refunds and Allowances Income
5000 Cost of Goods Sold Cost of Goods Sold
5500 Payment Processing Expense
5510 Transaction Fees Expense
5520 Transaction Fee Allowances Expense
5530 Payment Services Expense
6000 Hosting and Cloud Computing Expense
6010 Datacenter Colocation Expense
6020 Cloud Services Expense
6030 Website and Domains Expense
6100 Software and Subscriptions Expense
6110 Software Applications Expense
6120 Software Subscriptions Expense
6130 Memberships Expense
6200 Office Supplies and Equipment Expense
6210 Stationery Expense
6220 Beverages and Canteen Expense
6230 Shipping and Postage Expense
6240 Office Electronics Expense
6250 Small Office Equipment Expense
6300 Education and Training Expense
6310 Books Expense
6320 Media Subscriptions Expense
6330 Courses and Seminars Expense
6400 Utilities Expense
6410 Internet Service Expense
6420 Telephone Expense
6430 Mechanical and Electrical Expense
6500 Maintenance and Repair Expense
6510 Appliance Maintenance Expense
6600 Meals Expense
6610 Meals with Clients Expense
6700 Travel Expense
6710 Transportation Expense
6720 Lodging Expense
6800 Automobile Expense
6810 Mileage Reimbursement Expense
6820 Repairs and Maintenance Expense
6900 Legal and Professional Expense
6910 Taxes and Accounting Expense
6920 LLC Operations and Management Expense
7000 Advertising and Marketing Expense
7010 Social Media Campaigns Expense
7020 Website Advertisement Expense
7030 Search Advertisement Expense
7200 Research and Development Expense
7210 Materials and Supplies Expense
7600 Bank Service Charges Expense
7610 Credit Card Allowances Expense
7620 Bank Fees Expense
7700 State Taxes Expense
7800 Depreciation Expense
7810 Furniture Depreciation Expense
7820 Computer Depreciation Expense
7900 Rent and Lease Expense
7910 Office Rent Expense
8000 Non-Operating Income Other Income
8010 Interest Income Other Income
8020 Dividend Income Other Income
8030 Other Non-Operating Income Other Income
9000 Ask My Accountant Other Expense

Feel free to use it in your company as long as the following conditions are met:

In short: use it as you please for your company, no need to attribute me, don't sue me and don't sell this chart of accounts.

  • Grant of License: The licensor hereby grants you a perpetual, worldwide, non-exclusive, no-charge, and royalty-free license to use the Chart of Accounts template for any purpose, including for commercial purposes and within a company for bookkeeping, subject to the conditions herein.
  • No Warranty & No Liability: The Chart of Accounts template is provided "as is" without any warranties of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability or fitness for a particular purpose.
  • Restrictions: You are expressly prohibited from selling the Chart of Accounts template, or any derivative thereof, as a standalone product. Redistribution of the Chart of Accounts in its original or altered form is not permitted unless it's a part of a larger system or product where the Chart of Accounts is not the primary value.

Copyright 2023, Neil Panchal.

]]>
<![CDATA[Eternal Robustness]]>https://neil.computer/notes/eternal-robustness/696743775a67c705af8f3544Sun, 10 Sep 2023 21:34:29 GMTEternal RobustnessEternal Robustness

Delaware law requires that any company established in the State of Delaware (LLC, S-Corp, or C-Corp) must have a Registered Agent physically located in Delaware. For a yearly fee, hiring such a Registered Agent is straightforward. They provide a physical mailing address, officially act as the company's agent, and forward any official notices.

While searching for such a service, I came across this gem:

We guarantee your annual Registered Agent Fee will remain fixed at $50 per company, per year, for the life of your company. All you need to do is pay on time each year. For discounted pricing, you have the option of paying for two years in advance for $90 or three years in advance for $125. Our Registered Agent Fee has remained unchanged since 1981.

This soothes the soul.

There is something so peaceful about permanency and eternal robustness, combined with a historical track record. This marketing copy signifies: "Look, you can trust us. We can be depended upon. We are serious about our products and we will serve you with excellence year after year. Your involvement with us will be predictable and stable."

I wish more software and design people would embrace this philosophy. This is an undervalued value proposition.

]]>
<![CDATA[Favorite Perfumes]]>

]]>
https://neil.computer/notes/favorite-perfumes/696743775a67c705af8f3543Fri, 18 Aug 2023 08:14:57 GMTA quick condensed list based on over 15 years of exploration in the world of perfumery. I didn't include obvious classics such as Polo Green and Aramis. If I were to pick 3 perfumes to wear for life, it would be: Hermes Bel Ami, Chanel Antaeus and Serge Lutens Chergui. Absolutely insane what's possible with perfumery.

I'll add more as I discover them. For now:

House            Perfume                          Type/Category     Year
Caron            Yatagan                          Woody Chypre      1976
Cartier          Pasha de Cartier Édition Noire   Woody Aromatic    2013
Cartier          Santos de Cartier                Oriental Woody    1981
Chanel           Antaeus                          Woody Chypre      1981
Chanel           Coromandel                       Oriental          2007
Frédéric Malle   Musc Ravageur                    Oriental Spicy    2000
Guerlain         Jicky                            Fougère Aromatic  1889
Guerlain         Vetiver                          Woody Earthy      1961
Guerlain         Habit Rouge                      Oriental Woody    1965
Guerlain         Heritage                         Oriental Woody    1992
Guerlain         Shalimar                         Oriental          1925
Hermes           Equipage                         Woody Chypre      1970
Hermes           Equipage Geranium                Chypre Floral     2015
Hermes           Rocabar                          Woody Spicy       1998
Hermes           Bel Ami                          Leather           1986
Hermes           Bel Ami Vetiver                  Woody Chypre      2013
Knize            Knize Ten                        Leather           1924
Serge Lutens     Muscs Koublai Khan               Oriental Musky    1998
Serge Lutens     Chergui                          Oriental Woody    2005
Serge Lutens     Santal Majuscule                 Oriental Woody    2012
Serge Lutens     Fumerie Turque                   Oriental          2003
Serge Lutens     Fille En Aiguilles               Oriental Spicy    2009
Serge Lutens     Daim Blond                       Leather           2004
Tom Ford         Tobacco Oud                      Oriental Woody    2013
Tom Ford         Tuscan Leather                   Leather           2007
Truefitt & Hill  Grafton                          Woody Spicy       1983
]]>
<![CDATA[How to install PostgreSQL in a custom directory]]>

]]>
https://neil.computer/notes/how-to-install-postgresql-in-a-custom-directory/696743775a67c705af8f3541Sat, 08 Oct 2022 02:13:34 GMTHow to install PostgreSQL in a custom directoryHow to install PostgreSQL in a custom directory

When you install postgresql using apt-get, it runs initdb and automatically creates a main cluster. Typically, the default data directory location is in /var/lib/postgresql/<version>/<cluster>/.

There are three ways to install PostgreSQL in a custom directory. Options 1 and 2 are not ideal.

  1. Install as default in /var/lib/postgresql/<version>/<cluster> and then move it somewhere else, updating the location of the data directory in /etc/postgresql/<version>/<cluster>/postgresql.conf. This is not ideal because PostgreSQL ensures that the data directory is only accessible by the postgres Linux user when a cluster is created; you can misconfigure permissions by moving the data directory yourself.

  2. Run initdb again. However, that's not quite possible unless you manually wipe out directories; the initdb command will complain that there is already a main cluster located in /var/lib/postgresql/<version>/<cluster>.

Proper way

The proper way to go about this is to prevent PostgreSQL from running initdb until you're ready to create a cluster.

  1. On a fresh system (without PostgreSQL), install the postgresql-common package first. Do not install postgresql yet.
sudo apt install postgresql-common
  2. Edit /etc/postgresql-common/createcluster.conf and configure a custom data directory as follows:
# Default values for pg_createcluster(8)
# Occurrences of '%v' are replaced by the major version number,
# and '%c' by the cluster name. Use '%%' for a literal '%'.

# Create a "main" cluster when a new postgresql-x.y server package is installed
create_main_cluster = true

# Default start.conf value, must be one of "auto", "manual", and "disabled".
# See pg_createcluster(8) for more documentation.
#start_conf = 'auto'

# Default data directory.
data_directory = '/my/awesome/location/postgresql/%v/%c'
  3. Install PostgreSQL:
sudo apt install postgresql

You should be all set. New database files will be created under /my/awesome/location/.
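You can confirm the cluster actually landed in the custom directory (the path shown matches the example configuration above):

```shell
# pg_lsclusters (shipped with postgresql-common) lists each cluster and its data directory.
pg_lsclusters

# Or ask the running server directly:
sudo -u postgres psql -c 'SHOW data_directory;'
```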

Unrelated, but make sure to create a password for the postgres superuser as follows:

sudo -u postgres psql postgres

psql (14.5 (Debian 14.5-1.pgdg110+1))
Type "help" for help.

postgres=# \password
Enter new password for user "postgres":

The end.

Image source: https://www.wwf.org.uk/learn/wildlife/african-elephants

]]>
<![CDATA[Neofetch – Server Administration System]]>

]]>
https://neil.computer/notes/neofetch/696743775a67c705af8f3540Tue, 04 Oct 2022 06:26:48 GMT

I ssh into a lot of machines. Dozens of times a day. I need a quick way to see the health of the system as soon as I log in and with zero friction. I am obliged to use the one and only tool: neofetch. Neofetch is a pillar of r/unixporn. It is the Altair of aesthetics, the expression of unix swaggery, the very manifestation of what it means to be a unix fan. It is a perfect tool for my ultimate nix flex: the Berkeley Graphics – Server Administration System (SAS).

Neofetch – Server Administration System
Berkeley Graphics - Server Administration System

As you can see, we have critical system information presented to the terminal user at a glance. Fitted with a fine typeface and a bright industrial-grade color scheme, the SAS offers a bespoke experience of ssh-ing into a machine. Of particular importance is the Memory and Disk usage, greatly enhanced by the bar graph representation for immediate visual capture.

If you'd like to emulate the Berkeley Graphics SAS, please find the attached neofetch configuration file for your perusal. It goes in ~/.config/neofetch/config.conf. For the complete visualization extravaganza, the Berkeley Mono Typeface can be purchased from our storefront. While it is decorative in nature, it does function as a great way to see what's going on on the remote server. The only downside is that neofetch is a little slow and adds approximately 220 ms to login.

Sometimes, it is fine to take a break; decorate your world, systemize your frequent interactions and organize your tools. It is a sign of ardent craftsmanship, we all do it and that's good. I hope I can leave you with some inspiration.

PS – SAS is a bit tongue-in-cheek; next I need to write an article about my fascination with hyperformalization on this Thought Organization and Publication System (TOPS), aka my blog.

]]>
<![CDATA[Dear Spotify. Can we just get a table of songs?]]>

]]>
https://neil.computer/notes/dear-spotify-can-we-just-get-table-of-songs/696743775a67c705af8f353dSat, 04 Jun 2022 08:09:48 GMT

Dear Spotify. I tried to search for podcasts on your Desktop app. I know you're into fancy cross-platform Electron framework. I've come to terms with it. It's fine. It'll do. But, your understanding of interface design seems like it needs a bit of a history lesson. Back in iTunes Good Ol' Days™ edition, we'd get our search results in a list like this:

Dear Spotify. Can we just get a table of songs?
iTunes Good Ol' Days™ Edition

Stick something in the search bar. Get results in a list. Life is good. Clear, concise, dense, and most importantly: FUNCTIONAL. It works.

But, when I try to do nothing more than search in your shiny app, I get this:

Dear Spotify. Can we just get a table of songs?
Spotify UI/UX search

Hmm. What the fuck is this!? Why are you trying to be edgy? Ok fine, I'll scroll down to the bottom and click "Episodes". Looking for some Gene Kranz podcasts.

Dear Spotify. Can we just get a table of songs?
Spotify UX/UI Episodes

Album grid. A giant wall of podcast cover art. If there is one piece of information about a podcast that is the least useful, it is the cover art. If there is one piece that is the most useful, it is the title of the podcast and the name of the podcaster; both of those appear to be either cut off or not visible. There is no other way to go through 18 pages of podcast search results. No list view. No compact view. This is all you get. I don't do memes on my blog, but if I did, this post would be full of them. That's the feeling right now.

There are so many terrible UX/UI patterns everywhere in these tech companies: Apple, Google, Spotify, Netflix, Microsoft, Amazon, etc. I wish someone would pay me a yearly salary just to write about them. I would. I want to go yell at these companies. Hire me and hand me the entire UX/UI department. It would be called the "Department of Functional Interfaces".

And, I'm not even mad. Just desensitized by your assault, Spotify. Your buddy, Apple Music, sucks too.

Foobar2000? Where are you? Do you still have that tattoo on your arm that screams "FUNCTIONALISM"?

]]>
<![CDATA[Dear JetBrains. Don't mess with your UI.]]>

]]>
https://neil.computer/notes/dear-jetbrains-dont-mess-with-your-ui/696743775a67c705af8f353cFri, 03 Jun 2022 06:29:58 GMT

So we have yet another UI overhaul. This time, bringing consumer-grade UI practices to the world of professionals.

Announcement:
https://blog.jetbrains.com/idea/2022/05/take-part-in-the-new-ui-preview-for-your-jetbrains-ide/

Dear JetBrains. Don't mess with your UI.
New Jet Brains interface

We're professionals. We can handle the complexity and the density of the current UI. You made a contract with us about what the UI would be like for the foreseeable future. A user interface is like an API for your eyeballs: your vision system expects things to be in certain places. The way we revise APIs, we ought to do the same with UIs. The cost of changing anything, even if it is objectively better through whatever A/B testing you've done, also has to account for the breakage of the contract built up over years of daily usage. The bar needs to be really high.

So, what have we got here? Oh, the usual. Honest to god, I was playing bingo the moment I saw the headline on Lobsters. It is going to be something along these lines:

  • More negative space. I've yet to see any modern UI changes that go in the opposite direction. Literally, everything just got fatter. More padding and margin across the board.
  • Removal of color from icons. Yup, iconography and the designers that used to work on them are long extinct. Everything is open source so no one has the time and effort to work on icons anymore. We continue to use terrible simplistic icons everywhere. They're always made in a hurry using line segments to create a rough outline of the icon. I don't blame the designers one bit, it is just the situation. Designing icons is a full time job.
  • Rounding of corners. Religion at this point.
  • Homogenization. JetBrains wants to be like VSCode. Being different is not cool anymore. If we look at interface design across the board between 2010 and today, everything has become homogenized. The largest, and most powerful companies inject their consumer-grade ad-tech taste into everything and anything.
  • Animations. Haven't seen them but it will be filled with animations. The whole interface is going to feel like turbulent molasses. Mark my words.

A double whammy. Not only are they breaking the UI, but they're also doing it for all the wrong reasons. A bit of spanking is needed in this space.

Goddamn it, I love PyCharm. What now? vim + billion plugins?

]]>
<![CDATA[Berkeley Mono Font Variant Popularity]]>

]]>
https://neil.computer/notes/berkeley-mono-font-variant-popularity/696743775a67c705af8f3539Sat, 05 Mar 2022 21:32:55 GMTBerkeley Mono Font Variant Popularity
Berkeley Mono Typeface
Berkeley Mono Font Variant Popularity

Absolutely thrilled by the response after the Hacker News launch of the Berkeley Mono Typeface. I am truly humbled by the feedback and by the people who take a keen interest in typography.

Berkeley Mono Font Variant Popularity
Berkeley Mono Font Variants

Quick update on people asking what's the most popular variant. Seems like Zero (Dot) and Seven (Regular) are winning! I personally use the most popular combination.

Berkeley Mono Font Variant Popularity
Grafana Statistics for Berkeley Mono Font Variants

Check out Berkeley Mono Typeface if you haven't, it is now publicly available.

]]>
<![CDATA[Berkeley Mono February Update]]>Hey Gang! First of all, thank you to everyone that participated in the Beta program. Feedback is very much appreciated.

Here is a quick update on the progress:

Website - Berkeleygraphics.com

It is already up: https://berkeleygraphics.com but does not have the Berkeley Mono pages yet.

Stack is

]]>
https://neil.computer/notes/berkeley-mono-february-update/696743775a67c705af8f3536Sun, 06 Feb 2022 04:58:00 GMT

Hey Gang! First of all, thank you to everyone that participated in the Beta program. Feedback is very much appreciated.

Here is a quick update on the progress:

Website - Berkeleygraphics.com

It is already up: https://berkeleygraphics.com but does not have the Berkeley Mono pages yet.

Stack is extremely simple:

  • Python Flask
  • Docker
  • AWS S3
  • Completely server-side (No JS at all)
  • Payments through Stripe
  • Sentry (self-hosted) for alerts
  • AWS Cloudwatch for logs
  • Debian Bullseye 11 (self-hosted server)
  • Cloudflare + Cloudflare Analytics
  • No cookies

I had fun designing this little download module (only visible after you purchase Berkeley Mono). It allows you to customize the font files for particular stylistic-set defaults. Basically, there is no need to change anything in your application; the font files will be prebuilt to your liking:

Berkeley Mono February Update
Berkeley Mono - Customer Account - Download Fonts

Berkeley Mono Specimens

This is the marketing bit. It's fun but also extremely time-consuming:

Berkeley Mono February Update
Berkeley Mono Type Specimen
Berkeley Mono February Update
Berkeley Mono Type Specimen
Berkeley Mono February Update
Berkeley Mono Type Specimen
Berkeley Mono February Update
Berkeley Mono Type Specimen

Italics

I am thrilled to announce absolutely classic 16º obliques. I think an Oblique works better for code than true Italics. Also, in the context of Berkeley Mono, it is difficult to design flamboyant Italics that remain sufficiently legible, straightforward, and code-friendly. Personally, I am not a fan of Operator Mono from Hoefler & Co., although many people seem to like that sort of thing.

Berkeley Mono February Update
Berkeley Mono Oblique

Designing proper obliques takes a long time because it is essentially designing a new typeface from scratch (almost). It is not a matter of slanting glyphs and calling it a day. Here is the most basic example; it gets quite involved for complicated glyphs such as f or R:

Berkeley Mono February Update
Hand-tuning of Obliques

With a simple shear transformation, the corners get disproportionately heavy. This needs to be hand-tuned with a combination of shear + rotation (clockwise) + elbow grease.
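
A numerical sketch of that idea (a toy model only; real glyph correction is done by eye, not by formula): treating a glyph as a list of (x, y) outline points, a naive 16º oblique is a horizontal shear, and the correction described above layers a small clockwise rotation on top before the hand-tuning starts.

```python
import math

def oblique(points, shear_deg=16.0, rotate_deg=0.0):
    """Slant outline points. A pure shear leaves corners disproportionately
    heavy, so a small clockwise rotation is layered on before hand-tuning."""
    shear = math.tan(math.radians(shear_deg))
    rot = math.radians(-rotate_deg)  # negative angle = clockwise
    cos_r, sin_r = math.cos(rot), math.sin(rot)
    out = []
    for x, y in points:
        sx = x + y * shear            # horizontal shear
        rx = sx * cos_r - y * sin_r   # then rotate about the origin
        ry = sx * sin_r + y * cos_r
        out.append((rx, ry))
    return out

stem = [(0.0, 0.0), (0.0, 700.0)]  # hypothetical 700-unit vertical stem
# With shear only, the top of the stem shifts right by 700 * tan(16º) ≈ 201 units
print(oblique(stem))
```

The elbow grease in the text is exactly what this toy model cannot capture: after the affine transform, individual points still have to be nudged by hand.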

Why 16 degrees? It is basically the classic oblique angle of Univers. Modern Univers from Linotype (now Monotype) uses a modest 12º italic angle. I think that's too boring. Not 70's and 80's enough. More importantly, you want to easily distinguish between Roman and Italics. If they're too similar, it misses the point entirely.

Broad Language support

Berkeley Mono will have full support for the following European languages:

  • Albanian
  • Croatian
  • Czech
  • Danish
  • Dutch
  • English
  • Estonian
  • Finnish
  • Georgian
  • German
  • Icelandic
  • Irish
  • Italian
  • Latvian
  • Lithuanian
  • Norwegian (Bokmål + Nynorsk)
  • Polish
  • Portuguese
  • Romanian
  • Romansh
  • Serbian
  • Slovak
  • Slovenian
  • Spanish
  • Swedish
  • Turkish
  • Welsh

As well as the romanized Asian languages:

  • Filipino
  • Indonesian

Berkeley Mono complies with the following ISO 8859 standards:

  • ISO 8859-1 Latin-1 Western European
  • ISO 8859-2 Latin-2 Central European
  • ISO 8859-3 Latin-3 South European
  • ISO 8859-4 Latin-4 North European
  • ISO 8859-9 Latin-5 Turkish
  • ISO 8859-10 Latin-6 Nordic
  • ISO 8859-13 Latin-7 Baltic Rim
  • ISO 8859-15 Latin-9 Finnish, Estonian
  • ISO 8859-16 Latin-10 South-Eastern European

Basic Latin Support

Berkeley Mono February Update
Berkeley Mono - Basic Latin Set

Western European Support

Berkeley Mono February Update
Berkeley Mono - Western European Set

Central European Set

Berkeley Mono February Update
Berkeley Mono - Central European Set

South-Eastern European Set

Berkeley Mono February Update
Berkeley Mono - South-Eastern European Set

I am also planning for Greek language support but it seems to have stalled for now due to other activities:

Berkeley Mono February Update
Berkeley Mono - Greek Language Support

That's all for now. Stay tuned!

The end.

]]>
<![CDATA[Bell Labs Org Chart]]>

I've always been curious about the story of Bell Labs – how it was formed, why it was successful, its challenges and struggles, innovation engine, people, its organizational structure, operations, and its legacy. The Idea Factory by Jon Gertner is an excellent albeit slightly inaccurate summary of what,

]]>
https://neil.computer/notes/bell-labs-org-chart/696743775a67c705af8f3535Sun, 23 Jan 2022 07:27:47 GMT

I've always been curious about the story of Bell Labs – how it was formed, why it was successful, its challenges and struggles, innovation engine, people, its organizational structure, operations, and its legacy. The Idea Factory by Jon Gertner is an excellent albeit slightly inaccurate summary of what, how, and why of Bell Labs. After finishing the book, I came across a curious listing on eBay. Without hesitation, I instantly purchased the Bell Labs Pictorial Directory of the Design Engineering Center (Dept 375) published on May 1, 1980. The cover design by itself is eye-catching with the good ol' Ma Bell logo on the top-right corner.

This pictorial directory gives an abstract glimpse into the operations of Bell Labs' "Idea Factory" simply through the titles of the departments and how they were organized. A few of these departments and teams are obsolete (e.g. Drafting group), nevertheless, there is much to learn/drool about.

I was curious – how does Big Tech research compare to Bell Labs? I can't name a single thing from Google X, Amazon Lab126, Apple, or the Microsoft Research departments. Perhaps a faint idea of drone deliveries, balloon weather projects, AR/VR tech, failed chat apps, quantum something, and a lot of PR. Listing Bell Labs' major inventions really puts things into perspective: Laser, Solar-cells, Communications Satellites, Touch-tone telephones, Transistor, UNIX, C language, Digital Signal Processing (DSP), Cellular Telephones, Data Networking, Charge-coupled device (CCD), Information Theory, Television, Sound motion pictures, and a total of 8 Nobel Prizes in Physics.

Bell Labs was massive and operational for multiple decades, but it wouldn't hurt for current Big Tech leadership to look back at that history and contemplate hard what's going on with current research. I think it would be difficult to ever replicate Bell Labs. Needless to say, I would kill to work at Bell Labs in its heyday, probably in the Precision Graphics and Computer Applications Group, as I wear these shiny rose-tinted glasses. I want to work with all these super cool people of a bygone era.

Here is the entire Design Engineering Center org-chart. Mind you, this is just Dept 375. I don't know how many other departments existed at Bell Labs. It was just huge:

Original scans:

Bell Labs Pictorial Directory

Download as a PDF.

The end.


Edit 1: The Big Bang theory was actually not discovered at Bell Labs, but its discovery was facilitated by Bell Labs. Thank you to Ray Osborne for the correction.

]]>
<![CDATA[Teaching how to code is broken]]>

Typically:

  • Chapter 1: Types
  • Chapter 2: Variables
  • Chapter 3: Operators/Math
  • Chapter 4: Control structures
  • Chapter 5: Arrays
  • Chapter 6: Functions
  • Chapter 7: Structs
  • Chapter 8: Classes and Objects
  • Chapter 9: Methods
  • Chapter 10: Inheritance and Polymorphism
  • Chapter 11: Some advanced thing X
  • Chapter 12: Some esoteric thing Y
  • Chapter
]]>
https://neil.computer/notes/teaching-how-to-code-is-broken/696743775a67c705af8f3534Wed, 12 Jan 2022 06:50:50 GMTTeaching how to code is brokenTeaching how to code is broken

Typically:

  • Chapter 1: Types
  • Chapter 2: Variables
  • Chapter 3: Operators/Math
  • Chapter 4: Control structures
  • Chapter 5: Arrays
  • Chapter 6: Functions
  • Chapter 7: Structs
  • Chapter 8: Classes and Objects
  • Chapter 9: Methods
  • Chapter 10: Inheritance and Polymorphism
  • Chapter 11: Some advanced thing X
  • Chapter 12: Some esoteric thing Y
  • Chapter 13: No one reaches the end so let's introduce concurrency here

This is fine for a reference book for an experienced professional. It is a terrible format for tutorials or any guidance material that intends to teach people how to code, specifically beginners.

Why? Because none of these chapters answer the most important question a reader has the entire time: WHY!? Why is all this important, and what problems does it solve? When should I use this thing that I learned? Imagine if aliens landed on Earth and we taught them everything about the shape of a fork, its material, how it's made, typical ways of holding it, various shapes of forks, its history, and its etymology, but never told them that a fork is used to pick up food and stick it in the mouth.

Teaching how to code should be about problem-solving and effectively using the tools of a programming language to solve it. Take, for example, modeling a card game. Even better if the example is continuously improved, starting from the very basics with hard-coded values, and then increasing the scope as follows:

  • Chapter 1: Cards (Constants, literals - hard code all cards)
  • Chapter 2: Suits (String concatenation, Int vs literal string)
  • Chapter 3: Ordered Ranks (Arrays, Variables to store each suit)
  • Chapter 4: Deck (Multidimensional arrays)
  • Chapter 5: Shuffling (Loops, Control flow, Member access, Assignment)
  • Chapter 6: Reusability of Shuffling algorithm (Functions, Pass by Value/Reference)
  • Chapter 7: Modeling a card game (I/O, Cards as Objects, Classes/Structs)
  • Chapter 8: Modeling a complex card game (Abstraction, Inheritance, Polymorphic behavior, Overriding methods, etc.)
  • Chapter 9: Multiplayer card game (Sockets and Networking, Server-Client communication, Concurrency)
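
As an illustration of that progression (the names and code are my own sketch, not from any particular textbook), the first few chapters might condense to something like this:

```python
import random

SUITS = ["Spades", "Hearts", "Diamonds", "Clubs"]   # Chapter 2: suits as data
RANKS = ["2", "3", "4", "5", "6", "7", "8", "9",
         "10", "J", "Q", "K", "A"]                  # Chapter 3: ordered ranks

def build_deck():
    """Chapter 4: replace 52 hard-coded literals with nested loops."""
    return [f"{rank} of {suit}" for suit in SUITS for rank in RANKS]

def shuffle(deck, rng=random):
    """Chapter 5/6: Fisher-Yates written out by hand, then made reusable
    as a function instead of relying on random.shuffle."""
    deck = list(deck)  # pass-by-value vs. pass-by-reference discussion
    for i in range(len(deck) - 1, 0, -1):
        j = rng.randrange(i + 1)
        deck[i], deck[j] = deck[j], deck[i]
    return deck

deck = build_deck()
print(len(deck))      # 52
print(shuffle(deck)[:3])
```

Each chapter's concept shows up because the card game demands it, which is the whole point: the WHY arrives before the mechanism.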

There are very few resources out there that truly embrace this type of teaching – e.g. the absolutely brilliant Nature of Code for beginners. Another one for experienced software engineers is Architecture Patterns with Python (models an e-commerce business from scratch).

I've found that modeling or simulating something real, like a card game, is very effective. It teaches students how to translate a real-world situation/problem/thing/phenomenon into code. It teaches them about the assumptions and limitations of the model they've built. It also teaches them how to interrogate requirements imposed by the real world through simplification and abstraction, or by simply rejecting some aspects of it as "too complex, costly, and impractical to model with no real benefit or a clear use case" :-).


Image credit: https://commons.wikimedia.org/wiki/File:Croneberg_and_Stokoe.png Deafhistory101, CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0, via Wikimedia Commons

]]>
<![CDATA[ZFS RAIDZ2 - Achieving 157 GB/s]]>

Update: See Note 5 below. 157 GB/s is a misleading bandwidth figure due to the way fio handles the --filename option. The actual bandwidth is approximately 22 GB/s, which is still mighty impressive.


I built a new server that's also going to serve as a NAS. It

]]>
https://neil.computer/notes/zfs-raidz2/696743775a67c705af8f352fMon, 13 Dec 2021 03:31:34 GMT

Update: See Note 5 below. 157 GB/s is a misleading bandwidth figure due to the way fio handles the --filename option. The actual bandwidth is approximately 22 GB/s, which is still mighty impressive.


I built a new server that's also going to serve as a NAS. It consists of 8 NVMe drives from Samsung (970 Evo 2TB) with a total capacity of 10.2 TB in RAIDZ2 configuration (2 drive redundancy).

Server specs:

Neil's Lab Server Specifications
CPU Model Intel® Xeon® Gold 6326, 16 Cores (32 Threads), 2.90 GHz (Base), 3.50 GHz (Turbo)
CPU Cooler Noctua NH-U12S DX-4189
Motherboard Supermicro X12SPi
RAM Samsung 6x16GB (96 GB) DDR4-3200 RDIMM ECC PC4-25600R Dual Rank
NIC (On board) Intel X550 2x 10G Base-T
NIC (PCIe) Supermicro AOC-SGP-I4 4x 1GbE
OS NVMe 2x1TB(2TB) Samsung 970 Pro
OS NVMe Carrier Supermicro AOC-SLG3-2M2 PCIe x8
NAS NVMe 8x2TB(16TB) Samsung 970 Evo
NAS NVMe Carrier 2x Quad Gigabyte GC-4XM2G4 PCIe x16
Power Supply EVGA 750 Watt 210-GQ-0750-V1
Chassis NZXT H510i Flow

PCIe Bifurcation

I am using 2x Gigabyte GC-4XM2G4 PCIe M.2 carrier cards; each can hold up to 4 NVMe drives. These cards use the native 4x4x4x4 bifurcation mode in the Supermicro BIOS. That means they're presented to the OS as individual 4-lane PCIe devices, unlike with HBA or RAID cards. All 32 PCIe lanes are hooked up directly to the CPU I/O, without having to go through the south bridge chipset.

PCIe Bifurcation M.2 Carrier Cards
PCIe Bifurcation M.2 Carrier Cards

Build a ZFS RAID on an Ubuntu 20.04 server:

All NVMe drives are in PCIe Passthrough mode on VMWare ESXi:

VMWare ESXi PCIe Passthrough Configuration
VMWare ESXi PCIe Passthrough Configuration

Install ZFS utils:

sudo apt install zfsutils-linux

Check if it is installed:

neil@ubuntu:~$ lsmod | grep zfs
zfs                  4034560  6
zunicode              331776  1 zfs
zlua                  147456  1 zfs
zavl                   16384  1 zfs
icp                   303104  1 zfs
zcommon                90112  2 zfs,icp
znvpair                81920  2 zfs,zcommon
spl                   126976  5 zfs,icp,znvpair,zcommon,zavl

Check physical disks:

neil@ubuntu:~$ lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                       7:0    0 70.3M  1 loop /snap/lxd/21029
loop1                       7:1    0 55.4M  1 loop /snap/core18/2128
loop2                       7:2    0 32.3M  1 loop /snap/snapd/12704
sda                         8:0    0  100G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    1G  0 part /boot
└─sda3                      8:3    0   99G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0   99G  0 lvm  /
nvme0n1                   259:0    0  1.8T  0 disk
nvme5n1                   259:1    0  1.8T  0 disk
nvme4n1                   259:2    0  1.8T  0 disk
nvme2n1                   259:3    0  1.8T  0 disk
nvme1n1                   259:4    0  1.8T  0 disk
nvme6n1                   259:5    0  1.8T  0 disk
nvme3n1                   259:6    0  1.8T  0 disk
nvme7n1                   259:7    0  1.8T  0 disk

Optionally, check for bad blocks on the disk (I skipped this step since it takes a long time):

sudo badblocks -b 512 -sw /dev/nvme7n1
Testing with pattern 0xaa: 3.68% done, 0:57 elapsed. (0/0/0 errors)

Locate disks by ID:

neil@ubuntu:~$ ls -lh /dev/disk/by-id
lrwxrwxrwx 1 root root 10 Dec  6 04:45 dm-name-ubuntu--vg-ubuntu--lv -> ../../dm-0
lrwxrwxrwx 1 root root 10 Dec  6 04:45 dm-uuid-LVM-Iw3mFxuF9uwlCMJ0yucrWHZwG82z1I6uX0tK6D7CT0Yfb4GXANWiCSjy3E4BoNos -> ../../dm-0
lrwxrwxrwx 1 root root 10 Dec  6 04:45 lvm-pv-uuid-In8DDs-U2jd-TdgQ-BZM3-iACR-HPF3-PT61BX -> ../../sda3
lrwxrwxrwx 1 root root 13 Dec  6 04:45 nvme-eui.00253858119138c7 -> ../../nvme1n1
lrwxrwxrwx 1 root root 13 Dec  6 05:12 nvme-eui.00253858119138ca -> ../../nvme7n1
lrwxrwxrwx 1 root root 13 Dec  6 04:45 nvme-eui.0025385811913a89 -> ../../nvme5n1
lrwxrwxrwx 1 root root 13 Dec  6 04:45 nvme-eui.002538581191b1f1 -> ../../nvme3n1
lrwxrwxrwx 1 root root 13 Dec  6 04:45 nvme-eui.002538581191b32b -> ../../nvme4n1
lrwxrwxrwx 1 root root 13 Dec  6 04:45 nvme-eui.002538581191c362 -> ../../nvme6n1
lrwxrwxrwx 1 root root 13 Dec  6 04:45 nvme-eui.002538581191c369 -> ../../nvme0n1
lrwxrwxrwx 1 root root 13 Dec  6 04:45 nvme-eui.002538581191c472 -> ../../nvme2n1

Create a file /etc/zfs/vdev_id.conf and add the following aliases:

alias nvme0 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_2TB_S59CNM0R849603J
alias nvme1 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_2TB_S59CNM0R835621F
alias nvme2 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_2TB_S59CNM0R849868J
alias nvme3 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_2TB_S59CNM0R846665V
alias nvme4 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_2TB_S59CNM0R846979M
alias nvme5 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_2TB_S59CNM0R836071F
alias nvme6 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_2TB_S59CNM0R849596Y
alias nvme7 /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_2TB_S59CNM0R835624D

Run sudo udevadm trigger or simply reboot the machine. The aliases that we created will now show up under /dev/disk/by-vdev.

neil@ubuntu:~$ ls -lh /dev/disk/by-vdev
lrwxrwxrwx 1 root root 13 Dec  6 05:35 nvme0 -> ../../nvme0n1
lrwxrwxrwx 1 root root 13 Dec  6 05:35 nvme1 -> ../../nvme1n1
lrwxrwxrwx 1 root root 13 Dec  6 05:35 nvme2 -> ../../nvme2n1
lrwxrwxrwx 1 root root 13 Dec  6 05:35 nvme3 -> ../../nvme3n1
lrwxrwxrwx 1 root root 13 Dec  6 05:35 nvme4 -> ../../nvme4n1
lrwxrwxrwx 1 root root 13 Dec  6 05:35 nvme5 -> ../../nvme5n1
lrwxrwxrwx 1 root root 13 Dec  6 05:35 nvme6 -> ../../nvme6n1
lrwxrwxrwx 1 root root 13 Dec  6 05:35 nvme7 -> ../../nvme7n1

Create a zpool.

neil@ubuntu:/dev/disk/by-vdev$ ls
nvme0  nvme1  nvme2  nvme3  nvme4  nvme5  nvme6  nvme7
neil@ubuntu:/dev/disk/by-vdev$ sudo zpool create tank raidz2 nvme0 nvme1 nvme2 nvme3 nvme4 nvme5 nvme6 nvme7
neil@ubuntu:/dev/disk/by-vdev$ zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  raidz2-0  ONLINE       0     0     0
	    nvme0   ONLINE       0     0     0
	    nvme1   ONLINE       0     0     0
	    nvme2   ONLINE       0     0     0
	    nvme3   ONLINE       0     0     0
	    nvme4   ONLINE       0     0     0
	    nvme5   ONLINE       0     0     0
	    nvme6   ONLINE       0     0     0
	    nvme7   ONLINE       0     0     0

errors: No known data errors
neil@ubuntu:/dev/disk/by-vdev$

For detailed status, run zpool list -v:

neil@ubuntu:/dev/disk/by-id$ zpool list -v
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank       14.5T   274K  14.5T        -         -     0%     0%  1.00x    ONLINE  -
  raidz2   14.5T   274K  14.5T        -         -     0%  0.00%      -  ONLINE
    nvme0      -      -      -        -         -      -      -      -  ONLINE
    nvme1      -      -      -        -         -      -      -      -  ONLINE
    nvme2      -      -      -        -         -      -      -      -  ONLINE
    nvme3      -      -      -        -         -      -      -      -  ONLINE
    nvme4      -      -      -        -         -      -      -      -  ONLINE
    nvme5      -      -      -        -         -      -      -      -  ONLINE
    nvme6      -      -      -        -         -      -      -      -  ONLINE
    nvme7      -      -      -        -         -      -      -      -  ONLINE

Great. Now we have a zpool.


Create a ZFS filesystem:

Now that we have a zpool named tank, we can create a file system, enable lz4 compression and mount it for performance testing:

neil@ubuntu:/dev/disk/by-vdev$ sudo zfs create tank/fs
neil@ubuntu:/dev/disk/by-vdev$ zfs list
NAME      USED  AVAIL     REFER  MOUNTPOINT
tank      229K  10.5T     47.1K  /tank
tank/fs  47.1K  10.5T     47.1K  /tank/fs
neil@ubuntu:/dev/disk/by-vdev$ sudo zfs set compression=lz4 tank/fs
neil@ubuntu:/dev/disk/by-vdev$ sudo zfs set mountpoint=/home/neil/mnt/disk tank

Excellent. We should have a file system fs located at ~/mnt/disk/fs.


Performance testing using FIO:

FIO testing suite is quite nifty for doing all sorts of I/O testing.

Install FIO tools:

sudo apt install fio

And run the benchmark with block size of 512k and 28 workers:

sudo fio --name=read_test \
        --filename=/home/neil/mnt/disk/dummy \
        --filesize=20G \
        --ioengine=libaio \
        --direct=1 \
        --sync=1 \
        --bs=512k \
        --iodepth=1 \
        --rw=read \
        --numjobs=28 \
        --group_reporting

157 GB/s is insane (sequential reads). That's with lz4 compression and a RAIDZ2 configuration! It is trivial to start a sharing service such as nfs, afp, or smb on Ubuntu. No need for FreeNAS or TrueNAS.

FIO disk benchmark - 157 GB/s Sequential Reads
FIO disk benchmark - 157 GB/s Sequential Reads
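
As an example of how little is needed for the smb route, a minimal Samba share of the pool might look like this (the share name, path, and user here are assumptions for illustration, not from this setup):

```ini
# /etc/samba/smb.conf -- append a share section; names are hypothetical
[tank]
   path = /tank/fs
   read only = no
   browseable = yes
   valid users = neil
```

Then set a Samba password with sudo smbpasswd -a neil and restart the service with sudo systemctl restart smbd.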

I don't think we're close to saturating the PCIe bandwidth, but using more vCPUs doesn't help with the bandwidth, so it might be due to the limitations of the NVMe controllers.

Hopefully, this NAS is going to last a decade or so. ZFS is very portable: even if I lose the VMWare ESXi image or something else goes wrong with the system, I can pull out the PCIe cards housing the NVMe drives, insert them into another system, and it will work.

By no means is this a comprehensive disk test. I fiddled around with many different knobs: numjobs, blocksize, direct, sync, etc. Most tests were around 70-100 GB/s sustained sequential reads. I suspect some of the speed comes from caching in RAM, which would explain the absurdity of 157 GB/s. Well, it's even more impressive that ZFS can do that! YMMV.

I'd love to do a more comprehensive test but this thing needs to be permanently mounted and I need to transfer data over from the god-awful[1] QNAP NAS. It is making grumpy SATA noises.


Thanks to Will Yager and Steve Ruff for help during this adventure!

[1] Not because of the performance, but because of how bloated it really is. It literally takes 10 solid minutes to boot. With an attack surface so large (all kinds of apps, docker stuff, plugins, the QNAP-cloud thingy, etc.), it is completely ridiculous and I can't wait to get rid of it.

Note 1

I am not sure; I think some of it is cached in RAM. I did a longer sustained test and I am getting about 70 GB/s with a 200 GB file, which is definitely larger than the 64 GB of RAM. Is it to do with lz4 compression? Fetching smaller blocks from disk and then decompressing them, thus inflating the bandwidth?

The initial file was created with fio rw=randwrite, so the dummy data is random. I verified with an ext4 fs on a single NVMe drive, and no matter what knobs I turn, I am getting around 3.5 GB/s.

Note 2

I just did another 100 GB file test with fio (full dump follows), getting about 80 GB/s. This test was run fresh after a system reboot and clearing the drive (no cache). How does this work?

neil@ubuntu:~/mnt/disk/fs$ sudo fio --name=read_test         --filename=/home/neil/mnt/disk/fs/dummy         --filesize=100G         --ioengine=libaio         --direct=1         --sync=1         --bs=512k         --iodepth=1         --rw=read         --numjobs=28         --group_reporting
read_test: (g=0): rw=read, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=1
...
fio-3.16
Starting 28 processes
read_test: Laying out IO file (1 file / 102400MiB)
Jobs: 28 (f=28): [R(28)][100.0%][r=78.0GiB/s][r=160k IOPS][eta 00m:00s]
read_test: (groupid=0, jobs=28): err= 0: pid=405516: Mon Dec 13 06:00:25 2021
  read: IOPS=152k, BW=74.4GiB/s (79.8GB/s)(2800GiB/37653msec)
    slat (usec): min=20, max=107934, avg=180.88, stdev=1535.31
    clat (nsec): min=248, max=9018.5k, avg=1114.70, stdev=9262.53
     lat (usec): min=20, max=107938, avg=182.57, stdev=1535.47
    clat percentiles (nsec):
     |  1.00th=[   334],  5.00th=[   422], 10.00th=[   494], 20.00th=[   620],
     | 30.00th=[   708], 40.00th=[   796], 50.00th=[   892], 60.00th=[   996],
     | 70.00th=[  1112], 80.00th=[  1272], 90.00th=[  1512], 95.00th=[  1768],
     | 99.00th=[  2800], 99.50th=[  8640], 99.90th=[ 32128], 99.95th=[ 47872],
     | 99.99th=[116224]
   bw (  MiB/s): min=46896, max=93840, per=99.96%, avg=76117.26, stdev=463.15, samples=2100
   iops        : min=93792, max=187680, avg=152233.31, stdev=926.29, samples=2100
  lat (nsec)   : 250=0.01%, 500=10.55%, 750=23.97%, 1000=25.87%
  lat (usec)   : 2=36.82%, 4=2.02%, 10=0.35%, 20=0.24%, 50=0.14%
  lat (usec)   : 100=0.03%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%
  cpu          : usr=1.59%, sys=54.53%, ctx=3119239, majf=0, minf=3916
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=5734400,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=74.4GiB/s (79.8GB/s), 74.4GiB/s-74.4GiB/s (79.8GB/s-79.8GB/s), io=2800GiB (3006GB), run=37653-37653msec
neil@ubuntu:~/mnt/disk/fs$

Note 3

Running the entire VM with 2 GB RAM reveals the story. It's due to RAM, but I still don't understand it. With 2 GB RAM, we get ~20 GB/s, which is close to the combined bandwidth of all drives (RAIDZ2 would be 6x 3.5 GB/s, or close to ~21 GB/s). It still doesn't explain how we're able to reach an absurd 80 GB/s on a long sustained test.
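
That back-of-the-envelope arithmetic can be written down explicitly (a trivial sketch; the ~3.5 GB/s per-drive figure is taken from the single-drive ext4 test mentioned in Note 1):

```python
def raidz_seq_read_bw(n_disks, parity, per_disk_gbps):
    """Rough ceiling for RAIDZ sequential reads: only the data disks
    contribute user-data bandwidth; the parity disks are overhead."""
    return (n_disks - parity) * per_disk_gbps

# 8 drives in RAIDZ2 (2 parity) at ~3.5 GB/s each:
print(raidz_seq_read_bw(8, 2, 3.5))  # 21.0
```

Anything sustained well above this ceiling has to be coming from somewhere other than the drives, which is what points at RAM caching.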

neil@ubuntu:~/mnt/disk/fs$ sudo fio --name=read_test         --filename=/home/neil/mnt/disk/fs/dummy         --filesize=100G         --ioengine=libaio         --direct=1         --sync=1         --bs=512k         --iodepth=1         --rw=read         --numjobs=28         --group_reporting
[sudo] password for neil:
read_test: (g=0): rw=read, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=1
...
fio-3.16
Starting 28 processes
^Cbs: 28 (f=28): [R(28)][22.6%][r=17.5GiB/s][r=35.9k IOPS][eta 02m:55s]
fio: terminating on signal 2

read_test: (groupid=0, jobs=28): err= 0: pid=4712: Mon Dec 13 06:11:57 2021
  read: IOPS=41.2k, BW=20.1GiB/s (21.6GB/s)(1042GiB/51742msec)
    slat (usec): min=22, max=115406, avg=674.35, stdev=1331.96
    clat (nsec): min=296, max=7427.7k, avg=1850.79, stdev=14919.02
     lat (usec): min=23, max=115409, avg=677.17, stdev=1332.58
    clat percentiles (nsec):
     |  1.00th=[   548],  5.00th=[   676], 10.00th=[   772], 20.00th=[   916],
     | 30.00th=[  1032], 40.00th=[  1176], 50.00th=[  1320], 60.00th=[  1496],
     | 70.00th=[  1688], 80.00th=[  1944], 90.00th=[  2320], 95.00th=[  2672],
     | 99.00th=[  7584], 99.50th=[ 14656], 99.90th=[ 88576], 99.95th=[144384],
     | 99.99th=[301056]
   bw (  MiB/s): min=11908, max=59509, per=100.00%, avg=20627.88, stdev=256.00, samples=2884
   iops        : min=23815, max=119018, avg=41255.16, stdev=512.00, samples=2884
  lat (nsec)   : 500=0.49%, 750=8.39%, 1000=18.05%
  lat (usec)   : 2=55.11%, 4=16.39%, 10=0.82%, 20=0.35%, 50=0.21%
  lat (usec)   : 100=0.10%, 250=0.07%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%
  cpu          : usr=0.66%, sys=25.64%, ctx=4108148, majf=0, minf=3923
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2133831,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=20.1GiB/s (21.6GB/s), 20.1GiB/s-20.1GiB/s (21.6GB/s-21.6GB/s), io=1042GiB (1119GB), run=51742-51742msec
neil@ubuntu:~/mnt/disk/fs$

Note 4

Turning lz4 compression off didn't make much difference.

Last login: Mon Dec 13 06:09:18 2021 from 10.0.0.42
neil@ubuntu:~$ sudo zfs set compression=off tank/fs
[sudo] password for neil:
neil@ubuntu:~$ sudo fio --name=read_test         --filename=/home/neil/mnt/disk/fs/dummy         --filesize=100G         --ioengine=libaio         --direct=1         --sync=1         --bs=512k         --iodepth=1         --rw=read         --numjobs=28         --group_reporting
read_test: (g=0): rw=read, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=1
...
fio-3.16
Starting 28 processes
Jobs: 28 (f=28): [R(28)][100.0%][r=78.4GiB/s][r=161k IOPS][eta 00m:00s]
read_test: (groupid=0, jobs=28): err= 0: pid=4520: Mon Dec 13 06:29:25 2021
  read: IOPS=156k, BW=76.2GiB/s (81.8GB/s)(2800GiB/36751msec)
    slat (usec): min=19, max=105788, avg=176.71, stdev=1535.16
    clat (nsec): min=247, max=16074k, avg=993.84, stdev=11978.23
     lat (usec): min=20, max=105793, avg=178.24, stdev=1535.33
    clat percentiles (nsec):
     |  1.00th=[   330],  5.00th=[   398], 10.00th=[   462], 20.00th=[   564],
     | 30.00th=[   644], 40.00th=[   716], 50.00th=[   788], 60.00th=[   884],
     | 70.00th=[   988], 80.00th=[  1128], 90.00th=[  1352], 95.00th=[  1576],
     | 99.00th=[  2256], 99.50th=[  4448], 99.90th=[ 27776], 99.95th=[ 47872],
     | 99.99th=[128512]
   bw (  MiB/s): min=20186, max=96350, per=100.00%, avg=78083.60, stdev=532.92, samples=2044
   iops        : min=40373, max=192700, avg=156166.03, stdev=1065.84, samples=2044
  lat (nsec)   : 250=0.01%, 500=13.73%, 750=31.22%, 1000=25.80%
  lat (usec)   : 2=27.66%, 4=1.06%, 10=0.22%, 20=0.16%, 50=0.09%
  lat (usec)   : 100=0.03%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%
  cpu          : usr=1.54%, sys=54.57%, ctx=2605775, majf=0, minf=3927
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=5734400,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=76.2GiB/s (81.8GB/s), 76.2GiB/s-76.2GiB/s (81.8GB/s-81.8GB/s), io=2800GiB (3006GB), run=36751-36751msec
neil@ubuntu:~$

Note 5

I was able to get to the bottom of this madness. 157 GB/s (or even 70 GB/s) is a misleading number. Essentially, fio expects a separate file for each job, joined with the : symbol in the --filename argument; with a single shared file, the jobs end up reading cached data and the reported bandwidth is inflated.

I created separate files for each job and ran the test with 4, 8, and 16 jobs. It looks like it gets worse after 8 threads.
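
Typing out that colon-joined --filename argument by hand is error-prone; a tiny helper (the paths mirror the dummy files used here) can generate it:

```python
def fio_filenames(prefix, n_jobs):
    """Build the ':'-separated per-job file list fio expects in --filename."""
    return ":".join(f"{prefix}{i}" for i in range(n_jobs))

# One file per job, e.g. dummy0 .. dummy15 for --numjobs=16:
print(fio_filenames("/home/neil/mnt/disk/dummy", 16))
```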

neil@ubuntu:~/mnt/disk$ ls
  dummy0  dummy10  dummy12  dummy14  dummy2  dummy4  dummy6  dummy8
  dummy1  dummy11  dummy13  dummy15  dummy3  dummy5  dummy7  dummy9
  neil@ubuntu:~/mnt/disk$ sudo fio --name=read_test \
  >         --filename=/home/neil/mnt/disk/dummy0:/home/neil/mnt/disk/dummy1:/home/neil/mnt/disk/dummy2:/home/neil/mnt/disk/dummy3:/home/neil/mnt/disk/dummy4:/home/neil/mnt/disk/dummy5:/home/neil/mnt/disk/dummy6:/home/neil/mnt/disk/dummy7:/home/neil/mnt/disk/dummy8:/home/neil/mnt/disk/dummy9:/home/neil/mnt/disk/dummy10:/home/neil/mnt/disk/dummy11:/home/neil/mnt/disk/dummy12:/home/neil/mnt/disk/dummy13:/home/neil/mnt/disk/dummy14:/home/neil/mnt/disk/dummy15 \
  >         --filesize=20G \
  >         --ioengine=libaio \
  >         --direct=1 \
  >         --sync=1 \
  >         --bs=512k \
  >         --iodepth=1 \
  >         --rw=read \
  >         --numjobs=16 \
  >         --group_reporting

This results in a modest ~15 GB/s sequential read speed. With --numjobs=8, I get around ~22 GB/s, which is quite believable. Modest, but still mighty impressive! Thanks to the folks on HN and Lobste.rs for chiming in.
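As an aside, the long ':'-joined --filename argument above doesn't need to be typed by hand. A small shell loop can build it; this is just a sketch, with the DIR path and the dummy0..dummy15 names taken from the listing above:

```shell
# Build the ':'-separated per-job file list for fio's --filename argument.
# DIR matches the mount point used in the test above.
DIR=/home/neil/mnt/disk
FILES=""
for i in $(seq 0 15); do
  # Append a ':' separator only when FILES is already non-empty.
  FILES="${FILES:+$FILES:}$DIR/dummy$i"
done
printf '%s\n' "$FILES"
```

The result can then be passed directly as --filename="$FILES".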


Discuss on HN → Discuss on Lobste.rs →]]>
<![CDATA[Berkeley Mono December Update]]>It's been a while since I had announced the release of Berkeley Mono. The more I examine it under scrutinous eyes, the more I find that there is more work to be done. At one point, I had the following masters going through the final tuning:

Berkeley Mono Masters
Berkeley Mono
]]>
https://neil.computer/notes/berkeley-mono-december-update/696743775a67c705af8f352eSat, 11 Dec 2021 23:23:29 GMTIt's been a while since I had announced the release of Berkeley Mono. The more I examine it under scrutinous eyes, the more I find that there is more work to be done. At one point, I had the following masters going through the final tuning:

Berkeley Mono Masters
Berkeley Mono Masters

One evening, I was reading about the history of TrueType fonts on Wikipedia, and Andale Mono captured my attention. The fitting in Andale Mono was unlike any other monospaced font.

It would be interesting to capture some of the glyph widths (which dictate fitting in monospaced typefaces) in Berkeley Mono.

Andale Mono vs. Berkeley Mono widths
Andale Mono vs. Berkeley Mono widths

Modifying the widths of Berkeley Mono's uppercase glyphs to follow Andale Mono's widths (with some exceptions, notably due to the open apertures of the 'C' and 'S' glyphs), we get the following result.

Before adjustment:

Before width adjustment
Before width adjustment

After adjustment:

After width adjustment
After width adjustment

I think this experiment failed, but it is hard to tell. I'll mess with it a little more and see what happens. Typography is a frustrating endeavor because of decisions of this sort: there is no right or wrong answer, and any attempt at objectivity fails horribly. It is very much a 'trust your eyes and instinct, everything else is wrong' craft.

Perhaps the silver lining is the satisfaction of using the typeface: knowing that I built the 'translation layer' between abstract data (words and symbols) and its two-dimensional representation – combining centuries of hieroglyphic standards (the written English language) with aesthetic, functional, and engineering requirements – is kinda cool. It very much feels like an art-school equivalent of developing a layer in the TCP/IP network stack. What a strange thing to say! With spoken language, I am thinking: abstract idea from person 1 -> preprocessing (frontal cortex) -> language, i.e. compression (fusiform gyrus? I don't know) -> electrical signals (facial muscles, voice box) -> transmission over the air, modulated by pressure levels -> decoding (hearing) and decompression (comprehension) -> abstract idea to person 2 (with some loss of fidelity). With typography, we are just swapping out the physical layer from sound to light.

I should really get back to work.

]]>
<![CDATA[Bare Metal vs. Virtualization Performance]]>I just built a new homelab server. Specs are as follows:

Neil's Lab Server Specifications
CPU Model Intel® Xeon® Gold 6326, 16 Cores (32 Threads), 2.90 GHz (Base), 3.50 GHz (Turbo)
CPU Cooler Noctua NH-U12S DX-4189
Motherboard Supermicro X12SPi
RAM Samsung 6x16GB (96 GB)
]]>
https://neil.computer/notes/bare-metal-vs-virtualization-performance/696743775a67c705af8f352dTue, 07 Dec 2021 06:50:18 GMTI just built a new homelab server. Specs are as follows:

Neil's Lab Server Specifications
CPU Model Intel® Xeon® Gold 6326, 16 Cores (32 Threads), 2.90 GHz (Base), 3.50 GHz (Turbo)
CPU Cooler Noctua NH-U12S DX-4189
Motherboard Supermicro X12SPi
RAM Samsung 6x16GB (96 GB) DDR4-3200 RDIMM ECC PC4-25600R Dual Rank
NIC (On board) Intel X550 2x 10G Base-T
NIC (PCIe) Supermicro AOC-SGP-I4 4x 1GbE
OS NVMe 2x1TB(2TB) Samsung 970 Pro
OS NVMe Carrier Supermicro AOC-SLG3-2M2 PCIe x8
NAS NVMe 8x2TB(16TB) Samsung 970 Evo
NAS NVMe Carrier 2x Quad Gigabyte GC-4XM2G4 PCIe x16
Power Supply EVGA 750 Watt 210-GQ-0750-V1
Chassis NZXT H510i Flow

I was trying to find the performance difference between running under an ESXi (or Proxmox) hypervisor and running on bare metal. A Google search yields garbage results. I've used VMWare Workstation Pro as well as VMWare Fusion before, but both of those are Type 2 hypervisors.

Type 1 Hypervisor:
    [Guest OS] [Guest OS]
    [ Type 1 Hypervisor ]
    [      Hardware     ]

Type 2 Hypervisor:
    [Guest OS] [Guest OS]
    [ Type 2 Hypervisor ]
    [      Host OS      ]
    [      Hardware     ]

So naturally, I was curious: what is the performance impact of running a virtualized OS on a Type 1 hypervisor? The most popular options appear to be:

  • VMWare ESXi (vSphere Hypervisor)
  • Microsoft Hyper-V
  • Xen
  • Proxmox

I was able to get a VMWare vSphere 7.0 Enterprise Plus license from UC Berkeley, so I went with VMWare ESXi. vSphere comes with a bazillion packages to manage a datacenter, the hypervisor being one of the pieces.

As for benchmarking, I just used Geekbench 5.4.3 for Linux to simplify things. Ubuntu Server 20.04 was installed both on bare metal and as a virtual machine on a VMFS filesystem, both running on Samsung 970 Pro 1TB NVMe drives.

VMWare ESXi - New Virtual Machine Wizard
VMWare ESXi - New Virtual Machine Wizard

Geekbench5 results are as follows:

Geekbench5 - Bare Metal (Baseline) vs. VM
Geekbench5 - Bare Metal (Baseline) vs. VM

I am quite pleased with these results. Giving up 5% of bare-metal performance for the flexibility of running multiple VMs with cool features such as snapshots is definitely a no-brainer for me. ESXi is rock solid and very much an enterprise-grade piece of software. I'll compare Proxmox when I get a chance.

For completeness, the full Geekbench results can be accessed here:

Bogus Ops test (stress-ng)

Using the stress-ng utility, the results are even more impressive:

*********************
SINGLE CORE BOGUS OPS
*********************

VM :~$ sudo stress-ng --matrix 1 -t 60s --metrics-brief
 dispatching hogs: 1 matrix
 successful run completed in 60.00s (1 min, 0.00 secs)
 stressor       bogo ops real time  usr time  sys time   bogo ops/s   bogo ops/s
                           (secs)    (secs)    (secs)   (real time) (usr+sys time)
 matrix           237270     60.00     60.00      0.00      3954.50      3954.50

BARE METAL :~$ sudo stress-ng --matrix 1 -t 60s --metrics-brief
 dispatching hogs: 1 matrix
 successful run completed in 60.00s (1 min, 0.00 secs)
 stressor       bogo ops real time  usr time  sys time   bogo ops/s   bogo ops/s
                           (secs)    (secs)    (secs)   (real time) (usr+sys time)
 matrix           240077     60.00     59.99      0.00      4001.28      4001.95

*********************
MULTI CORE BOGUS OPS
*********************
VM: $ sudo stress-ng --matrix 0 -t 60s --metrics-brief
 dispatching hogs: 32 matrix
 successful run completed in 60.01s (1 min, 0.01 secs)
 stressor       bogo ops real time  usr time  sys time   bogo ops/s   bogo ops/s
                           (secs)    (secs)    (secs)   (real time) (usr+sys time)
 matrix          4531734     60.00   1919.50      0.00     75529.30      2360.89

BARE METAL: $ sudo stress-ng --matrix 0 -t 60s --metrics-brief
 dispatching hogs: 32 matrix
 successful run completed in 60.01s (1 min, 0.01 secs)
 stressor       bogo ops real time  usr time  sys time   bogo ops/s   bogo ops/s
                           (secs)    (secs)    (secs)   (real time) (usr+sys time)
 matrix          4578028     60.00   1919.57      0.00     76300.66      2384.92
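Comparing the bogo-ops totals above, the VM's overhead works out to roughly 1% in both cases. A quick awk calculation on the numbers copied from the two runs:

```shell
# VM overhead relative to bare metal, from the bogo ops totals above:
# single core 237270 (VM) vs 240077 (bare metal),
# multi core 4531734 (VM) vs 4578028 (bare metal).
awk 'BEGIN {
    printf "single core: %.2f%%\n", (240077 - 237270) / 240077 * 100
    printf "multi core:  %.2f%%\n", (4578028 - 4531734) / 4578028 * 100
}'
```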


*********************
MULTI CORE STRESS TEST
*********************
VM:~$ sudo stress-ng --cpu 32 --cpu-method all --perf -t 60
 dispatching hogs: 32 cpu
 successful run completed in 61.30s (1 min, 1.30 secs)
 cpu:
                          0 Cache L1D Read                  0.00 /sec
                          0 Cache L1D Read Miss             0.00 /sec
                          0 Cache L1D Write                 0.00 /sec
                          0 Cache L1I Read Miss             0.00 /sec
                          0 Cache DTLB Read                 0.00 /sec
                          0 Cache DTLB Read Miss            0.00 /sec
                          0 Cache DTLB Write                0.00 /sec
                          0 Cache DTLB Write Miss           0.00 /sec
                          0 Cache ITLB Read Miss            0.00 /sec
                          0 Cache BPU Read                  0.00 /sec
                          0 Cache BPU Read Miss             0.00 /sec
          1,918,611,096,224 CPU Clock                      31.30 B/sec
          1,918,611,245,760 Task Clock                     31.30 B/sec
                     26,592 Page Faults Total             433.84 /sec
                     26,592 Page Faults Minor             433.84 /sec
                          0 Page Faults Major               0.00 /sec
                      1,376 Context Switches               22.45 /sec
                          0 CPU Migrations                  0.00 /sec
                          0 Alignment Faults                0.00 /sec
                          0 Emulation Faults                0.00 /sec
                     26,528 Page Faults User              432.79 /sec
                        128 Page Faults Kernel              2.09 /sec
                      3,424 System Call Enter              55.86 /sec
                      3,392 System Call Exit               55.34 /sec
                        256 TLB Flushes                     4.18 /sec
                          0 Kmalloc                         0.00 /sec
                          0 Kmalloc Node                    0.00 /sec
                          0 Kfree                           0.00 /sec
                         32 Kmem Cache Alloc                0.52 /sec
                          0 Kmem Cache Alloc Node           0.00 /sec
                         32 Kmem Cache Free                 0.52 /sec
                     25,376 MM Page Alloc                 414.00 /sec
                          0 MM Page Free                    0.00 /sec
                  1,124,672 RCU Utilization                18.35 K/sec
                        448 Sched Migrate Task              7.31 /sec
                          0 Sched Move NUMA                 0.00 /sec
                      1,408 Sched Wakeup                   22.97 /sec
                          0 Sched Proc Exec                 0.00 /sec
                          0 Sched Proc Exit                 0.00 /sec
                          0 Sched Proc Fork                 0.00 /sec
                          0 Sched Proc Free                 0.00 /sec
                          0 Sched Proc Hang                 0.00 /sec
                          0 Sched Proc Wait                 0.00 /sec
                      1,376 Sched Switch                   22.45 /sec
                         32 Signal Generate                 0.52 /sec
                         32 Signal Deliver                  0.52 /sec
                        928 IRQ Entry                      15.14 /sec
                        928 IRQ Exit                       15.14 /sec
                    408,768 Soft IRQ Entry                  6.67 K/sec
                    408,768 Soft IRQ Exit                   6.67 K/sec
                          0 Writeback Dirty Inode           0.00 /sec
                          0 Writeback Dirty Page            0.00 /sec
                          0 Migrate MM Pages                0.00 /sec
                          0 SKB Consume                     0.00 /sec
                          0 SKB Kfree                       0.00 /sec
                          0 IOMMU IO Page Fault             0.00 /sec
                          0 IOMMU Map                       0.00 /sec
                          0 IOMMU Unmap                     0.00 /sec
                          0 Filemap page-cache add          0.00 /sec
                          0 Filemap page-cache del          0.00 /sec
                          0 OOM Compact Retry               0.00 /sec
                          0 OOM Wake Reaper                 0.00 /sec
                          0 Thermal Zone Trip               0.00 /sec


BARE METAL:~$ sudo stress-ng --cpu 32 --cpu-method all --perf -t 60
 dispatching hogs: 32 cpu
 successful run completed in 61.30s (1 min, 1.30 secs)
 cpu:
          6,299,757,819,808 CPU Cycles                      0.10 T/sec
          5,413,717,415,168 Instructions                   88.31 B/sec (0.859 instr. per cycle)
            993,084,418,080 Branch Instructions            16.20 B/sec
             21,493,168,320 Branch Misses                   0.35 B/sec ( 2.16%)
             47,817,010,688 Bus Cycles                      0.78 B/sec
          5,546,730,077,536 Total Cycles                   90.48 B/sec
             12,255,527,424 Cache References                0.20 B/sec
                164,721,152 Cache Misses                    2.69 M/sec ( 1.34%)
            557,545,772,192 Cache L1D Read                  9.10 B/sec
             64,777,397,440 Cache L1D Read Miss             1.06 B/sec
            390,375,844,512 Cache L1D Write                 6.37 B/sec
              1,156,206,112 Cache L1I Read Miss            18.86 M/sec
              3,278,834,912 Cache LL Read                  53.49 M/sec
                  3,822,464 Cache LL Read Miss             62.36 K/sec
                 85,624,000 Cache LL Write                  1.40 M/sec
                 47,737,408 Cache LL Write Miss             0.78 M/sec
            515,595,101,888 Cache DTLB Read                 8.41 B/sec
                    279,424 Cache DTLB Read Miss            4.56 K/sec
            369,159,797,632 Cache DTLB Write                6.02 B/sec
                  1,280,000 Cache DTLB Write Miss          20.88 K/sec
             10,606,716,288 Cache ITLB Read Miss            0.17 B/sec
            988,307,728,992 Cache BPU Read                 16.12 B/sec
             21,333,012,864 Cache BPU Read Miss             0.35 B/sec
                  9,740,960 Cache NODE Read                 0.16 M/sec
                          0 Cache NODE Read Miss            0.00 /sec
                 51,092,544 Cache NODE Write                0.83 M/sec
                          0 Cache NODE Write Miss           0.00 /sec
          1,917,173,399,968 CPU Clock                      31.27 B/sec
          1,917,190,695,168 Task Clock                     31.28 B/sec
                     26,624 Page Faults Total             434.32 /sec
                     26,624 Page Faults Minor             434.32 /sec
                          0 Page Faults Major               0.00 /sec
                    160,672 Context Switches                2.62 K/sec
                          0 CPU Migrations                  0.00 /sec
                          0 Alignment Faults                0.00 /sec
                          0 Emulation Faults                0.00 /sec
                     26,560 Page Faults User              433.27 /sec
                        128 Page Faults Kernel              2.09 /sec
                      3,936 System Call Enter              64.21 /sec
                      3,904 System Call Exit               63.69 /sec
                        256 TLB Flushes                     4.18 /sec
                          0 Kmalloc                         0.00 /sec
                          0 Kmalloc Node                    0.00 /sec
                         64 Kfree                           1.04 /sec
                         64 Kmem Cache Alloc                1.04 /sec
                          0 Kmem Cache Alloc Node           0.00 /sec
                        224 Kmem Cache Free                 3.65 /sec
                     25,344 MM Page Alloc                 413.44 /sec
                         32 MM Page Free                    0.52 /sec
                  1,294,656 RCU Utilization                21.12 K/sec
                        128 Sched Migrate Task              2.09 /sec
                          0 Sched Move NUMA                 0.00 /sec
                    160,832 Sched Wakeup                    2.62 K/sec
                          0 Sched Proc Exec                 0.00 /sec
                          0 Sched Proc Exit                 0.00 /sec
                          0 Sched Proc Fork                 0.00 /sec
                          0 Sched Proc Free                 0.00 /sec
                          0 Sched Proc Hang                 0.00 /sec
                          0 Sched Proc Wait                 0.00 /sec
                    160,672 Sched Switch                    2.62 K/sec
                         32 Signal Generate                 0.52 /sec
                         32 Signal Deliver                  0.52 /sec
                        992 IRQ Entry                      16.18 /sec
                        992 IRQ Exit                       16.18 /sec
                    366,624 Soft IRQ Entry                  5.98 K/sec
                    366,624 Soft IRQ Exit                   5.98 K/sec
                          0 Writeback Dirty Inode           0.00 /sec
                          0 Writeback Dirty Page            0.00 /sec
                          0 Migrate MM Pages                0.00 /sec
                          0 SKB Consume                     0.00 /sec
                          0 SKB Kfree                       0.00 /sec
                          0 IOMMU IO Page Fault             0.00 /sec
                          0 IOMMU Map                       0.00 /sec
                          0 IOMMU Unmap                     0.00 /sec
                          0 Filemap page-cache add          0.00 /sec
                          0 Filemap page-cache del          0.00 /sec
                          0 OOM Compact Retry               0.00 /sec
                          0 OOM Wake Reaper                 0.00 /sec
                          0 Thermal Zone Trip               0.00 /sec

]]>