https://developer.isy.io/blog UD Developer Docs Blog 2025-01-15T00:00:00.000Z https://github.com/jpmonette/feed UD Developer Docs Blog https://developer.isy.io/img/favicon.png <![CDATA[Run Cursor AI on eisy!]]> https://developer.isy.io/blog/cursor-ai 2025-01-15T00:00:00.000Z eisy

Introduction

Cursor AI is a fabulous tool that helps speed up your plugin development using AI. In addition, we are using it to add more AI features into all our products and services.

That said, Cursor AI does not have a FreeBSD build. So, in order to get it to run on your eisy, you will need to perform a few tasks.

Download and Extract Cursor AI

  1. Download the Linux AppImage of Cursor from https://www.cursor.com/
  2. Make the AppImage executable, then extract it (adjust the file name to the version you downloaded):
chmod +x cursor-1.20.0-x86_64.AppImage
./cursor-1.20.0-x86_64.AppImage --appimage-extract

This will create a directory called squashfs-root in the current directory.

Install Dependencies

Since Cursor AI will be running in Linux compatibility mode, you need to install the following Rocky Linux 9 compatibility packages. Here's the command:

sudo pkg install linux-rl9-alsa-lib linux-rl9-at-spi2-atk linux-rl9-at-spi2-core linux-rl9-atk linux-rl9-avahi-libs linux-rl9-brotli linux-rl9-cairo linux-rl9-cairo-gobject linux-rl9-cups-libs linux-rl9-dbus-libs linux-rl9-dri linux-rl9-elfutils-libelf linux-rl9-elfutils-libs linux-rl9-expat linux-rl9-flac-libs linux-rl9-fontconfig linux-rl9-freetype linux-rl9-fribidi linux-rl9-gdk-pixbuf2 linux-rl9-gnutls linux-rl9-graphite2 linux-rl9-gsm linux-rl9-gtk3 linux-rl9-harfbuzz linux-rl9-icu linux-rl9-jbigkit-libs linux-rl9-jpeg linux-rl9-libdrm linux-rl9-libepoxy linux-rl9-libevent linux-rl9-libgcrypt linux-rl9-libglvnd linux-rl9-libgpg-error linux-rl9-libidn2 linux-rl9-libogg linux-rl9-libpciaccess linux-rl9-libpng linux-rl9-libproxy linux-rl9-librsvg2 linux-rl9-libsigsegv linux-rl9-libsndfile linux-rl9-libstemmer linux-rl9-libtasn1 linux-rl9-libthai linux-rl9-libtiff linux-rl9-libtracker-sparql linux-rl9-libunistring linux-rl9-libvorbis linux-rl9-libwebp linux-rl9-libxkbcommon linux-rl9-libxml2 linux-rl9-llvm linux-rl9-lz4 linux-rl9-nettle linux-rl9-nspr linux-rl9-nss linux-rl9-openal-soft linux-rl9-p11-kit linux-rl9-pango linux-rl9-pixman linux-rl9-python39 linux-rl9-sqlite linux-rl9-systemd-libs linux-rl9-wayland linux-rl9-xorg-libs linux_base-rl9

Update Library Path for Linux Compatibility Mode

  1. Edit the /compat/linux/etc/ld.so.conf file to include the path to the extraction directory (i.e., squashfs-root). After the edit, the contents of the file will look something like this:
/usr/home/admin/squashfs-root  <--- This is the path to the extraction directory
include ld.so.conf.d/*.conf
  2. Run the following command to rebuild the shared library cache:
sudo /compat/linux/sbin/ldconfig
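If you prefer to script the edit, here is a minimal sketch; add_lib_path is a hypothetical helper (not part of udx), and it prepends the entry so the extracted libraries take lookup priority:

```shell
# add_lib_path: prepend a library directory to an ld.so.conf-style file,
# skipping the edit if the entry is already present (safe to re-run)
add_lib_path() {
  libdir="$1"
  conf="$2"
  grep -qxF "$libdir" "$conf" && return 0
  printf '%s\n' "$libdir" | cat - "$conf" > "$conf.tmp" && mv "$conf.tmp" "$conf"
}

# on eisy (as root): add the extraction directory, then rebuild the cache
#   add_lib_path /usr/home/admin/squashfs-root /compat/linux/etc/ld.so.conf
#   /compat/linux/sbin/ldconfig
```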

Enable and Start the Linux Compatibility Mode

  1. Enable Linux Compatibility mode:
sudo sysrc linux_enable="YES"
  2. Start the Linux compatibility service:
sudo service linux start

Run the Cursor!

  1. Run the cursor executable, making sure that you use --no-sandbox:
/usr/home/admin/squashfs-root/cursor --no-sandbox &
  2. To log in, make sure you launch Chrome with chrome --no-sandbox as well
]]>
Michel Kohanim https://www.linkedin.com/in/michelkohanim/
<![CDATA[The eisy way!]]> https://developer.isy.io/blog/eisy-way 2024-02-21T00:00:00.000Z eisy

Introduction

Well, it's become obvious that our instructions to supercharge your eisy were just a little too geeky for many. You see, when you are a geek, you presume things that are usually not true.

First and foremost, please accept our apologies. Secondly, in this blog we will walk you through supercharging your eisy the eisy way! Using our latest tools, not only will you be able to boot from NVMe, but you will also get to choose the size of the partition you want to use for a VM (say, Home Assistant), as well as decide what to do with the mirror.

Before we proceed, for parts and assembly instructions, as well as the reasons for supercharging your eisy, please see here. If you are ready, let's get going ...

Configure the OS

danger

The procedures below will wipe out everything you might already have on your NVMe!

tip
Although these procedures were designed to be lossless, it's
always recommended to back up your ZMatter USB, your IoX,
and your PG3x.

ssh to eisy

  1. On Mac, open the terminal app (search | terminal)
  2. On Windows, open the command prompt (search | cmd)
  3. Type:
  4. When prompted, type in the password. The default is admin. If you have not already changed the password, change it immediately.

Any sudo command may prompt you for the password. Please use the password from step 4.

Get udx

Make sure you have udx version 3.5.5_4 or above:

    pkg info udx | head | grep -i version

You should see this:

    Version        : 3.5.5_4
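To script this check, you can extract the version field from the pkg output; udx_version below is a hypothetical helper, not something shipped with udx:

```shell
# udx_version: extract the version string from `pkg info udx` style output
udx_version() {
  awk -F':' '/^Version/ { gsub(/[[:space:]]/, "", $2); print $2 }'
}

# on eisy:
#   if [ "$(pkg info udx | udx_version)" = "3.5.5_4" ]; then echo "up to date"; fi
```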

If your udx version is not 3.5.5_4, you will need to install it from our staging repo. No worries, we have a tool that lets you easily switch your repo:

    curl -s https://pkg.isy.io/script/repo.util -O

This downloads the repo.util script and saves it in your current directory. You will now need to make it executable:

    chmod +x repo.util

Now, you are ready to change the repo to staging:

    sudo ./repo.util

You will be asked whether or not you want to switch to staging; type Y (case sensitive):

    Do you want to switch to Staging? (Y,n)
>>Y
Updating udi repository catalogue...
Fetching meta.conf: 100% 163 B 0.2kB/s 00:01
Fetching packagesite.pkg: 100% 9 KiB 9.2kB/s 00:01
Processing entries: 100%
udi repository update completed. 11 packages processed.
All repositories are up to date.

Now, install udx from this repo:

    sudo pkg install -fy udx

And watch it install and restart udx!

Immediately switch back to our production repo:

    sudo ./repo.util
tip
repo.util is included in the latest udx package and in your path, so you don't have to deal with curl in the future!

Now, you are ready to bring your NVMe to life!

Configure NVMe

And this is the eisy part! We now have a script that helps you with configuration! It's called fl.ops and it's in your path. All you have to do is:

    sudo fl.ops setup.nvme.boot 

You will get:

    executing from /usr/local/etc/udx.d/static
nvme0: WD Blue SN580 1TB
nvme0ns1 (953869MB)
WARNING: your nvme has existing paritions and data.
And it is bootable!!!!
This script will wipe everything on your nvme; it is irreversible!
Are you sure you want to run this script?
Please answer with Y to proceed with this irreversible operation.
Anything else cancels this operation.

If you are ready to take the plunge, press Y (case sensitive).

danger
If your NVMe is already configured as bootable without a mirror, 
the script will detect and warn you. It will also ask you whether
or not you want the script to try and fix it. If you say Yes,
the script will do its best to clean things up and then ask you
to reboot. Once rebooted just redo:
sudo fl.ops setup.nvme.boot

At this juncture, you are asked whether or not you want to create a ZFS partition for your virtual machine (VM).

If you say yes, the script will automatically create a partition of your desired size and, if you wish, create a ZFS file system on it and mount it at /storage. This way, you don't have to do anything but run a VM script such as the one described here.

In this example, we are going to assign 256G for storage. Please note that the size you choose must leave at least 256GB for IoX.

    nvd0 created
Thu Feb 22 02:54:33 PST 2024|/usr/local/etc/udx.d/static/fl.ops: calcuating size ... keeping 8GB for recovery
Thu Feb 22 02:54:33 PST 2024|/usr/local/etc/udx.d/static/fl.ops: size, excluding swap and ud specific, is 919 GB ...
Would you like to add an extra ZFS partition for your own purposes? (Y/n)
>>Y
Please provide the size in GB. Please note that you must leave at least 256GB for iox
>>256
Thu Feb 22 02:59:54 PST 2024|/usr/local/etc/udx.d/static/fl.ops: will create a ZFS user size of 256 ...
Thu Feb 22 02:59:54 PST 2024|/usr/local/etc/udx.d/static/fl.ops: adding the boot partition to nvd0 ...
nvd0p1 added
Thu Feb 22 02:59:54 PST 2024|/usr/local/etc/udx.d/static/fl.ops: adding the iox/zfs partition of size 663GB...
nvd0p2 added
Thu Feb 22 02:59:54 PST 2024|/usr/local/etc/udx.d/static/fl.ops: creating ms dos file system on partition 1 ...
/dev/nvd0p1: 129022 sectors in 129022 FAT32 clusters (512 bytes/cluster)
BytesPerSec=512 SecPerClust=1 ResSectors=32 FATs=2 Media=0xf0 SecPerTrack=63 Heads=255 HiddenSecs=0 HugeSectors=131072 FATsecs=1008 RootCluster=2 FSInfo=1 Backup=2
Thu Feb 22 02:59:54 PST 2024|/usr/local/etc/udx.d/static/fl.ops: mouting ms dos file system (for boot) ...
Thu Feb 22 02:59:54 PST 2024|/usr/local/etc/udx.d/static/fl.ops: adding boot loader to boot partition ...
Thu Feb 22 02:59:54 PST 2024|/usr/local/etc/udx.d/static/fl.ops: unmounting boot parition ...
Thu Feb 22 02:59:54 PST 2024|/usr/local/etc/udx.d/static/fl.ops: attching ssd to existing zudi pool for copying files from emmc to ssd ...
Thu Feb 22 02:59:55 PST 2024|/usr/local/etc/udx.d/static/fl.ops: adding the swap partition ...
nvd0p3 added
Thu Feb 22 02:59:55 PST 2024|/usr/local/etc/udx.d/static/fl.ops: adding user/zfs partition of size 256GB ...
nvd0p4 added
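The 663GB figure in the log is simply the usable size minus the user partition; a sketch with this run's numbers:

```shell
# partition math from the run above (all sizes in GB)
usable=919   # reported size, excluding swap and UD-specific partitions
user=256     # the size we chose for the user ZFS partition
iox=$((usable - user))
echo "IoX partition size: ${iox} GB"   # matches the 663GB in the log
```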

You will have to wait a few seconds for the system to mirror the eMMC to your newly minted NVMe:

    please continue to wait while creating a copy in nvd0P2 (resilvering)...

Once the mirroring is completed, you will be asked whether or not you want to create a zfs pool called storage.

The reason you might want to consider this is that you won't have to worry about partitions, labels, pools, ZFS, etc., when you are configuring your VM. The storage partition is immediately available in the /storage directory.

    would you like to create a zfs pool in the storage space you just created? (Y/n)
>>Y

And finally, you will be asked whether or not you want to keep the mirror. Initially, and until you are happy with everything, it's best to keep the mirror. Don't worry, you can always remove it later with one simple command:

    sudo fl.ops remove.emmc.mirror

While deciding on whether or not to keep the mirror, think about what you would like your configuration to look like:

  1. High performance + redundancy, or
  2. Highest performance, no redundancy.

As with everything else, there are Pros and Cons:

Keeping the mirror (Redundancy)

In this scenario, you will have the NVMe SSD and the eMMC mirror each other.

Pros
- You will have a mirror of everything
- The system will automatically switch to the eMMC if the NVMe SSD fails
- You can boot from either one (needs a monitor)

Cons
- Regardless of your NVMe capacity, you are limited to the size of the eMMC
- The slowest device (the eMMC) decides the system throughput
- You cannot use the eMMC for anything else

Highest performance

In this scenario, all partitions on eMMC are deleted and destroyed.

Pros
- Higher performance
- Fewer things to maintain

Cons
- If the NVMe fails, it needs to be replaced

You are done!

Once you reboot, and regardless of the choice you made for the mirror, your eisy will boot from your NVMe. So, why don't you reboot now?

    sudo shutdown -r now

Additional commands

Remove the emmc mirror

Let's say you chose to keep the mirror and now regret your decision! Well, no worries, you can remove the mirror with one simple command:

    sudo fl.ops remove.emmc.mirror

Make sure you reboot after:

    sudo shutdown -r now
tip
Although the IoX partition should expand on its own after the mirror is removed, you can also do it on your own by issuing:
sudo fl.ops expand.iox.pool
(see Expanding IoX pool)

Revert to emmc boot

Let's say you have the mirror and are booting from NVMe. For whatever reason, you want to start booting from eMMC instead:

    sudo fl.ops revert.to.emmc.boot 

Make sure you reboot after:

    sudo shutdown -r now

Expand IoX pool to take over the whole partition

There might be cases where the IoX pool does not expand to the max size of the partition. Fear not! One simple command is all you need:

    sudo fl.ops expand.iox.pool                                                                                                                                              

Make sure you reboot after:

    sudo shutdown -r now
]]>
Michel Kohanim https://www.linkedin.com/in/michelkohanim/
<![CDATA[Increase Performance or Add Redundancy?]]> https://developer.isy.io/blog/Supercharged 2024-02-08T00:00:00.000Z nvme

Introduction

Now that you've enhanced your eisy with added storage and a virtual machine (VM), you may wonder how to further boost its performance and reliability. Well, wonder no more! In this blog, we'll guide you through the process of configuring your eisy to boot from an NVMe SSD, with the option to use the onboard eMMC drive as a mirror for redundancy. Are you ready? Let's go ...

Increased performance?

Yes, by having the OS run from a high-performance SSD, you can expect increased throughput, especially for random read/write operations. The following are statistics from running eisy from the onboard eMMC drive:

Transfer rates:
outside:    102400 kbytes in   0.695155 sec =   147305 kbytes/sec
middle: 102400 kbytes in 0.662695 sec = 154521 kbytes/sec
inside: 102400 kbytes in 0.645822 sec = 158558 kbytes/sec
Asynchronous random reads:
sectorsize:     18050 ops in    3.021072 sec =     5975 IOPS
4 kbytes: 13222 ops in 3.029609 sec = 4364 IOPS
32 kbytes: 6519 ops in 3.063780 sec = 2128 IOPS
128 kbytes: 2584 ops in 3.154414 sec = 819 IOPS
1024 kbytes: 487 ops in 4.041280 sec = 121 IOPS

Now, take a look at the same when eisy is run from a 1TB PCIe 4.0 NVMe SSD:

Transfer rates:
outside:    102400 kbytes in   0.074777 sec =  1369405 kbytes/sec
middle: 102400 kbytes in 0.069383 sec = 1475866 kbytes/sec
inside: 102400 kbytes in 0.069882 sec = 1465327 kbytes/sec
Asynchronous random reads:
sectorsize:   1384389 ops in    3.000543 sec =   461379 IOPS 
4 kbytes: 1236376 ops in 3.000293 sec = 412085 IOPS
32 kbytes: 160586 ops in 3.002363 sec = 53487 IOPS
128 kbytes: 39947 ops in 3.009799 sec = 13272 IOPS
1024 kbytes: 5128 ops in 3.076463 sec = 1667 IOPS

As you can see, transfer rates (read/write operations) on the NVMe SSD are about 10 times higher than those on the eMMC. Furthermore, random read operations per second are significantly higher on the NVMe as opposed to the eMMC.

In essence, if you have many chatty plugins, you will definitely notice immediate performance improvements.
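To put a number on "about 10 times," you can divide the two "outside" transfer rates reported above:

```shell
# speedup of the NVMe SSD over the eMMC, using the 'outside' transfer
# rates from the two benchmark runs above (kbytes/sec)
emmc_rate=147305
nvme_rate=1369405
speedup=$(awk -v n="$nvme_rate" -v e="$emmc_rate" 'BEGIN { printf "%.1f", n / e }')
echo "NVMe is ${speedup}x faster than eMMC"
```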

Assembly

If you already have an NVMe SSD installed, you may skip this section.

Get an NVMe M.2 SSD

You can get pretty much any M.2 NVMe SSD with the limitation that the capacity cannot be more than 1TB. For our test purposes, we chose Western Digital Blue SN580. Of course, their WD_BLACK SN850X will have better performance.

Install the SSD

eisy-nvme

  1. Press and hold the power button until the light turns red
  2. Unplug the power cord
  3. Remove the 4 screws at the bottom and gently remove the bottom cover
  4. Remove the screw that holds the SSD in place (top right, on a stand)
  5. Plug in the SSD and secure it with the screw from step 4
  6. Gently put the bottom cover back, ensuring that the label points to the back (where the connectors are)
  7. Screw in the 4 screws you took out in step 3
  8. Plug in the power cord

That's it!

Configure the OS

It is much easier to use our latest tools and instructions outlined here. But, if you insist on doing things yourself, please go ahead!

danger

The procedures below will wipe out everything you might already have on your NVMe!

tip
The procedures outlined below will be included as a command in 
the upcoming udx release of 3.5.5_4:
sudo fl.ops setup.nvme.boot
tip
Although these procedures were designed to be lossless, it's
always recommended to back up your ZMatter USB, your IoX,
and your PG3x.

ssh to eisy

  1. On Mac, open the terminal app (search | terminal)
  2. On Windows, open the command prompt (search | cmd)
  3. Type:
  4. When prompted, type in the password. The default is admin. If you have not already changed the password, change it immediately.

Any sudo command may prompt you for the password. Please use the password from step 4.

Create a script

Open a file and call it setup_nvme_boot.sh. Then, copy/paste the following, save, and exit

#!/bin/sh

remove_emmc_mirror()
{
echo "You have chosen to remove the eMMC mirror ..."
echo "Are you sure? (Y/n)"
read answer
if [ "$answer" != "Y" ]
then
echo "didn't accept final warning to remove emmc ..."
return
fi

cd /
echo "accepted final warning to remove emmc ..."
echo "removing freebsd-boot from the mirror ..."
gpart delete -i 1 mmcsd0
echo "removing efi from the mirror ..."
gpart delete -i 2 mmcsd0
echo "removing freebsd-zfs from the mirror ..."
zpool detach zudi mmcsd0p3
gpart delete -i 3 mmcsd0
echo "removing swap from the mirror ..."
swapoff /dev/gpt/swap0
gpart delete -i 4 mmcsd0
echo "destroying mmcsd0 ..."
gpart destroy -F mmcsd0
echo "completed removing the mirror ... "

echo "Do you want to reboot (you don't have to)? Y/n"
read answer
if [ "$answer" = "Y" ]
then
reboot
fi
}

setup_nvme_boot()
{

nvmecontrol devlist
if [ $? -ne 0 ]
then
echo "error: no nvme devices found ..."
return
fi
# prepare new NVMe, assuming no VMs here yet and it’s virgin.

nvme_p=$(gpart show | grep nvd0)
if [ "$nvme_p" != "" ]
then
echo "WARNING: removing all remnants ... "
zpool detach zudi nvd0p2
gpart destroy -F nvd0
if [ $? -ne 0 ]
then
echo "error: failed removing old nvd0 partition ..."
return
fi
fi

#create the gpt partition
echo "adding GPT partition to nvd0 ..."
gpart create -s GPT nvd0
if [ $? -ne 0 ]
then
echo "error: failed creating gpt partition ..."
return
fi

echo "adding the boot partition to nvd0 ..."
gpart add -s 64M -t efi -l efi nvd0
if [ $? -ne 0 ]
then
echo "error: failed creating boot partition ..."
return
fi

echo "calculating size ... keeping 13GB free"
MAIN_SIZE=`gpart show nvd0 | grep GPT | awk '{printf ("%.0f\n", ($3-42)/(2*1024*1024)-13)}'`

echo "size, excluding swap is $MAIN_SIZE GB ..."

echo "adding the os/zfs partition using the whole disk ... "
gpart add -s ${MAIN_SIZE}G -t freebsd-zfs nvd0
if [ $? -ne 0 ]
then
echo "error: failed creating gpt partition ..."
return
fi

# make new EFI on that drive
echo "creating ms dos file system on partition 1 ..."
newfs_msdos -F 32 -c 1 /dev/nvd0p1
if [ $? -ne 0 ]
then
echo "error: failed ms dos file system ..."
return
fi

echo "mounting ms dos file system (for boot) ... "
mkdir -p /boot/efi
mount -t msdos /dev/gpt/efi /boot/efi
if [ $? -ne 0 ]
then
echo "error: failed mounting ms dos file system ..."
return
fi

echo "adding boot loader to boot partition ... "
mkdir -p /boot/efi/EFI/BOOT
cp /boot/loader_lua.efi /boot/efi/EFI/BOOT/BOOTx64.efi
if [ $? -ne 0 ]
then
echo "error: failed copying boot files ..."
return
fi

echo "unmounting boot partition ..."
umount /boot/efi

# attach new drive to existing pool
echo "attaching ssd to existing zudi pool for copying files from emmc to ssd ..."
zpool attach -f zudi /dev/mmcsd0p3 /dev/nvd0p2
if [ $? -ne 0 ]
then
echo "error: failed attaching ssd to zpool ..."
return
fi

echo "adding the swap partition ..."
gpart add -t freebsd-swap -l swap0 -s 4G nvd0
if [ $? -ne 0 ]
then
echo "error: failed adding swap partition ..."
return
fi

echo "please continue to wait while creating a copy from emmc to ssd (resilver) ..."
while true;
do
echo -n "."
sleep 5
resilvering=$(zpool status | grep nvd0p2 | grep resilvering)
if [ -z "$resilvering" ]
then
break
fi
done


echo "Do you want to keep the mirror?"
echo "Pros:"
echo "- You will have a mirror of everything"
echo "- You can boot from either one (needs a monitor)"
echo "Cons:"
echo "- Regardless of your NVME capacity, you are limited to the size of eMMC"
echo "- The slowest (eMMC) performance decides the system throughput"
echo "Please answer with N to remove the mirror. Anything else means yes."
read answer

if [ "$answer" = "N" ]
then
remove_emmc_mirror
else
echo "You have chosen to keep the mirror ... "
fi

echo "updating bootloader label(s) ... "
# update_efi_boot_loader_label is not defined in this script; skip it if absent
if type update_efi_boot_loader_label >/dev/null 2>&1
then
update_efi_boot_loader_label
fi
}

# run the setup; without this call the script only defines functions and does nothing
setup_nvme_boot

Once the file is saved, make sure you update its permission:

    chmod 700 setup_nvme_boot.sh

To Mirror or Not?

Before running the script, think about what you would like your configuration to look like:

  1. High performance + redundancy, or
  2. Highest performance, no redundancy.

As with everything else, there are Pros and Cons:

Keeping the mirror (Redundancy)

In this scenario, you will have the NVMe SSD and the eMMC mirror each other.

Pros
- You will have a mirror of everything
- The system will automatically switch to the eMMC if the NVMe SSD fails
- You can boot from either one (needs a monitor)

Cons
- Regardless of your NVMe capacity, you are limited to the size of the eMMC
- The slowest device (the eMMC) decides the system throughput
- You cannot use the eMMC for anything else

Highest performance

In this scenario, all partitions on eMMC are deleted and destroyed.

Pros
- Higher performance
- Fewer things to maintain

Cons
- If the NVMe fails, it needs to be replaced

Run the script

Type:

    sudo -i

This way, you will be running as root. So, be very careful!

Once done, type:

    ./setup_nvme_boot.sh

Wait for the process to complete!

You are done!

]]>
Michel Kohanim https://www.linkedin.com/in/michelkohanim/
<![CDATA[How to run a VM on eisy (Home Assistant in this example)]]> https://developer.isy.io/blog/Home Assistant 2024-01-12T00:00:00.000Z bhyve

Would you like to run a guest OS on your eisy?

This tutorial describes a way to use your eisy as a host for a guest operating system. We'll take the popular home automation platform Home Assistant as an example (however, any other x64 OS image should work). The virtual machine lifecycle is handled by vm-bhyve.

danger

The following instructions assume that:
a. You already have an NVMe card installed in your eisy, and
b. You are not on WiFi.

Helper script

Use the following script as an example; it works as is, but feel free to make any modifications. Execute the script as the root user, for example: sudo ./create_ha_vm.sh

#!/bin/sh

# Where do we want to store VM resources (ZFS pool name and mount path)
VMFILESET="storage/vms"
VMDIR="/storage/vms"

# Home Assistant VM name and how much resources to allocate
HA_VM_NAME="homeassistant"
HA_VM_CPU="2"
HA_VM_MEM="1G"
HA_VM_DISC="16G"

# specify network interface - by default it's Ethernet re0
INTERFACE="re0"

# pick the latest x86-64 image from here https://github.com/home-assistant/operating-system/releases/
HA_IMAGE_URL="https://github.com/home-assistant/operating-system/releases/download/11.4/haos_generic-x86-64-11.4.img.xz"

# Internal variables
TMPDIR=`mktemp -d`
IMAGE_NAME="${TMPDIR}/haos_generic-x86-64.img"
VM_CONF=${VMDIR}/${HA_VM_NAME}/${HA_VM_NAME}.conf

# make sure ifconfig_DEFAULT is not set as it causes tap0 interface issues
# ensure re0 is set to DHCP
sysrc -x ifconfig_DEFAULT
sysrc ifconfig_re0="DHCP"

echo "Make sure necessary packages are installed"
pkg install -y vm-bhyve edk2-bhyve wget qemu-tools

echo "Prepare /etc/rc.conf"
sysrc vm_enable="YES"
sysrc vm_dir="zfs:${VMFILESET}"

# this makes Home Assistant VM start up automatically on boot, comment out if this is not desired
sysrc vm_list=${HA_VM_NAME}

echo "Create ZFS fileset for VMs and prepare templates"
zfs create ${VMFILESET}
vm init
cp /usr/local/share/examples/vm-bhyve/*.conf ${VMDIR}/.templates/

# create VM networking (common for all VMs on the system)
vm switch create public
vm switch add public ${INTERFACE}

echo "Downloading image"
wget -O ${IMAGE_NAME}.xz ${HA_IMAGE_URL}
echo "Extracting..."
unxz ${IMAGE_NAME}.xz

echo "Creating a VM"
vm create -t linux-zvol -s ${HA_VM_DISC} ${HA_VM_NAME}

echo "Copying image"
dd if=${IMAGE_NAME} of=/dev/zvol/${VMFILESET}/${HA_VM_NAME}/disk0 bs=1m
rm -rf ${TMPDIR}

sysrc -f ${VM_CONF} loader="uefi"
sysrc -f ${VM_CONF} cpu=${HA_VM_CPU}
sysrc -f ${VM_CONF} memory=${HA_VM_MEM}

vm start ${HA_VM_NAME}
vm info ${HA_VM_NAME}
vm list

echo "Please wait about 10 minutes and follow instructions at https://www.home-assistant.io/getting-started/onboarding/ to get your Home Assistant setup"
echo "ISY integration: https://www.home-assistant.io/integrations/isy994/"
tip
Alternatively, and if you are OK with all the parameters, 
you can simply run the following command:
    curl -s https://pkg.isy.io/script/create_ha_vm.sh | sudo bash

VM basics

The Home Assistant VM will be a separate host on your network with its own IP address. You can use the MAC address printed by the script to look up the IP and create a DHCP reservation on your router if desired. Although the script does this for you, if things do not work, please make sure you have:

    ifconfig_re0="DHCP" 

instead of

    ifconfig_DEFAULT="DHCP" 

in /etc/rc.conf.

If you don't, please do this:

    sudo sysrc -x ifconfig_DEFAULT
sudo sysrc ifconfig_re0="DHCP"
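To script the MAC-to-IP lookup mentioned above, you can filter the ARP table; mac_to_ip is a hypothetical helper and the MAC address shown is a placeholder:

```shell
# mac_to_ip: read `arp -an` output on stdin and print the IP address
# associated with the MAC address given as the first argument
mac_to_ip() {
  grep -i "$1" | sed -n 's/.*(\([0-9.]*\)).*/\1/p'
}

# on eisy, once the VM is up (substitute your VM's MAC address):
#   arp -an | mac_to_ip 58:9c:fc:00:00:00
```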

Home Assistant Port

Keep in mind that Home Assistant uses the non-standard port 8123 for its HTTP interface by default.

Home Assistant IoX integration

If you run into any issues getting the ISY/IoX integration to work, please do make sure you read the ISY Integration Instructions provided by the developer.

Shutting Down Home Assistant

To shut down Home Assistant, log in to its dashboard and navigate to Settings -> System, then use the power button in the top-right corner, click Advanced, and select Shutdown.

At any time, you can shut down the VM from the eisy command line:

    sudo vm poweroff -f homeassistant

Please note that this is not a graceful shutdown; it's equivalent to pulling the power.

Unfortunately, it does not look like Home Assistant handles ACPI shutdown (sudo vm stop homeassistant) correctly at the moment.

]]>
Andrey Pevnev https://www.linkedin.com/in/andrey-pevnev-2213a033/
<![CDATA[Add Fast Storage to Your eisy!]]> https://developer.isy.io/blog/NVMe 2024-01-11T00:00:00.000Z eisy



You may wonder why you would need more storage on your eisy. Well, you really don't, unless you are a geek. And since you are in the Geeks' Corner, here are some of the things we use it for:

  1. Very fast network file server
  2. Media files especially now that you can use our AudioPlayer plugin to play music through the headphone jack on the back
  3. Plugin development, since read/write operations are much faster on SSD than on the internal storage

Get an NVMe M.2 SSD

You can get pretty much any M.2 NVMe SSD with the limitation that the capacity cannot be more than 1TB.

Install the SSD

eisy-nvme

  1. Press and hold the power button until the light turns red
  2. Unplug the power cord
  3. Remove the 4 screws at the bottom and gently remove the bottom cover
  4. Remove the screw that holds the SSD in place (top right, on a stand)
  5. Plug in the SSD and secure it with the screw from step 4
  6. Gently put the bottom cover back, ensuring that the label points to the back (where the connectors are)
  7. Screw in the 4 screws you took out in step 3
  8. Plug in the power cord

That's it!

Configure the OS

ssh to eisy

  1. On Mac, open the terminal app (search | terminal)
  2. On Windows, open the command prompt (search | cmd)
  3. Type:
  4. When prompted, type in the password. The default is admin. If you have not already changed the password, change it immediately.

Any sudo command may prompt you for the password. Please use the password from step 4.

The two most popular file systems for Unix are ZFS and UFS (mostly on BSD). UFS is much easier to manage, while ZFS is much more flexible. We are going to use ZFS. We will also provide two methods: one is simple and takes over the whole disk; the other allows you to split the disk into different partitions.

Simple - Takes over the whole disk

  1. Make sure you have the /etc/zfs directory, otherwise automount will not work after reboot
    sudo mkdir -p /etc/zfs
  2. Make sure the SSD is installed
    sudo nvmecontrol devlist

You should get something like this:

    nvme0: SPCC M.2 PCIe SSD
nvme0ns1 (122104MB) --> 128GB

If not, then the SSD is not installed properly.

  3. Create a ZFS pool
    sudo zpool create storage /dev/nvd0

Make sure it got created:

    sudo zpool list

You should get something like this:

    NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
storage 97.5G 106K 97.5G - - 0% 0% 1.00x ONLINE -
zudi 108G 27.3G 80.7G - - 26% 25% 1.00x ONLINE -

  4. Change permissions as you see fit. For this example, we use admin:admin as the owner:
    sudo chown admin:admin /storage
  5. Make sure that it can be automounted after reboot
    sudo zpool export storage 
sudo zpool import storage
  6. Copy some files and reboot

Multiple partitions

  1. Make sure you have the /etc/zfs directory, otherwise automount will not work after reboot
    sudo mkdir -p /etc/zfs
  2. Make sure the SSD is installed
    sudo nvmecontrol devlist

You should get something like this:

    nvme0: SPCC M.2 PCIe SSD
nvme0ns1 (122104MB) --> 128GB

If not, then the SSD is not installed properly.

  3. Create a partitioning scheme on the SSD
    sudo gpart create -s gpt nvd0
  4. Add partition(s)

You can add as many partitions as you like, as long as the total size is not greater than what you got from the nvmecontrol command (see above). In our case, we are just going to allocate about 100G for our storage and leave about 20G for the future:

    sudo gpart add -s 100000M -t freebsd-zfs -l storage_m nvd0

Just to make sure the partition was created:

    gpart show 

You should get something like this:

=>       40  241663920  mmcsd0  GPT  (115G)
40 1024 1 freebsd-boot (512K)
1064 131072 2 efi (64M)
132136 226852864 3 freebsd-zfs (108G)
226985000 8388608 4 freebsd-swap (4.0G)
235373608 6290352 - free - (3.0G)

=> 40 250069600 nvd0 GPT (119G)
40 204800000 1 freebsd-zfs (98G)
204800040 45269600 - free - (22G)

  5. Create a ZFS pool
    sudo zpool create storage /dev/gpt/storage_m

Make sure it got created:

    sudo zpool list

You should get something like this:

    NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
storage 97.5G 106K 97.5G - - 0% 0% 1.00x ONLINE -
zudi 108G 27.3G 80.7G - - 26% 25% 1.00x ONLINE -

  6. Change permissions as you see fit. For this example, we use admin:admin as the owner:
    sudo chown admin:admin /storage
  7. Make sure that it can be automounted after reboot
    sudo zpool export storage 
sudo zpool import storage
  8. Copy some files and reboot
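The sizes used in step 4 can be sanity-checked with simple shell arithmetic; the values below are taken from the nvmecontrol and gpart outputs above:

```shell
# sanity-check the split: device capacity vs. the partition we asked gpart for
capacity_mb=122104   # from: nvme0ns1 (122104MB)
storage_mb=100000    # from: gpart add -s 100000M
free_mb=$((capacity_mb - storage_mb))
free_gb=$(awk -v m="$free_mb" 'BEGIN { printf "%.1f", m / 1024 }')
echo "leaves ${free_gb} GB free"   # gpart show rounds this to 22G
```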

You are done!

]]>
Michel Kohanim https://www.linkedin.com/in/michelkohanim/
<![CDATA[Is Your Home Ready for Halloween?]]> https://developer.isy.io/blog/welcome 2023-10-18T00:00:00.000Z Halloween

Get a 13% discount to upgrade to eisy, and spookify your guests with Halloween Automation!

With the help of the Ring, Hue, LiFX, and Sonos plugins, you can turn your home into a haunted house! For example, you could turn on a fog machine as a guest arrives, turn on or change the color of lights (such as Hue/LiFX), or perhaps a strobe light. And whenever the guest rings the doorbell, you could play sound effects through Sonos.

With over 120 plugins available for eisy, the possibilities are endless!

Why should I upgrade?
  1. 994 is no longer supported.
  2. Support for newer INSTEON i3 products.
  3. Instant push notifications to UD Mobile from your programs.
  4. Geo Fencing lets eisy perform different tasks based on your location.
  5. Camera support on UD Mobile.
  6. Integration with over 120 other devices and services such as Ring, Ecobee, ELK, Tesla, Roomba, Wemo, Sonos and Weather Services.
  7. Minimal learning curve because you can still use the same familiar Admin Console interface. 
How easy is it to upgrade?

If you have a 994 with firmware 5.2.0+ and don't have any Z-Wave devices, it's as easy as:

  1. Backing up your 994.
  2. Disconnecting the PLM from your 994 and connecting it to a USB port on eisy using the Serial PLM Kit.
  3. With one click, migrate your ISY Portal License as well as Alexa and Google Home configurations to your new eisy.

If you do have Z-Wave devices, then due to the nature of Z-Wave routing algorithms, migration might not be as seamless. Here's the link to the complete migration instructions.

If your 994 has firmware older than 5.2.0, then you need to upgrade the firmware first, and we can help you with that as well.

What do I need?

1 x eisy

1 x Serial PLM Kit

If and only if you have Z-Wave devices that you want to migrate:

1 x ZMatter USB

What's the discount coupon?

The discount coupon is 6MWC93FU and it expires on 10/27/2023.

]]>
Michel Kohanim https://www.linkedin.com/in/michelkohanim/