This repository contains Ansible playbooks which maintain:
- https://ipfs.io and the deprecated gateway.ipfs.io
- The default IPFS bootstrap nodes
- Bots for irc://chat.freenode.net/#ipfs
- Monitoring
Solarnet currently consists of 12 hosts, most of them on DigitalOcean in data centers across the world. The storage nodes are hosted with various providers. See the hosts file.
All hosts are peered into the cjdns network Hyperboria, which is also the transport for the VPN between them. The VPN grants access to internal services like http://metrics.i.ipfs.io, and each IPFS node's API. Access control is merely an allowlist of cjdns IPv6 addresses.
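One useful property for maintaining that allowlist: cjdns allocates all of its addresses out of fc00::/8, so every valid cjdns IPv6 address starts with the byte `fc`. A small shell sketch (the helper name is made up here) for sanity-checking an address before allowlisting it:

```sh
# Hypothetical helper: cjdns addresses always fall in fc00::/8,
# i.e. the textual form always begins with "fc".
is_cjdns_addr() {
  case "$1" in
    fc*) echo yes ;;
    *)   echo no ;;
  esac
}

is_cjdns_addr "fc5d:baa5:61fc:6ffd:9554:67f0:e290:7535"   # yes
is_cjdns_addr "2001:db8::1"                               # no
```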
All hosts get provisioned with the following:
- nginx
- cjdns
- Docker
- SSH authorized_keys
- Prometheus node_exporter
Depending on their role, they also get provisioned with:
- IPFS
- SSL cert and key for ipfs.io
- Grafana and Prometheus
- Pinbot
See the hosts file for the role associations.
Requirements:
- virtualenv
- pip
```sh
# install ansible and other dependencies
$ make deps

# activate the virtualenv
$ . venv/bin/activate

# see if it works
(venv)$ which ansible
(venv)$ ansible all -a 'whoami'
```

IPFS and cjdns private keys, SSL certificates, and cjdns peering credentials
are tracked by Git in a secret repository, in encrypted form.
We need to decrypt them for usage, and encrypt them for committing changes.
You can base your own secret repository off secrets.yml.example.
```sh
# initialize and decrypt
$ git clone https://github.com/protocol/infrastructure-secrets.git secrets/
$ echo "the-key" > ../solarnet.key
$ ./secrets.sh -d

# make changes and encrypt
$ vim secrets_plaintext/secrets.yml
$ ./secrets.sh -e
$ cd secrets/
$ git add secrets.yml
$ git commit -m 'Add some password or so'
```

You can also pipe the key instead of writing it to a file:

```sh
$ echo "the-key" | ./secrets.sh -d
```

Note that `./secrets.sh -e` rewrites all files, regardless of whether they actually change. That's why we stage only `secrets.yml` in the example above.
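To see why that matters, here is a self-contained demo in a throwaway repository (file names are made up) of staging only the file that actually changed and discarding the spurious rewrites:

```sh
# Throwaway demo (hypothetical file names): re-encryption rewrites every
# file, so we stage only the one we actually edited.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email demo@example.net
git config user.name demo
printf 'v1\n' > secrets.yml
printf 'v1\n' > other.key
git add -A && git commit -qm 'initial'

# simulate re-encryption touching both files, though only secrets.yml changed
printf 'v2\n' > secrets.yml
printf 'rewritten\n' > other.key

git add secrets.yml        # stage only the intended change
git checkout -- other.key  # discard the spurious rewrite
git status --short         # only secrets.yml remains modified
```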
```sh
# to all hosts
(venv)$ ./full.sh all

# to one host
(venv)$ ./full.sh pluto

# only ipfs changes, to storage hosts
(venv)$ ansible-playbook -l storage ipfs.yml
```

To update the deployed IPFS or cjdns version:

```sh
(venv)$ make ipfs_ref
(venv)$ ansible-playbook ipfs.yml
(venv)$ git add roles/ipfs/vars/main.yml
(venv)$ git commit -m 'ipfs: update to latest master'

(venv)$ make cjdns_ref
(venv)$ ansible-playbook cjdns.yml
(venv)$ git add roles/cjdns/vars/main.yml
(venv)$ git commit -m 'cjdns: update to latest master'
```

There's disk space telemetry: http://metrics.i.ipfs.io/dashboard/db/meta
```sh
# runs on all gateway hosts in parallel
(venv)$ ansible gateway -a 'docker exec -i ipfs ipfs repo gc'

(venv)$ ansible all -a 'docker ps'

# two hosts in parallel
(venv)$ ansible all -f 2 -a 'docker restart ipfs'
```

To follow logs:

```sh
# ipfs
(venv)$ ssh [email protected] docker exec -i ipfs ipfs log tail

# ipfs.io nginx errors
(venv)$ ssh [email protected] tail -f /opt/nginx/logs/default.error.log
```

Changes to the nginx, IPFS, or cjdns configurations should trigger an nginx reload. If a playbook fails after writing an updated config file, but before the reload handler triggers the nginx reload, the reload will never happen, and we have to do it manually:

```sh
(venv)$ ansible all -a 'docker exec -i nginx sh -c "/etc/init.d/nginx configtest && /etc/init.d/nginx reload"'
```

If Ansible commands fail in strange ways, you're probably using the system version of Ansible, which is outdated (< 1.9). Make sure to follow the Ansible setup steps above and load the virtualenv; this loads a working version of Ansible.
We use Prometheus to scrape and store timeseries from IPFS and the hosts themselves. Grafana provides the dashboard UI.
Both are available at http://metrics.i.ipfs.io and http://metrics.i.ipfs.io/prometheus, respectively. There are two ways to access them:
- cjdns
- You need to be peered into the Hyperboria cjdns network in order to reach the address that this domain name points to.
- Your cjdns address needs to be allowlisted.
- SSH port-forwarding
  ```sh
  ssh -L 8080:metrics.i.ipfs.io:80 root@<any-solarnet-host>.i.ipfs.io
  ```
  Then browse to http://localhost:8080.
In addition to serving http://h.ipfs.io, we use cjdns for a very simple VPN based on an IP address allowlist.
```sh
$ git clone https://github.com/hyperboria/cjdns.git
$ cd cjdns/
$ ./do
$ ./cjdroute --genconf > cjdroute.conf
$ ./cjdroute < cjdroute.conf
$ killall cjdroute

# or, on osx
$ brew install cjdns

# or, on ubuntu, repo = precise|trusty|utopic|vivid
$ echo "deb http://ubuntu.repo.cjdns.ca/ <repo> main" > /etc/apt/sources.list.d/cjdns.list
$ apt-get update && apt-get install cjdns
```

This creates a tunX network interface, which grabs all fc00::/8 traffic and hands it to cjdns.
Scripts for various init systems are provided in contrib/.
Cjdns nodes peer automatically on local networks, but on WANs like the internet, peering requires credentials. You can peer with the Solarnet cjdns nodes by generating peering credentials (`./peering.sh gateway solarnet`) and adding them to `cjdroute.conf`. Note that peering with any other node in the Hyperboria network is sufficient, too.
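For reference, peering credentials end up in the `connectTo` section of `cjdroute.conf`. The values below are placeholders; the real address, password, and public key come from whoever you peer with:

```json
"connectTo": {
  "192.0.2.10:12345": {
    "password": "their-peering-password",
    "publicKey": "their-public-key.k"
  }
}
```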
You can check the status of your peerings:
```sh
$ watch tools/peerStats
```

Or check connectivity to Solarnet:

```sh
$ ping6 h.ipfs.io
```

Since IP packets in cjdns are authenticated and encrypted, we can use the HTTP client's IP address for authentication, instead of Basic Auth or another login mechanism.
Restricted HTTP services listen only on the cjdns IPv6 address, and allow access only to the following clients:
- localhost
- All `cjdns_identities` nodes in `secrets.yml`
- All `metrics_allowlist` nodes in `secrets.yml`
For access to Grafana and Prometheus, add your cjdns IPv6 address to `metrics_allowlist`.
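As an illustration of what such an allowlist can look like (a hedged sketch, not the actual config from the nginx role; all addresses are placeholders), nginx-style allow/deny rules keyed on cjdns addresses:

```nginx
# Sketch only: listen on the host's cjdns address and allow a fixed
# set of client addresses; everything else is rejected.
server {
    listen [fc00::1]:80;      # placeholder for the host's cjdns address
    server_name metrics.i.ipfs.io;

    allow ::1;                                        # localhost
    allow fc5d:baa5:61fc:6ffd:9554:67f0:e290:7535;    # allowlisted client (placeholder)
    deny  all;
}
```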