<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Articles by Charles Hill]]></title><description><![CDATA[My writing focuses on my areas of technical expertise which include programming, software engineering, security and privacy, hardware projects, bitcoin, and lightning network.]]></description><link>https://degreesofzero.com</link><generator>RSS for Node</generator><lastBuildDate>Thu, 09 Nov 2023 14:04:52 GMT</lastBuildDate><atom:link href="https://degreesofzero.com/feeds/articles.xml" rel="self" type="application/rss+xml"/><item><title><![CDATA[Fetch web page from TOR hidden service with nodejs]]></title><description><![CDATA[<html><head></head><body><p>Many software projects now use TOR hidden services to bypass local NAT firewalls or to make their self-hosted, at-home services available remotely without a static, public IP address. There are also privacy benefits for both service operators and users.</p>
<p>This article assumes that you already have the TOR hidden service properly configured and everything is running smoothly on the service side. So how do you connect to a TOR hidden service remotely using nodejs? It's actually pretty simple. We will use TOR's SOCKS proxy, a SOCKS proxy agent module, and nodejs' built-in http and https modules.</p>
<h2 id="install-tor">Install TOR</h2>
<p>First you will need to install TOR. On Debian/Ubuntu you can use the following command:</p>
<pre><code class="language-bash">sudo apt-get install tor
</code></pre>
<p>This should automatically set up a service that runs the TOR proxy on system boot.</p>
<p>For other systems have a look at <a href="https://2019.www.torproject.org/docs/installguide.html.en">the official website</a> for installation guides.</p>
<p>Verify that the TOR proxy is working:</p>
<pre><code class="language-bash">curl --silent \
    --socks5-hostname 127.0.0.1:9050 \
    https://check.torproject.org/ | grep -m 1 Congratulations | xargs
</code></pre>
<p>If your TOR proxy is working then you should see "Congratulations" printed to your terminal.</p>
<h2 id="nodejs-script-to-fetch-web-page-from-tor-hidden-service">Nodejs script to fetch web page from TOR hidden service</h2>
<p>Below is an example script that can fetch a web page served via a TOR hidden service:</p>
<pre><code class="language-js">const url = require('url');
const http = require('http');
const https = require('https');
// Recent versions of socks-proxy-agent export the agent class as a named
// export; older versions (v5 and below) exported the class directly.
const { SocksProxyAgent } = require('socks-proxy-agent');

// Use the SOCKS_PROXY env var if using a custom bind address or port for your TOR proxy:
const proxy = process.env.SOCKS_PROXY || 'socks5h://127.0.0.1:9050';
console.log('Using proxy server %j', proxy);
// The default HTTP endpoint here is DuckDuckGo's v3 onion address:
const endpoint = process.argv[2] || 'https://duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion';
console.log('Attempting to GET %j', endpoint);
// Prepare options for the http/s module by parsing the endpoint URL:
let options = url.parse(endpoint);
const agent = new SocksProxyAgent(proxy);
// Here we pass the socks proxy agent to the http/s module:
options.agent = agent;
// Depending on the endpoint's protocol, we use http or https module:
const httpOrHttps = options.protocol === 'https:' ? https : http;
// Make an HTTP GET request:
httpOrHttps.get(options, res =&gt; {
    // Print headers on response:
    console.log('Response received', res.headers);
    // Pipe response body to output stream:
    res.pipe(process.stdout);
}).on('error', err =&gt; {
    // Handle connection or proxy errors:
    console.error('Request failed:', err.message);
});
</code></pre>
<p>Install the <code>socks-proxy-agent</code> module via npm:</p>
<pre><code class="language-bash">npm install socks-proxy-agent
</code></pre>
<p>Create a new file:</p>
<pre><code class="language-bash">touch ./tor-http-fetch.js
</code></pre>
<p>Copy/paste the above script into the new file.</p>
<p>Run the script as follows:</p>
<pre><code class="language-bash">node ./tor-http-fetch.js
</code></pre>
<p>By default it will fetch DuckDuckGo's home page via its v3 onion address. You can use the script to fetch another page instead:</p>
<pre><code class="language-bash">node ./tor-http-fetch.js "http://your-tor-hidden-service.onion"
</code></pre>
<p>The script will work with both "http" and "https" URLs.</p>
<p>If you need to send custom headers or otherwise change the HTTP request to better suit your needs, have a look at the <a href="https://nodejs.org/api/http.html">nodejs http</a> module docs.</p>
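<p>As a hedged sketch, custom headers can be added to the parsed options object before the request is made. The <code>buildOptions</code> helper and the header values below are purely illustrative, not part of any library:</p>
<pre><code class="language-js">const url = require('url');

// Build request options with extra headers. The agent argument and header
// values here are placeholders - in real usage, pass the SocksProxyAgent
// instance from the script above.
function buildOptions(endpoint, agent, headers) {
    const options = url.parse(endpoint);
    options.agent = agent;
    options.headers = headers;
    return options;
}

const options = buildOptions('http://example.onion/', null, {
    'User-Agent': 'my-client/1.0'
});
console.log(options.headers['User-Agent']); // my-client/1.0
</code></pre>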
</body></html>]]></description><link>https://degreesofzero.com/article/fetch-web-page-from-tor-hidden-service-with-nodejs.html</link><guid isPermaLink="true">https://degreesofzero.com/article/fetch-web-page-from-tor-hidden-service-with-nodejs.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Tue, 21 Sep 2021 14:45:00 GMT</pubDate></item><item><title><![CDATA[Static Bitcoin Lightning Donation QR Codes]]></title><description><![CDATA[<html><head></head><body><p>Are you still using a static bitcoin address to receive donations? You may want to stop that. Anyone who scans your donation QR code can see exactly how much money you've received and the full transaction history of your address. These days there is a better way to receive donations. With the Lightning Network and the lnurl-pay protocol it is possible for your supporters to send you bitcoin privately, instantly, and with low fees.</p>
<br>
<div>
    <div class="x-smaller right">
        <div class="image">
            <img src="static-bitcoin-lightning-donation-qrcodes/images/street-art-example.jpg" width="644" height="859" alt="">
        </div>
    </div>
    <h2>Example Street Art</h2>
<p>In the example shown here, we can see a static QR code embedded in street art. This QR code contains a bitcoin address: "3Pbo...c4Grb". It is easy to look up the transaction history for this address using a block explorer.</p>
    <p>As of the time this article was written, the address has received over 100 transactions totaling about 0.17 BTC. About half of the total amount received is still sitting in the address waiting to be spent. It's also possible to follow the trail backwards to see where all these donations came from and forwards to where the funds are sent.</p>
    <p>If you care about your own financial privacy or that of your supporters, you should learn how to use the Lightning Network to greatly improve the privacy of your bitcoin payments.</p>
</div>

<div class="clear"></div>
<br>

<h2 id="static-donation-qr-codes-with-lightning-network">Static Donation QR Codes with Lightning Network</h2>
<p>The Lightning Network allows instant, low-fee, and private bitcoin payments. This is possible because the bitcoin transactions that are exchanged between counter-parties on the Lightning Network are not broadcast or stored on the blockchain. Only in cases of final settlement is it necessary to pay the miner fee for a bitcoin transaction to be included in a block. However, there are down-sides to the Lightning Network.</p>
<p>One problem of the Lightning Network is that it doesn't use bitcoin addresses for receiving payments. Instead, payments are requested using a different concept: invoices. These invoices are one-time use and are generated for an exact amount of bitcoin. So it's not possible to print an invoice as a QR code and use it indefinitely.</p>
<p>This is where the <a href="https://github.com/fiatjaf/lnurl-rfc/blob/luds/06.md">lnurl-pay</a> protocol comes in. It's a side-channel protocol to help facilitate a nicer end-user experience while using Lightning Network for everyday payments.</p>
<p>An lnurl-pay QR code contains a URL encoded as bech32 text. A wallet application that supports lnurl-pay will decode the bech32 text and make an HTTP request to that URL. The full lnurl-pay UX flow is as follows:</p>
<ol>
<li>User opens a mobile wallet app that supports lnurl-pay<ul>
<li>Find a list of wallet apps <a href="https://github.com/fiatjaf/awesome-lnurl#wallets">here</a></li>
</ul>
</li>
<li>User uses the app to scan the QR code</li>
<li>App decodes the QR code to get the URL</li>
<li>App makes an HTTP request to the URL</li>
<li>Web service replies with the lnurl-pay response data, which includes:<ul>
<li>Meta data about the lnurl-pay</li>
<li>Minimum and maximum payment amount in msats (millisatoshis)</li>
<li>Another URL to which the app will send a second HTTP request</li>
</ul>
</li>
<li>App shows the information above to the user</li>
<li>User chooses the amount to pay and confirms</li>
<li>App sends the second HTTP request with the amount to pay</li>
<li>Web service replies with a Lightning invoice</li>
<li>App pays the invoice</li>
</ol>
<p>It might look like a lot of steps, but to the user of the app it's only two steps:</p>
<ol>
<li>Scan the QR code with their wallet app</li>
<li>Choose payment amount and confirm</li>
</ol>
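<p>On the service side, step 5 boils down to returning a small JSON document. Here's a hedged sketch of what that first response might look like - the field names follow the lnurl-pay spec, while the callback URL and the amounts are placeholders:</p>
<pre><code class="language-js">// Sketch of the initial lnurl-pay response a web service might return.
// The callback URL, amounts, and metadata text are placeholders.
function lnurlPayResponse() {
    return {
        tag: 'payRequest', // identifies the lnurl-pay sub-protocol
        callback: 'https://example.com/lnurl-pay/callback', // URL for the second request
        minSendable: 1000, // 1 sat, expressed in msats
        maxSendable: 100000000, // 100k sats, expressed in msats
        metadata: JSON.stringify([['text/plain', 'Donation to example.com']])
    };
}
console.log(lnurlPayResponse().tag); // payRequest
</code></pre>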
<p>The following services provide custodial Lightning accounts that support lnurl-pay:</p>
<ul>
<li><a href="https://lnbits.com/">lnbits.com</a> - Provides custodial Lightning wallet accounts with many extensions like "LNURLp" which allows the creation of re-usable lnurl-pay links.</li>
<li><a href="https://coinos.io/">coinos.io</a> - Another custodial Lightning wallet provider</li>
</ul>
<h2 id="lightning-addresses-are-even-easier">Lightning Addresses Are Even Easier</h2>
<p>The new <a href="https://lightningaddress.com/">Lightning Address</a> protocol makes it even easier to receive Lightning payments. There are already several services that offer Lightning addresses as a service to their users:</p>
<ul>
<li><a href="https://coinos.io/">coinos</a> - A custodial Lightning service provider</li>
<li><a href="https://t.me/lntxbot">lntxbot</a> - A telegram bot that provides a custodial Lightning wallet to Telegram users</li>
<li><a href="https://t.me/LightningTipBot">LightningTipBot</a> - Another telegram bot for Lightning</li>
</ul>
<p>Once you've setup a Lightning wallet account with one of the above services, you can get your own Lightning address. Your address will follow the "you@service" pattern. So for lntxbot it would be "<a href="mailto:you@lntxbot.com">you@lntxbot.com</a>" or for coinos it would be "<a href="mailto:you@coinos.io">you@coinos.io</a>".</p>
<p>This style of address has a similar look and feel as an email address but instead of sending email, other users can send you bitcoin via the Lightning Network.</p>
<p>Under-the-hood, the Lightning Address protocol uses the lnurl-pay protocol. The real difference is in the aesthetics of the text that is shared. Compared to the long URLs that are typical of lnurl-pay, a Lightning Address is much simpler. If you'd like to read more about how this new protocol works you can find details in its <a href="https://github.com/andrerfneves/lightning-address">GitHub repository</a>.</p>
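<p>Under the Lightning Address protocol, the "you@service" text maps to a well-known lnurl-pay URL on the service's domain. A minimal sketch of that mapping:</p>
<pre><code class="language-js">// Map a Lightning Address ("name@domain") to its lnurl-pay endpoint.
function lightningAddressToUrl(address) {
    const [name, domain] = address.split('@');
    return 'https://' + domain + '/.well-known/lnurlp/' + name;
}
console.log(lightningAddressToUrl('you@coinos.io'));
// https://coinos.io/.well-known/lnurlp/you
</code></pre>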
<h2 id="how-to-generate-qr-codes">How to Generate QR Codes</h2>
<p>It's possible to generate QR codes from text in many ways. For example, you could do it using the <a href="https://github.com/lincolnloop/python-qrcode/">qr</a> command-line utility:</p>
<pre><code class="language-bash">echo -n "hello" | qr
</code></pre>
<p>This will print a QR code in your terminal.</p>
<p>You can also save the output of the <code>qr</code> command to a file:</p>
<pre><code class="language-bash">echo -n "hello" | qr &gt; hello.png
</code></pre>
<p>If you prefer to use a web-based QR code generator, you can try the following:</p>
<ul>
<li><a href="https://www.the-qrcode-generator.com/">www.the-qrcode-generator.com</a></li>
</ul>
</body></html>]]></description><link>https://degreesofzero.com/article/static-bitcoin-lightning-donation-qrcodes.html</link><guid isPermaLink="true">https://degreesofzero.com/article/static-bitcoin-lightning-donation-qrcodes.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Tue, 21 Sep 2021 11:30:00 GMT</pubDate></item><item><title><![CDATA[Docker and firewalls: Are your services protected?]]></title><description><![CDATA[<html><head></head><body><p>Are you running a firewall like ufw with docker? You might be surprised to learn that your firewall is probably not doing anything to block unwanted internet traffic from reaching your docker services. Docker modifies iptables rules to completely bypass or ignore the rules set by ufw. In this article, I will explain how to check if the services running on your server are exposed and how to protect them.</p>
<h2 id="check-for-exposed-docker-services">Check for exposed docker services</h2>
<p>I usually begin articles like this one with some history or back-story to provide context. But in this case, let's dive right into how to check whether your services are exposed remotely.</p>
<p>In this section we will use <code>netstat</code> and <code>nmap</code> to list local processes that are listening for TCP connections and to scan ports. To install them:</p>
<pre><code class="language-bash">sudo apt-get install net-tools nmap
</code></pre>
<p>Use netstat to print a list of processes that are actively listening for TCP connections:</p>
<pre><code class="language-bash">sudo netstat -tlpn
</code></pre>
<p>Example results:</p>
<pre><code>Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:8332          0.0.0.0:*               LISTEN      17021/docker-proxy  
tcp        0      0 127.0.0.1:8333          0.0.0.0:*               LISTEN      17146/docker-proxy  
tcp        0      0 0.0.0.0:5432            0.0.0.0:*               LISTEN      17330/docker-proxy  
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      651/sshd            
</code></pre>
<p>From the above results, we can see that we have 4 services listening for TCP connections. "Local Address" refers to the host (IP address and port number) on which the service is listening. For example, requests to "127.0.0.1:8332" will be handled by that service.</p>
<ul>
<li>"127.0.0.1" is the loopback address. Services bound to the loopback address are not accessible remotely.</li>
<li>"0.0.0.0" means all interfaces. Services bound to this address are accessible remotely unless a firewall is blocking those requests.</li>
</ul>
<p>Let's check this assumption by using nmap to scan for open ports:</p>
<pre><code class="language-bash">nmap -p 0-65535 0.0.0.0
</code></pre>
<p>Example results:</p>
<pre><code>Starting Nmap 7.60 ( https://nmap.org ) at 2021-08-16 16:00 UTC
Nmap scan report for 0.0.0.0
Host is up (0.010s latency).
Not shown: 65532 closed ports
PORT      STATE SERVICE
22/tcp    open  ssh
8332/tcp  open  unknown
8333/tcp  open  bitcoin
5432/tcp  open  postgresql

Nmap done: 1 IP address (1 host up) scanned in 3.15 seconds
</code></pre>
<p>Here we see that all 4 service ports are open on any interface. But this doesn't tell us what we really want to know - are these ports exposed remotely?</p>
<p>To answer that, we need the system's LAN IP address. You can get this by using ifconfig:</p>
<pre><code class="language-bash">ifconfig | grep -Po "inet \K192\.168\.[^ ]+"
</code></pre>
<p>If your system is a VPS, running in a cloud, then its LAN IP address might begin with "10." instead of "192.168.". Check the full output of <code>ifconfig</code> to view all of your system's networking interfaces.</p>
<p>Now let's repeat the scan with the LAN IP address:</p>
<pre><code class="language-bash">nmap -p 0-65535 192.168.XXX.XXX
</code></pre>
<p>Example results:</p>
<pre><code>Starting Nmap 7.60 ( https://nmap.org ) at 2021-08-16 16:00 UTC
Nmap scan report for 192.168.XXX.XXX
Host is up (0.010s latency).
Not shown: 65534 closed ports
PORT      STATE SERVICE
22/tcp    open  ssh
5432/tcp  open  postgresql

Nmap done: 1 IP address (1 host up) scanned in 3.31 seconds
</code></pre>
<p>From the above we can see that the system has two ports open for remote TCP traffic. The first is port 22 which is used for SSH access. If we need to access the machine remotely via SSH, then this port should stay open.</p>
<p>The second port is for PostgreSQL, which very likely should not be exposed remotely. In this example server, we're running PostgreSQL in a docker container.</p>
<p>So what gives? Why is docker exposing this service remotely? The answer is because you told it to. Now let's fix it.</p>
<h3 id="the-fix-dont-expose-docker-services-remotely">The Fix: Don't expose docker services remotely</h3>
<p>Sounds simple, right?</p>
<p>Most users of docker don't realize that they are exposing their services remotely when they publish ports. For example, this command creates and runs a docker container:</p>
<pre><code class="language-bash">docker run -p 3000:3000 &lt;image&gt;
</code></pre>
<p>The <code>-p</code> argument tells docker to "publish" port 3000 - i.e. create a listener on the host and forward requests on port 3000 to the new container. But this is insecure because the default host address that docker binds to is "0.0.0.0"!</p>
<p>These kinds of examples are all over the internet in tutorials, how-to articles, GitHub issues, stackoverflow answers, and more. Users have been trained to use docker in an insecure way.</p>
<p>The fix is actually quite simple. When publishing ports, tell docker to bind to "127.0.0.1" instead:</p>
<pre><code class="language-bash">docker run -p 127.0.0.1:3000:3000 &lt;image&gt;
</code></pre>
<p>Now the service will not be exposed remotely.</p>
<p>You can find more details about using the <code>-p, --publish</code> arguments in the <a href="https://docs.docker.com/engine/reference/commandline/run/#publish-or-expose-port--p---expose">official documentation</a>.</p>
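<p>The same applies if you use docker-compose. Assuming a typical <code>docker-compose.yml</code>, a published port can be pinned to the loopback address like this (the image name is a placeholder):</p>
<pre><code class="language-yaml"># Bind the published port to the loopback interface so that it
# is not reachable remotely. "your-image" is a placeholder.
services:
  app:
    image: your-image
    ports:
      - "127.0.0.1:3000:3000"
</code></pre>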
<h2 id="the-docker--ufw-problem-unintuitive-defaults">The docker + ufw problem: Unintuitive defaults</h2>
<p>Based on the many articles, bugs, and issues about this common problem, it's safe to say that docker's default behavior is far from intuitive. One could even call the default behavior dangerous.</p>
<p>From <a href="https://github.com/moby/moby/issues/22054">an old issue</a> which remains un-fixed as of today:</p>
<blockquote>
<p><strong>Docker Network bypasses Firewall, no option to disable</strong></p>
<p>Steps to reproduce the issue:</p>
<ol>
<li>Setup the system with a locked down firewall</li>
<li>Create a set of docker containers with exposed ports</li>
<li>Check the firewall; docker will by use "anywhere" as the source, thereby all containers are exposed to the public.</li>
</ol>
</blockquote>
<p>And the problem has recently attracted the attention of <a href="https://news.ycombinator.com/item?id=27613217">hackernews</a>:</p>
<blockquote>
<p><strong>Hacker deleted all of NewsBlur’s Mongo data and is now holding the data hostage</strong></p>
<p>NewsBlur's founder here. I'll attempt to explain what's happening.</p>
<p>...</p>
<p>It's been a great year of maintenance and I've enjoyed the fruits of Ansible + Docker for NewsBlur's 5 database servers (PostgreSQL, MongoDB, Redis, Elasticsearch, and soon ML models).</p>
<p>...</p>
<p>Turns out the ufw firewall I enabled and diligently kept on a strict allowlist with only my internal servers didn't work on a new server because of Docker. When I containerized MongoDB, Docker helpfully inserted an allow rule into iptables, opening up MongoDB to the world.</p>
</blockquote>
<p>So what's going on here? Why is ufw ineffective at blocking traffic to services run inside docker containers?</p>
<p>Docker inserts its own iptables rules, which are applied before ufw's rules ever come into play. So the ufw rules that you think are protecting your docker services are not actually doing that.</p>
<p>If you're curious, you can print your system's iptables rules with the following command:</p>
<pre><code class="language-bash">sudo iptables -S
</code></pre>
<p>You will notice both ufw and docker have inserted their own rules.</p>
<p>I don't claim to understand the deep, dark magic of iptables. So I won't even begin to try to explain it here.</p>
<h3 id="what-about---iptablesfalse">What about --iptables=false?</h3>
<p>The most popular solution to the docker + ufw problem is to configure the docker daemon with <code>--iptables=false</code>. This is a bad idea because it makes docker unusable by blocking out-bound traffic as well as any networking between containers. So if you want docker to function properly, you will need to create and manage iptables rules manually. That doesn't sound like a long-term viable solution.</p>
<p>If you're really interested, you can have a look at the proposed solution that can be found in <a href="https://stackoverflow.com/a/51741599">this stackoverflow answer</a>. It looks like a lot of effort for not a lot of benefit. The much simpler solution is to just not expose your services. Bind to "127.0.0.1" when publishing your service ports.</p>
<h2 id="are-non-docker-services-exposed-as-well">Are non-docker services exposed as well?</h2>
<p>The big question that might be on your mind: are the services running outside of docker exposed too? Luckily, the answer appears to be no. You can verify this yourself. Create a TCP listener on port 12345 using netcat:</p>
<pre><code class="language-bash">nc -l -k -p 12345
</code></pre>
<p>Leave this running and open a new, separate terminal window. If your ufw is enabled, then port 12345 should be blocked by default. Use nmap to perform a port scan on the individual port:</p>
<pre><code class="language-bash">nmap -p 12345 192.168.XXX.XXX
</code></pre>
<p>Results:</p>
<pre><code>Starting Nmap 7.70 ( https://nmap.org ) at 2021-08-16 16:00 UTC
Nmap scan report for 192.168.XXX.XXX
Host is up (0.00031s latency).

PORT      STATE SERVICE
12345/tcp open  netbus

Nmap done: 1 IP address (1 host up) scanned in 0.21 seconds
</code></pre>
<p>Whoops! Looks like ufw didn't block the port scan. Maybe it's because we're running the port scan locally? Let's try remotely. Run the following command from a different computer that's connected to the same LAN (router/wifi):</p>
<pre><code class="language-bash">nmap -p 12345 192.168.XXX.XXX
</code></pre>
<p>Results:</p>
<pre><code>Starting Nmap 7.60 ( https://nmap.org ) at 2021-08-16 16:00 UTC
Nmap scan report for 192.168.XXX.XXX
Host is up.

PORT      STATE    SERVICE
12345/tcp filtered netbus

Nmap done: 1 IP address (1 host up) scanned in 2.01 seconds
</code></pre>
<p>Phew! Looks like ufw is doing its job - at least for non-dockerized services.</p>
<h2 id="defense-in-depth">Defense in-depth</h2>
<p>Keep using ufw to protect your systems. But you shouldn't be relying on a single line of defense to protect your services. If ufw was the only thing standing between your services and the public internet, then that was a mistake.</p>
<p>Use the networking tools above to check for exposed services on your systems.</p>
<p>If your system is a VPS at a cloud provider, then you should look at what firewall options they have available. In the case of <strong>DigitalOcean</strong>, it's possible to configure a firewall that can protect your VPS ("droplet") from unwanted traffic. You can find this in <strong>Network</strong> &gt; <strong>Firewalls</strong>. Create a new firewall, white-list the ports that you want, and then add your droplets. It's that simple.</p>
<p>If your cloud provider doesn't provide a network-level firewall, you could use <strong>CloudFlare</strong>'s network firewall service. This will have negative privacy implications due to routing all traffic thru CloudFlare, but it could be a reasonable trade-off for your case.</p>
<p>If the system is on physical hardware to which you have physical access, then you might consider configuring a firewall on the router thru which your system connects to the internet. If your router doesn't allow you to configure such a firewall, then it might be time to invest in better hardware.</p>
<p>That's it for this one. Good luck and keep those docker services safe!</p>
</body></html>]]></description><link>https://degreesofzero.com/article/docker-and-firewalls.html</link><guid isPermaLink="true">https://degreesofzero.com/article/docker-and-firewalls.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Mon, 16 Aug 2021 15:00:00 GMT</pubDate></item><item><title><![CDATA[How to rebalance Lightning Network channels]]></title><description><![CDATA[<html><head></head><body><p>So you're operating your own Lightning Network node and you've opened a few channels - nice work! But opening channels is just the first step. To make your node more useful for yourself as well as to the rest of the network - i.e. to route payments - it's necessary to balance your channels. In this article, you will learn how to use the lncli command-line tool to manually rebalance your lnd node's channels.</p>
<h2 id="why-balanced-channels-are-better">Why Balanced Channels Are Better</h2>
<p>Un-balanced channels are a problem because only the counter-parties in a channel know its internal state (local vs. remote balances). So when a node constructs a route thru which to send a payment, it is hoping that each channel along the route will have sufficient local balance to route the payment onward. If all of your node's channels are equally balanced, then there's a better chance that another node will succeed in routing a payment thru your node. This is better for the health of the network, and it's also better for you, because you can collect routing fees.</p>
<p>Another reason to balance your channels is to give your node more options when sending its own payments. For example, if many of your channels have zero or very low local balance, your node won't be able to use those channels to send payments.</p>
<h2 id="manual-channel-rebalancing">Manual Channel Rebalancing</h2>
<p>So the first thing to do is to figure out which of your node's channels are in need of rebalancing. Use the following command to get a list of your node's active channels:</p>
<pre><code class="language-bash">lncli listchannels --active_only
</code></pre>
<p>Example results:</p>
<pre><code class="language-json">{
    "channels": [
        {
            "active": true,
            "remote_pubkey": "REMOTE_NODE_PUBKEY_01",
            "channel_point": "XXX:0",
            "chan_id": "SHORT_CHANNEL_ID_01",
            "capacity": "500000",
            "local_balance": "499530",
            "remote_balance": "0",
            "commit_fee": "729",
            "commit_weight": "724",
            "fee_per_kw": "1006",
            "unsettled_balance": "0",
            "total_satoshis_sent": "0",
            "total_satoshis_received": "0",
            "num_updates": "0",
            "pending_htlcs": [
            ],
            "csv_delay": 144,
            "private": false,
            "initiator": true,
            "chan_status_flags": "ChanStatusDefault",
            "local_chan_reserve_sat": "5000",
            "remote_chan_reserve_sat": "5000",
            "static_remote_key": true,
            "commitment_type": "STATIC_REMOTE_KEY",
            "lifetime": "7",
            "uptime": "7",
            "close_address": "",
            "push_amount_sat": "0",
            "thaw_height": 0,
            "local_constraints": {
                "csv_delay": 144,
                "chan_reserve_sat": "5000",
                "dust_limit_sat": "573",
                "max_pending_amt_msat": "495000000",
                "min_htlc_msat": "1",
                "max_accepted_htlcs": 483
            },
            "remote_constraints": {
                "csv_delay": 144,
                "chan_reserve_sat": "5000",
                "dust_limit_sat": "573",
                "max_pending_amt_msat": "495000000",
                "min_htlc_msat": "1",
                "max_accepted_htlcs": 483
            }
        },
        {
            "active": true,
            "remote_pubkey": "REMOTE_NODE_PUBKEY_02",
            "channel_point": "XXX:0",
            "chan_id": "SHORT_CHANNEL_ID_02",
            "capacity": "500000",
            "local_balance": "0",
            "remote_balance": "499302",
            "commit_fee": "728",
            "commit_weight": "724",
            "fee_per_kw": "1006",
            "unsettled_balance": "0",
            "total_satoshis_sent": "0",
            "total_satoshis_received": "0",
            "num_updates": "0",
            "pending_htlcs": [
            ],
            "csv_delay": 144,
            "private": false,
            "initiator": false,
            "chan_status_flags": "ChanStatusDefault",
            "local_chan_reserve_sat": "5000",
            "remote_chan_reserve_sat": "5000",
            "static_remote_key": true,
            "commitment_type": "STATIC_REMOTE_KEY",
            "lifetime": "7",
            "uptime": "7",
            "close_address": "",
            "push_amount_sat": "0",
            "thaw_height": 0,
            "local_constraints": {
                "csv_delay": 144,
                "chan_reserve_sat": "5000",
                "dust_limit_sat": "573",
                "max_pending_amt_msat": "495000000",
                "min_htlc_msat": "1",
                "max_accepted_htlcs": 483
            },
            "remote_constraints": {
                "csv_delay": 144,
                "chan_reserve_sat": "5000",
                "dust_limit_sat": "573",
                "max_pending_amt_msat": "495000000",
                "min_htlc_msat": "1000",
                "max_accepted_htlcs": 483
            }
        }
    ]
}
</code></pre>
<p>In the above example output, we can see that we have two active channels. Both have a channel capacity of 500000 sats. Each of these channels has its balance completely on one side or the other. With such an imbalance, it's unlikely that either channel will be used to route payments for other nodes.</p>
<h3 id="not-enough-remote-balance">Not Enough Remote Balance?</h3>
<div class="wrap">
    <div class="right">
        <div class="image">
            <img src="how-to-rebalance-lightning-network-channels/images/ln-plus-swaps-triangle-small.jpg" alt="">
            <p class="caption">Triangular channel swaps can help you to use your capital more effectively</p>
        </div>
    </div>
    <p>If all of your channels have only a local balance and no remote balance, then you will not be able to rebalance them. Before you can continue with the rest of this guide, you will need to first find a counter-party who is willing to open a channel to your node.</p>
    <h4>Channel Swapping and Liquidity Triangles</h4>
<p>I recommend using one of the following collaborative channel swapping services or groups:</p>
    <ul>
        <li><a href="https://lightningnetwork.plus/swaps">lightningnetwork.plus</a></li>
        <li><a href="https://old.reddit.com/r/TheLightningNetwork/comments/n7s4px/lightning_network_triangle_megathread_build_the/">r/thelightningnetwork channel thread</a></li>
        <li><a href="https://old.reddit.com/r/lightningnetwork/comments/ll4tib/channel_thread/">r/lightningnetwork channel thread</a></li>
        <li><a href="https://t.me/theRingsOfFire">t.me/theRingsOfFire</a></li>
        <li><a href="https://t.me/Plebnet">t.me/Plebnet</a></li>
    </ul>
</div>


<h3 id="manual-rebalance-via-self-payment">Manual rebalance via self payment</h3>
<p>To balance the example channels above, you will pick two of your channels that need to be re-balanced. The first channel should have excess <b>local</b> balance. The second channel should have excess <b>remote</b> balance. You will use these two channels to pay yourself thru what is sometimes referred to as a "channel cycle".</p>
<p>Generate a new payment request so that you can pay yourself the amount that you wish to rebalance.</p>
<pre><code class="language-bash">lncli addinvoice 250000
</code></pre>
<p>The invoice amount here is 250000 sats because that's half the channel capacity in the example channels above. Change this amount to whatever makes sense in your case.</p>
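<p>If you'd rather compute the amount from the channel balances, here's a simple sketch, assuming the goal is a roughly 50/50 split in the out-bound channel:</p>
<pre><code class="language-js">// Amount (in sats) to push out of a channel to bring it closer to 50/50.
// A channel that is already balanced (or weighted remotely) yields 0.
function rebalanceAmount(localBalance, remoteBalance) {
    return Math.max(0, Math.floor((localBalance - remoteBalance) / 2));
}
console.log(rebalanceAmount(499530, 0)); // 249765
</code></pre>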
<p>Use the following command to pay the invoice:</p>
<pre><code>lncli payinvoice \
    --allow_self_payment \
    --fee_limit 30 \
    --outgoing_chan_id SHORT_CHANNEL_ID_01 \
    --last_hop REMOTE_NODE_PUBKEY_02 \
    SELF_PAYREQ
</code></pre>
<ul>
<li><code>--allow_self_payment</code> - This flag is required to pay your own invoice.</li>
<li><code>--fee_limit</code> - The maximum fee to pay (sats).</li>
<li><code>--outgoing_chan_id</code> - The short channel id ("chan_id") of the out-bound channel. This is the channel that has excess local balance.</li>
<li><code>--last_hop</code> - The public key of the remote node for the in-bound channel. This is the channel that has excess remote balance.</li>
</ul>
<p>Replace <code>SHORT_CHANNEL_ID_01</code> with the "chan_id" of the out-bound channel.</p>
<p>Replace <code>REMOTE_NODE_PUBKEY_02</code> with the "remote_pubkey" of the in-bound channel.</p>
<p>Replace <code>SELF_PAYREQ</code> with the invoice that you generated in the previous command.</p>
<p>If you are unable to route the payment, try increasing the fee limit. Alternatively, you can try to generate and pay an invoice for a smaller amount.</p>
<p>If the payment was successful, then congratulations! You've rebalanced your first channels.</p>
<p>Use the command from earlier to inspect the updated balances of your active channels:</p>
<pre><code class="language-bash">lncli listchannels --active_only
</code></pre>
<p>You should see that both of the channels are now balanced.</p>
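<p>To spot channels that need rebalancing at a glance, you can filter the JSON output of <code>lncli listchannels</code>. A minimal sketch using <code>jq</code> and a trimmed, hypothetical sample of the command's output:</p>

```shell
# Quick imbalance check over "lncli listchannels" JSON using jq.
# The JSON below is a trimmed, hypothetical sample of the real output;
# on a live node you would pipe: lncli listchannels --active_only | jq ...
channels='{"channels":[
  {"chan_id":"111","capacity":"500000","local_balance":"450000","remote_balance":"50000"},
  {"chan_id":"222","capacity":"500000","local_balance":"250000","remote_balance":"250000"}
]}'
# Print chan_ids whose local share of capacity is outside the 20-80% band:
unbalanced=$(echo "$channels" | jq -r '.channels[]
  | ((.local_balance|tonumber) / (.capacity|tonumber)) as $r
  | select($r < 0.2 or $r > 0.8)
  | .chan_id')
echo "$unbalanced"
```

<p>Channel 111 holds 90% of its capacity locally, so it is flagged; channel 222 is already balanced and is not.</p>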
<h2 id="rebalancing-with-other-tools">Rebalancing with other tools</h2>
<p>Here's a list of other tools that you could use to help you re-balance your channels:</p>
<ul>
<li><a href="https://github.com/Ride-The-Lightning/RTL">RTL (Ride the Lightning)</a> - A web interface to help manage a Lightning Network node</li>
<li><a href="https://github.com/alexbosworth/balanceofsatoshis">balanceofsatoshis ("bos")</a> - CLI tool that helps with liquidity and channel management</li>
<li><a href="https://github.com/C-Otto/rebalance-lnd">rebalance-lnd</a> - A script to help rebalance lnd node channels</li>
</ul>
</body></html>]]></description><link>https://degreesofzero.com/article/how-to-rebalance-lightning-network-channels.html</link><guid isPermaLink="true">https://degreesofzero.com/article/how-to-rebalance-lightning-network-channels.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Thu, 12 Aug 2021 15:00:00 GMT</pubDate></item><item><title><![CDATA[Lightning Node Setup, Backup, and Recovery]]></title><description><![CDATA[<html><head></head><body><p>So you want to set up a Lightning Network node and you want to do it somewhat safely. More than likely the node software that you've chosen is <a href="https://github.com/lightningnetwork/lnd">lnd</a> by Lightning Labs. It's a reasonable choice. It provides many tools and interfaces, including a CLI, JSON-RPC, REST, several backup and restore options, watchtowers, and more.</p>
<p>There are other node implementations for the Lightning Network:</p>
<ul>
<li><a href="https://github.com/ACINQ/eclair">Eclair</a></li>
<li><a href="https://github.com/ElementsProject/lightning">C-Lightning</a></li>
<li><a href="https://github.com/rust-bitcoin/rust-lightning">Rust-Lightning</a></li>
<li><a href="https://github.com/spesmilo/electrum/">Electrum</a> - Yes, it supports Lightning Network!</li>
</ul>
<p>You can investigate those other implementations to decide which one is best for your use-case. The practical guidance in this article will assume you're using lnd.</p>
<h2 id="the-risks-are-real">The Risks Are Real</h2>
<p>Lightning Network software is still very much a work-in-progress. And in case you had ideas about putting a significant amount of money into Lightning Network channels, you should read this comment thread on GitHub: <a href="https://github.com/lightningnetwork/lnd/issues/2468">"Still cannot force close inactive channels"</a>. The comments contain such gems as:</p>
<blockquote>
<p>Aah... looks like you restored from an outdated backup?
If you're unlucky, you force-closed the channels (which is a breach when your backup is outdated), and the remote took the funds. If you're lucky, the remote force closed and has data-loss protection enabled. If that's the case, you can try to connect to the peer and see what happens. Please post debug logs!</p>
</blockquote>
<p>And:</p>
<blockquote>
<p>.. you lost data. Therefore your node is now waiting on the remote peer to connect and give you data you need to recover the channel state. Depending on which version they're running (or even implementation), they may never give you this special data.</p>
</blockquote>
<p>And:</p>
<blockquote>
<p>I have the same problem with more than 3 BTC missing and "If you restored from an old backup (which you should never do!)" sounds like the most irrational thing...</p>
</blockquote>
<p>Ouch! But luckily for the last commenter, they were able to recover their funds in the end, after a long and difficult debugging process with the help of one of the lnd project contributors.</p>
<p>Still interested in being your own bank?</p>
<h2 id="setup">Setup</h2>
<p>The first step towards being your own sovereign individual (at least with regard to finance) is to run your own full bitcoin node. Since lnd can interface with <a href="https://github.com/bitcoin/bitcoin">bitcoind</a> and <a href="https://github.com/btcsuite/btcd">btcd</a>, I recommend using one of those as your full node software. Some notes regarding full node setup:</p>
<p>For both bitcoind and btcd:</p>
<ul>
<li>Set the <code>txindex</code> flag to build the transaction index</li>
<li>Configure RPC access for your lnd</li>
</ul>
<p>From lnd's installation documentation:</p>
<blockquote>
<p>We don't require --txindex when running with bitcoind or btcd but activating the txindex will generally make lnd run faster.</p>
</blockquote>
<p>Additional notes for bitcoind:</p>
<ul>
<li>Do <strong>not</strong> enable pruning</li>
</ul>
<p>It is also possible to run lnd with <a href="https://github.com/lightninglabs/neutrino">neutrino</a>:</p>
<blockquote>
<p>Neutrino is an experimental Bitcoin light client written in Go and designed with mobile Lightning Network clients in mind. It uses a new proposal for compact block filters to minimize bandwidth and storage use on the client side, while attempting to preserve privacy and minimize processor load on full nodes serving light clients.</p>
</blockquote>
<p>I've found neutrino to be pretty unreliable. But maybe the situation has improved since the last time I used it. Give it a try if you want to avoid setting up a full bitcoin node.</p>
<h3 id="node-software-installation">Node Software Installation</h3>
<p>There already exist detailed <a href="https://github.com/lightningnetwork/lnd/blob/master/docs/INSTALL.md">installation instructions</a> provided by the lnd project itself. So if you don't already have lnd installed, follow those instructions and then return here to continue.</p>
<h3 id="wallet-setup">Wallet Setup</h3>
<p>Every Lightning Network node needs a bitcoin wallet. The lnd node software provides an easy-to-follow setup process via its CLI:</p>
<pre><code class="language-bash">lncli create
</code></pre>
<p>Follow the instructions to generate a new seed, set a wallet password, and set your cipher seed passphrase.</p>
<p>Before it can be used, you need to unlock the wallet as follows:</p>
<pre><code class="language-bash">lncli unlock
</code></pre>
<p>Provide the wallet password when prompted. The node will begin to download block headers and other information needed to build its internal database.</p>
<h3 id="watchtower-configuration">Watchtower Configuration</h3>
<blockquote>
<p>Watchtowers act as a second line of defense in responding to malicious or accidental breach scenarios in the event that the client’s node is offline or unable to respond at the time of a breach, offering greater degree of safety to channel funds.</p>
</blockquote>
<div class="right small">
    <div class="image">
        <img src="lightning-network-node-setup-backup-and-recovery/images/watchtower.jpg" alt="">
    </div>
</div>

<p>At present there are only "altruistic" watchtowers, meaning node operators are running these watchtower nodes not for profit but to help the network. You might be able to find an altruistic watchtower to watch over your node's channels here:</p>
<ul>
<li><a href="https://github.com/openoms/lightning-node-management/issues/4">Watchtower list</a></li>
<li><a href="https://lightningboost.info/watchtower">What are Watchtowers</a></li>
</ul>
<p>You can configure a watchtower via the CLI with the <code>lncli wtclient add</code> command. And it is important to note that your lnd node can use multiple watchtowers in case one of them disappears or doesn't fulfill its function.</p>
<div class="clear"></div>

<p>Or you can configure a watchtower in your <code>lnd.conf</code>:</p>
<pre><code>wtclient.active=true
wtclient.private-tower-uris=PUBKEY@127.0.0.1:9911
</code></pre>
<h4 id="setup-your-own-watchtower">Setup Your Own Watchtower</h4>
<p>You don't need to use someone else's watchtower. You can run your own. But note that it is highly recommended to run a watchtower on its own physical machine with its own infrastructure (electricity, internet access, etc.) or in a separate data center from your primary lnd node. You could even run multiple watchtower nodes in separate data centers around the world.</p>
<p>A watchtower node is an lnd node that has been configured to act as a watchtower. You will need to go through the usual wallet setup process as if you were preparing a normal lnd node - e.g. <code>lncli create</code> and so on.</p>
<p>Here's an example <code>lnd.conf</code> for a watchtower node:</p>
<pre><code>bitcoin.active=1
bitcoin.mainnet=1
bitcoin.node=bitcoind
bitcoind.rpchost=localhost:8332
bitcoind.rpcuser=XXX
bitcoind.rpcpass=XXX
bitcoind.zmqpubrawblock=tcp://localhost:28332
bitcoind.zmqpubrawtx=tcp://localhost:28333
watchtower.active=true
watchtower.listen=localhost:9911
</code></pre>
<p>This sample configuration uses bitcoind. The important lines are the last two, which are prefixed with <code>watchtower</code>. These tell the lnd node to enable its watchtower and to listen at <code>localhost:9911</code>. This works for my setup because I use <a href="/article/remote-reverse-proxy-with-ssh-vps.html">port-forwarding via SSH reverse proxies</a> to connect my servers.</p>
<p>For more details:</p>
<ul>
<li><a href="https://github.com/lightningnetwork/lnd/blob/master/docs/watchtower.md">Private Altruist Watchtowers</a></li>
<li><a href="https://github.com/wbobeirne/watchtower-example">Watchtower Example Demo</a></li>
</ul>
<h3 id="important-tips">Important Tips</h3>
<p>After you've finished your node setup, there are a few things you can do to manage your new Lightning Network node and to help save yourself some pain later.</p>
<h4 id="use-a-mobile-app">Use a Mobile App</h4>
<p>These days there are several solid mobile apps that can help you manage your node:</p>
<ul>
<li><a href="https://play.google.com/store/apps/details?id=fr.acinq.eclair.wallet.mainnet2">Eclair Mobile</a> - Android only</li>
<li><a href="https://zaphq.io/">Zap</a> - Desktop, iOS, and Android</li>
<li><a href="https://github.com/ZeusLN/zeus">Zeus</a> - iOS and Android</li>
</ul>
<p>Using a mobile wallet to access your lnd node remotely will allow you to:</p>
<ul>
<li>Use bitcoin as a currency in the real-world</li>
<li>Check your remote/local channel balances</li>
<li>Monitor for offline channels</li>
</ul>
<p>Follow each project's configuration instructions to connect them to your lnd node. In general you need three things: your node's host (IP address plus port number), its TLS certificate, and a macaroon (with admin privileges). The TLS certificate allows your mobile app to establish encrypted communications with your node directly. The macaroon is an authorization token that carries a set of privileges; in this case you will need the admin macaroon.</p>
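<p>For example, many apps accept the macaroon as a single hex string. A small sketch of the encoding step, using a stand-in file so it runs anywhere (on your node, substitute the real path, by default <code>~/.lnd/data/chain/bitcoin/mainnet/admin.macaroon</code>):</p>

```shell
# Encode a macaroon file as one hex string, as some wallet apps expect.
# A stand-in file is created here so the example is self-contained;
# use your real admin.macaroon path on the node instead.
MACAROON=admin.macaroon.example
printf 'dummy-macaroon-bytes' > "$MACAROON"
hex=$(xxd -p "$MACAROON" | tr -d '\n')
echo "$hex"
```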
<h4 id="prevent-zombie-channels">Prevent Zombie Channels</h4>
<div class="right small">
    <div class="image">
        <img src="lightning-network-node-setup-backup-and-recovery/images/zombies.png" alt="">
    </div>
</div>

<p>This is not an official term, but "Zombie Channel" could be used to refer to an open channel where both nodes are offline. Since neither node involved in the channel is online anymore, the channel will stick around, holding the funds hostage inside.</p>
<p>This could happen in the event that you have an open channel with an offline peer and then your node also goes offline due to some kind of failure. This will make recovering those funds difficult. To prevent this scenario, you can force-close a channel if your peer has been offline for some time.</p>
<div class="clear"></div>


<h2 id="backup">Backup</h2>
<p>To protect your funds, you should ensure that you are backing up important files to help with a possible recovery process in the event of node failure. The most important file to backup is <code>channel.backup</code> - the so called "Static Channel Backups" file.</p>
<blockquote>
<p>After version v0.6-beta of lnd, the daemon now ships with a new feature called Static Channel Backups (SCBs). We call these static as they only need to be obtained once: when the channel is created. From there on, a backup is good until the channel is closed. The backup contains all the information we need to initiate the Data Loss Protection (DLP) feature in the protocol, which ultimately leads to us recovering the funds from the channel on-chain. This is a foolproof safe backup mechanism.</p>
</blockquote>
<p>On Linux systems, the default location of this file is <code>~/.lnd/data/chain/bitcoin/mainnet/channel.backup</code>. Have a look at <a href="https://gist.github.com/alexbosworth/2c5e185aedbdac45a03655b709e255a3">LND backup script for channel.backup using inotify</a> for an example how to backup your <code>channel.backup</code> file.</p>
<p>The <code>channel.backup</code> file is encrypted using your wallet's seed phrase, so it is safe to copy it to your cloud file storage service provider. If you are already using an S3-compatible cloud storage provider then you might find <a href="https://degreesofzero.com/article/automated-encrypted-remote-backups-using-open-source-tools.html">this article</a> useful for configuring your automated remote backup.</p>
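<p>As a rough sketch of what such a backup script does (paths and the function name here are hypothetical; the linked gist uses <code>inotifywait</code> from inotify-tools to trigger a copy on every change):</p>

```shell
# Sketch: copy channel.backup to a destination, keeping timestamped copies
# so an older backup never overwrites a newer one. Names are hypothetical.
backup_scb() {
    src=$1
    destdir=$2
    mkdir -p "$destdir"
    cp "$src" "$destdir/channel.backup.$(date +%Y%m%d%H%M%S)"
}

# Self-contained demo with a stand-in file:
printf 'scb-bytes' > channel.backup.example
backup_scb channel.backup.example ./scb-backups

# On a real node, pair it with inotifywait (inotify-tools), e.g.:
#   SCB=~/.lnd/data/chain/bitcoin/mainnet/channel.backup
#   while inotifywait -e close_write "$SCB"; do backup_scb "$SCB" /mnt/backup; done
```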
<h2 id="recovery">Recovery</h2>
<p>If you're here that means something has gone wrong. But, there is hope! You will very likely be able to recover most if not all of the bitcoin from your crashed lnd node. Stop for a moment. Breathe. Feeling ok? Good. Now is the time to go slow and make a plan before doing anything.</p>
<p>What should you <strong>not</strong> do? The absolute worst thing you can do is copy an old <code>channel.db</code> file into a new instance of lnd and run it. That will almost certainly result in permanent loss of funds.</p>
<p>Here is a nice flow chart that can help you figure out your next steps in the recovery process:</p>
<div class="center">
    <div class="image full-size">
        <img src="lightning-network-node-setup-backup-and-recovery/images/ln-rescue-flow.png" alt="">
    </div>
</div>

<div class="clear"></div>

<p>The above flow chart was taken from the very helpful <a href="https://github.com/guggero/chantools">chantools</a> project. It is an open-source set of tools developed by one of the contributors to lnd. The chantools program will (hopefully) help you recover your funds from an old copy of your lnd's <code>channel.db</code>. But before we get to that, let's first recover any on-chain funds and attempt a recovery using your static channel backup file.</p>
<h3 id="recovering-on-chain-funds">Recovering On-Chain Funds</h3>
<p>Before you do anything with an old <code>channel.db</code> file, let's follow steps 2 and 3 from the above flow chart. With a fresh lnd/lncli instance, let's create a new wallet using your old seed:</p>
<pre><code class="language-bash">lncli create
</code></pre>
<p>When prompted for your seed and cipher seed phrase, be sure to provide the same exact values that were used for the crashed lnd node which you are recovering. This will allow the wallet to check for and recover its existing on-chain balance. Wait for your lnd to finish syncing to the chain backend (bitcoind or btcd). You can check its progress as follows:</p>
<pre><code class="language-bash">lncli getinfo
</code></pre>
<h3 id="recovering-off-chain-funds">Recovering Off-Chain Funds</h3>
<p>Once your lnd has finished syncing, you can move on to the next step: Restoring from channel backup. Check the integrity of your static channel backup file (SCB):</p>
<pre><code class="language-bash">lncli verifychanbackup --multi_file=./channel.backup
</code></pre>
<p>Expected output if your backup is OK:</p>
<pre><code>{

}
</code></pre>
<p>Initiate the recovery process with the following command:</p>
<pre><code class="language-bash">lncli restorechanbackup --multi_file=./channel.backup
</code></pre>
<p>You should see a prompt similar to the following:</p>
<blockquote>
<p>WARNING: You are attempting to restore from a static channel backup (SCB) file.
This action will CLOSE all currently open channels, and you will pay on-chain fees.</p>
<p>Are you sure you want to recover funds from a static channel backup? (Enter y/n):</p>
</blockquote>
<p>Enter "y" to initiate the restore process.</p>
<p>This command can take a long time to execute. Do not stop it prematurely. For more information about these steps, see <a href="https://github.com/lightningnetwork/lnd/blob/master/docs/recovery.md#recovering-using-scbs">recovering using SCBs</a>.</p>
<p>And now... you wait. It may take a week or more to recover your funds from channel force-closures.</p>
<h3 id="further-recovery-steps">Further Recovery Steps</h3>
<p>After following all the previous recovery steps, if you still have open or pending channels, you will need to use chantools. Carefully follow the steps described in the project's <a href="https://github.com/guggero/chantools#channel-tools">readme</a> file to recover your remaining funds.</p>
</body></html>]]></description><link>https://degreesofzero.com/article/lightning-network-node-setup-backup-and-recovery.html</link><guid isPermaLink="true">https://degreesofzero.com/article/lightning-network-node-setup-backup-and-recovery.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Wed, 11 Nov 2020 13:30:00 GMT</pubDate></item><item><title><![CDATA[The Case for Make: The New Old Build Tool]]></title><description><![CDATA[<html><head></head><body><p>If you've never had to migrate a project from one build system to another, I envy you. How sweet it is to have not experienced the psychological torture that is unwinding years of hacks and work-arounds layered on top of one another during a legacy project's lifetime. But nothing lasts forever. You too will know this feeling eventually. Or maybe not. Let's talk about how you can avoid this fate by using a new (very old) build system called <a href="https://www.gnu.org/software/make/">Make</a>.</p>
<h2 id="the-case-against-modern-build-systems">The Case Against Modern Build Systems</h2>
<p>Modern build systems use an insane number of dependencies:</p>
<ul>
<li><a href="https://npm.anvaka.com/#/view/2d/gulp">gulp</a> - 296 nodes, 513 links</li>
<li><a href="https://npm.anvaka.com/#/view/2d/grunt">grunt</a> - 170 nodes, 277 links</li>
<li><a href="https://npm.anvaka.com/#/view/2d/webpack">webpack</a> - 82 nodes, 119 links</li>
</ul>
<div class="wrap">
  <div class="center x-smaller">
    <div class="images">
      <img width="1920" height="1080" src="the-case-for-make-the-new-old-build-tool/images/dep-tree-gulp.jpg" alt="" title="gulp's dependency graph">
      <img width="1920" height="1080" src="the-case-for-make-the-new-old-build-tool/images/dep-tree-grunt.jpg" alt="" title="grunt's dependency graph">
      <img width="1920" height="1080" src="the-case-for-make-the-new-old-build-tool/images/dep-tree-webpack.jpg" alt="" title="webpack's dependency graph">
    </div>
    <p class="caption"><b>gulp</b> (left), <b>grunt</b> (center), <b>webpack</b> (right)</p>
  </div>
</div>

<p>Of the build systems mentioned above, only one is still under active development. That one is <a href="https://webpack.js.org/">webpack</a>, which is used by the popular web framework <a href="https://reactjs.org/">React</a>. If you are unfamiliar with webpack, good for you. It is a gigantic piece of software that does too many things. From my experience, webpack can be nice to bootstrap a simple proof-of-concept project, but eventually you will hit a wall where you need to do some extremely hacky work-arounds to do something that it doesn't easily support.</p>
<p>Why am I talking about dependencies? Well...</p>
<blockquote>
<p>Earlier this week, many npm users suffered a disruption when a package that many projects depend on — directly or indirectly — was unpublished by its author, as part of a dispute over a package name. The event generated a lot of attention and raised many concerns, because of the scale of disruption, the circumstances that led to this dispute, and the actions npm, Inc. took in response.</p>
</blockquote>
<p>This was the <a href="https://blog.npmjs.org/post/141577284765/kik-left-pad-and-npm">"leftpad incident"</a>, as it has come to be known.</p>
<p>The package author mentioned in the above quote unpublished more than 250 packages from the npm registry in a very short time. This broke thousands of projects and caused a lot of headaches for maintainers and developers throughout the ecosystem.</p>
<p>This alone should be a strong reason to try to limit your project's exposure to huge dependency graphs.</p>
<p>But in case you are not yet convinced, here are a few more reasons:</p>
<ul>
<li>Less time wasted fixing the build process after upgrading dependencies.</li>
<li>Reduce the attack surface that could allow malicious/rogue dependencies to:<ul>
<li><a href="https://www.veracode.com/blog/research/abusing-npm-libraries-data-exfiltration">Exfiltrate sensitive data</a> such as keys or secrets via the file system or environment variables.</li>
<li>Utilize (abuse) system resources to mine cryptocurrencies.</li>
<li>Use the system's network capacity to spam, run proxy servers, or launch DoS attacks against other services.</li>
</ul>
</li>
</ul>
<h2 id="the-case-for-make">The Case For Make</h2>
<p>Make. Is. Everywhere. You very likely already have it installed on your system. Or if not, it will be available to install via your system's package repository.</p>
<p>Make is ideal for running builds or as a general purpose task runner. It allows you to easily incorporate bash commands and tools that already exist on your system. All of these tools have been around forever, are well tested, and they are stable.</p>
<h3 id="make-in-practice">Make In Practice</h3>
<p>Here is an example Makefile that includes comments that explain each section:</p>
<pre><code class="language-makefile">## Usage
#
#   $ make build        # compile files that need compiling
#   $ make clean        # remove build files
#   $ make clean build  # remove build files and recompile build files from scratch
#

## Variables
BUILD=build
ALL_CSS=$(BUILD)/css/all.css
SRC=src

# Targets
#
# The format goes:
#
#   target: list of dependencies
#     commands to build target (recipe lines must be indented with a literal tab)
#
# If something isn't re-compiling, double-check that the changed file is in
# the target's dependencies list.

# Phony targets - these are for when the target-side of a definition
# (such as "build" below) isn't a file but instead just a label. Declaring
# it as phony ensures that it always runs, even if a file by the same name
# exists.
.PHONY: build\
clean\
fonts\
images

build: fonts images $(ALL_CSS)

clean:
  # Delete build files:
  rm -rf $(BUILD)/*

# Define a list of CSS source files.
CSS_FILES=$(SRC)/css/fonts.css\
$(SRC)/css/reset.css\
$(SRC)/css/styles.css\
$(SRC)/css/responsive.css

$(ALL_CSS): $(CSS_FILES)
  mkdir -p $$(dirname $@)
  rm -f $(ALL_CSS)
  for file in $(CSS_FILES); do \
    echo "/* $$file */" &gt;&gt; $(ALL_CSS); \
    cat $$file &gt;&gt; $(ALL_CSS); \
    echo "" &gt;&gt; $(ALL_CSS); \
  done

fonts:
  # Copy fonts to build directory.
  mkdir -p $(BUILD)/fonts/OpenSans
  cp -r node_modules/open-sans-fontface/fonts/**/* $(BUILD)/fonts/OpenSans/

images:
  # Copy images to build directory.
  mkdir -p $(BUILD)/images/
  cp -r $(SRC)/images/* $(BUILD)/images/
</code></pre>
<p>A Makefile is essentially shell scripting plus syntax for declaring build targets. Make will only rebuild files whose inputs have been modified. So in this example, if you change one of your CSS source files, then the all.css build file will be recompiled.</p>
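<p>You can observe this incremental behavior with a toy Makefile in a scratch directory (the file names here are hypothetical):</p>

```shell
# Demonstrate Make's input-driven rebuilds in a scratch directory.
dir=$(mktemp -d)
cd "$dir"
mkdir src
echo 'body { margin: 0; }' > src/a.css
# Recipe lines in a Makefile must be indented with a literal tab:
printf 'all.css: src/a.css\n\tcat src/a.css > all.css\n' > Makefile
make                                      # first run: builds all.css
make -q all.css || echo "needs rebuild"   # -q exits 0: nothing to do
sleep 1
touch src/a.css                           # update the input's timestamp
make -q all.css || echo "needs rebuild"   # now the input is newer
make                                      # rebuilds all.css
```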
<p>The example above is quite simple. It only includes copying and concatenating files. You can add dependencies as you need them to perform minification of JavaScript files, syntax highlighting, templating, and more.</p>
<p>For more advanced build processes, it's a good idea to execute bash (or node.js) scripts from within the Makefile. This gives you the structure and functionality of Make with the flexibility of whichever scripting language you prefer.</p>
<p>During the last few years, I've migrated several projects to Make and it has turned out to be a great move for the long-term maintainability of those projects. You can have a look at some of these projects for more complex, real-world examples using Make:</p>
<ul>
<li><a href="https://github.com/samotari/paynoway">PayNoWay</a> - Bitcoin double-spending app for Android</li>
<li><a href="https://github.com/samotari/bleskomat-diy">Bleskomat</a> - Next generation Bitcoin ATM hardware and software project</li>
</ul>
<p>Choose Make as your next project's build system. Your future self will thank you!</p>
</body></html>]]></description><link>https://degreesofzero.com/article/the-case-for-make-the-new-old-build-tool.html</link><guid isPermaLink="true">https://degreesofzero.com/article/the-case-for-make-the-new-old-build-tool.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Tue, 10 Nov 2020 20:15:00 GMT</pubDate></item><item><title><![CDATA[Beyond Coffee: Bitcoin for Every Day Use with LNURL]]></title><description><![CDATA[<html><head></head><body><p>A lot of progress is being made to improve the user experience for those who would like to buy a cup of coffee with their bitcoin. For the longest time most new developments were happening in infrastructure projects such as nodes, exchanges, trading, hardware and software wallets. Now the <a href="https://lightning.network/">Lightning Network</a> has brought the focus of many developers back to the end-user, including a resurgence in mobile wallet development with several new Lightning-first wallets.</p>
<p>In the early days a high-level of knowledge was required to use the Lightning Network. But this is no longer the case. There are already some solid, newbie-friendly wallet apps available:</p>
<ul>
<li><a href="https://phoenix.acinq.co/">Phoenix</a> by ACINQ - easy, non-custodial</li>
<li><a href="https://www.walletofsatoshi.com/">WalletOfSatoshi</a> - very easy, but custodial!</li>
<li><a href="https://lightning-wallet.com/">BLW (Bitcoin Lightning Wallet)</a> - non-custodial, medium complexity</li>
<li><a href="https://breez.technology/">Breez</a> - non-custodial</li>
</ul>
<p>One of the latest developments that is helping to improve the usability of the Lightning Network is <a href="https://github.com/btcontract/lnurl-rfc">LNURL</a> - a side-channel communication protocol to smooth over some of the remaining UX problems. LNURL is a set of subprotocols, each with a specific UX flow designed to be simple and easy to implement.</p>
<h2 id="withdrawals">Withdrawals</h2>
<p>One pain point with the Lightning Network has been the need to generate a new, unique invoice for every payment that you would like to receive and then somehow give this invoice to the service that you are using so that it can be paid. One real-world example is a web-based paywall service that accumulates satoshis via Lightning Network on your behalf. Once in a while you might want to manually withdraw your satoshis to your own wallet. To do this you would need to first generate an invoice for the exact number of satoshis and then copy/paste that invoice into your computer's browser. But if your wallet is on your phone, this can be an annoying experience.</p>
<div class="wrap">
    <div class="right">
        <div class="image">
            <img src="beyond-coffee-bitcoin-for-every-day-use-with-lnurl/images/bleskomat-in-action.jpeg" alt="">
            <p class="caption"><a href="https://www.bleskomat.com/">Bleskomat: The next gen. Bitcoin Lightning ATM</a></p>
        </div>
    </div>
    <p>Enter the lnurl-withdraw subprotocol. With a service that supports it, the end user simply scans a single QR code and the service communicates with an LNURL HTTP server to facilitate the payment process via the Lightning Network. The user doesn't need to manually generate invoices or count satoshis - their wallet and the web service can do all of that in the background.</p>
    <h3>Smooth, New Real-World Applications</h3>
    <p>This improved user experience is a perfect fit for many applications, such as an offline Lightning Network ATM, simplified withdrawal processes for exchange accounts, withdrawing change from a PoS terminal, and more.</p>
</div>


<h2 id="static-payment-qr-codes">Static Payment QR Codes</h2>
<div class="wrap">
    <p>Another popular subprotocol is lnurl-pay. This one enables static QR codes that can be printed and re-used many times. A common example would be a printed sign that contains a QR code for donations. The lnurl-pay subprotocol allows the receiver to specify a range (min/max) for the accepted payment amounts. When the end-user (payer) scans the QR code, they are presented with a dialog that asks how much they want to pay. They choose the amount, confirm the payment in their app, and then their app communicates with an LNURL server to complete the payment via the Lightning Network.</p>
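    <p>For illustration, the LNURL server's first response in the lnurl-pay flow is a small JSON document. Here is a hypothetical example, with the wallet-side reading of the min/max bounds (expressed in millisatoshis) sketched with <code>jq</code>:</p>

```shell
# Hypothetical first-step lnurl-pay response (field names per the LNURL spec;
# minSendable/maxSendable are amounts in millisatoshis):
resp='{"callback":"https://example.com/lnurl-pay/cb",
       "minSendable":1000,"maxSendable":100000000,
       "metadata":"[[\"text/plain\",\"Donate to the artist!\"]]",
       "tag":"payRequest"}'
# The wallet uses these bounds to build the amount prompt:
min=$(echo "$resp" | jq -r .minSendable)
max=$(echo "$resp" | jq -r .maxSendable)
echo "prompt user for an amount between $min and $max msat"
```

    <p>The wallet then requests an invoice from the <code>callback</code> URL for the chosen amount and pays it over the Lightning Network.</p>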
    <h3>Street Artists Beware!</h3>
    <p>Those QR-encoded donation addresses are destroying your privacy. Anyone who scans the QR code can see your address and its complete transaction history. One huge advantage of lnurl-pay QR codes is privacy: the payer's app only sees a decoded URL and min/max parameters. They won't be able to see your transaction history or how much money you've received.</p>
    <div class="center smaller">
        <div class="images">
            <img src="beyond-coffee-bitcoin-for-every-day-use-with-lnurl/images/street-art-example-02.jpg" alt="">
            <img src="beyond-coffee-bitcoin-for-every-day-use-with-lnurl/images/street-art-example-01.jpg" alt="">
            <img src="beyond-coffee-bitcoin-for-every-day-use-with-lnurl/images/street-art-example-03.jpg" alt="">
        </div>
    </div>
</div>



<h2 id="authentication">Authentication</h2>
<p>Growing in popularity with bitcoin and Lightning Network developers, lnurl-auth allows service operators to provide a new method for authentication, authorization, and login.</p>
<blockquote>
<p>A special linkingKey can be used to login user to a service or authorise sensitive actions. This preferrably should be done without compromising user identity so plain LN node key can not be used here. Instead of asking for user credentials a service could display a "login" QR code which contains a specialized LNURL.</p>
</blockquote>
<p>Though part of the LNURL specification, lnurl-auth doesn't actually touch the Lightning Network. It uses your bitcoin wallet's existing seed to generate a unique signing key for each website with which you authenticate. The protocol can be used as a 2FA option or as a replacement for email/username + password login.</p>
<p><a href="beyond-coffee-bitcoin-for-every-day-use-with-lnurl/images/lnurl-auth-key-tree.png"></a></p><div class="center smaller"><div class="image"><a href="beyond-coffee-bitcoin-for-every-day-use-with-lnurl/images/lnurl-auth-key-tree.png"><img src="beyond-coffee-bitcoin-for-every-day-use-with-lnurl/images/lnurl-auth-key-tree.png" alt=""></a></div></div><p></p>
<div class="clear"></div>

<p>The protocol dictates that the user's app should deterministically derive a "hashing key" and use that to derive a unique linking key for each service with which the user attempts to authenticate. Privacy of the user is protected because the user's app provides a unique public key to each service. App developers are encouraged to follow the derivation scheme defined in the specification to allow for portability between apps.</p>
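<p>The idea behind the derivation can be sketched in a few lines of shell. This is not the exact derivation scheme from the specification - just an illustration, using HMAC-SHA256 via openssl, of how a single wallet-level secret can yield a distinct, unlinkable key per service domain. The <code>HASHING_KEY</code> value here is a stand-in for the key a real wallet derives from its seed:</p>
<pre><code class="language-bash">HASHING_KEY="stand-in-for-seed-derived-hashing-key"

derive_linking_key() {
    # HMAC-SHA256(hashing key, service domain) yields a deterministic,
    # per-domain key: same domain, same key; different domain, different key.
    printf '%s' "$1" | openssl dgst -sha256 -hmac "$HASHING_KEY" | awk '{print $NF}'
}

derive_linking_key "service-a.example"
derive_linking_key "service-b.example"
</code></pre>
<p>Running it repeatedly always prints the same key for "service-a.example", while each new domain gets its own key - exactly the property that keeps users unlinkable across services.</p>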
<h2 id="open-channel-request">Open Channel Request</h2>
<p>Probably the least popular of the subprotocols, lnurl-channel allows an end-user to request that a channel be opened to their node.</p>
<blockquote>
<p>Suppose user has a balance on a certain service which he wishes to turn into an incoming channel and service supports such functionality. This would require many parameters so the resulting QR may be overly dense and cause scanning issues. Additionally, the user has to make sure that a connection to target LN node is established before an incoming channel is requested.</p>
</blockquote>
<h2 id="resources-and-tools">Resources and Tools</h2>
<p>If you'd like to add support for LNURL to your own app or service, have a look at <a href="https://github.com/fiatjaf/awesome-lnurl">this comprehensive list</a> for tools, libraries, and other services that already support it.</p>
<p>During the lockdown earlier this year, I created <a href="https://lnurl-toolbox.degreesofzero.com">lnurl-toolbox</a> - a browser-based tool to test mobile and browser-based implementations of the LNURL protocol. If you'd like to try it out for yourself, I recommend the BLW wallet app (linked above) because it has the most comprehensive LNURL support.</p>
<p><a href="beyond-coffee-bitcoin-for-every-day-use-with-lnurl/images/lnurl-toolbox-screenshot.png"></a></p><div class="center smaller"><div class="image"><a href="beyond-coffee-bitcoin-for-every-day-use-with-lnurl/images/lnurl-toolbox-screenshot.png"><img src="beyond-coffee-bitcoin-for-every-day-use-with-lnurl/images/lnurl-toolbox-screenshot.png" alt=""></a></div></div><p></p>
<div class="clear"></div>
</body></html>]]></description><link>https://degreesofzero.com/article/beyond-coffee-bitcoin-for-every-day-use-with-lnurl.html</link><guid isPermaLink="true">https://degreesofzero.com/article/beyond-coffee-bitcoin-for-every-day-use-with-lnurl.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Sat, 22 Aug 2020 13:00:00 GMT</pubDate></item><item><title><![CDATA[Web Traffic Analytics Without Third Parties]]></title><description><![CDATA[<html><head></head><body><p>The internet is a very different place than the one that I experienced growing up. Back then there were no ad networks, no browser fingerprinting, no drive-by exploits, no obfuscated code running cryptomining software while you read - none of the shady, money-driven tactics you see today. It was a simpler time. Now, without ad/tracker-blocking browser extensions, the web is almost unusable.</p>
<p>So what can be done by an individual still operating their own website today? Well, we can stop using third-party analytics solutions. There is a reason those services are free for you to use: you are paying with your visitors' privacy and with the eventual erosion of their trust in you. And since you are including third-party code on the client, you are risking your visitors' security as well.</p>
<p>Let's get to it. I've been using <a href="https://goaccess.io/">GoAccess</a> as a replacement for web traffic analytics:</p>
<blockquote>
<p>GoAccess is an open source real-time web log analyzer and interactive viewer that runs in a terminal in *nix systems or through your browser.</p>
</blockquote>
<p><img src="web-traffic-analytics-without-third-parties/images/goaccess-bright.png" alt=""></p>
<p>GoAccess analyzes web server log files directly to generate analytics reports. It is compatible with Apache, Nginx, Amazon S3, Elastic Load Balancing, CloudFront, and others. It's open-source, so if you're using some obscure web server with unusual log formats, you can contribute to the project via their <a href="https://github.com/allinurl/goaccess">GitHub</a>.</p>
<h3 id="simple-web-server-configuration">Simple Web Server Configuration</h3>
<p>You will need to install GoAccess wherever your web server's log files are stored. For the sake of simplicity, we will assume that's your web server. Here's how to build and install GoAccess from source:</p>
<pre><code class="language-bash">wget https://tar.goaccess.io/goaccess-1.4.tar.gz
tar -xzvf goaccess-1.4.tar.gz
cd goaccess-1.4/
./configure --enable-utf8 --enable-geoip=legacy
make
make install
</code></pre>
<p>For more information see the <a href="https://goaccess.io/download">download page</a> on the project's website.</p>
<p>If you're running your web server on a Linux machine, its log files are likely generated and pruned by a utility named "logrotate". The default max age for log files is quite short (14 days). So you will want to increase this to something like 6 months or a year. Configuration files for logrotate are located in <code>/etc/logrotate.d</code>. Here is an example for nginx (<code>/etc/logrotate.d/nginx</code>):</p>
<pre><code>/var/log/nginx/*.log {
        daily
        missingok
        rotate 14
        compress
        delaycompress
        notifempty
        create 640 nginx adm
        sharedscripts
        postrotate
                if [ -f /var/run/nginx.pid ]; then
                        kill -USR1 $(cat /var/run/nginx.pid)
                fi
        endscript
}
</code></pre>
<p>The line that you will want to change is <code>rotate 14</code>. This sets the number of rotated log files to keep - combined with the <code>daily</code> directive above it, that works out to 14 days of history. Files older than this are pruned/deleted. For six months of logs, change it to <code>rotate 180</code>.</p>
<p>To generate a report from your web server's current log files:</p>
<pre><code class="language-bash">goaccess \
        -f /var/log/nginx/access.log* \
        --log-format=COMBINED \
        --ignore-crawlers \
        --output report.html
</code></pre>
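<p>To keep the report fresh without manual effort, a crontab entry can regenerate it on a schedule. The output path below is a made-up example - point it wherever your web server (or you) can read it:</p>
<pre><code># Regenerate the GoAccess report every night at 03:00.
0 3 * * * goaccess -f /var/log/nginx/access.log --log-format=COMBINED --ignore-crawlers --output /var/www/html/report.html
</code></pre>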
<h3 id="using-docker">Using Docker</h3>
<p>If you're using Docker to run your web server, have a look at <a href="https://github.com/allinurl/goaccess/blob/master/DOCKER.md">running GoAccess in Docker</a> - a detailed guide to running GoAccess in its own container. One thing not mentioned in that guide: you will need a mounted volume so that the GoAccess container can read your web server's logs.</p>
</body></html>]]></description><link>https://degreesofzero.com/article/web-traffic-analytics-without-third-parties.html</link><guid isPermaLink="true">https://degreesofzero.com/article/web-traffic-analytics-without-third-parties.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Thu, 11 Jun 2020 11:30:00 GMT</pubDate></item><item><title><![CDATA[Remote Reverse Proxy Using SSH + VPS]]></title><description><![CDATA[<html><head></head><body><p>A quick guide to setting up a reverse proxy from your local machine through a remote virtual private server (VPS). This setup is useful for temporarily testing a service that's running on your local machine, or for permanently exposing services that run behind a NAT firewall.</p>
<p>The first thing you will need to do is to reconfigure the SSH service on your VPS. You will need to add the following options to the SSH service's configuration file:</p>
<pre><code>GatewayPorts yes
AllowTcpForwarding yes
ClientAliveInterval 60
ClientAliveCountMax 10
</code></pre>
<ul>
<li><code>GatewayPorts</code> - When set to "yes", remote hosts are allowed to connect to ports forwarded for the client.</li>
<li><code>AllowTcpForwarding</code> - When set to "yes", TCP forwarding is permitted.</li>
<li><code>ClientAliveInterval</code> - Number of seconds that the server will wait before sending a null packet to the client (to keep the connection alive).</li>
<li><code>ClientAliveCountMax</code> - The number of keep-alive messages (sent every ClientAliveInterval seconds) that may go unanswered before the client is disconnected.</li>
</ul>
<p>You can simply append the above configuration options to the end of your server's <code>/etc/ssh/sshd_config</code>, but the options will be applied to all SSH connections - not immediately insecure but also not a good habit to leave such options available system-wide.</p>
<p>A more secure setup is to grant these options to a single user which will be created for the sole purpose of reverse proxying.</p>
<p>To create the reverse proxy user:</p>
<pre><code class="language-bash">useradd \
    --shell /bin/rbash \
    --home-dir /home/reverseproxy \
    --create-home \
    reverseproxy
</code></pre>
<ul>
<li><code>--shell /bin/rbash</code> - Sets the login shell for the user to a restricted version of bash.</li>
</ul>
<p>It is necessary to set a password for the new user even if logging in via pubkey:</p>
<pre><code class="language-bash">passwd reverseproxy
</code></pre>
<p>Generate the <code>.ssh</code> directory with an <code>authorized_keys</code> file for the new user. Note that sshd will reject the key file unless it is owned by the user and locked down with strict permissions:</p>
<pre><code class="language-bash">mkdir -p /home/reverseproxy/.ssh \
    &amp;&amp; touch /home/reverseproxy/.ssh/authorized_keys \
    &amp;&amp; chown -R reverseproxy:reverseproxy /home/reverseproxy/.ssh \
    &amp;&amp; chmod 700 /home/reverseproxy/.ssh &amp;&amp; chmod 600 /home/reverseproxy/.ssh/authorized_keys
</code></pre>
<p>Don't forget to append your pubkey to the <code>authorized_keys</code> file.</p>
<p>If you need further help with this step, see my previous tutorial about <a href="https://degreesofzero.com/article/passwordless-ssh-on-linux.html">how to configure passwordless SSH</a>.</p>
<p>Append the configuration options to your server's SSH configuration file:</p>
<pre><code class="language-bash">cat &gt;&gt; /etc/ssh/sshd_config &lt;&lt; EOL
Match User reverseproxy
    GatewayPorts yes
    AllowTcpForwarding yes
    ClientAliveInterval 60
    ClientAliveCountMax 10
EOL
</code></pre>
<p>Then restart the server's SSH service:</p>
<pre><code class="language-bash">service ssh restart
</code></pre>
<p>And finally run the following command on your local machine to establish the reverse proxy tunnel:</p>
<pre><code class="language-bash">ssh -v -N -T -R 8080:localhost:8080 reverseproxy@IP_ADDRESS_OF_VPS
</code></pre>
<ul>
<li><code>-v</code> - Print verbose log messages.</li>
<li><code>-N</code> - Do not execute a remote command.</li>
<li><code>-T</code> - Disable pseudo-terminal allocation.</li>
<li><code>-R</code> - Establish a reverse tunnel with a remote entry point.</li>
</ul>
<p>That's it! You should now be able to access the service running at port 8080 (in this example) on your local machine via the virtual private server's IP address.</p>
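<p>A quick way to check, from any other machine (<code>IP_ADDRESS_OF_VPS</code> is the same placeholder as before):</p>
<pre><code class="language-bash">curl http://IP_ADDRESS_OF_VPS:8080/
</code></pre>
<p>If the tunnel is up, this prints whatever your local service responds with.</p>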
<p>If you'd like to keep the tunnel open long-term, I suggest using <a href="https://www.harding.motd.ca/autossh/">autossh</a>:</p>
<blockquote>
<p>autossh is a program to start a copy of ssh and monitor it, restarting it as necessary should it die or stop passing traffic</p>
</blockquote>
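<p>For example, the tunnel from above could be kept alive with something like the following. The <code>-N</code>, <code>-T</code>, and <code>-R</code> options are the same as before; <code>-M 0</code> disables autossh's dedicated monitoring port in favor of SSH's own keep-alive options:</p>
<pre><code class="language-bash">autossh -M 0 \
    -o "ServerAliveInterval 60" \
    -o "ServerAliveCountMax 3" \
    -N -T -R 8080:localhost:8080 reverseproxy@IP_ADDRESS_OF_VPS
</code></pre>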
<p>And if you followed this tutorial and you're still not able to get it working, you can try <a href="https://ngrok.com/">ngrok</a> instead:</p>
<blockquote>
<p>ngrok exposes local servers behind NATs and firewalls to the public internet over secure tunnels</p>
</blockquote>
</body></html>]]></description><link>https://degreesofzero.com/article/remote-reverse-proxy-with-ssh-vps.html</link><guid isPermaLink="true">https://degreesofzero.com/article/remote-reverse-proxy-with-ssh-vps.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Wed, 20 May 2020 13:00:00 GMT</pubDate></item><item><title><![CDATA[Automated, Encrypted, Remote Backups using Open Source Tools]]></title><description><![CDATA[<html><head></head><body><p>When designing a backup scheme, it's important to start with your high-level requirements. In my case, they are as follows:</p>
<ul>
<li>Is fully automated and non-interactive</li>
<li>Does not expose decryption key by writing it to disk in an unprotected state</li>
<li>Uses well-supported, high-availability, open-source tools</li>
</ul>
<p>So with that, let's get started. This article is split into a few different sections, which I've linked here:</p>
<ul>
<li><a href="#data-encryption">Data Encryption</a><ul>
<li><a href="#create-a-new-key-pair">Create a New Key Pair</a></li>
<li><a href="#export-and-import-public-key">Export and Import Public Key</a></li>
<li><a href="#test-the-encryptiondecryption-setup">Test the Encryption/Decryption Setup</a></li>
</ul>
</li>
<li><a href="#uploading-encrypted-backups-to-remote-storage">Uploading Encrypted Backups to Remote Storage</a></li>
<li><a href="#task-scheduling">Task Scheduling</a><ul>
<li><a href="#potential-gotchas-of-crontab">Potential Gotchas of crontab</a></li>
<li><a href="#testing-and-debugging-crontab">Testing and Debugging crontab</a></li>
</ul>
</li>
<li><a href="#complete-example-script">Complete Example Script</a></li>
</ul>
<h2 id="data-encryption">Data Encryption</h2>
<p>One of my goals with this setup is to reduce exposure of the decryption key as much as possible, so writing it to disk in plaintext should be avoided. With symmetric encryption, the encryption and decryption steps both use the same key - and since we are designing an automated (non-interactive) backup scheme, the encryption step would need access to that unprotected key at runtime. For these reasons, symmetric encryption is not a good choice.</p>
<p>Asymmetric encryption is the way. So which open source tool has been around forever, is widely available, and is relatively simple to use? Why <a href="https://www.gnupg.org/">gpg</a> of course!</p>
<blockquote>
<p>“PGP” stands for “Pretty Good Privacy”; “GPG” stands for “GNU Privacy Guard.” PGP was the original freeware copyrighted program; GPG is the re-write of PGP. PGP uses the RSA algorithm and the IDEA encryption algorithm. GPG uses the NIST AES, Advanced Encryption Standard</p>
</blockquote>
<p>We will need gpg to be installed on both our local system as well as on the system where we want to run our backup script. For debian and ubuntu, it's as simple as:</p>
<pre><code class="language-bash">sudo apt-get install gpg
</code></pre>
<p>For other operating systems it is likely just as simple. Once you've got gpg installed, you can move on to the next steps.</p>
<h3 id="create-a-new-key-pair">Create a New Key Pair</h3>
<p>On your local system, generate a new private/public key pair using gpg:</p>
<pre><code class="language-bash">gpg --full-generate-key
</code></pre>
<p>This will guide you through the key generation process. For this backup scheme, I chose the following:</p>
<ul>
<li>"RSA and RSA" for kind of key</li>
<li>4096 bits key size</li>
<li>Does not expire</li>
</ul>
<p>The name and other identifying information is up to you.</p>
<p>Don't forget to back up your keys! If you don't already have a backup scheme set up for your password databases, SSH/PGP keys, and other critical data, then now is a good time to create one.</p>
<p>Use the following command to list the keys for which you have both the public and private key:</p>
<pre><code class="language-bash">gpg --list-secret-keys --keyid-format LONG
</code></pre>
<p>The output will look something like this:</p>
<pre><code>/path/to/user/.gnupg/pubring.kbx
------------------------------
sec   rsa4096/XXXXXXXXXXXXXXXX 2020-01-01 [SC]
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
uid                 [ultimate] xxx (xxx) &lt;xxx@xxx&gt;
ssb   rsa4096/XXXXXXXXXXXXXXXX 2020-01-01 [E]
</code></pre>
<h3 id="export-and-import-public-key">Export and Import Public Key</h3>
<p>The next step is to export the public key from your local machine so that we can import it to the system that will run the backup script.</p>
<p>Use the following command to print the GPG key ID for the last key that you generated:</p>
<pre><code class="language-bash">gpg --list-secret-keys --keyid-format LONG | sed -r -n 's/sec +[a-z0-9]+\/([A-Z0-9]+) .*/\1/p' | tac | head -n 1
</code></pre>
<p>And then use this command to export your public key:</p>
<pre><code class="language-bash">gpg --armor --export YOUR_GPG_KEY_ID &gt; key.asc
</code></pre>
<p>Upload the "key.asc" file to the system that will run the backup.</p>
<p>Finally, on the backup system run the following to import the key:</p>
<pre><code class="language-bash">gpg --import key.asc
</code></pre>
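<p>One gotcha worth handling now: gpg may refuse to encrypt to a freshly imported key in non-interactive mode, because the key is not yet marked as trusted. One way to mark it trusted is via <code>--import-ownertrust</code>, using the key's full 40-character fingerprint (shown by <code>gpg --list-keys --fingerprint</code>):</p>
<pre><code class="language-bash"># "6" means ultimate trust; replace YOUR_KEY_FINGERPRINT with the real value.
echo "YOUR_KEY_FINGERPRINT:6:" | gpg --import-ownertrust
</code></pre>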
<p>Whew!</p>
<h3 id="test-the-encryptiondecryption-setup">Test the Encryption/Decryption Setup</h3>
<p>Now let's test the whole encryption/decryption process. Run the following command on the system where you will be running the backup script:</p>
<pre><code class="language-bash">echo 'it works!' | gpg --encrypt -r "YOUR_GPG_KEY_ID" --output ./test.gpg
</code></pre>
<p>To demonstrate that the system is not able to decrypt:</p>
<pre><code class="language-bash">gpg -r "YOUR_GPG_KEY_ID" --decrypt ./test.gpg
</code></pre>
<p>You should see the following error message:</p>
<pre><code>gpg: encrypted with 4096-bit RSA key, ID XXX, created YYYY-MM-DD
      "xxx"
gpg: decryption failed: No secret key
</code></pre>
<p>For a successful decryption test, run the command again, but this time on the system that has your GPG private key:</p>
<pre><code class="language-bash">gpg -r "YOUR_GPG_KEY_ID" --decrypt ./test.gpg
</code></pre>
<p>If you see "it works!" printed in your terminal, then congratulations! If not, you will need to review the above steps to see what you might've missed.</p>
<h2 id="uploading-encrypted-backups-to-remote-storage">Uploading Encrypted Backups to Remote Storage</h2>
<p>You may have already heard the expression, "two is one and one is none", but it's worth considering when designing a data backup scheme. Though it is a bit simplistic, you can interpret it as a reminder that more redundancy is usually better. In our case, adding a remote storage backup can be very beneficial to the overall durability of our data.</p>
<p>You are probably already familiar with, or at least aware of, <a href="https://aws.amazon.com/s3/">Amazon's AWS S3 cloud storage</a>, but I prefer to use <a href="https://www.digitalocean.com/products/spaces/">DigitalOcean's Spaces</a>, which offers a compatible API. Both options work with <a href="https://s3tools.org/s3cmd">s3cmd</a> - an open source tool to manage your S3-compatible cloud storage.</p>
<p>Like before, installation on debian and ubuntu is quite simple:</p>
<pre><code class="language-bash">sudo apt-get install s3cmd
</code></pre>
<p>If using another system, you can follow the installation instructions on the official website linked above.</p>
<p>Now it's time to configure your s3cmd tool. Be sure that your current shell is logged in as the user that will be running the backup script later:</p>
<pre><code class="language-bash">s3cmd --configure
</code></pre>
<p>Complete the steps in the configuration. You will need an access key, bucket location and domain, etc. For help with this specific command have a look at the <a href="https://s3tools.org/s3cmd-howto">s3cmd howto page</a> or, if using DigitalOcean, have a look at this article about <a href="https://www.digitalocean.com/docs/spaces/resources/s3cmd-usage/">s3cmd usage for "Spaces"</a>.</p>
<p>Once you've got the s3cmd configured, you can test it by trying to upload and read a file from your remote storage:</p>
<pre><code class="language-bash">export S3_LOCATION="YOUR_BUCKETNAME" \
    &amp;&amp; echo "it works!" | s3cmd -c ~/.s3cfg put - s3://$S3_LOCATION/test \
    &amp;&amp; s3cmd -c ~/.s3cfg --no-progress get s3://$S3_LOCATION/test - | cat
</code></pre>
<p>Do you see "it works!" printed in your console? Super!</p>
<h2 id="task-scheduling">Task Scheduling</h2>
<p><a href="http://man7.org/linux/man-pages/man5/crontab.5.html">Crontab</a> is the obvious choice here because it's available pretty much universally across all unix-like systems.</p>
<blockquote>
<p>A crontab file contains instructions for the cron daemon in the following simplified manner: "run this command at this time on this date."</p>
</blockquote>
<p>Basically what crontab allows us to do is configure our system to run scheduled tasks. The format for defining which commands to run and when is fairly simple:</p>
<pre><code>m h  dom mon dow   command
</code></pre>
<ul>
<li><code>m</code> = minutes</li>
<li><code>h</code> = hours</li>
<li><code>dom</code> = day of the month</li>
<li><code>mon</code> = month</li>
<li><code>dow</code> = day of the week</li>
<li><code>command</code> = the command to be executed</li>
</ul>
<p>So for the following example:</p>
<pre><code>0 */3 * * * /path/to/script.sh
</code></pre>
<p>The script will be run at minute 0 of every third hour (00:00, 03:00, 06:00, and so on).</p>
<p>To schedule cron tasks, you must add them to your user's crontab file. Run the following to open the terminal-based editor:</p>
<pre><code class="language-bash">crontab -e
</code></pre>
<p>Your tasks should go at the bottom of the file, each on their own new line.</p>
<p>For an interactive, human-friendly tool to practice or test cron scheduling, have a look at <a href="https://crontab.guru/">crontab.guru</a>. It also includes a few tips for working with crontab.</p>
<h3 id="potential-gotchas-of-crontab">Potential Gotchas of crontab</h3>
<p>Scripts run via cron do not <code>source</code> the user's profile or bashrc files, so environment variables like <code>PATH</code> are left at a minimal default, and some of the programs that you expect may not be found. The solution is to set the <code>PATH</code> variable at the top of your scripts:</p>
<pre><code class="language-bash">PATH=$PATH:/usr/bin:/usr/sbin:/usr/local/bin
</code></pre>
<h3 id="testing-and-debugging-crontab">Testing and Debugging crontab</h3>
<p>It's a good idea to test your cron setup to verify that everything is working as you expect. So let's start by creating a simple test script:</p>
<pre><code class="language-bash">echo '#!/bin/bash' &gt; ~/cron-test.sh \
    &amp;&amp; echo 'PATH=$PATH:/usr/bin:/usr/sbin:/usr/local/bin' &gt;&gt; ~/cron-test.sh \
    &amp;&amp; echo 'set -e' &gt;&gt; ~/cron-test.sh \
    &amp;&amp; echo 'VERSION="$(s3cmd --version)"' &gt;&gt; ~/cron-test.sh \
    &amp;&amp; echo 'echo "[$(date -u +%FT%T)] $VERSION"' &gt;&gt; ~/cron-test.sh \
    &amp;&amp; chmod a+x ~/cron-test.sh
</code></pre>
<p>Let's set the test script to run once every 15 seconds. This is a little tricky because crontab's granularity is down to the minute. So this requires a bit of a creative kludge:</p>
<pre><code>* * * * * ~/cron-test.sh
* * * * * ( sleep 15 ; ~/cron-test.sh )
* * * * * ( sleep 30 ; ~/cron-test.sh )
* * * * * ( sleep 45 ; ~/cron-test.sh )
</code></pre>
<p>What's happening here is that all the cron tasks run at the same time (once every minute). But each subsequent task is delayed by +15 seconds (<code>sleep X</code>).</p>
<p>And to help with debugging the tasks as they run, I like to write the script's output to a log file:</p>
<pre><code>~/cron-test.sh 2&gt;&amp;1 &gt;&gt; ~/cron-test.log;
</code></pre>
<ul>
<li><code>&gt;&gt; ~/cron-test.log</code> = redirect <code>stdout</code> (and append) to the cron-test.log file</li>
<li><code>2&gt;&amp;1</code> = redirect <code>stderr</code> (error output) to wherever <code>stdout</code> currently points (here, the log file). The order matters: this must come after the <code>stdout</code> redirection, otherwise errors would still go to the terminal.</li>
</ul>
<p>Combining both of the above will look like this:</p>
<pre><code>* * * * * ~/cron-test.sh 2&gt;&amp;1 &gt;&gt; ~/cron-test.log
* * * * * ( sleep 15 ; ~/cron-test.sh 2&gt;&amp;1 &gt;&gt; ~/cron-test.log )
* * * * * ( sleep 30 ; ~/cron-test.sh 2&gt;&amp;1 &gt;&gt; ~/cron-test.log )
* * * * * ( sleep 45 ; ~/cron-test.sh 2&gt;&amp;1 &gt;&gt; ~/cron-test.log )
</code></pre>
<p>And finally to watch the log output as it is written:</p>
<pre><code class="language-bash">touch ~/cron-test.log \
    &amp;&amp; tail -n 20 -f ~/cron-test.log
</code></pre>
<p>If all is well, you should see something like the following:</p>
<pre><code>[2020-01-01T00:00:01] s3cmd version 2.0.2
[2020-01-01T00:00:16] s3cmd version 2.0.2
[2020-01-01T00:00:31] s3cmd version 2.0.2
</code></pre>
<p>A new line should appear once every 15 seconds.</p>
<h2 id="complete-example-script">Complete Example Script</h2>
<p>So after all of that, here's a complete example script for you to copy/paste to your heart's content. It uses pg_dump to dump an entire Postgres database, gzip to compress the data, gpg to encrypt the data, and finally s3cmd to upload the encrypted backup file to remote storage.</p>
<pre><code class="language-bash">#!/bin/bash

#   MIT License
#   
#   Copyright (c) 2020 Charles Hill
#   
#   Permission is hereby granted, free of charge, to any person obtaining a copy
#   of this software and associated documentation files (the "Software"), to deal
#   in the Software without restriction, including without limitation the rights
#   to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
#   copies of the Software, and to permit persons to whom the Software is
#   furnished to do so, subject to the following conditions:
#   
#   The above copyright notice and this permission notice shall be included in all
#   copies or substantial portions of the Software.
#   
#   THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
#   IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
#   FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
#   AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
#   LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
#   OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
#   SOFTWARE.
#   
#   For a human-friendly explanation of this license:
#   https://tldrlegal.com/license/mit-license
#   
#   If you have questions technical or otherwise, find my contact details here:
#   https://degreesofzero.com/
#   

# This is required because scripts run via cron do not source the user's profile or bashrc files.
PATH=$PATH:/usr/bin:/usr/sbin:/usr/local/bin

# Date/time will be used to keep our backup files properly organized.
DATE=$(date -u +%F)
DATETIME="$(date -u +%FT%T)"

# https://github.com/s3tools/s3cmd
# https://www.digitalocean.com/docs/spaces/resources/s3cmd/
S3_LOCATION="s3/bucket/path/$DATE"
S3_CONFIG_FILE="/path/to/user/.s3cfg"

# The GPG key ID that was previously imported to the server's keyring.
# NOTE: Don't forget to mark the imported key as trusted.
GPG_KEY_ID="XXX"

# Directory where backup files are stored locally.
BACKUPS="/path/to/local/backups"
mkdir -p $BACKUPS

# Dump SQL, compress and encrypt.
DBHOST="localhost"
DBNAME="XXX"
DBUSER="XXX"
DBPASS="XXX"
FILE="$BACKUPS/backup-file-name-$DATETIME.sql.gz.gpg"
if [ ! -f "$FILE" ]; then
    echo "Creating encrypted SQL dump..."
    # NOTE: pg_dump's --password flag takes no value (it only forces a prompt),
    # so the password is passed non-interactively via PGPASSWORD instead.
    PGPASSWORD="$DBPASS" pg_dump \
        --host="$DBHOST" \
        --dbname="$DBNAME" \
        --username="$DBUSER" \
            | gzip --best - \
            | gpg --encrypt -r "$GPG_KEY_ID" --output "$FILE"
else
    echo "Encrypted backup already exists"
fi

echo "$FILE"

# Upload to remote location.
echo "Uploading to remote..."
s3cmd -c $S3_CONFIG_FILE put $FILE s3://$S3_LOCATION/

echo "Done!"
exit 0
</code></pre>
</body></html>]]></description><link>https://degreesofzero.com/article/automated-encrypted-remote-backups-using-open-source-tools.html</link><guid isPermaLink="true">https://degreesofzero.com/article/automated-encrypted-remote-backups-using-open-source-tools.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Sun, 09 Feb 2020 12:00:00 GMT</pubDate></item><item><title><![CDATA[Double-Spending: There's an App for That!]]></title><description><![CDATA[<html><head></head><body><div class="wrap">

<div class="right image smaller">
    <img width="1051" height="576" src="double-spending-theres-an-app-for-that/images/double-spend.png" alt="">
</div>

<p>Double-spending is no longer a theoretical possibility but a practical reality. Most of the end-user applications used widely today leave their users vulnerable to being defrauded via double-spend attacks.</p>
<p>So what is double-spending and what can be done to protect users? First you need to understand two things:</p>
<ul>
<li>Transactions in Bitcoin are made of inputs and outputs</li>
<li>A transaction is valid if its inputs have not been previously spent</li>
</ul>
</div>

<p>A double-spend is a transaction that successfully invalidates a previously accepted payment transaction. Over the years, several different types of double-spend attacks have been theorized, and some have even been put into practice:</p>
<ul>
<li><a href="https://en.bitcoin.it/wiki/Irreversible_Transactions#Race_attack">Race Attack</a> - Two transactions that spend the same input are in a race against each other. The first to be confirmed is the winner.</li>
<li><a href="https://en.bitcoin.it/wiki/Irreversible_Transactions#Finney_attack">Finney Attack</a> - An attacker mines a block in secret. This secretly mined block contains a double-spend transaction. The attacker then broadcasts a payment transaction that an unsuspecting merchant accepts. The attacker then broadcasts the mined block.</li>
<li><a href="#alternative-history-attack">Alternative History Attack</a> - A.k.a "51% Attack"</li>
<li><a href="#replace-by-fee-rbf-attack">Unconfirmed Transaction Replacement</a> - A.k.a "Replace-by-fee (RBF) Attack"</li>
</ul>
<div class="wrap">
    <div class="center x-smaller">
        <div class="images">
            <img width="1118" height="613" src="double-spending-theres-an-app-for-that/images/race-attack.png" alt="">
            <img width="1706" height="680" src="double-spending-theres-an-app-for-that/images/finney-attack.png" alt="">
        </div>
        <p class="caption">Race Attack (left) and Finney Attack (right)</p>
    </div>
</div>


<h2 id="alternative-history-attack">Alternative History Attack</h2>
<div class="wrap">

<div class="right image smaller">
    <img width="1550" height="781" src="double-spending-theres-an-app-for-that/images/alternative-history-attack.png" alt="">
</div>

<p>This type of double-spend attack involves rewriting blocks to effectively erase a confirmed transaction from history. Once this is done, the inputs used in the original transaction can be double-spent however the attacker sees fit. The more confirmations a transaction has, the more difficult it becomes to change it. This is why it is recommended to wait at least a few confirmations before considering high-value transactions as final.</p>
</div>

<p>Note that 51% of the network hashrate is not strictly necessary for a change-of-history double-spend attack, but success without it is highly unlikely. And due to the size of the Bitcoin network, this is a very expensive and risky attack to manage successfully. There are many instances of this attack in recent history, but on much smaller networks:</p>
<ul>
<li><a href="https://bitcoinexchangeguide.com/double-spend-attack-on-bitcoin-gold-btg-causes-chaos-for-exchanges/">"Double Spend Attack On Bitcoin Gold (BTG) Causes Chaos For Exchanges"</a> - May 2018</li>
<li><a href="https://www.reddit.com/r/monacoin/comments/8k7640/51_attack_on_monacoin/">"51% attack on monacoin"</a> - May 2018</li>
<li><a href="https://www.ccn.com/litecoin-cash-latest-small-cap-altcoin-to-suffer-51-percent-attack/">"Litecoin Cash Allegedly the Latest Small-Cap Altcoin to Suffer 51 Percent Attack"</a> - June 2018</li>
<li><a href="https://www.reddit.com/r/einsteinium/comments/6peyff/so_emc2_was_hacked_by_a_double_spend_attack_what/">"So EMC2 was Hacked by a 'Double Spend Attack.'"</a> - July 2018</li>
<li><a href="https://www.reddit.com/r/CryptoCurrency/comments/ae1jxi/219500_ethereum_classic_etc_worth_11_million/">"219,500 Ethereum Classic (ETC) Worth $1.1 Million Double Spent During 51 Percent Attack; Coinbase May Delist"</a> - January 2019</li>
<li><a href="https://www.ccn.com/breaking-coinbase-says-ethereum-classic-attack-included-500000-in-double-spends/">"Coinbase Says Ethereum Classic Attack Included $500,000 in Double Spends"</a> - January 2019</li>
</ul>
<p>As a network grows, it becomes more and more expensive to launch a successful 51% attack. That's not to say that it is impossible or that Bitcoin itself is immune to double-spending. There are historical examples of successful double-spend attacks against Bitcoin - but these attacks take a different approach: <a href="#unconfirmed-transaction-replacement">unconfirmed transaction replacement</a>. One example from the wild:</p>
<ul>
<li><a href="https://www.ccn.com/bitcoin-atm-double-spenders-police-need-help-identifying-four-criminals/">"Bitcoin ATM Double-Spenders: Police Need Help Identifying Four Criminals"</a> - March 2019</li>
</ul>
<h2 id="unconfirmed-transaction-replacement">Unconfirmed Transaction Replacement</h2>
<div class="wrap">

<div class="right image smaller">
    <img width="1623" height="712" src="double-spending-theres-an-app-for-that/images/rbf-attack.png" alt="">
</div>

<p>Don't have access to 51% of the hash rate? Not to worry. There's another type of double-spend attack that is much more accessible to the average Bitcoin user: replace-by-fee (RBF), also known as unconfirmed transaction replacement.</p>
<p>A transaction that has not yet been included in a block is considered "unconfirmed" - also referred to as a zero-confirmation ("0-conf") transaction. Once confirmed in a block, it is not possible to replace it in the manner described here.</p>
</div>

<p>To successfully double-spend by replacing an unconfirmed transaction, you will need to be able to:</p>
<ul>
<li>Build raw transactions that use the replace-by-fee (RBF) feature</li>
<li>Broadcast raw transactions reliably</li>
<li>Fetch unspent transaction outputs (UTXO) for a given address</li>
<li>Get network fee estimate</li>
</ul>
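<p>As a rough illustration of the first requirement: BIP 125 ("opt-in replace-by-fee") treats a transaction as replaceable when at least one of its inputs has an nSequence value below 0xfffffffe. A minimal shell sketch of that check, using made-up sequence values:</p>

```bash
# BIP 125: a transaction signals opt-in replace-by-fee (RBF) when any of
# its inputs has an nSequence value below 0xfffffffe (4294967294).
is_replaceable() {
    for seq in "$@"; do
        if [ "$seq" -lt 4294967294 ]; then
            echo "replaceable"
            return
        fi
    done
    echo "final"
}

is_replaceable 4294967293              # one opt-in RBF input
is_replaceable 4294967295 4294967294   # all inputs final
```

<p>A wallet that builds RBF transactions simply sets such a sequence value on its inputs, then later broadcasts a conflicting transaction that pays a higher fee.</p>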
<h3 id="theres-an-app-for-that">There's An App For That!</h3>
<div class="wrap">

<div class="right"><img src="double-spending-theres-an-app-for-that/images/paynoway-logo.svg" width="180" height="180" alt=""></div>

<p>All this can be done for you with the help of a script or, even more conveniently, as an app. To illustrate the point, I created <a href="https://github.com/samotari/paynoway">PayNoWay</a> to demonstrate to business owners what a practical double-spend attack can look like.</p>
<p>The app looks and acts like a "normal" mobile wallet application:</p>
<ul>
<li>Scan a QR code generated by a Point-of-Sale app</li>
<li>Send the payment (blue button)</li>
<li>Send a double-spend transaction to recover the funds (red button)</li>
<li>Wait for one of the transactions to be confirmed</li>
</ul>
<p>A successful double-spend is not guaranteed, but the success rate is quite high (~90%) on mainnet Bitcoin.</p>
</div>

<div class="wrap">
    <div class="center x-smaller">
        <div class="images">
            <img width="375" height="667" src="double-spending-theres-an-app-for-that/images/paynoway-screenshots/send.png" alt="">
            <img width="375" height="667" src="double-spending-theres-an-app-for-that/images/paynoway-screenshots/history.png" alt="">
            <img width="375" height="667" src="double-spending-theres-an-app-for-that/images/paynoway-screenshots/receive.png" alt="">
            <img width="375" height="667" src="double-spending-theres-an-app-for-that/images/paynoway-screenshots/configure.png" alt="">
        </div>
        <p class="caption">Screenshots from PayNoWay</p>
    </div>
</div>

<p>Almost every person to whom I have demonstrated the double-spend attack was shocked at just how easy it was. This attack is within the capabilities of many cryptocurrency users. Anyone who is accepting transactions as payment for goods or services without waiting for at least one confirmation is vulnerable and should be aware of the risks.</p>
<p>As a proof-of-concept, the app also supports Litecoin mainnet, but the success rate there is low because replacement transactions are rejected by almost every relay node. However, this does not mean that Litecoin is not susceptible. Success rates could be greatly improved by connecting directly to mining nodes - which are far less likely to reject a transaction that pays a higher fee.</p>
<h2 id="wallet-apps-vs-double-spending">Wallet Apps vs. Double-Spending</h2>
<p>Let's have a look at how some of the most popular mobile and desktop apps perform against double-spending. The following testing criteria were used:</p>
<ul>
<li><strong>RBF Flag</strong> - Does the app display a warning to the user if a received transaction is RBF?</li>
<li><strong>Double-Spend Alert</strong> - Does the app display a warning when a transaction was double-spent?</li>
<li><strong>Child-Pays-For-Parent</strong> - Does the app allow the user to attempt to boost a received transaction's fee by using CPFP?</li>
<li><strong>Correct Balance</strong> - Does the app show the correct balance before and after a double-spend?</li>
</ul>
<blockquote>
<p><strong>Child pays for parent</strong> means, as the name implies, that spending an unconfirmed transaction will cause miners to consider confirming the parent transaction in order to get the fees from the child transaction included in the same block.</p>
</blockquote>
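<p>To make the CPFP incentive concrete, here is a back-of-the-envelope calculation with invented fee and size numbers: miners evaluate the stuck parent together with its high-fee child as a package.</p>

```bash
# Hypothetical CPFP arithmetic (all numbers invented for illustration):
# a 1 sat/vbyte parent is stuck, so the recipient spends its output with
# a high-fee child; miners judge the pair by the combined package rate.
parent_fee=200;  parent_vsize=200    # satoshis, vbytes
child_fee=5800;  child_vsize=150
package_rate=$(( (parent_fee + child_fee) / (parent_vsize + child_vsize) ))
echo "package fee rate: ${package_rate} sat/vbyte"
```

<p>Here the package rate (17 sat/vbyte) is far more attractive than the parent alone (1 sat/vbyte), which is why mining the parent becomes worthwhile.</p>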
<p>Only wallet applications that allow receiving on-chain Bitcoin payments were tested.</p>
<table id="android-wallet-apps-test-results" class="research-results">
    <thead>
        <tr><th class="research-results-title" colspan="5">Android Wallet Apps</th></tr>
        <tr>
            <th>Name</th>
            <th>RBF Flag</th>
            <th>Double-Spend Alert</th>
            <th>Child-Pays-For-Parent</th>
            <th>Correct Balance</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>BLW<a href="https://play.google.com/store/apps/details?id=com.lightning.walletapp"></a></td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
        </tr>
        <tr>
            <td>Bitcoin Wallet<a href="https://play.google.com/store/apps/details?id=de.schildbach.wallet"></a></td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
        </tr>
            <tr>
            <td>Blockstream Green<a href="https://blockstream.com/green/"></a></td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
        </tr>
        <tr>
            <td>BlueWallet<a href="https://play.google.com/store/apps/details?id=io.bluewallet.bluewallet"></a></td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
        </tr>
        <tr>
            <td>BRD ("Breadwallet")<a href="https://play.google.com/store/apps/details?id=com.breadwallet"></a></td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
        </tr>
        <tr>
            <td>Coinomi<a href="https://www.coinomi.com/en/downloads/"></a></td>
            <td class="bad">x</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
            <td class="good">✓</td>
        </tr>
        <tr>
            <td>Eclair<a href="https://play.google.com/store/apps/details?id=fr.acinq.eclair.wallet"></a></td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
        </tr>
        <tr>
            <td>Mycelium<a href="https://play.google.com/store/apps/details?id=com.mycelium.wallet"></a></td>
            <td class="good">✓</td>
            <td class="bad">x</td>
            <td class="good">✓</td>
            <td class="good">✓</td>
        </tr>
        <tr>
            <td>Samourai<a href="https://samouraiwallet.com/download"></a></td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
        </tr>
    </tbody>
</table>

<table id="ios-wallet-apps-test-results" class="research-results">
    <thead>
        <tr><th class="research-results-title" colspan="5">iOS Wallet Apps</th></tr>
        <tr>
            <th>Name</th>
            <th>RBF Flag</th>
            <th>Double-Spend Alert</th>
            <th>Child-Pays-For-Parent</th>
            <th>Correct Balance</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>Blockstream Green<a href="https://blockstream.com/green/"></a></td>
            <td class="bad">x</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
            <td class="good">✓</td>
        </tr>
        <tr>
            <td>BlueWallet<a href="https://itunes.apple.com/app/bluewallet-bitcoin-wallet/id1376878040"></a></td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
        </tr>
        <tr>
            <td>BRD ("Breadwallet")<a href="#"></a></td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
        </tr>
        <tr>
            <td>Coinomi<a href="https://www.coinomi.com/en/downloads/"></a></td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
        </tr>
    </tbody>
</table>


<table id="desktop-wallet-apps-test-results" class="research-results">
    <thead>
        <tr><th class="research-results-title" colspan="5">Desktop Wallet Apps</th></tr>
        <tr>
            <th>Name</th>
            <th>RBF Flag</th>
            <th>Double-Spend Alert</th>
            <th>Child-Pays-For-Parent</th>
            <th>Correct Balance</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>Bitcoin Core<a href="https://bitcoin.org/en/download"></a></td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
        </tr>
        <tr>
            <td>Coinomi<a href="https://www.coinomi.com/en/downloads/"></a></td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
        </tr>
        <tr>
            <td>Electrum<a href="https://electrum.org/"></a></td>
            <td class="good">✓</td>
            <td class="bad">x</td>
            <td class="good">✓</td>
            <td class="good">✓</td>
        </tr>
        <tr>
            <td>Wasabi<a href="https://wasabiwallet.io/"></a></td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
        </tr>
    </tbody>
</table>


<table id="web-wallets-test-results" class="research-results">
    <thead>
        <tr><th class="research-results-title" colspan="5">Web Wallets</th></tr>
        <tr>
            <th>Name</th>
            <th>RBF Flag</th>
            <th>Double-Spend Alert</th>
            <th>Child-Pays-For-Parent</th>
            <th>Correct Balance</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>Trezor<a href="https://wallet.trezor.io/"></a></td>
            <td class="bad">x</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
            <td class="good">✓</td>
        </tr>
    </tbody>
    <tbody>
        <tr>
            <td>blockchain.com<a href="http://blockchain.com/wallet"></a></td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
            <td>?</td>
        </tr>
    </tbody>
</table>


<h2 id="block-explorers-vs-double-spending">Block Explorers vs. Double-Spending</h2>
<p>Some Bitcoin users rely on websites called "block explorers" to show them their account (address) balances and transaction details. This is a bad practice, but users will use whatever is easiest for them. So it's up to the maintainers and operators of these websites to do their best to educate and protect their users.</p>
<p>The following testing criteria were used:</p>
<ul>
<li><strong>RBF Flag</strong> - Does the explorer display a warning to the user if a transaction is RBF?</li>
<li><strong>Double-Spend Alert</strong> - Does the explorer display a warning when a transaction was double-spent?</li>
<li><strong>Both txs visible</strong> - Are both the payment and double-spend transactions visible while unconfirmed?</li>
<li><strong>Original tx preserved</strong> - Does the explorer keep the original payment transaction even after the double-spend transaction was confirmed?</li>
</ul>
<table class="research-results">
    <thead>
        <tr><th class="research-results-title" colspan="5">Block Explorers</th></tr>
        <tr>
            <th>Name</th>
            <th>RBF Flag</th>
            <th>Double-Spend Alert</th>
            <th>Both txs visible</th>
            <th>Original tx preserved</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>bitaps.com</td>
            <td class="good">✓</td>
            <td class="good">✓</td>
            <td class="good">✓</td>
            <td class="good">✓</td>
        </tr>
        <tr>
            <td>blockchain.info</td>
            <td class="bad">x</td>
            <td class="good">✓</td>
            <td class="good">✓</td>
            <td class="good">✓</td>
        </tr>
        <tr>
            <td>blockchair.com</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
        </tr>
        <tr>
            <td>blockcypher.com</td>
            <td class="good">✓</td>
            <td class="good">✓</td>
            <td class="good">✓</td>
            <td class="bad">x</td>
        </tr>
        <tr>
            <td>blockstream.info</td>
            <td class="good">✓</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
        </tr>
        <tr>
            <td>btc2.trezor.io</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
        </tr>
        <tr>
            <td>chain.so</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
        </tr>
        <tr>
            <td>insight</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
        </tr>
        <tr>
            <td>smartbit.com.au</td>
            <td class="bad">x</td>
            <td class="good">✓</td>
            <td class="bad">x</td>
            <td class="bad">x</td>
        </tr>
    </tbody>
</table>



<h2 id="solutions">Solutions</h2>
<p>There is no magical solution to this problem. Nodes and transaction relays can't know whether a transaction is actually a malicious double-spend. A transaction might look suspicious because it consumes the exact same inputs but has different outputs - yet that doesn't necessarily make it malicious; legitimately fee-bumping a stuck transaction can look the same. Even worse, there's no guarantee that your wallet app will receive both the payment and the double-spend transactions.</p>
<div class="wrap">

<div class="right"><img src="double-spending-theres-an-app-for-that/images/bitcoin-lightning.svg" width="180" height="180" alt=""></div>

<p>All that said, there are solutions!</p>
<ul>
<li>Don't accept unconfirmed RBF transactions</li>
<li>Only accept confirmed transactions</li>
<li>Accept unconfirmed RBF transactions but monitor the memory pool for double-spends</li>
<li>Only accept payments via <a href="https://en.wikipedia.org/wiki/Lightning_Network">Lightning Network (LN)</a> - Off-chain transactions that do not require confirmations for the recipient to know that they have been paid.</li>
</ul>
<p>Which solution is best depends on the context. For in-person payments (cafes, bars, restaurants, etc.) the best solution is to accept Lightning Network payments only. On-chain payments carry high fees and require confirmation time to be safe. LN is a great fit for this context because it reduces fees and is not vulnerable to this kind of double-spending.</p>
</div>

<p>The purpose of this article is to raise awareness in the community. Please do not double-spend against merchants who accept cryptocurrencies as payment. If you see a business accepting Bitcoin payments in a naive way, do bring it to their attention - but responsibly. As a still-small community, we should do our best to help each other and encourage the businesses that already accept Bitcoin payments.</p>
<p>I'd love to hear about other solutions to this problem, so please feel free to reach out to me with your ideas or suggestions. See my contact details below.</p>
<h2 id="additional-resources">Additional Resources</h2>
<ul>
<li><a href="/talks/double-spending-made-easy/">Double-Spending Made Easy</a> - Slides I used during my presentation at <a href="https://opt-out.hcpp.cz/">HCPP19</a>.</li>
</ul>
</body></html>]]></description><link>https://degreesofzero.com/article/double-spending-theres-an-app-for-that.html</link><guid isPermaLink="true">https://degreesofzero.com/article/double-spending-theres-an-app-for-that.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Tue, 25 Dec 2018 23:30:00 GMT</pubDate></item><item><title><![CDATA[Workshop: Shared Private Lightning Network]]></title><description><![CDATA[<html><head></head><body><ul>
<li><a href="#introduction">Introduction</a><ul>
<li><a href="#what-is-the-lightning-network-">What is the Lightning Network?</a></li>
<li><a href="#our-own-private-bitcoin-network">Our Own Private Bitcoin Network</a></li>
<li><a href="#sharing-information-during-the-workshop">Sharing Information During the Workshop</a></li>
</ul>
</li>
<li>For participants:<ul>
<li><a href="#install-and-configure-your-lightning-network-node">Install and Configure Your Lightning Network Node</a></li>
<li><a href="#create-and-fund-your-wallet">Create and Fund Your Wallet</a></li>
<li><a href="#sending-and-receiving-payments">Sending and Receiving Payments</a></li>
</ul>
</li>
<li>For the organizer:<ul>
<li><a href="#setup-private-bitcoin-network">Setup Private Bitcoin Network</a></li>
</ul>
</li>
<li><a href="#more-resources">More Resources</a></li>
</ul>
<h2 id="introduction">Introduction</h2>
<p>In this workshop, we are going to set up a shared, private lightning network. Each of you will have your own Lightning Network node with which you will participate in the network. You will:</p>
<ul>
<li>Create and fund your own bitcoin wallet</li>
<li>Connect to and open a channel with "Bob" (see the diagram below)</li>
<li>Send a payment to one of the other participants</li>
<li>Receive a payment</li>
</ul>
<p>Here is a diagram to illustrate:</p>
<pre><code>   LND                        LND                        LND
+ ----- +                   + --- +                   + ----- +
| Alice | &lt;--- channel ---&gt; | Bob | &lt;--- channel ---&gt; | Carol |
+ ----- +                   + --- +                   + ----- +
    |                          |                          |
    |                          |                          |
    +- - - - - - - - - - - - - +- - - - - - - - - - - - - +
                               |
                         + ----------- +
                         | BTC network | &lt;--- BTCD
                         + ----------- +
</code></pre>
<p>This setup will allow each of the connected parties to send and receive payments with all the others - even if they are not directly connected by a channel. For example, Alice can send payments to Bob because they share a channel, but she can also send payments to Carol via Bob's node, because Bob has a channel with Carol.</p>
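<p>Routing nodes like Bob typically charge a small fee for forwarding. The numbers below are assumptions based on lnd's default channel fee policy (1000 msat base fee plus 1 part-per-million proportional fee), just to sketch what a payment from Alice to Carol via Bob costs:</p>

```bash
# Routing-fee arithmetic for Alice -> Bob -> Carol.
# Fee policy values are assumptions (lnd defaults): 1000 msat base, 1 ppm rate.
amount_sat=100000                      # what Carol should receive
amount_msat=$((amount_sat * 1000))
base_fee_msat=1000
fee_rate_ppm=1
bob_fee_msat=$(( base_fee_msat + amount_msat * fee_rate_ppm / 1000000 ))
echo "Alice sends $((amount_msat + bob_fee_msat)) msat (Bob keeps ${bob_fee_msat} msat)"
```
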
<h3 id="what-is-the-lightning-network">What is the Lightning Network?</h3>
<p>Let's start with a brief, high-level explanation of what the Lightning Network is and how it works. From the <a href="http://dev.lightning.community/overview/#lightning-network">community documentation</a>:</p>
<blockquote>
<p>The Lightning Network scales blockchains and enables trustless instant payments by keeping most transactions off-chain and leveraging the security of the underlying blockchain as an arbitration layer.</p>
</blockquote>
<blockquote>
<p>This is accomplished primarily through "payment channels", wherein two parties commit funds and pay each other by updating the balance redeemable by either party in the channel. This process is instant and saves users from having to wait for block confirmations before they can render goods or services.</p>
</blockquote>
<p>And the key word to remember is <strong>network</strong>. Payment channels by themselves are not so interesting. But when a transaction can be passed along from one peer to the next until it reaches its recipient - without the need for a direct connection between the payer and payee - now that is interesting.</p>
<h3 id="our-own-private-bitcoin-network">Our Own Private Bitcoin Network</h3>
<p>Lightning Network nodes need to communicate with a bitcoin node to be able to create on-chain transactions, watch the blockchain for updates, and to open/close channels.</p>
<p>Your LN node will communicate with a remote bitcoin node that we are running specifically for this workshop. This bitcoin node will be running in "simnet" mode, which gives us full control over the blockchain. This allows us to instantly mine blocks so that we don't have to wait 10 minutes for each block - the average time per block on the mainnet or testnet networks.</p>
<h3 id="sharing-information-during-the-workshop">Sharing Information During the Workshop</h3>
<p>To make it easy for us to share information with each other during this workshop, we will use an open IRC channel:</p>
<ul>
<li>In your browser go to <a href="https://webchat.freenode.net/">webchat.freenode.net</a></li>
<li>In the "Channels" field enter <code>#ln-workshop</code></li>
<li>You can send private messages or just post in the general chat</li>
</ul>
<h2 id="install-and-configure-your-lightning-network-node">Install and Configure Your Lightning Network Node</h2>
<p>Download and install GoLang:</p>
<pre><code class="language-bash">wget https://storage.googleapis.com/golang/go1.10.4.linux-amd64.tar.gz
tar -xvf go1.10.4.linux-amd64.tar.gz
mv go ~/.go
</code></pre>
<p>Add GoLang-related environment variables to your user's <code>.profile</code>:</p>
<pre><code class="language-bash">cat &gt;&gt; ~/.profile &lt;&lt; EOL
export GOROOT="\$HOME/.go"
export GOPATH="\$GOROOT/packages"
PATH=\$PATH:\$GOROOT/bin:\$GOPATH/bin
EOL
</code></pre>
<p>Reload your user's <code>.profile</code> to load the new environment variables into the current terminal session:</p>
<pre><code class="language-bash">source ~/.profile
</code></pre>
<p>Test GoLang setup:</p>
<pre><code class="language-bash">go version
</code></pre>
<p>The previous command should output something like this:</p>
<pre><code>go version go1.10.4.linux/amd64
</code></pre>
<p>Install <code>lnd</code> (Lightning Network Daemon):</p>
<pre><code class="language-bash">go get -v -d github.com/lightningnetwork/lnd
cd $GOPATH/src/github.com/lightningnetwork/lnd
make &amp;&amp; make install
</code></pre>
<p>Create an lnd configuration file:</p>
<pre><code class="language-bash">mkdir -p ~/.lnd;
cat &gt; ~/.lnd/lnd.conf &lt;&lt; EOL
bitcoin.active=1
bitcoin.simnet=true
bitcoin.node=btcd
btcd.rpchost=PROVIDED_BY_ORGANIZER
btcd.rpcuser=PROVIDED_BY_ORGANIZER
btcd.rpcpass=PROVIDED_BY_ORGANIZER
btcd.rpccert=$HOME/.btcd/rpc.cert
debuglevel=debug
EOL
</code></pre>
<p>Create aliases for lnd and lncli:</p>
<pre><code class="language-bash">cat &gt;&gt; ~/.profile &lt;&lt; EOL
alias lnd-ws="lnd --lnddir=$HOME/.lnd"
alias lncli-ws="lncli --lnddir=$HOME/.lnd --network=simnet"
EOL
</code></pre>
<p>Make the new aliases usable in the current terminal window:</p>
<pre><code class="language-bash">source ~/.profile
</code></pre>
<p>One last step to be able to communicate with the <code>btcd</code> node via a TLS-encrypted connection. Save the remote btcd node's TLS certificate to a file locally:</p>
<pre><code class="language-bash">mkdir -p ~/.btcd;
cat &gt; ~/.btcd/rpc.cert &lt;&lt; EOL
PROVIDED_BY_ORGANIZER
EOL
</code></pre>
<p>Test your <code>lnd</code> setup:</p>
<pre><code class="language-bash">lnd-ws --version
</code></pre>
<p>You should see something like this:</p>
<pre><code>lnd version 0.5.0-beta commit=7fe095c54128b502e20b3482ef73f1b6ba737850
</code></pre>
<p>Great! Now let's move on to the next step.</p>
<h2 id="create-and-fund-your-wallet">Create and Fund Your Wallet</h2>
<p>Start <code>lnd</code>:</p>
<pre><code class="language-bash">lnd-ws
</code></pre>
<p>You should see something like this:</p>
<pre><code>2018-10-02 22:03:15.710 [INF] LTND: Version 0.5.0-beta commit=7fe095c54128b502e20b3482ef73f1b6ba737850
2018-10-02 22:03:15.710 [INF] LTND: Active chain: Bitcoin (network=simnet)
2018-10-02 22:03:15.714 [INF] CHDB: Checking for schema update: latest_version=6, db_version=6
2018-10-02 22:03:15.714 [INF] RPCS: Generating TLS certificates...
2018-10-02 22:03:15.731 [INF] RPCS: Done generating TLS certificates
2018-10-02 22:03:15.731 [INF] RPCS: password RPC server listening on 127.0.0.1:10009
2018-10-02 22:03:15.732 [INF] RPCS: password gRPC proxy started at 127.0.0.1:8080
2018-10-02 22:03:15.732 [INF] LTND: Waiting for wallet encryption password. Use `lncli create` to create a wallet, `lncli unlock` to unlock an existing wallet, or `lncli changepassword` to change the password of an existing wallet and unlock it.
</code></pre>
<p>Leave <code>lnd</code> running.</p>
<p>In a new terminal window, run the following command:</p>
<pre><code class="language-bash">lncli-ws create
</code></pre>
<p>Follow the prompts to complete the LN wallet creation process.</p>
<p>If successful you should see the following message:</p>
<pre><code>lnd successfully initialized!
</code></pre>
<p>Switch back to the terminal window where your <code>lnd</code> is running. You should now see something like this:</p>
<pre><code>2018-10-02 22:29:07.280 [INF] LNWL: Opened wallet
2018-10-02 22:29:07.340 [INF] LTND: Primary chain is set to: bitcoin
2018-10-02 22:29:08.233 [INF] LNWL: The wallet has been unlocked without a time limit
2018-10-02 22:29:08.233 [INF] LNWL: Catching up block hashes to height 0, this will take a while...
2018-10-02 22:29:08.237 [INF] LTND: LightningWallet opened
2018-10-02 22:29:08.242 [INF] LNWL: Caught up to height 0
2018-10-02 22:29:08.243 [INF] HSWC: Restoring in-memory circuit state from disk
2018-10-02 22:29:08.244 [INF] LNWL: Done catching up block hashes
2018-10-02 22:29:08.245 [INF] HSWC: Payment circuits loaded: num_pending=0, num_open=0
2018-10-02 22:29:08.246 [INF] LNWL: Started rescan from block 683e86bd5c6d110d91b94b97137ba6bfe02dbbdb8e3dff722a669b5d69d77af6 (height 0) for 0 addresses
2018-10-02 22:29:08.247 [INF] LNWL: Catching up block hashes to height 0, this might take a while
2018-10-02 22:29:08.249 [INF] LNWL: Done catching up block hashes
2018-10-02 22:29:08.249 [INF] LNWL: Finished rescan for 0 addresses (synced to block 683e86bd5c6d110d91b94b97137ba6bfe02dbbdb8e3dff722a669b5d69d77af6, height 0)
2018-10-02 22:29:08.250 [INF] RPCS: RPC server listening on 127.0.0.1:10009
2018-10-02 22:29:08.250 [INF] RPCS: gRPC proxy started at 127.0.0.1:8080
2018-10-02 22:29:08.302 [INF] HSWC: Starting HTLC Switch
2018-10-02 22:29:08.302 [INF] NTFN: New block epoch subscription
2018-10-02 22:29:08.302 [INF] NTFN: New block epoch subscription
2018-10-02 22:29:08.302 [INF] NTFN: New block epoch subscription
2018-10-02 22:29:08.302 [INF] DISC: Authenticated Gossiper is starting
2018-10-02 22:29:08.302 [INF] NTFN: New block epoch subscription
2018-10-02 22:29:08.302 [INF] BRAR: Starting contract observer, watching for breaches.
2018-10-02 22:29:08.302 [INF] CRTR: FilteredChainView starting
2018-10-02 22:29:08.350 [INF] CRTR: Filtering chain using 0 channels active
2018-10-02 22:29:08.350 [INF] CRTR: Prune tip for Channel Graph: height=0, hash=683e86bd5c6d110d91b94b97137ba6bfe02dbbdb8e3dff722a669b5d69d77af6
2018-10-02 22:29:08.351 [INF] CMGR: Server listening on [::]:9735
2018-10-02 22:29:08.352 [INF] SRVR: Auto peer bootstrapping is disabled
</code></pre>
<h3 id="funding-your-new-wallet">Funding Your New Wallet</h3>
<p>Create a new bitcoin address:</p>
<pre><code class="language-bash">lncli-ws newaddress np2wkh
</code></pre>
<p>Note that the address type here is important - <code>np2wkh</code> (nested pay-to-witness-key-hash).</p>
<p>Example output:</p>
<pre><code class="language-json">{
    "address": "&lt;new address printed here&gt;"
}
</code></pre>
<p>Copy and paste this address to the IRC channel.</p>
<p>The workshop organizer should now restart the btcd node with this address as the recipient "mining" address:</p>
<pre><code class="language-bash">btcd --miningaddr=&lt;ADDRESS&gt;
</code></pre>
<p>Then the organizer should generate new blocks using the <code>btcctl</code> utility:</p>
<pre><code class="language-bash">btcctl generate 100
</code></pre>
<p>This will generate 100 new blocks.</p>
<p>Example output:</p>
<pre><code class="language-json">[
  "68d12bead9ac4a87599582d5186bf08634c9dfe86678d123b27d7d682ecd39df",
  "14cd20d79a0b2786b6a24710e75822eadd949f57cac93ea0dc8abcf858f7bf18",
  "67e5d6ceb6db58d334a195ab85613753aa909a2783c61c600320733614d6c7c7",
  "4e92a4c34792ac9394efb8470d6251656b09ae1e9d8bdfb95f5f7a69569ce925",
  "5ba8cbc7d51c80372aa48512fc81760f48f26167df518e6845ace3fab0978882",
  "... more block hashes omitted"
]
</code></pre>
<p>Normally blocks are mined about once every 10 minutes. But we don't have that kind of time, so we use the above command to generate blocks on-demand as we need them.</p>
<p>Now check your wallet balance:</p>
<pre><code class="language-bash">lncli-ws walletbalance
</code></pre>
<p>Example output:</p>
<pre><code class="language-json">{
    "total_balance": "5000000000",
    "confirmed_balance": "5000000000",
    "unconfirmed_balance": "0"
}
</code></pre>
<ul>
<li>The amounts shown are in satoshis</li>
<li><code>1 bitcoin = 100 million satoshis</code></li>
</ul>
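<p>If you want to double-check the conversion, a one-liner in the shell turns the satoshi amount reported by <code>lncli</code> into BTC using integer math:</p>

```bash
# Convert the walletbalance output (satoshis) to BTC;
# 1 BTC = 100,000,000 satoshis.
sats=5000000000
printf "%d.%08d BTC\n" $((sats / 100000000)) $((sats % 100000000))
```

<p>For the balance above this prints <code>50.00000000 BTC</code> - the block rewards from the 100 generated blocks that have matured so far.</p>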
<p>The above steps should be repeated for each workshop participant.</p>
<h2 id="opening-a-channel">Opening a Channel</h2>
<p>Before you can open channels, you will need to be connected to other lightning nodes ("peers"). To see your current peer connections:</p>
<pre><code class="language-bash">lncli-ws listpeers
</code></pre>
<p>Since you haven't connected to any peers yet, the result will be an empty array:</p>
<pre><code class="language-json">{
  "peers": []
}
</code></pre>
<p>To connect to "Bob":</p>
<pre><code class="language-bash">lncli-ws connect 0200eacbb299639eb19f4afd39d572e87398062e500e6b63ceb1ebded31fddd3b7@159.89.23.5
</code></pre>
<p>Now when you list your peers, you should see Bob's node listed:</p>
<pre><code class="language-bash">lncli-ws listpeers
</code></pre>
<p>Result:</p>
<pre><code class="language-json">{
    "peers": [
        {
            "pub_key": "0200eacbb299639eb19f4afd39d572e87398062e500e6b63ceb1ebded31fddd3b7",
            "address": "159.89.23.5:9735",
            "bytes_sent": "279",
            "bytes_recv": "279",
            "sat_sent": "0",
            "sat_recv": "0",
            "inbound": false,
            "ping_time": "0"
        }
    ]
}
</code></pre>
<p>But all we've done so far is make two lightning nodes aware of each other. We haven't opened any channels yet.</p>
<p>Try to open a channel with your new peer:</p>
<pre><code class="language-bash">lncli-ws openchannel --node_key=&lt;PEER_PUBKEY&gt; --local_amt=1000000
</code></pre>
<p>Each channel has two sides ("local" and "remote"). The <code>local_amt</code> argument in the above command is how much you want to add to the channel on the local (your) side.</p>
<p>Example output on success:</p>
<pre><code class="language-json">{
  "funding_txid": "41f045e8d3a33cceff237183e37eeabff3aea0cf4d64e28238807afe5df2dfa5"
}
</code></pre>
<p>If you see the following error:</p>
<pre><code>[lncli] rpc error: code = Unknown desc = not enough witness outputs to create funding transaction, need 0.01 BTC only have 0 BTC  available
</code></pre>
<p>Then you need to go back to the <a href="#create-and-fund-your-wallet">Create and Fund Your Wallet</a> step above and check that you are using the correct address type.</p>
<p>If you see an error similar to the following:</p>
<pre><code>[lncli] rpc error: code = Unknown desc = has witness data, but segwit isn't active yet
</code></pre>
<p>This means that the block height of the simnet blockchain is not yet high enough for segwit to be active. On simnet, segwit activates at a block height of 300, so whoever is controlling the <code>btcd</code> node should generate blocks until the chain passes that height.</p>
<p>When your lightning node tries to open a new channel, it creates and broadcasts a special bitcoin transaction. Before the channel is considered "open", this transaction needs to be confirmed. The default number of confirmations (blocks) is 6. Whoever is controlling the <code>btcd</code> node should generate 6 new blocks:</p>
<pre><code class="language-bash">btcctl generate 6
</code></pre>
<p>Check that the channel was created:</p>
<pre><code class="language-bash">lncli-ws listchannels
</code></pre>
<p>You should see your new channel:</p>
<pre><code class="language-json">{
    "channels": [
        {
            "active": true,
            "remote_pubkey": "0200eacbb299639eb19f4afd39d572e87398062e500e6b63ceb1ebded31fddd3b7",
            "channel_point": "a8ea16394e1c759f57b3486eea33e3977b0a39d7ea70f409c3519bd581b52573:0",
            "chan_id": "1181974999924736",
            "capacity": "1000000",
            "local_balance": "990950",
            "remote_balance": "0",
            "commit_fee": "9050",
            "commit_weight": "600",
            "fee_per_kw": "12500",
            "unsettled_balance": "0",
            "total_satoshis_sent": "0",
            "total_satoshis_received": "0",
            "num_updates": "0",
            "pending_htlcs": [
            ],
            "csv_delay": 144,
            "private": false
        }
    ]
}
</code></pre>
<p>Super! With your newly opened channel, you can start sending and receiving lightning payments.</p>
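<p>As an aside, you can sanity-check the numbers in the <code>listchannels</code> output: for a channel that you funded, with no pending HTLCs, the local and remote balances plus the commit fee should add up to the channel capacity. Here is a small node.js sketch (illustrative only, not part of <code>lncli</code>) that verifies this:</p>

```js
// Check that a channel's balances account for its full capacity.
// Assumes a funder-side channel with no unsettled (in-flight) HTLCs.
function checkChannelBalances(channel) {
    var capacity = Number(channel.capacity);
    var accountedFor = Number(channel.local_balance) +
        Number(channel.remote_balance) +
        Number(channel.commit_fee);
    return accountedFor === capacity;
}

// Values from the example output above:
console.log(checkChannelBalances({
    capacity: '1000000',
    local_balance: '990950',
    remote_balance: '0',
    commit_fee: '9050'
}));
// true
```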
<h2 id="sending-and-receiving-payments">Sending and Receiving Payments</h2>
<p>The first step in receiving a payment via the Lightning Network is to generate an invoice. Use the following command to create your first invoice:</p>
<pre><code class="language-bash">lncli-ws addinvoice --amt=10000
</code></pre>
<p>Example output:</p>
<pre><code class="language-json">{
  "r_hash": "a9b22eca10de08426f11f3f59b8a733f1af831a699c1b3f6ca632533239dc1dd",
  "pay_req": "lnsb1pd8pxdzpp54xezajssmcyyymc3706ehznn8ud0svdxn8qm8ak2vvjnxguac8wsdqqcqzyse0qkh2fdn4adwlz598s4v9l2ulner3jalncsjf33za0r3hksv2u3m7vw2663ypaqcc4fjsuzeh5n5hfsqyggwk3rzp6neng4hza8stgp4aaszp"
}
</code></pre>
<p>The <code>pay_req</code> field is what we will share with our customer (or whoever will pay the invoice).</p>
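<p>The payment request is a BOLT11-encoded string. Fully decoding one requires a bech32 library, but the human-readable part (everything before the last <code>1</code>) can be inspected directly: it contains <code>ln</code>, a network identifier (<code>sb</code> is simnet), and an optional amount. The <code>parseInvoicePrefix</code> function below is a hypothetical helper for illustration, not part of lnd:</p>

```js
// Parse the human-readable prefix of a BOLT11 payment request.
// Format: "ln" + network ("bc" mainnet, "tb" testnet, "sb" simnet,
// "bcrt" regtest) + optional amount and multiplier (m, u, n, or p).
function parseInvoicePrefix(payReq) {
    var hrp = payReq.slice(0, payReq.lastIndexOf('1'));
    var match = /^ln(bcrt|bc|tb|sb)([0-9]+)?([munp])?$/.exec(hrp);
    if (!match) throw new Error('unrecognized payment request prefix');
    return {
        network: match[1],
        amount: match[2] || null,
        multiplier: match[3] || null
    };
}

// The simnet invoice above encodes no amount in its prefix:
console.log(parseInvoicePrefix('lnsb1pd8pxdzpp54xezajssmcyyymc3706e'));
// { network: 'sb', amount: null, multiplier: null }
```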
<p>Share the above payment request invoice via the IRC chat so that one of the other workshop participants can send you a payment.</p>
<p>Some other participant should run the following command to send the payment:</p>
<pre><code class="language-bash">lncli-ws payinvoice &lt;The Payment Request&gt;
</code></pre>
<p>But wait, something went wrong:</p>
<pre><code class="language-json">{
    "payment_error": "unable to route payment to destination: TemporaryChannelFailure: insufficient capacity in available outgoing links: need 9002018 mSAT, max available is 1312000 mSAT",
    "payment_preimage": "",
    "payment_route": null
}
</code></pre>
<p>We are seeing this error because Bob doesn't have any funds on his side of your channel. In order for the payment to be successfully routed from the other participant to you via Bob, his node must have a local balance equal to or greater than the requested amount. Here is a series of diagrams to illustrate:</p>
<pre><code>+ ----- +                      + --- +                      + --------- +
| payor | &lt;-- 1000000 -- 0 --&gt; | Bob | &lt;-- 0 -- 1000000 --&gt; | recipient |
+ ----- +                      + --- +                      + --------- +
</code></pre>
<p>The next step of the failing payment looks like this:</p>
<pre><code>+ ----- +                         + --- +                      + --------- +
| payor | &lt;-- 990000 -- 10000 --&gt; | Bob | &lt;-- 0 -- 1000000 --&gt; | recipient |
+ ----- +                         + --- +                      + --------- +
</code></pre>
<p>And then where the payment finally fails:</p>
<pre><code>+ ----- +                         + --- +                            + --------- +
| payor | &lt;-- 990000 -- 10000 --&gt; | Bob | &lt;-- (10000) -- 1010000 --&gt; | recipient |
+ ----- +                         + --- +       !!!                  + --------- +
</code></pre>
<p>Payments are routed by transferring value from one side of a channel to the other, so that the transacted amount is effectively moved through the network until it reaches its final destination. You can only ever transfer value between the two peers involved in a channel by manipulating the local and remote balances in that channel. It is not possible to transfer value directly between channels. If one side of a channel does not have enough value to transfer to the other side, the routing will fail, as in the example above.</p>
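<p>The routing behavior described above can be sketched in a few lines of node.js. The following toy model (an illustration only, not how lnd actually implements routing, and with routing fees omitted) moves an amount across each channel in a route and fails when any hop lacks a sufficient local balance:</p>

```js
// Toy model of multi-hop routing: each channel stores the balances of
// its two endpoints, keyed by "from->to". Routing moves the amount from
// one side to the other of every channel along the route.
function routePayment(channels, route, amount) {
    var hops = route.slice(1).map(function(to, i) {
        return { from: route[i], to: to };
    });
    // Verify that every hop can forward the amount before moving anything.
    var failed = hops.filter(function(hop) {
        return amount > channels[hop.from + '->' + hop.to][hop.from];
    })[0];
    if (failed) return { ok: false, failedHop: failed.from };
    // Apply the balance updates along the route.
    hops.forEach(function(hop) {
        var channel = channels[hop.from + '->' + hop.to];
        channel[hop.from] -= amount;
        channel[hop.to] += amount;
    });
    return { ok: true };
}

// The failing scenario from the diagrams above:
var channels = {
    'payor->bob': { payor: 1000000, bob: 0 },
    'bob->recipient': { bob: 0, recipient: 1000000 }
};
console.log(routePayment(channels, ['payor', 'bob', 'recipient'], 10000));
// { ok: false, failedHop: 'bob' }
```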
<p>To fix this, you must first send some funds to Bob.</p>
<p>Whoever is controlling Bob's node should create a second payment request:</p>
<pre><code class="language-bash">lncli-ws addinvoice --amt=10000
</code></pre>
<p>Once they've shared the payment request via IRC, go ahead and pay it:</p>
<pre><code class="language-bash">lncli-ws payinvoice &lt;Bob's Payment Request&gt;
</code></pre>
<p>Check the balances in your channel to see that the payment was sent:</p>
<pre><code class="language-bash">lncli-ws listchannels
</code></pre>
<p>Now you should see that the remote balance is enough for your previous payment request to be routed.</p>
<pre><code class="language-json">{
    "channels": [
        {
            "active": true,
            "remote_pubkey": "0200eacbb299639eb19f4afd39d572e87398062e500e6b63ceb1ebded31fddd3b7",
            "channel_point": "a8ea16394e1c759f57b3486eea33e3977b0a39d7ea70f409c3519bd581b52573:0",
            "chan_id": "1181974999924736",
            "capacity": "1000000",
            "local_balance": "980950",
            "remote_balance": "10000",
            "commit_fee": "9050",
            "commit_weight": "724",
            "fee_per_kw": "12500",
            "unsettled_balance": "0",
            "total_satoshis_sent": "10000",
            "total_satoshis_received": "0",
            "num_updates": "2",
            "pending_htlcs": [
            ],
            "csv_delay": 144,
            "private": false
        }
    ]
}
</code></pre>
<p>Ask another workshop participant to try to pay your invoice. This time it should be successful. Cool!</p>
<p>That's it for this workshop. The remaining sections are part of the setup guide for the workshop organizer.</p>
<p>Thanks for participating!</p>
<h2 id="setup-private-bitcoin-network">Setup Private Bitcoin Network</h2>
<p>In order to create a private bitcoin network, you will need to set up at least one bitcoin node. In this workshop, we will be using <a href="https://github.com/roasbeef/btcd">btcd</a>. Steps to install btcd:</p>
<pre><code class="language-bash">go get -v -u github.com/Masterminds/glide
git clone https://github.com/roasbeef/btcd $GOPATH/src/github.com/roasbeef/btcd
cd $GOPATH/src/github.com/roasbeef/btcd
glide install
go install . ./cmd/...
</code></pre>
<p>Create the btcd configuration file. The RPC usernames and passwords will be "randomly" generated (derived from the current timestamp) when you create the file:</p>
<pre><code class="language-bash">mkdir -p ~/.btcd;
cat &gt; ~/.btcd/btcd.conf &lt;&lt; EOL
simnet=1
txindex=1
rpclisten=$(host=($(hostname -I)); echo ${host[0]})
rpcuser=$(date '+%s%N-ln-workshop' | sha256sum | head -c 20)
rpcpass=$(date '+%s%N-ln-workshop' | sha256sum | head -c 32)
rpclimituser=$(date '+%s%N-ln-workshop' | sha256sum | head -c 20)
rpclimitpass=$(date '+%s%N-ln-workshop' | sha256sum | head -c 32)
rpcmaxclients=100
rpcmaxwebsockets=300
debuglevel=debug
EOL
</code></pre>
<p>Create the btcctl configuration file:</p>
<pre><code class="language-bash">mkdir -p ~/.btcctl;
cat &gt; ~/.btcctl/btcctl.conf &lt;&lt; EOL
simnet=1
$(cat ~/.btcd/btcd.conf | grep rpcuser=)
$(cat ~/.btcd/btcd.conf | grep rpcpass=)
EOL
</code></pre>
<p>Start btcd:</p>
<pre><code class="language-bash">btcd
</code></pre>
<p>You should see something like the following:</p>
<pre><code>2018-10-02 21:58:48.039 [INF] BTCD: Version 0.12.0-beta
2018-10-02 21:58:48.039 [INF] BTCD: Loading block database from '/home/user/.btcd/data/simnet/blocks_ffldb'
2018-10-02 21:58:48.045 [INF] BTCD: Block database loaded
2018-10-02 21:58:48.055 [INF] INDX: Transaction index is enabled
2018-10-02 21:58:48.055 [INF] INDX: cf index is enabled
2018-10-02 21:58:48.056 [DBG] INDX: Current internal block ID: 0
2018-10-02 21:58:48.056 [DBG] INDX: Current transaction index tip (height -1, hash 0000000000000000000000000000000000000000000000000000000000000000)
2018-10-02 21:58:48.056 [DBG] INDX: Current committed filter index tip (height -1, hash 0000000000000000000000000000000000000000000000000000000000000000)
2018-10-02 21:58:48.056 [INF] INDX: Catching up indexes from height -1 to 0
2018-10-02 21:58:48.056 [INF] INDX: Indexes caught up to height 0
2018-10-02 21:58:48.056 [INF] CHAN: Chain state (height 0, hash 683e86bd5c6d110d91b94b97137ba6bfe02dbbdb8e3dff722a669b5d69d77af6, totaltx 1, work 2)
2018-10-02 21:58:48.073 [INF] AMGR: Loaded 0 addresses from file '/home/user/.btcd/data/simnet/peers.json'
2018-10-02 21:58:48.073 [INF] RPCS: RPC server listening on 10.0.0.1:18556
2018-10-02 21:58:48.073 [INF] CMGR: Server listening on 0.0.0.0:18555
2018-10-02 21:58:48.073 [INF] CMGR: Server listening on [::]:18555
</code></pre>
<p>If you have a firewall enabled (always a good idea!), then you will need to allow connections to the btcd node's RPC port:</p>
<pre><code class="language-bash">sudo ufw allow 18556
</code></pre>
<p>Print your local area network IP address:</p>
<pre><code class="language-bash">hostname -I
</code></pre>
<p>And the credentials for the limited RPC user:</p>
<pre><code class="language-bash">cat ~/.btcd/btcd.conf | grep rpclimit
</code></pre>
<p>And finally the btcd node's RPC certificate:</p>
<pre><code class="language-bash">cat ~/.btcd/rpc.cert
</code></pre>
<p>Share the output of the above commands with the workshop's IRC channel, so that the workshop attendees can finish the configuration of their lnd nodes.</p>
<p>For the RPC TLS certificate, <a href="https://gist.github.com/">create a "gist"</a> and share the link instead of posting directly to the IRC chat.</p>
<p>Follow the participant instructions (links at the top of the page) to set up your lnd node and to create and fund its bitcoin wallet. Your lnd node will act as "Bob" during this workshop.</p>
<h3 id="gotchas-for-organizer">Gotchas for the Organizer</h3>
<p>There are some edge cases that can cause problems and waste time:</p>
<ul>
<li>If you've already had your btcd node setup once before and you change its listener IP address, you will need to re-generate the RPC TLS certificate. Delete the current certificate (and key file) then restart btcd.</li>
</ul>
<h2 id="more-resources">More Resources</h2>
<p>To learn more about the Lightning Network, have a look at the following additional resource(s):</p>
<ul>
<li><a href="https://dev.lightning.community/">https://dev.lightning.community/</a> - Community-driven website with many guides and detailed technical documentation about the LN protocol</li>
<li><a href="https://github.com/ElementsProject/lightning">https://github.com/ElementsProject/lightning</a> - Alternate LN implementation written in C</li>
<li><a href="https://github.com/ACINQ/eclair">https://github.com/ACINQ/eclair</a> - Alternate LN implementation written in Scala</li>
</ul>
</body></html>]]></description><link>https://degreesofzero.com/article/shared-private-lightning-network.html</link><guid isPermaLink="true">https://degreesofzero.com/article/shared-private-lightning-network.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Fri, 17 Aug 2018 14:00:00 GMT</pubDate></item><item><title><![CDATA[Streaming transactions from bitcoind via ZeroMQ]]></title><description><![CDATA[<html><head></head><body><p>There are many potential applications that need a reliable, fast stream of bitcoin transactions: payment processing, gathering transaction data for statistics and network analysis, fee estimation, and many more that I can't think of at the moment. You might think about using some existing third-party services (or APIs), but you will quickly realize that most of those services are unreliable, slow, inefficient, or too expensive to license. And let's not forget the biggest problem of all: They are a third party. In the world of cryptocurrencies, it's always better to verify yourself and cut out the middleman.</p>
<p>So in this guide, we will walk through the steps needed to set up a bitcoin node that will stream transactions over TCP using the <a href="http://zeromq.org/">ZeroMQ protocol</a>. For the sake of simplicity, it is assumed that both your bitcoin node and the subscriber application will run on the same machine.</p>
<h2 id="setup-bitcoind">Setup bitcoind</h2>
<p>Download bitcoind:</p>
<pre><code class="language-bash">wget -O bitcoind.tar.gz https://bitcoin.org/bin/bitcoin-core-0.16.2/bitcoin-0.16.2-x86_64-linux-gnu.tar.gz;
</code></pre>
<p>Check <a href="https://bitcoin.org/en/download">bitcoin.org</a> for the latest version number.</p>
<p>Extract the compressed tar file:</p>
<pre><code class="language-bash">tar -xvzf bitcoind.tar.gz
</code></pre>
<p>And move the bitcoind and bitcoin-cli binaries to the current directory:</p>
<pre><code class="language-bash">mv bitcoin-0.16.2/bin/bitcoind ./
mv bitcoin-0.16.2/bin/bitcoin-cli ./
</code></pre>
<p>Create a new folder to be used as the data directory:</p>
<pre><code class="language-bash">mkdir -p .bitcoin
</code></pre>
<p>Create a bitcoin configuration file:</p>
<pre><code class="language-bash">cat &gt; ./.bitcoin/bitcoin.conf &lt;&lt; EOL
# Use the regtest network, because we can generate blocks as needed.
regtest=1

# In this example, we will keep bitcoind running in one terminal window.
# So we don't need it to run as a daemon.
daemon=0

# RPC is required for bitcoin-cli.
server=1
rpcuser=test
rpcpassword=test

# In this example we are only interested in receiving raw transactions.
# The address here is the URL where bitcoind will listen for new ZeroMQ connection requests.
zmqpubrawtx=tcp://127.0.0.1:3000
EOL
</code></pre>
<p>For more details on how ZeroMQ is implemented and configured in bitcoind, see <a href="https://github.com/bitcoin/bitcoin/blob/master/doc/zmq.md">bitcoin/zmq.md</a>.</p>
<p>Start bitcoind:</p>
<pre><code class="language-bash">./bitcoind -datadir=./.bitcoin
</code></pre>
<p>Open a new terminal window (or tab), leaving bitcoind running in the first one.</p>
<p>Use bitcoin-cli to generate new blocks:</p>
<pre><code class="language-bash">./bitcoin-cli -datadir=./.bitcoin generate 101
</code></pre>
<p>This will give your local wallet some regtest bitcoin to play with. (101 blocks are generated because coinbase rewards require 100 confirmations before they can be spent.)</p>
<p>Get a new address:</p>
<pre><code class="language-bash">./bitcoin-cli -datadir=./.bitcoin getnewaddress
</code></pre>
<p>Send some bitcoin to the new address:</p>
<pre><code class="language-bash">./bitcoin-cli -datadir=./.bitcoin sendtoaddress "ADDRESS" 1.000
</code></pre>
<p>You will need to repeat this last step when testing the example node.js application later.</p>
<h2 id="example-subscriber-application">Example Subscriber Application</h2>
<p>In the following example node.js script, we will use <a href="https://github.com/zeromq/zeromq.js/">zeromq.js</a> and <a href="https://github.com/bitcoinjs/bitcoinjs-lib">bitcoinjs-lib</a> to subscribe to a TCP socket using the ZeroMQ protocol and parse the received raw transaction data. The code is commented to help you understand how it works.</p>
<pre><code class="language-js">// Library for working with the bitcoin protocol.
// For working with transactions, hd wallets, etc.
var bitcoin = require('bitcoinjs-lib');

// Implementation of ZeroMQ in node.js.
// From the maintainers of the ZeroMQ protocol.
var zmq = require('zeromq');

// Create a subscriber socket.
var sock = zmq.socket('sub');
var addr = 'tcp://127.0.0.1:3000';

// Initiate connection to TCP socket.
sock.connect(addr);

// Subscribe to receive messages for a specific topic.
// This can be "rawblock", "hashblock", "rawtx", or "hashtx".
sock.subscribe('rawtx');

sock.on('message', function(topic, message) {

    if (topic.toString() === 'rawtx') {

        // Message is a buffer. But we want it as a hex string.
        var rawTx = message.toString('hex');

        // Use bitcoinjs-lib to decode the raw transaction.
        var tx = bitcoin.Transaction.fromHex(rawTx);

        // Get the txid as a reference.
        var txid = tx.getId();

        console.log('received transaction', txid, tx);

        // To go further you can get the address for a specific output as follows:
        // var address = bitcoin.address.fromOutputScript(tx.outs[0].script, bitcoin.networks.testnet);
    }
});
</code></pre>
<p>Copy/paste the above code into a new file named <code>example.js</code>.</p>
<p>In this example, we are listening for raw transactions. But there are other "topics" that you can subscribe to via ZeroMQ:</p>
<ul>
<li><code>"rawblock"</code> - Receive raw block data for new blocks.</li>
<li><code>"hashblock"</code> - Receive only the block hash for new blocks.</li>
<li><code>"rawtx"</code> - Receive raw transaction data for new transactions.</li>
<li><code>"hashtx"</code> - Receive only the transaction hash for new transactions.</li>
</ul>
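<p>One gotcha with the hash topics: bitcoind publishes the raw 32-byte hash in its internal (little-endian) byte order, while RPC commands like <code>bitcoin-cli</code> display hashes byte-reversed (check the <code>zmq.md</code> document for your bitcoind version to confirm). To compare a hash received via ZeroMQ against a txid from the RPC interface, reverse the buffer:</p>

```js
// Convert a hash buffer received via ZeroMQ ("hashtx"/"hashblock") to
// the byte-reversed hex string used by RPC commands and block explorers.
function toRpcByteOrder(hashBuffer) {
    return Buffer.from(hashBuffer).reverse().toString('hex');
}

// The txid from the example output below, as it would arrive via ZeroMQ:
var fromZmq = Buffer.from(
    '151014c6e980b8cc2983bf5056fc97629919bf615e0981672d09e3790e03e5ff',
    'hex'
);
console.log(toRpcByteOrder(fromZmq));
// ffe5030e79e3092d6781095e61bf19996297fc5650bf8329ccb880e9c6141015
```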
<p>Initialize a new npm project:</p>
<pre><code class="language-bash">npm init
</code></pre>
<p>Hold down the <code>&lt;enter&gt;</code> key to accept all the defaults.</p>
<p>Install bitcoinjs-lib and zeromq modules via npm:</p>
<pre><code class="language-bash">npm install bitcoinjs-lib zeromq
</code></pre>
<p>Run the example node.js application:</p>
<pre><code class="language-bash">node example.js
</code></pre>
<p>Leave this running and open another terminal window (or tab).</p>
<p>To test whether everything is working properly, you will need to send a test transaction. Send some bitcoin using the command you tried earlier:</p>
<pre><code class="language-bash">./bitcoin-cli -datadir=./.bitcoin sendtoaddress "ADDRESS" 1.000
</code></pre>
<p>If all is well, you should see something like the following printed in the terminal window of the example node application:</p>
<pre><code>received transaction ffe5030e79e3092d6781095e61bf19996297fc5650bf8329ccb880e9c6141015 Transaction {
  version: 2,
  locktime: 101,
  ins: 
   [ { hash: &lt;Buffer b4 0a 3f 90 c4 ab b5 f3 37 3a 22 ac d0 47 1a cd 5b 61 5a ea dd 50 7c 87 9d b5 7e e6 82 d6 da 46&gt;,
       index: 1,
       script: &lt;Buffer 16 00 14 d6 08 78 ca 69 0b 3e b9 3e f1 c8 8d 67 8b ea 8f 86 59 0d 59&gt;,
       sequence: 4294967294,
       witness: [Array] } ],
  outs: 
   [ { value: 100000000,
       script: &lt;Buffer a9 14 6c d8 77 4d e7 1c 89 91 f9 f3 c3 3b d2 de 65 bf 21 da 2c 64 87&gt; },
     { value: 4799992920,
       script: &lt;Buffer a9 14 2f 0f b0 dc ab 89 06 e8 e0 5e ef 7e 53 94 82 22 cb 78 82 29 87&gt; } ] }
</code></pre>
<p>If you don't see the above output, then something is wrong. Double check that your bitcoind instance is still running. You can also check the debug log of bitcoind:</p>
<pre><code class="language-bash">tail .bitcoin/regtest/debug.log
</code></pre>
<h2 id="remote-setup-and-security-considerations">Remote Setup and Security Considerations</h2>
<p>In a production environment, the bitcoin node will typically run on a separate server from your node.js application. Before you configure your ZeroMQ end-point to listen on 0.0.0.0, it's important to understand that bitcoind's ZeroMQ interface runs over plain TCP and lacks any encryption or authentication mechanism.</p>
<p>To ensure that the bitcoin transactions being streamed to your node.js application are authentic, it's a good idea to use port forwarding via an SSH tunnel. This will provide a strong layer of encryption and authentication. For a detailed guide to configuring such a setup see <a href="https://degreesofzero.com/article/secure-cloud-services-via-ssh-tunneling.html">Secure Cloud Services via SSH Tunneling</a>.</p>
</body></html>]]></description><link>https://degreesofzero.com/article/streaming-transactions-from-bitcoind-via-zeromq.html</link><guid isPermaLink="true">https://degreesofzero.com/article/streaming-transactions-from-bitcoind-via-zeromq.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Tue, 07 Aug 2018 16:16:00 GMT</pubDate></item><item><title><![CDATA[Secure Cloud Services via SSH Tunneling]]></title><description><![CDATA[<html><head></head><body><p>For most web-based business applications these days, it is necessary to run secondary services such as databases, search indexes, shared caches, and so on. Typically these services will be running on their own dedicated box (VPS or dedicated hardware). As a security best practice, these services should not listen publicly for incoming requests, and inbound connections should be blocked by a firewall.</p>
<p>This presents a problem: How to allow the applications that depend on these services to reach them? Enter port forwarding via SSH tunneling. This guide will walk you through the process of configuring and debugging such a setup so that you can have the best of both worlds.</p>
<h2 id="configuring-the-service-server">Configuring the Service Server</h2>
<p>First we will need to allow SSH tunneling on the service server. To do this, we will create a dedicated user that is restricted to a subset of commands. We will further restrict incoming SSH tunnels to only allow port-forwarding.</p>
<p>Create a new user ("remote_access") with the following command:</p>
<pre><code class="language-bash">useradd --shell /bin/rbash --home-dir /home/remote_access --create-home remote_access;
</code></pre>
<ul>
<li>Use <code>rbash</code> (restricted bash) to limit the user to a small subset of commands.</li>
</ul>
<p>Create an authorized keys file for SSH authorization:</p>
<pre><code class="language-bash">mkdir -p /home/remote_access/.ssh;
touch /home/remote_access/.ssh/authorized_keys;
chown -R remote_access:remote_access /home/remote_access/.ssh;
</code></pre>
<p>Prepend each public key in the authorized keys file with the following:</p>
<pre><code>no-pty,no-X11-forwarding,permitopen="localhost:3306",command="/bin/echo do-not-send-commands" 
</code></pre>
<p>This will restrict the remote user to port-forwarding on a specific port ("3306" in this example).</p>
<p>You can allow more than one port by specifying the <code>permitopen</code> option once for each port. For example, to allow ports 3306 and 8080:</p>
<pre><code>no-pty,no-X11-forwarding,permitopen="localhost:3306",permitopen="localhost:8080",command="/bin/echo do-not-send-commands" 
</code></pre>
<h2 id="configuring-the-application-server">Configuring the Application Server</h2>
<p>Create a new RSA public/private key pair:</p>
<pre><code class="language-bash">mkdir -p ~/.ssh;
ssh-keygen -f ~/.ssh/id_rsa -t rsa;
</code></pre>
<p>Print the new RSA public key with the prepended SSH options:</p>
<pre><code class="language-bash">echo "no-pty,no-X11-forwarding,permitopen=\"localhost:3306\",command=\"/bin/echo do-not-send-commands\" $(cat ~/.ssh/id_rsa.pub)"
</code></pre>
<p>Copy/paste the output from the above command to the "remote_access" user's authorized keys file on the service server.</p>
<p>Now you can try connecting from the application server to the service server:</p>
<pre><code class="language-bash">ssh -vNTL 3307:localhost:3306 remote_access@REMOTE_HOST
</code></pre>
<ul>
<li><code>-v</code> Print verbose log messages.</li>
<li><code>-N</code> Do not execute a remote command.</li>
<li><code>-T</code> Disable pseudo-terminal allocation.</li>
<li><code>-L</code> Specifies that the given port (3307) on the local host is to be forwarded to the given host and port (localhost:3306) on the remote side.<ul>
<li>The local and remote ports <strong>must be different</strong> otherwise the tunnel won't work. There are some work-arounds to this, but that is outside the scope of this article.</li>
</ul>
</li>
</ul>
<p>Have a look at the output. You should see something like the following:</p>
<pre><code>debug1: Authentication succeeded (publickey).
Authenticated to REMOTE_HOST ([192.168.0.1]:22).
debug1: Local connections to LOCALHOST:3307 forwarded to remote address localhost:3306
debug1: Local forwarding listening on ::1 port 3307.
debug1: channel 0: new [port listener]
debug1: Local forwarding listening on 127.0.0.1 port 3307.
debug1: channel 1: new [port listener]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug1: Remote: PTY allocation disabled.
debug1: Remote: X11 forwarding disabled.
debug1: Remote: Forced command.
</code></pre>
<p>If you see the following message:</p>
<pre><code>debug1: No more authentication methods to try.
remote_access@bitcoind-1.cryptoterminal.eu: Permission denied (publickey).
</code></pre>
<p>This means that the server couldn't authenticate you as the "remote_access" user. Most likely you forgot to add your public key, or copied it incorrectly. Double check the "remote_access" user's authorized keys file.</p>
<p>If you see the following message while attempting to use the SSH tunnel:</p>
<pre><code>channel 2: open failed: administratively prohibited: open failed
</code></pre>
<p>This means that the <code>permitopen</code> option is not set correctly. Remember that the second port number is the remote port that should be allowed given the following value for <code>-L</code>: <code>3307:localhost:3306</code>.</p>
<p>If you are still having trouble, maybe take a break and come back to the problem later ;)</p>
<h3 id="automate-the-ssh-tunnel">Automate the SSH Tunnel</h3>
<p>Once you've got the SSH tunnel running, you can move on to configuring your application server to connect automatically on server start.</p>
<p>For managing the SSH tunnel (in case of network problems), it's a good idea to use <code>autossh</code>:</p>
<pre><code class="language-bash">sudo apt-get install autossh
</code></pre>
<p>This will automatically handle reconnects in case the remote server is rebooted, or if either server temporarily loses connectivity.</p>
<p>To start the SSH tunnel whenever the application server starts, add a crontask:</p>
<pre><code class="language-crontab">crontab -e
</code></pre>
<p>And then add the following on a new line:</p>
<pre><code>@reboot autossh -fnNTL 3307:localhost:3306 remote_access@REMOTE_HOST
</code></pre>
<ul>
<li><code>-f</code> Requests ssh to go to background just before command execution.</li>
<li><code>-n</code> Redirects stdin from <code>/dev/null</code> (actually, prevents reading from stdin). This must be used when ssh is run in the background.</li>
</ul>
<h2 id="conclusion">Conclusion</h2>
<p>Now you have the best of both worlds: Your services are safe and secure behind a firewall while still permitting your dependent applications to reach them. This limits any potential damage that can be caused by a compromised application server. And as a bonus, communications between the two servers are encrypted via the SSH tunnel. Yay!</p>
</body></html>]]></description><link>https://degreesofzero.com/article/secure-cloud-services-via-ssh-tunneling.html</link><guid isPermaLink="true">https://degreesofzero.com/article/secure-cloud-services-via-ssh-tunneling.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Fri, 03 Aug 2018 16:58:00 GMT</pubDate></item><item><title><![CDATA[SSH Tunnel on Windows Using PuTTY]]></title><description><![CDATA[<html><head></head><body><p>This guide will walk you through the steps needed to set up an SSH tunnel from a Windows machine using <a href="http://www.putty.org/">PuTTY</a>. You can download PuTTY <a href="https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html">here</a>. An SSH tunnel is useful for port-forwarding, for example when connecting securely to a remote database server.</p>
<p>If you're using Linux or Mac, it's probably easier to simply use the terminal (or <a href="https://www.cyberciti.biz/faq/how-to-use-ssh-in-unix-or-linux-shell-script/">Secure Shell program</a>) that is already installed on your system.</p>
<h2 id="rsa-key">RSA key</h2>
<p>If you already have a private RSA key, you will need to convert it to <code>ppk</code> format so that PuTTY can use it. To convert your RSA private key to this format, you can use PuTTYgen (which can be found <a href="http://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html">here</a>; search for "puttygen.exe" on that page). The program is simple enough to use; you shouldn't have any problems with it.</p>
<p>If you do not yet have an RSA key pair (public and private), the above linked PuTTYgen tool can generate one for you.</p>
<p>Once you have your private RSA key as a <code>ppk</code> file, you can move on to the next step.</p>
<h2 id="configure-putty">Configure PuTTY</h2>
<p>There are a few configurations that you must set precisely to get the SSH tunneling working with your RSA key.</p>
<p>First screen:</p>
<p><a href="ssh-tunnel-on-windows-using-putty/images/putty-configure-screen-01.png"></a></p><div class="left smaller"><div class="image"><a href="ssh-tunnel-on-windows-using-putty/images/putty-configure-screen-01.png"><img src="ssh-tunnel-on-windows-using-putty/images/putty-configure-screen-01.png" alt=""></a></div></div><p></p>
<div class="clear"></div>

<ul>
<li><strong>Host Name (or IP address)</strong> - Set the host name equal to <code>USER@HOST</code>; where <code>USER</code> is the user on the server with which you will be connecting and <code>HOST</code> is the IP address or hostname of the server.</li>
<li><strong>Close window on exit</strong> - This should be set to "Never". This allows you to debug the situation if a connection error occurs.</li>
</ul>
<p>Next screen:</p>
<p><a href="ssh-tunnel-on-windows-using-putty/images/putty-configure-screen-02.png"></a></p><div class="left smaller"><div class="image"><a href="ssh-tunnel-on-windows-using-putty/images/putty-configure-screen-02.png"><img src="ssh-tunnel-on-windows-using-putty/images/putty-configure-screen-02.png" alt=""></a></div></div><p></p>
<div class="clear"></div>

<ul>
<li><strong>Don't start a shell session or command at all</strong> - This should be checked.</li>
</ul>
<p>Next screen:</p>
<p><a href="ssh-tunnel-on-windows-using-putty/images/putty-configure-screen-03.png"></a></p><div class="left smaller"><div class="image"><a href="ssh-tunnel-on-windows-using-putty/images/putty-configure-screen-03.png"><img src="ssh-tunnel-on-windows-using-putty/images/putty-configure-screen-03.png" alt=""></a></div></div><p></p>
<div class="clear"></div>

<ul>
<li>Click the <strong>Browse</strong> button and select the <code>ppk</code> file that you generated earlier.</li>
</ul>
<p>Next screen:</p>
<p><a href="ssh-tunnel-on-windows-using-putty/images/putty-configure-screen-04.png"></a></p><div class="left smaller"><div class="image"><a href="ssh-tunnel-on-windows-using-putty/images/putty-configure-screen-04.png"><img src="ssh-tunnel-on-windows-using-putty/images/putty-configure-screen-04.png" alt=""></a></div></div><p></p>
<div class="clear"></div>

<ul>
<li><strong>Add new forwarded port</strong>:<ul>
<li><strong>Source port</strong> - the port number on your <em>local machine</em>; all requests to this port will be forwarded through the SSH tunnel to the remote server.</li>
<li><strong>Destination</strong> - the host and port number on the <em>remote server</em> to which requests will be forwarded. In this case, <code>localhost:3306</code> which is where a remote server's MySQL server is listening.</li>
<li>Don't forget to click the <strong>Add</strong> button.</li>
</ul>
</li>
</ul>
<p>That's it! It might be a good idea to save the configurations you've set so that you can easily go back and change things if you need to debug a problem. To save your configurations:</p>
<ul>
<li>Go back to the main screen</li>
<li>Give your session a name</li>
<li>Save</li>
</ul>
<h2 id="try-connecting">Try Connecting</h2>
<p>Now that everything is configured, click the <strong>Open</strong> button to initialize the connection to the server. You should see a black terminal window open up. Something like this:</p>
<div class="left smaller"><div class="image"><a href="ssh-tunnel-on-windows-using-putty/images/putty-configure-screen-05.png"><img src="ssh-tunnel-on-windows-using-putty/images/putty-configure-screen-05.png" alt=""></a></div></div>
<div class="clear"></div>

<p>To open the log view, as shown above, right-click the title bar of the window and click the menu item called <strong>Event Log</strong>.</p>
</body></html>]]></description><link>https://degreesofzero.com/article/ssh-tunnel-on-windows-using-putty.html</link><guid isPermaLink="true">https://degreesofzero.com/article/ssh-tunnel-on-windows-using-putty.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Sat, 14 Oct 2017 11:00:00 GMT</pubDate></item><item><title><![CDATA[Habits of a Careful Internet Citizen]]></title><description><![CDATA[<html><head></head><body><p>The internet can be a hostile place, not just because of trolls and soul-crushing comments on youtube videos. Websites and internet-based services are being attacked and their users' personal information stolen by the millions. But we don't have to resign ourselves to being victims. We can protect ourselves. In this post, I will explain how you can minimize your risk and improve your security online.</p>
<h2 id="guard-your-personal-information">Guard Your Personal Information</h2>
<p>The best way to reduce the risk of your personal information being stolen is to simply not give it out in the first place. Almost all websites and services these days will vacuum up as much personal user information as they can. This puts you at risk of your information being stolen by hackers, who will re-package and sell your information to bad actors to be used in phishing scams, identity theft, and other nasty things. Practically speaking, it can be difficult to use a lot of websites and services without providing at least some personal information. Or is it?</p>
<p>When you are filling out a form online (e.g. an order at an e-shop), certain information fields might be "required" for the form to be accepted (phone number, email, address, etc.). However, those fields might not actually be necessary to complete the order. It's very unlikely that the site really needs your phone number or email address. And if they are not going to be shipping you a physical item by post, they don't need your real postal address either. In these cases, here are some tricks you can use:</p>
<ul>
<li><strong>Email</strong> - Use a free, public email account from <a href="https://www.mailinator.com/">Mailinator</a>. This is a quick, easy, and free way to get a real, functioning email address. This is nice because you don't have to sign-up or login to use it, and you can check all the emails that an account receives.</li>
<li><strong>Phone number</strong> - Input a fake, "real-looking" phone number. This is fine if you don't expect a phone-based account verification process and you don't need to receive text messages (SMS) from the website. If you need SMS capabilities, there are a few sites that allow you to receive SMS messages for free. These sites are like Mailinator, but for SMS:<ul>
<li><a href="https://www.receive-sms-online.info/">www.receive-sms-online.info</a></li>
<li><a href="https://smsreceivefree.com/">smsreceivefree.com</a></li>
</ul>
</li>
<li><strong>Postal Address</strong> - Look-up a real-world address in any location that makes sense. I like to use Google Maps for this because it's usually quick and easy.</li>
</ul>
<p>You might be saying to yourself: "Wow, that's kind of extreme. And I don't like the idea of lying." Get over it. There's a battle happening every day for consumers' personal information. You can either be a victim or you can protect yourself.</p>
<h3 id="online-payments">Online Payments</h3>
<p>If you're feeling adventurous, have a look at <a href="https://bitcoin.org/en/bitcoin-for-individuals">bitcoin</a>. With bitcoin it is possible to transact on the internet without the need for banks or other intermediaries. This has many benefits, including protecting your privacy while making online purchases.</p>
<p>Otherwise if using a card to pay online (and you're in the United States), always use a <em>credit card</em>. The reason for this is that you are <a href="https://www.law.cornell.edu/uscode/text/15/1643">only liable for up to $50 in the case of fraudulent charges</a>. Debit cards do not have this protection. If your debit card information is stolen and used fraudulently, the funds will be transferred out of your bank account and you will be forced to endure a lengthy process to have your money returned to you.</p>
<p>If you don't use a credit card, it might be a good idea to open a secondary checking account with a limited balance. This will allow you to transact online without exposing your full bank balance to the risk of fraudulent charges. Doing this will likely cost you some money every month in fees, but that could be a small price to pay for the added security and peace of mind.</p>
<h2 id="browser-choice-and-setup">Browser Choice and Setup</h2>
<p>Use <a href="https://www.mozilla.org/en-US/firefox/new/">Firefox</a> or <a href="https://www.chromium.org/">Chromium</a> (non-Googled version of Chrome) as your primary browser. On mobile, use Firefox because Chrome on mobile does not allow extensions to be installed. Must-have browser extensions:</p>
<ul>
<li>uBlock Origin (for <a href="https://chrome.google.com/webstore/detail/ublock-origin/cjpalhdlnbpafiamejdnhcphjbkeiagm?hl=en">Chrome</a> and <a href="https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/">Firefox</a>) - Blocks ads and most third-party tracking services. As a bonus, it also makes pages load faster, reduces data usage, reduces power consumption, reduces visual clutter on the screen (so reading articles is easier), protects your privacy, and blocks potential drive-by virus installers and other malicious code served by ad networks.</li>
<li><a href="https://www.eff.org/https-everywhere">HTTPS Everywhere</a> (for Chrome and Firefox) - Forces encrypted communications with many major websites.</li>
</ul>
<p>Always use private browsing mode; in Chrome this is called "Incognito" mode. The reasoning here is that your browser will forget everything you've done when you close it. Yes, that means you will have to log in to websites every time you open your browser. But it also means that your accounts are protected against a whole class of browser-based vulnerabilities.</p>
<p>You may also want to consider disabling JavaScript in your browser. This makes pages load faster, saves your device's battery, and blocks most attack vectors. The downside is that some website functionality won't work, but you can easily whitelist those sites as needed.</p>
<h3 id="chrome-settings">Chrome Settings</h3>
<p>Firefox already does a good job of providing reasonable defaults to protect your privacy; Chrome not so much. Here are some settings you may want to change:</p>
<ul>
<li><strong>Block third-party cookies</strong> - Can be found at <code>chrome://settings/content/cookies</code></li>
<li>The following settings under "Advanced" in <code>chrome://settings</code>:<ul>
<li><input disabled="" type="checkbox"> "Use a web service to help resolve navigation errors"</li>
<li><input disabled="" type="checkbox"> "Use a prediction service to help complete searches and URLs typed in the address bar"</li>
<li><input disabled="" type="checkbox"> "Use a prediction service to load pages more quickly"</li>
<li><input disabled="" type="checkbox"> "Automatically send some system information and page content to Google to help detect dangerous apps and sites"</li>
<li><input disabled="" type="checkbox" checked=""> "Protect you and your device from dangerous sites"</li>
<li><input disabled="" type="checkbox"> "Automatically send usage statistics and crash reports to Google"</li>
<li><input disabled="" type="checkbox"> "Send a 'Do Not Track' request with your browsing traffic"</li>
<li><input disabled="" type="checkbox"> "Use a web service to help resolve spelling errors"</li>
</ul>
</li>
<li><strong>Disable autofill</strong> - Can be found at <code>chrome://settings/autofill</code></li>
<li><strong>Disable manage passwords</strong> - Can be found at <code>chrome://settings/passwords</code></li>
</ul>
<h2 id="use-a-password-manager">Use a Password Manager</h2>
<p>Using a password manager is a critical step towards improving your security online. Some of the benefits of a password manager include:</p>
<ul>
<li><strong>One password to remember</strong> - The only password you will need to remember is your password manager's master password.</li>
<li><strong>Stronger passwords</strong> - Humans are bad at creating strong, random passwords. This is why it's better to let your password manager do this for you.</li>
<li><strong>Unique password for each site</strong> - Since you don't need to remember them, you can use a unique password for each website. This is important because when a website is hacked, your accounts for other sites are not at risk of being compromised.</li>
</ul>
<p>Recommended password managers:</p>
<ul>
<li><a href="https://www.keepassx.org/">KeePassX</a><ul>
<li>Available for Windows, Mac, and Linux</li>
<li>Free</li>
<li>Requires your own backup and syncing scheme</li>
<li>For Android support try one of the following apps: <a href="https://play.google.com/store/apps/details?id=com.android.keepass">KeePassDroid</a>, <a href="https://play.google.com/store/apps/details?id=keepass2android.keepass2android&amp;hl=en">Keepass2Android</a></li>
</ul>
</li>
<li><a href="https://1password.com/">1Password</a><ul>
<li>Available for Windows, Mac, Android, and iOS</li>
<li>$3 per month</li>
<li>Cloud-based backup and sync between devices</li>
</ul>
</li>
</ul>
<h2 id="two-factor-authentication">Two-factor Authentication</h2>
<p>For your most important accounts (primary email, online banking, etc.), you should think about enabling two-factor authentication (2FA). This typically involves a secondary device (e.g. your phone) which generates a random code every 30 seconds that must be used in addition to your account password to log in. This dramatically improves your account security because an attacker would have to know your password and also have access to the device that generates your second-factor authentication codes.</p>
<p>SMS-based 2FA should be avoided. Never link your phone number as a "backup/recovery" method on any of your accounts. Phone numbers can be hijacked via social engineering. A determined attacker can call your phone company, convince some unmotivated customer service representative that they are you, and then switch the phone number on your account to a new SIM card. Once they do that, they will be able to gain access to your accounts via the account recovery mechanism using a code sent via SMS.</p>
<p>Recommended 2FA apps:</p>
<ul>
<li>Google Authenticator (for <a href="https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2&amp;hl=en">Android</a> and <a href="https://itunes.apple.com/us/app/google-authenticator/id388497605">iOS</a>)</li>
<li><a href="https://authy.com/">Authy</a></li>
</ul>
<h2 id="conclusion">Conclusion</h2>
<p>That's not everything, but it's a solid start to improving your defense posture online. Don't worry if you can't (or don't want to) change all of your habits right now. Pick a couple things today and do those. Maybe come back and add a few more in the future.</p>
<p>Until next time, good luck!</p>
</body></html>]]></description><link>https://degreesofzero.com/article/habits-of-a-careful-internet-citizen.html</link><guid isPermaLink="true">https://degreesofzero.com/article/habits-of-a-careful-internet-citizen.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Mon, 09 Oct 2017 10:00:00 GMT</pubDate></item><item><title><![CDATA[Migrating to a new password hashing algorithm]]></title><description><![CDATA[<html><head></head><body><p>This article assumes that your current password storage mechanism involves some irreversible hashing function. In this case we are forced to migrate each user to the new storage algorithm when they log in successfully, because we need the original plain text password to generate the new hash. If you're storing user passwords as plain text, or you're using some reversible encryption scheme, you can migrate all your users' passwords right away without the need for this article.</p>
<p>Here's a simplified example users table from MySQL:</p>
<pre><code>+--------------+------------------+------+-----+---------+----------------+
| Field        | Type             | Null | Key | Default | Extra          |
+--------------+------------------+------+-----+---------+----------------+
| id           | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| username     | varchar(255)     | NO   | UNI | NULL    |                |
| password     | varchar(255)     | NO   |     | NULL    |                |
+--------------+------------------+------+-----+---------+----------------+
</code></pre>
<p>In the above example table:</p>
<ul>
<li>The <code>id</code> field is an auto-incremented integer field unique to each user record.</li>
<li>The <code>username</code> field is a variable-length text field which must be unique to each user record.</li>
<li>The <code>password</code> field is a variable-length text field that will contain the hash of each user's password.</li>
</ul>
<p>We will need to add another field named <code>pw_storage_algorithm</code>:</p>
<pre><code>+-----------------------+------------------+------+-----+---------+----------------+
| Field                 | Type             | Null | Key | Default | Extra          |
+-----------------------+------------------+------+-----+---------+----------------+
| id                    | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| username              | varchar(255)     | NO   | UNI | NULL    |                |
| password              | varchar(255)     | NO   |     | NULL    |                |
| pw_storage_algorithm  | varchar(255)     | YES  |     | NULL    |                |
+-----------------------+------------------+------+-----+---------+----------------+
</code></pre>
<p>An empty or NULL value in this new field is interpreted as your current (legacy) password storage algorithm, so existing records need no backfill.</p>
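<p>A minimal sketch of the schema change in MySQL, assuming the table is named <code>users</code>:</p>

```sql
ALTER TABLE users
  ADD COLUMN pw_storage_algorithm VARCHAR(255) NULL DEFAULT NULL;
```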
<p>Your current login logic might look something like this (as pseudo-code):</p>
<pre><code>Search for user record in database (by username or email).

If user exists:
    If currentPasswordStorageAlgorithm(password) equals user.password:
        Login success.
    else
        Login failure.

Else:
    Login failure.
</code></pre>
<p>Modify the password checking logic in your application to take into account the new field:</p>
<pre><code>Search for user record in database (by username or email).

If user exists:
    If user.pw_storage_algorithm is "new_algorithm":
        If newPasswordStorageAlgorithm(password) equals user.password:
            Login success.
        Else:
            Login failure.
    Else:
        If legacyPasswordStorageAlgorithm(password) equals user.password:
            Set user.pw_storage_algorithm equal to "new_algorithm".
            Set user.password equal to newPasswordStorageAlgorithm(password).
            Save the user record.
            Login success.
        Else:
            Login failure.

Else:
    Login failure.
</code></pre>
<p>Notice the branch where we check the password using the legacy password storage algorithm. If the check is successful we hash the plain text password using the new password storage algorithm, then store this hash in the database. With this setup, whenever a user successfully logs in, the stored password will be swapped for the new algorithm. The next time the user logs in, the new password storage algorithm will be used to check their password.</p>
<p>That's it! Good luck and don't forget to add some tests ;)</p>
</body></html>]]></description><link>https://degreesofzero.com/article/migrating-to-a-new-password-hashing-algorithm.html</link><guid isPermaLink="true">https://degreesofzero.com/article/migrating-to-a-new-password-hashing-algorithm.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Mon, 25 Sep 2017 12:30:00 GMT</pubDate></item><item><title><![CDATA[Lessons Learned from a Year of Meetups]]></title><description><![CDATA[<html><head></head><body><p>Almost every Monday for the past year, I've been organizing a programming meetup in Prague. I tried multiple different formats and teaching styles. I bumbled through some poorly thought-out presentations and had some great sessions where I think everyone learned and had fun. In the past few months it has settled into a smooth stride, and I am finally feeling good about it. So with over 50 individual meetups hosted and hundreds of attendees, now is probably a good time for a little reflection.</p>

<div class="image center">
	<img src="lessons-learned-from-a-year-of-meetups/images/meetup.jpg" alt="">
</div>

<p>I believe my original motivation for the group was to meet smart, interesting people with whom I could work on some fun projects. At the time I was suffering from a personal lack of motivation and needed a change to get things moving in the right direction again. With that said, I think it might be helpful if I dive into a few points which could help others who are considering organizing a similar group.</p>


<h2>Pick a Theme</h2>

<p>It's not possible to be everything to everyone. Instead focus on an area where your personal strengths will be of most use.</p>

<p>My group is focused on full-stack web programming, specifically Node.js and JavaScript, because this is where my hard skills are strongest. I also felt that it was going to be important to have a unique theme for the group. I tried to think back to how I started programming. Being a self-taught programmer, I learned by working with open source software: modifying an existing thing to change how it works or to create something new. I settled on the name "Learning by doing".</p>


<h2>Be Clear</h2>

<p>Describe the meetup clearly enough so that potential attendees can self-select for the right fit. This is especially important if your space is limited. It's not fun when someone shows up to a meetup expecting something completely different. Though, this will happen sometimes no matter what you do.</p>


<h2>Interactive Learning &gt; Talking at People</h2>

<p>I don't know about you, but I dislike lectures where the presenter drones on and on about a topic without any kind of interaction with their audience. If it's not possible to have a fully interactive presentation, where the attendees can follow along with some real programming tasks, then at least stop for the occasional Q&amp;A.</p>


<h2>Go Slowly</h2>

<p>This is something I still struggle with, but I think I am doing much better these days. It's easy to forget how complex some programming concepts are, once you've already crossed the threshold of understanding. Have empathy for those who are struggling. A good technique is to stop and check for raised hands or confused faces.</p>


<h2>Try to Get Others Involved</h2>

<p>Don't try to do everything yourself forever. The people who show up regularly would probably love to help out somehow. Talk to them. Ask them what their goals are, and if they would like to be more involved.</p>
<p>In my group, some of the regulars are beginning to prepare their own presentations. I have had to be very persistent in asking attendees if they would like to present a topic sometime. The difficulty in finding presenters is mostly due to lack of time on their part, which is understandable.</p>


<h2>Onward</h2>

<p>Overall it's been a great experience. I've met a bunch of nice people, and it has pushed me to improve my soft skills. I look forward to seeing where it goes.</p>

<p>Cheers!</p>
</body></html>]]></description><link>https://degreesofzero.com/article/lessons-learned-from-a-year-of-meetups.html</link><guid isPermaLink="true">https://degreesofzero.com/article/lessons-learned-from-a-year-of-meetups.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Sun, 23 Apr 2017 20:00:00 GMT</pubDate></item><item><title><![CDATA[HTML5 Audio Sprites]]></title><description><![CDATA[<html><head></head><body><p>Since you're reading this, you probably have a pile of sound files in a web application and are wondering if there's a better way. And since you already know how image sprites work, you got to thinking that maybe you could do the same thing for audio. Well, you're in luck. You can! And, with a bit of third-party-library magic, it's possible across all the major browsers.</p>

<p>Right off the bat, here's a working demo of an audio sprite:</p>
<iframe src="/demos/audio-sprites" class="demo-iframe audio-sprites"></iframe>

<p>Each button plays a different sound, but all from a single audio file. This is done by specifying an <code>offset</code> and a <code>length</code> for each individual sound that is contained within the audio sprite file. When you want to play a particular sound, the audio file begins playing at that sound's offset and finishes playing after a specific amount of time specified by that sound's length.</p>

<p>If you inspect <a href="/demos/audio-sprites/js/audio-sprites.js">the source</a> of the demo, you will notice that I am using the <a href="https://github.com/CreateJS/SoundJS/">createjs</a> sound library for audio playback. SoundJS does not support audio sprites in the way I was attempting to use them, so I had to create a wrapper class to handle this part. Here is a sample of the wrapper class in use:</p>

<pre class="code js">// Create an instance of the wrapper class.
var Sound = new SoundManager({
	// The base URI path for downloading sound files.
	basePath: 'sounds/',
	// Define all the sounds we want to use in our application.
	// You can reference separate audio files, or the same audio file for each sound.
	sounds: [
		{ fileName: 'all.ogg', offset: 0, length: 115, id: 'sound1' },
		{ fileName: 'all.ogg', offset: 1000, length: 205, id: 'sound2' },
		{ fileName: 'all.ogg', offset: 2000, length: 524, id: 'sound3' }
	],
	// Fallback file extension(s).
	alternateExtensions: ['m4a']
});

// Listen for some user event to trigger a sound.
var button2 = document.getElementById('play-sound-button-2');
button2.addEventListener('click', function(evt) {
	Sound.play('sound2');
});</pre>

<p>Note the <code>offset</code> value of each successive sound. I have added some padding between the sounds so that audio playback from one track is less likely to overlap with the next one. The reason for this is the <a href="https://stackoverflow.com/questions/21097421/what-is-the-reason-javascript-settimeout-is-so-inaccurate">inaccuracy of JavaScript's setTimeout</a> function. This is also why I've included a custom <code>setTimeout</code> function in the wrapper class:</p>

<pre class="code js">SoundManager.prototype.setTimeout = function(callback, delay) {
	var start = Date.now();
	var end = start + delay;
	// The step interval must be defined before computing `expected`.
	var interval = 50;
	var expected = start + interval;
	setTimeout(function step() {
		var now = Date.now();
		if (now &gt; end) {
			// Enough time has passed.
			// Execute the callback.
			return callback();
		}
		var drift = now - expected;
		expected += interval;
		setTimeout(step, Math.max(0, interval - drift));
	}, interval);
};</pre>

<p>The above function creates a loop internally which checks and adjusts for the drift since the last iteration. It will continue until the original <code>delay</code> time has passed, then it will execute the <code>callback</code> function it was given. This isn't a perfect solution, because there could still be significant delays in the <a href="https://developer.mozilla.org/en/docs/Web/JavaScript/EventLoop">event loop</a> caused by slowly executing code somewhere else in the application.</p>

<p>Well, that's about it for audio sprites. Have some noisy fun out there!</p>
</body></html>]]></description><link>https://degreesofzero.com/article/html5-audio-sprites.html</link><guid isPermaLink="true">https://degreesofzero.com/article/html5-audio-sprites.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Mon, 12 May 2014 20:12:03 GMT</pubDate></item><item><title><![CDATA[Manage Remote MySQL Servers with Local phpMyAdmin]]></title><description><![CDATA[<html><head></head><body><p>Have you ever needed to manage your remote MySQL databases, and ended up settling for the less-than-ideal setup of having an instance of phpMyAdmin on the same server as the MySQL server? Well, I am about to make your day. I am going to show you how to manage any number of remote MySQL databases from your local instance of phpMyAdmin, without compromising on security.</p>

	<p>This tutorial assumes you've already got phpMyAdmin up and running locally. If you need help with that, there are <a href="https://duckduckgo.com/?q=how+to+install+phpmyadmin">plenty of articles floating around</a> to guide you.</p>


	<h3>SSH Tunnel</h3>
	<p>It is not absolutely necessary, but I suggest setting up <a href="/article/passwordless-ssh-on-linux.html">Passwordless SSH</a> between your local machine and the remote server. This will allow you to run the SSH tunnel as a background process.</p>
	<p>We will be using SSH to set up a connection to our remote host, through which all requests to a specific port locally will be forwarded to a specific port on the remote machine. This is where the magic happens:</p>
	<pre class="code bash">ssh -fNL 3307:localhost:3306 root@REMOTE_HOST</pre>
	<p><i>Replace <b>REMOTE_HOST</b> with the IP address (or host name) of the remote server.</i></p>

	<p>Let's break that command down a bit:</p>
	<ul>
		<li><b>-f</b> Requests ssh to go to background just before command execution. Useful for having the SSH tunnel run in the background. If you are using a password to connect to the remote server, you'll want to remove this argument.</li>
		<li><b>-N</b> Do not execute a remote command. This is useful for just forwarding ports.</li>
		<li><b>-L</b> Specifies that the given port on the local (client) host is to be forwarded to the given host and port on the remote side.</li>
	</ul>

	<p>To verify that the SSH tunnel was started successfully, run the following command:</p>
	<pre class="code bash">ps aux | grep ssh</pre>
	<p>You should see the ssh command you entered earlier in this list of processes.</p>
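<p>You can also verify the tunnel end-to-end by pointing a MySQL client at the local forwarded port. The user name below is a placeholder for a valid MySQL user on the remote server:</p>

```shell
# If the tunnel is up, this connects to the REMOTE server's MySQL
# through local port 3307 - exactly the path phpMyAdmin will use.
mysql -h 127.0.0.1 -P 3307 -u your_mysql_user -p
```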


	<h3>Configure phpMyAdmin</h3>
	<p>Now that we've got the SSH tunnel running in the background, we can configure phpMyAdmin to connect to the remote machine over the tunnel. Edit the following configuration file:</p>
	<pre class="code bash">sudo vim /etc/phpmyadmin/config.inc.php</pre>
	<p>Add the following to the end of the file:</p>
	<pre class="code php"># Add the following after all the existing server configurations:
$cfg['Servers'][$i]['verbose']       = 'Local';
$cfg['Servers'][$i]['host']          = 'localhost';
$cfg['Servers'][$i]['port']          = '3306';
$cfg['Servers'][$i]['connect_type']  = 'tcp';
$cfg['Servers'][$i]['extension']     = 'mysqli';
$cfg['Servers'][$i]['compress']      = FALSE;
$cfg['Servers'][$i]['auth_type']     = 'cookie';
$i++;

$cfg['Servers'][$i]['verbose']       = 'Remote Server 1';// Change this to whatever you like.
$cfg['Servers'][$i]['host']          = '127.0.0.1';
$cfg['Servers'][$i]['port']          = '3307';
$cfg['Servers'][$i]['connect_type']  = 'tcp';
$cfg['Servers'][$i]['extension']     = 'mysqli';
$cfg['Servers'][$i]['compress']      = FALSE;
$cfg['Servers'][$i]['auth_type']     = 'cookie';
$i++;</pre>
	<p>Save the file. Now, open up your browser and point it at your local phpMyAdmin instance. If you modified the phpMyAdmin config file correctly, you should now see a server selection drop-down at the login screen. It will list both of the servers: "Local" and "Remote Server 1". You can add any number of servers to phpMyAdmin in this way.</p>

	<p>Now, finally, the moment of truth. Try logging in with a valid MySQL user on your remote server. If all is well, you should be logged in to the account and should see all the databases to which that user has access.</p></body></html>]]></description><link>https://degreesofzero.com/article/manage-remote-mysql-servers-with-local-phpmyadmin.html</link><guid isPermaLink="true">https://degreesofzero.com/article/manage-remote-mysql-servers-with-local-phpmyadmin.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Tue, 15 Apr 2014 17:08:05 GMT</pubDate></item><item><title><![CDATA[Scheduled, Automatic Remote Database Backups on Linux]]></title><description><![CDATA[<html><head></head><body><p>In this post I will walk you through the process of setting up a scheduled, automatic remote database backup on Linux.</p><p>If you haven't already done so, you'll need to <a href="/article/passwordless-ssh-on-linux.html">set up passwordless SSH</a> from the Server with the database(s) to the Server that will be storing the database backup files.</p>
<h3>Create New MySQL User to Perform Backups</h3><p>This step is to be performed on the server with the database(s).</p><p>To be on the safe side, it's a good idea not to use the root MySQL user to perform the database backups. So, let's create a new user with just enough privileges to create backups:</p><pre class="code bash">CREATE USER '__DB_USER__'@'localhost' IDENTIFIED BY  '__DB_PASSWORD__';

GRANT SELECT, RELOAD, FILE, SUPER, LOCK TABLES, SHOW VIEW
ON *.* TO '__DB_USER__'@'localhost'
WITH MAX_QUERIES_PER_HOUR 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0 MAX_USER_CONNECTIONS 0;

FLUSH PRIVILEGES;</pre><p><i>Replace <strong>__DB_USER__</strong> with the name for your new user.</i></p><p><i>Replace <strong>__DB_PASSWORD__</strong> with a strong, unique password.</i></p>
<h3>Create Bash Script</h3><p>This step is to be performed on the server with the database(s).</p><p>Create a new file called <i>backup.sh</i>:</p><pre class="code bash">sudo vim /home/backup.sh</pre><p>Add the contents of the following Gist to your new file:</p><p><a href="https://gist.github.com/chill117/6243212">mysql_backup.sh</a></p><p>Read the comments for instructions on how to use the script.</p>
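<p>If you just want the gist of what such a script does, here is a minimal illustrative sketch. This is not the contents of the Gist above; the database names, credentials, and paths are placeholders:</p>

```shell
#!/bin/bash
# Dump each database, compress it, then copy it to the backup server.
DATE=$(date +%Y-%m-%d)
BACKUP_DIR=/home/backups/db
for DB in example_db1 example_db2; do
  mysqldump -u __DB_USER__ -p'__DB_PASSWORD__' "$DB" \
    | gzip > "$BACKUP_DIR/${DB}_${DATE}.sql.gz"
done
# Requires passwordless SSH to the backup server (see the linked article).
scp "$BACKUP_DIR"/*_"$DATE".sql.gz backupuser@BACKUP_HOST:/path/to/backups/
```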
<p>Set the strictest possible permissions, so that the only thing that can be done with the script is execution by its owner:</p><pre class="code bash">sudo chmod 0100 backup.sh</pre>
<p>Create directory to store the database backups:</p><pre class="code bash">cd /home
sudo mkdir backups
cd backups
sudo mkdir db</pre>
<p>Test the backup script:</p><pre class="code bash">sudo /home/backup.sh</pre>
<p>If it worked, you should see a <i>.sql.gz</i> file for each of the databases you specified in the backup script.</p>
<h3>Add Cron Task</h3><p>This step is to be performed on the server with the database(s).</p><p>Finally, you'll need to add a cron task to execute the script on a set schedule. Edit the crontab for the current user:</p><pre class="code bash">crontab -e</pre><p>If you've never used crontab on this server before, it will ask you which editor you would like to use. I prefer <i>vim</i>, but you can use whichever you wish.</p>
<p>Append the following to the end of the crontab file:</p><pre class="code bash">30 2 * * * sudo /home/backup.sh</pre><p>This will run the backup script daily at 2:30 AM (server time). Note that <i>sudo</i> in a user's crontab only works non-interactively if passwordless sudo is configured for this script; alternatively, add the task to root's crontab (<i>sudo crontab -e</i>) and drop the <i>sudo</i> prefix. For more info on crontab: <a href="https://en.wikipedia.org/wiki/Cron">https://en.wikipedia.org/wiki/Cron</a>.</p>
<p>It might be a good idea to verify your server time with the following command:</p><pre class="code bash">date</pre>
<p>The output should be similar to the following:</p><pre class="code bash">Wed Jul 24 23:12:13 EDT 2013</pre>
<p>On Ubuntu you can use the following to set the timezone temporarily and get instructions on how to set it permanently:</p><pre class="code bash">tzselect</pre>
<p>If you need help setting the timezone on your linux distro:</p><p><a href="https://duckduckgo.com/?q=how+to+set+timezone+on+linux+server">https://duckduckgo.com/?q=how+to+set+timezone+on+linux+server</a></p>
<h3>Additional Resources</h3><p>These are not necessary to perform the remote, automatic backup, but you may find them useful.</p><p><a href="https://gist.github.com/chill117/6244241">prune_backups.sh</a> - <i>Automatically delete old backup files</i></p></body></html>]]></description><link>https://degreesofzero.com/article/scheduled-remote-database-backups-on-linux.html</link><guid isPermaLink="true">https://degreesofzero.com/article/scheduled-remote-database-backups-on-linux.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Wed, 24 Jul 2013 17:19:47 GMT</pubDate></item><item><title><![CDATA[Passwordless SSH on Linux]]></title><description><![CDATA[<html><head></head><body><p>There are a number of use cases where logging in via SSH without a password is the best (or maybe the only) option. For example, if you wanted to run an automated backup on a remote server that would upload files to another remote server via <i>scp</i>, you would need SSH to work without a password. This post will guide you through the process of setting up passwordless SSH, such that you will be able to use <i>scp</i> and other utilities that rely upon SSH for authentication without having to use a password.</p>

<h3>Step 1: Generate a Private/Public Key Pair</h3>
<p>Open up a terminal window on the machine from which you wish to access a remote machine.</p>
<p>Check to see if you already have an RSA key for the current user:</p>
<pre class="code bash">cd ~/.ssh
ls -l</pre>
<p>If the following files already exist, then you can skip this step:</p>
<pre class="code bash">-rw------- 1 user group 1502 Jul  18  2013 id_rsa
-rw-r--r-- 1 user group  360 Jul  18  2013 id_rsa.pub</pre>
<p>If you don't already have the files, run the following command:</p>
<pre class="code bash">ssh-keygen -f ~/.ssh/id_rsa -t rsa</pre>
<p>When prompted to enter a pass phrase, just hit enter (both times). This will allow the use of the RSA key without a password.</p>
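<p>If you would rather skip the prompts entirely, <i>ssh-keygen</i> can be run non-interactively; the <i>-N ''</i> option sets an empty passphrase, which is equivalent to pressing enter at both prompts. A sketch that writes into a throwaway directory so it cannot overwrite your real keys:</p>

```shell
# Generate a key pair into a temporary directory; -q suppresses the
# banner output and -N '' sets an empty passphrase.
keydir="$(mktemp -d)"
ssh-keygen -q -t rsa -b 4096 -N '' -f "$keydir/id_rsa"

# Both halves of the key pair should now exist.
ls -l "$keydir/id_rsa" "$keydir/id_rsa.pub"

rm -rf "$keydir"
```

<p>To generate your real key, replace the temporary path with <i>~/.ssh/id_rsa</i> as in the command above.</p>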

<h3>Step 2: Copy the Public Key to the Remote Server</h3>
<p>Use <i>scp</i> to transfer your public key file to the remote server that you wish to access:</p>
<pre class="code bash">scp ~/.ssh/id_rsa.pub user_name@ip_address:~/</pre>
<p>Replace <i>user_name</i> with the username you usually use to SSH into the remote server. You'll be prompted for this user's password.</p>
<p>Replace <i>ip_address</i> with the IP address of the remote server you wish to access.</p>

<h3>Step 3: Add Public Key to Remote Server's Authorized Keys File</h3>
<p>Secure shell (SSH) into the remote server:</p>
<pre class="code bash">ssh user_name@ip_address</pre>
<p>Replace <i>user_name</i> with the username you usually use to SSH into the remote server. You'll be prompted for this user's password.</p>
<p>Replace <i>ip_address</i> with the IP address of the remote server you wish to access.</p>
<p>Append the Public Key to the remote server's authorized keys file:</p>
<pre class="code bash">cat ~/id_rsa.pub &gt;&gt; ~/.ssh/authorized_keys</pre>
<p>Set the appropriate permissions on the authorized keys file (SSH will ignore it if it is writable by anyone other than the owner, and the <i>~/.ssh</i> directory itself should be mode 0700):</p>
<pre class="code bash">chmod 0600 ~/.ssh/authorized_keys</pre>
<p>Delete the Public Key file, just to tidy up:</p>
<pre class="code bash">rm ~/id_rsa.pub</pre>

<h3>Step 4: Test</h3>
<p>Open a new terminal window, and try to secure shell into the remote server again:</p>
<pre class="code bash">ssh user_name@ip_address</pre>
<p>You should no longer be prompted for a password and the secure shell session should start right away.</p>
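<p>As an optional convenience, if you connect to this server often, an entry in <i>~/.ssh/config</i> lets you shorten the command to <i>ssh myserver</i>. The alias, username, and address below are placeholders to replace with your own values:</p>

```
# ~/.ssh/config
Host myserver
    HostName ip_address
    User user_name
    IdentityFile ~/.ssh/id_rsa
```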
</body></html>]]></description><link>https://degreesofzero.com/article/passwordless-ssh-on-linux.html</link><guid isPermaLink="true">https://degreesofzero.com/article/passwordless-ssh-on-linux.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Sat, 20 Jul 2013 05:19:15 GMT</pubDate></item><item><title><![CDATA[Controllers in Sub-Sub Folders in CodeIgniter]]></title><description><![CDATA[<html><head></head><body><p>I recently needed to organize the controllers of a CodeIgniter instance into sub-sub folders. By default, CodeIgniter only allows routing to sub folders in the controllers directory:</p>

<pre>application
-- controllers
---- api
------ users.php
------ items.php
---- welcome.php</pre>
<p>With the above structure, you'd access your welcome controller at the following URI:</p>
<pre>/welcome</pre>
<p>And, you'd access the users API controller at this URI:</p>
<pre>/api/users</pre>
<p>The problem comes when you want to have a deeper controllers directory structure:</p>
<pre>application
-- controllers
---- admin
------ api
-------- users.php
-------- items.php
------ home.php
------ shop.php
---- shop
------ api
-------- cart.php
------ catalog.php
---- blog
------ api
-------- article.php
-------- comments.php
------ home.php</pre>

<p>By default, CodeIgniter will not route requests for the following URIs correctly:</p>
<pre>/admin/api/items
/shop/api/cart
/blog/api/comments</pre>

<h3>How to Make it Work</h3>

<p>First, if you have not already, you'll need to add <span class="code">mod_rewrite</span> rules to your site's webroot to allow the routing of requests to your CodeIgniter's index.php file:</p>
<pre>&lt;IfModule mod_rewrite.c&gt;

  Options +FollowSymLinks
  RewriteEngine On
  RewriteBase /

  # If your default controller is something other than 'welcome' you should probably change this.
  RewriteRule ^(welcome(/index)?|index(\.php)?)/?$ / [L,R=301]
  RewriteRule ^(.*)/index/?$ $1 [L,R=301]

  RewriteCond %{REQUEST_FILENAME} !-d
  RewriteCond %{REQUEST_FILENAME} !-f
  RewriteRule ^(.*)$ /index.php/$1 [L]

  SetEnvIfNoCase X-Forwarded-For .+ proxy=yes
  SetEnvIfNoCase X-moz prefetch no_access=yes
   
  # Block pre-fetch requests with X-moz headers.
  RewriteCond %{ENV:no_access} yes
  RewriteRule .* - [F,L]

  # Fix for infinite redirect loops.
  RewriteCond %{ENV:REDIRECT_STATUS} 200
  RewriteRule .* - [L]

&lt;/IfModule&gt;
</pre>
<p>The above rules tell Apache to internally route all requests through the index.php file, but only for requests that do not match an existing file or directory. Once you've got rewrite rules in place and working, you can move on to the sub-sub-folder magic.</p>

<h4>The Magic</h4>
<p>You will need to extend the core Router class of CodeIgniter. All you have to do is add one file, <span class="code">application/core/MY_Router.php</span>:</p>
<pre class="code php">&lt;?php  if ( ! defined('BASEPATH')) exit('No direct script access allowed');

/*
	Extended the core Router class to allow for sub-sub-folders in the controllers directory.
*/
class MY_Router extends CI_Router {

	function __construct()
	{
		parent::__construct();
	}

	function _validate_request($segments)
	{
		if (count($segments) == 0)
		{
			return $segments;
		}

		// Does the requested controller exist in the root folder?
		if (file_exists(APPPATH.'controllers/'.$segments[0].'.php'))
		{
			return $segments;
		}

		// Is the controller in a sub-folder?
		if (is_dir(APPPATH.'controllers/'.$segments[0]))
		{
			// Set the directory and remove it from the segment array
			$this-&gt;set_directory($segments[0]);
			$segments = array_slice($segments, 1);

			while (count($segments) &gt; 0 &amp;&amp; is_dir(APPPATH.'controllers/'.$this-&gt;directory.$segments[0]))
			{
				// Set the directory and remove it from the segment array
				$this-&gt;set_directory($this-&gt;directory . $segments[0]);
				$segments = array_slice($segments, 1);
			}

			if (count($segments) &gt; 0)
			{
				// Does the requested controller exist in the sub-folder?
				if ( ! file_exists(APPPATH.'controllers/'.$this-&gt;fetch_directory().$segments[0].'.php'))
				{
					if ( ! empty($this-&gt;routes['404_override']))
					{
						$x = explode('/', $this-&gt;routes['404_override']);

						$this-&gt;set_directory('');
						$this-&gt;set_class($x[0]);
						$this-&gt;set_method(isset($x[1]) ? $x[1] : 'index');

						return $x;
					}
					else
					{
						show_404($this-&gt;fetch_directory().$segments[0]);
					}
				}
			}
			else
			{
				// Is the method being specified in the route?
				if (strpos($this-&gt;default_controller, '/') !== FALSE)
				{
					$x = explode('/', $this-&gt;default_controller);

					$this-&gt;set_class($x[0]);
					$this-&gt;set_method($x[1]);
				}
				else
				{
					$this-&gt;set_class($this-&gt;default_controller);
					$this-&gt;set_method('index');
				}

				// Does the default controller exist in the sub-folder?
				if ( ! file_exists(APPPATH.'controllers/'.$this-&gt;fetch_directory().$this-&gt;default_controller.'.php'))
				{
					$this-&gt;directory = '';
					return array();
				}

			}

			return $segments;
		}


		// If we've gotten this far it means that the URI does not correlate to a valid
		// controller class.  We will now see if there is an override
		if ( ! empty($this-&gt;routes['404_override']))
		{
			$x = explode('/', $this-&gt;routes['404_override']);

			$this-&gt;set_class($x[0]);
			$this-&gt;set_method(isset($x[1]) ? $x[1] : 'index');

			return $x;
		}


		// Nothing else to do at this point but show a 404
		show_404($segments[0]);
	}

	function set_directory($dir)
	{
		// Allow forward slash, but don't allow periods.
		$this-&gt;directory = str_replace('.', '', $dir).'/';
	}

}

/* End of file MY_Router.php */
/* Location: ./application/core/MY_Router.php */</pre>

<p>That's it! CodeIgniter will automatically include this file and instantiate the class within. You should now be able to organize your controllers into as many sub folders, sub sub folders, sub sub sub folders as you like.</p></body></html>]]></description><link>https://degreesofzero.com/article/controllers-in-sub-sub-folders-in-codeigniter.html</link><guid isPermaLink="true">https://degreesofzero.com/article/controllers-in-sub-sub-folders-in-codeigniter.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Thu, 28 Mar 2013 20:21:33 GMT</pubDate></item><item><title><![CDATA[Multiple Host Names in One Instance of CodeIgniter]]></title><description><![CDATA[<html><head></head><body><p>I recently had the need to manage multiple host names within a single instance of CodeIgniter. Setting the virtual hosts to all point to the same directory in the web root was the easy part. Intelligently routing the requests, once they got to CodeIgniter, such that I can be sure a domain has its own group of controllers, not so much.</p><p>To better illustrate what I am going for, let's say you had 3 different host names that you wanted to all go to your one instance of CodeIgniter:</p><pre>domain-name.com
admin.domain-name.com
shop.domain-name.com</pre><p>Now let's say you want to route requests for each of these host names to their own group of controllers:</p><pre>application
-- controllers
---- admin
------ api
-------- users.php
-------- items.php
------ home.php
------ shop.php
---- shop
------ api
-------- cart.php
------ catalog.php
---- home.php</pre>
<h3>The Solution</h3><p>Hooks! The way I made all of this work was by utilizing CodeIgniter's hook system to trick CodeIgniter's routing methods into mapping a request for a specific domain to its corresponding controller group. I also had to add an additional hook to restore the original URI information from the actual request; so that I wouldn't have to rewrite any of the code in my controllers.</p><p>I moved the code to a Gist:</p><p><a href="https://gist.github.com/chill117/5971561">HostNameRouter.php</a></p><p>The comments in the files of the Gist should be sufficient to guide you with the set up / configuration.</p>
<p>I had to go one step further for this solution to work 100% for my situation; I needed <a href="/article/controllers-in-sub-sub-folders-in-codeigniter.html">controllers in sub-sub folders</a> to work.</p><p>If all is well, you should now be able to assign any number of host names to your single instance of CodeIgniter and have those requests routed to the controller group of your choice. Fun, huh?</p></body></html>]]></description><link>https://degreesofzero.com/article/multiple-host-names-one-codeigniter.html</link><guid isPermaLink="true">https://degreesofzero.com/article/multiple-host-names-one-codeigniter.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Thu, 28 Mar 2013 19:02:31 GMT</pubDate></item><item><title><![CDATA[Handy Terminal Tips and Tricks]]></title><description><![CDATA[<html><head></head><body><p>This article contains a few useful things that can be done in terminal; with explanations of what they do and why. I'll continually add to this list as I learn more.</p>


<h3>Exclude SVN and GIT Files While Backing Up a Directory</h3>

<p>If you're like me, you have quite a few projects that use a mix of SVN and GIT version control systems. You may want to exclude all those hidden SVN and GIT files from your backup:</p>
<pre class="code bash">sudo tar --exclude-vcs -zcvf ./backup.tar.gz /path/to/projects/directory</pre>


<h3>Limit Download Speed for Large File Downloads</h3>

<p>If you've ever needed to download a very large file, but didn't want to completely tie up your internet connection's bandwidth, you're going to love this:</p>
<pre class="code bash" data-type="code">cd ~/Downloads
wget --limit-rate=500k http://url-of-file-to-download
</pre>
<p><a href="http://manpages.ubuntu.com/manpages/precise/man1/wget.1.html">wget</a> does a GET request to a specified URL. The <i>--limit-rate</i> option sets a maximum download speed for this download.</p>


<h3>Output Result of a Terminal Command to a File</h3>

<p>Sometimes when using terminal, a command will generate such a large amount of output that it gets cut off towards the beginning. The way to view the full output is by writing it to a file:</p>

<pre class="code bash">ls -l &gt; output.txt
</pre>

<p>Now if you open <i>output.txt</i> in <i>gedit</i> you should see a list of all the visible files and directories within the current directory.</p>
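<p>One caveat: <span class="code">&gt;</span> only captures standard output. Error messages are written to standard error, so to capture everything in the same file, redirect that stream as well. A small sketch (the path is deliberately nonexistent, to force an error message):</p>

```shell
# Standard output and standard error are separate streams;
# 2>&1 sends stderr (fd 2) to wherever stdout (fd 1) currently points.
outfile="$(mktemp)"
ls /no-such-path > "$outfile" 2>&1 || true

# The error message landed in the file instead of the terminal.
cat "$outfile"

rm -f "$outfile"
```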


<h3>Pipe the Output of One Command into Another</h3>

<p>Sometimes it's helpful to chain commands together, like this:</p>
<pre class="code bash">ls -l | grep somefile
</pre>
<p><span class="code">ls</span> lists the contents of the current directory. And, <span class="code">grep</span> does pattern matching on the string that is passed to it. The net result of this combination is finding all files in the current directory that at least partially match the text <span class="code">somefile</span></p>


<h3>Search Running Processes by Keyword</h3>

<p>Here we're going to pipe the output of the <span class="code">ps aux</span> command - which lists all running processes - into <span class="code">grep</span> with a keyword, to retrieve a list of running processes that matches our keyword:</p>
<pre class="code bash">ps aux | grep ssh
</pre>
<p>Where <span class="code">ssh</span> is our keyword.</p>


<h3>Manually Expire/Exit Sudo Session</h3>

<p>After you've used <span class="code">sudo</span> to execute a terminal command, the sudo session will persist for some time after. What this means is that you won't be prompted for your password again when using sudo during this session. To expire the sudo session, run the following:</p>
<pre class="code bash">sudo -k
</pre>


<h3>Exit su Session</h3>

<p>To manually exit from a <span class="code">su - someuser</span> session:</p>
<pre class="code bash">exit
</pre></body></html>]]></description><link>https://degreesofzero.com/article/handy-terminal-tips-and-tricks.html</link><guid isPermaLink="true">https://degreesofzero.com/article/handy-terminal-tips-and-tricks.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Sat, 23 Feb 2013 19:11:48 GMT</pubDate></item><item><title><![CDATA[Fixing the Expiring Session Problem in CodeIgniter]]></title><description><![CDATA[<html><head></head><body><p>For the most recent stable release of CodeIgniter (2.1.3), there is a rather annoying simultaneous request problem that will kill active sessions. You might have experienced this yourself if you had a website or application with lots of AJAX requests or other simultaneous requests. The tell-tale sign was that your users would be logged out after the session update time had passed (5 minutes by default).</p>

<p>The <a href="http://stackoverflow.com/questions/7980193/codeigniter-session-bugging-out-with-ajax-calls">solution</a> that seemed to work for most folks involved extending the session library with your own <i>MY_Session.php</i> library that overwrote the <i>sess_update</i> method with one that only executed the update method when <strong>not</strong> an AJAX request:</p>

<pre class="code php">&lt;?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');

require_once BASEPATH . '/libraries/Session.php';

class MY_Session extends CI_Session
{

	function __construct()
	{
		parent::__construct();

		$this-&gt;CI-&gt;session = $this;
	}

	function sess_update()
	{
		// Do NOT update an existing session on AJAX calls.
		if (!$this-&gt;CI-&gt;input-&gt;is_ajax_request())
			return parent::sess_update();
	}

}

/* End of file MY_Session.php */
/* Location: ./application/libraries/MY_Session.php */
</pre>

<p>You can either auto-load this library from <i>config/autoload.php</i>:</p>
<pre class="code php">$autoload['libraries'] = array( 'MY_Session');</pre>
<p>Or, you can load it later:</p>
<pre class="code php">$this-&gt;load-&gt;library('MY_Session');</pre></body></html>]]></description><link>https://degreesofzero.com/article/fixing-the-expiring-session-problem-in-codeigniter.html</link><guid isPermaLink="true">https://degreesofzero.com/article/fixing-the-expiring-session-problem-in-codeigniter.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Tue, 19 Feb 2013 18:05:32 GMT</pubDate></item><item><title><![CDATA[Making GIMP A Bit More Usable]]></title><description><![CDATA[<html><head></head><body><p>If you have already installed GIMP 2.8+, then you're off to a good start. Now we're going to change a couple things from the default configuration to make GIMP a little bit more useful and intuitive; or at the very least, more like the image editing programs we've become accustomed to.</p><h3>Layer Effects</h3><p>This is a big one. First, you'll need to <a href="http://registry.gimp.org/files/layerfx.2.8.py.txt">download layerfx for GIMP 2.8</a>. Yes, for some reason it opens as a text file in your browser instead of downloading as a file. That's ok. Left click once on the text (anywhere). Select all (<i>CTRL + A</i>) and copy (<i>CTRL + C</i>).</p><p><i>Note: The reason I suggest using the python version (.py) is because it allows <strong>Previewing</strong>.</i></p><p>Now, open up terminal. Create a new python plugin in your GIMP plugins directory:</p><pre class="code bash">cd ~/.gimp-2.8/plug-ins
gedit layerfx.py</pre><p>Paste the contents of the layerfx python plugin (<i>CTRL + V</i>). Save and close the file. (No <i>sudo</i> is needed here, since the plug-ins directory lives in your own home directory; using it would leave the file owned by root.)</p><p>Now back in the terminal window, set the permissions on the layerfx plugin file to be executable:</p><pre class="code bash">chmod +x layerfx.py</pre><p>Finally, open GIMP. If you had it open already, close it then reopen it.</p><p>If you installed the plugin correctly, you should be able to reach the layer effects menu at <i>Layer -&gt; Layer Effects</i>.</p><p>For more info about the Layer Effects plug-in, see:</p><p><a href="http://registry.gimp.org/node/186">http://registry.gimp.org/node/186</a></p>
<h3>Select Layer with Left-Click</h3><p>If you have ever had to chop up a layered image composition for a website or GUI, you know why this is useful. Being able to left click on a layer to select it can save you tons of time that would otherwise have been wasted searching through hundreds of layers manually.</p><p>In GIMP, go to <i>Edit -&gt; Preferences</i>. Now go to <i>Tool Options</i>. At the bottom of this screen, you'll see a check box with the following text next to it:</p><pre>Set layer or path as active</pre><p>Check this box and hit "OK."</p><p>Now when you are using the <i>Move Tool</i> and left click somewhere in an image, the layer you just clicked will become the selected / active layer.</p></body></html>]]></description><link>https://degreesofzero.com/article/making-gimp-a-bit-more-usable.html</link><guid isPermaLink="true">https://degreesofzero.com/article/making-gimp-a-bit-more-usable.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Thu, 13 Dec 2012 02:45:59 GMT</pubDate></item><item><title><![CDATA[Why does my JavaScript break in Internet Explorer?]]></title><description><![CDATA[<html><head></head><body><p>Do you use <i>console.log()</i> to debug your JavaScript? Well, I've got news for you... <i>console.log()</i> breaks Internet Explorer. A quick check you can do to see if <span class="code">console.log()</span> is indeed the source of your problem is to open the console in Internet Explorer and see if the problem goes away. The reason for this is that <span class="code">console.log()</span> does not exist in Internet Explorer unless the developer tools are active. So, for regular IE users, <span class="code">console.log()</span> is a show stopper.</p>

<h3>The Solution</h3>
<p>You can either make sure you don't ship your JavaScript with any instances of <span class="code">console.log()</span> in it, or you can place the following code near the top of your JavaScript to ensure that it exists (albeit non-functional):</p>
<pre class="code js">if ( ! window.console ) console = { log: function(){} };</pre></body></html>]]></description><link>https://degreesofzero.com/article/javascript-breaks-only-in-internet-explorer.html</link><guid isPermaLink="true">https://degreesofzero.com/article/javascript-breaks-only-in-internet-explorer.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Sun, 02 Dec 2012 23:58:01 GMT</pubDate></item><item><title><![CDATA[Missing Intermediate SSL Certificate Error]]></title><description><![CDATA[<html><head></head><body><h3 id="update">Update</h3>
<p>Since this article was written, there is a new initiative in broadening the adoption of HTTPS/SSL across the web: <a href="https://letsencrypt.org/">LetsEncrypt</a>. They provide free, automated SSL certificates that actually work.</p>
<h2 id="defining-the-problem">Defining the Problem</h2>
<p>I encountered a peculiar problem with my signed SSL certificates the other day. In the latest versions of Firefox and Chrome, the SSL certificate was trusted and worked just fine. However, in Chrome on iPad (and likely other browsers with similarly limited capabilities), the certificate was deemed "untrusted."</p>
<p>I ran an <a href="https://www.ssllabs.com/ssltest/">SSL Test</a> on the domain with which I was having the problem. This yielded a bit of very useful information:</p>
<pre><code>Chain issues     Incomplete
</code></pre>
<p>This gave me what I needed to further debug the problem. I discovered that I needed to have the server send a "Certificate Chain" with the initial SSL handshake in order for browsers that do not support "certificate discovery" to find the <a href="http://en.wikipedia.org/wiki/Root_certificate">root certificate</a>.</p>
<p>For additional information, see <a href="http://en.wikipedia.org/wiki/Intermediate_certificate_authorities">Intermediate Certificate Authorities</a>.</p>
<h2 id="fixing-the-problem">Fixing the Problem</h2>
<p>First, you will need to search your Certificate Authority's (CA) website to download their Intermediate CA (or "Certificate Chain") file. This file will contain the concatenated chain of trusted CA certificates needed to reach the root certificate. Once you find and download the chain file, you will need to upload it to your server and configure your web server to provide it to clients during SSL handshakes.</p>
<h3 id="configuring-nginx">Configuring nginx</h3>
<p>In your site's server configuration add the following:</p>
<pre><code>ssl_certificate /path/to/full-certificate-chain.pem;
ssl_trusted_certificate /path/to/certificate-chain.pem;
</code></pre>
<ul>
<li><code>certificate-chain.pem</code> - This file should contain the CA certificate chain (in descending order).</li>
<li><code>full-certificate-chain.pem</code> - This should contain the same chain of certificates as the <code>certificate-chain.pem</code> file, but with the addition of your site's certificate at the top of the file.</li>
</ul>
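<p>Both files are simply concatenations in the right order, so they can be produced with <i>cat</i>. A sketch using placeholder contents; in practice each <i>echo</i> line below stands in for one of your real PEM files:</p>

```shell
# Placeholder certificates; substitute your real PEM files here.
workdir="$(mktemp -d)"
echo "SITE-CERTIFICATE"         > "$workdir/site.pem"
echo "INTERMEDIATE-CERTIFICATE" > "$workdir/intermediate.pem"
echo "ROOT-CERTIFICATE"         > "$workdir/root.pem"

# certificate-chain.pem: the CA certificates only, in descending order.
cat "$workdir/intermediate.pem" "$workdir/root.pem" \
    > "$workdir/certificate-chain.pem"

# full-certificate-chain.pem: the same chain with the site certificate on top.
cat "$workdir/site.pem" "$workdir/certificate-chain.pem" \
    > "$workdir/full-certificate-chain.pem"

# The site certificate should now be the first entry in the full chain.
head -n 1 "$workdir/full-certificate-chain.pem"

rm -rf "$workdir"
```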
<p>After you're done modifying your site's configuration file, test the changes with the following command:</p>
<pre><code>nginx -t
</code></pre>
<p>If everything's ok, it's safe to reload all nginx configurations:</p>
<pre><code>service nginx reload
</code></pre>
<p>It's not necessary to do a full restart.</p>
<h3 id="configuring-apache">Configuring Apache</h3>
<p>In your website's virtual host for port 443, edit the following line:</p>
<pre><code>#SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt
</code></pre>
<p>Uncomment the line and have it reference the path to the chain file you just uploaded to the server:</p>
<pre><code>SSLCACertificateFile /path/to/full-certificate-chain.pem
</code></pre>
<ul>
<li><code>full-certificate-chain.pem</code> - This file should contain your site's certificate along with every CA's certificate added in descending order.</li>
</ul>
<p>Restart Apache.</p>
<h2 id="is-the-problem-fixed">Is the Problem Fixed?</h2>
<p>Run the <a href="https://www.ssllabs.com/ssltest/">SSL Test</a> again. If all is well, the chain issue should be resolved. If not, you can further debug the problem by using the following command in a terminal window on your local machine (not on the server with the SSL issue):</p>
<pre><code>openssl s_client -showcerts -verify 32 -connect domain-name:443
</code></pre>
<ul>
<li>You will need to have <code>openssl</code> installed to run this command</li>
<li>Be sure to replace <em>domain-name</em> with the domain that is having the SSL issue</li>
</ul>
<p>The output of this command is quite dense and can be difficult to sort through. You will want to first find the top of the output, and then search for something like this:</p>
<pre><code>verify error:num=20:unable to get local issuer certificate
</code></pre>
<p>Once the server presents the complete chain, the same command should instead report <code>Verify return code: 0 (ok)</code> near the end of its output.</p>
</body></html>]]></description><link>https://degreesofzero.com/article/missing-intermediate-ssl-certificate-error.html</link><guid isPermaLink="true">https://degreesofzero.com/article/missing-intermediate-ssl-certificate-error.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Thu, 15 Nov 2012 19:32:35 GMT</pubDate></item><item><title><![CDATA[How to Export, Import MySQL Databases from Terminal]]></title><description><![CDATA[<html><head></head><body><p>In this tutorial I will walk you through the process of creating and restoring a database backup from Terminal in Ubuntu.</p>

<h3>Create a Database Backup</h3><p>We're going to use <i>mysqldump</i> to create a backup of the database, and we're going to compress the backup with gzip.</p><pre class="code bash" data-type="code">mysqldump -u user_name -h localhost -p database_name | gzip -9 &gt; backup_name.sql.gz</pre><p>Replace <i>user_name</i> with your MySQL user name.</p><p>Replace <i>database_name</i> with the name of the database you wish to export.</p>
<h3>Restore the Database Backup</h3><p>From the terminal of the server to which you wish to import the database backup, use the following command:</p><pre class="code bash" data-type="code">zcat /path/to/database/backup/backup_name.sql.gz | mysql -u user_name -p database_name</pre><p>Replace <i>user_name</i> with your MySQL user name. You'll be prompted for this user's password.</p><p>Replace <i>database_name</i> with the name of the database you wish to import.</p>
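<p>If you'd like to verify the compression half of these pipelines without a live MySQL server, you can round-trip a dummy dump file; only the <i>mysqldump</i> and <i>mysql</i> ends are replaced by stand-ins here:</p>

```shell
# Stand-in for mysqldump output; a real dump would contain SQL statements.
workdir="$(mktemp -d)"
echo "CREATE TABLE example (id INT);" > "$workdir/dump.sql"

# Compress the way the export command does...
gzip -9 < "$workdir/dump.sql" > "$workdir/backup_name.sql.gz"

# ...and decompress the way the import command does.
zcat "$workdir/backup_name.sql.gz"

rm -rf "$workdir"
```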
<p>That's it!</p><p>Now, if you are feeling extra fancy, I've included additional instructions for deploying a database backup from one remote server to another:</p>
<h3>Deploying a Database to Another Server</h3><p>Exporting and importing your database backups via terminal is also useful if you need to deploy a copy of a database from one server to another. Since you've already done the export on one server, let's focus on the server to which you want to import the database.</p>
<h4>Transferring the Database Backup</h4><p>Do <strong>NOT</strong> use File Transfer Protocol (FTP). FTP is fun and all, but here we're going to use a more efficient mechanism. <a href="http://manpages.ubuntu.com/manpages/lucid/man1/scp.1.html">Secure Copy</a> has a number of benefits over the lesser methods of transferring files:</p><ul><li><strong>One Less Trip</strong> - No need to download from one server just to upload to another.</li><li><strong>Faster Transfers</strong> - Servers tend to have good internet connections. Once <i>your</i> computer is out of the equation, the transfer rate is often dramatically better.</li><li><strong>It's Secure</strong> - Not only does it have <i>secure</i> in the name, but it also uses SSH for authentication and data transfer.</li></ul>
<p>So let's get down to it then. From the server to which you wish to import the database, use the following command:</p><pre class="code bash" data-type="code">sudo scp user_name@ip_address:/file/path/to/backup/backup_name.sql.gz ./</pre><p>Replace <i>user_name</i> with the username you usually use to SSH into the remote server. You'll be prompted for this user's password.</p><p>Replace <i>ip_address</i> with the IP Address of the remote server that has the database export on it.</p><p>Replace <i>/file/path/to/backup/backup_name.sql.gz</i> with the full file path of the database export on the remote server.</p>
<p>Now that you've got the database backup on the second server, all you have to do is <i>Restore the Database Backup</i> using the instructions from earlier in this tutorial.</p></body></html>]]></description><link>https://degreesofzero.com/article/how-to-export-and-import-mysql-databases-from-terminal.html</link><guid isPermaLink="true">https://degreesofzero.com/article/how-to-export-and-import-mysql-databases-from-terminal.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Tue, 28 Aug 2012 19:13:26 GMT</pubDate></item><item><title><![CDATA[Windows 7 + IE9 on Ubuntu]]></title><description><![CDATA[<html><head></head><body><p>Cross browser testing with Internet Explorer and Windows just got a whole lot easier. In this article I'll walk you through the full process of downloading, installing, and configuring a Windows Virtual Machine.</p>

<h3>Get the Virtual Machine</h3>

<p>Since this is a rather large file, you may want to limit the download speed for this particular download, so that it doesn't use up all of your bandwidth. From terminal run the following:</p>
<pre class="code bash">cd ~/Downloads
wget --limit-rate=500k http://virtualization.modern.ie/vhd/IEKitV1_Final/VirtualBox/Linux/IE9_Win7.zip</pre>
<p>If the direct download link above does not work, you can get a fresh one <a href="https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/">here</a>.</p>
<p>Once the download is complete, you'll need to "unzip" the zip archive. I use quotes there because I had issues using <i>unzip</i> on the file. I had to use <i>7z</i> to extract it. If you don't have this command, you'll need to install <i>p7zip-full</i>:</p>
<pre class="code bash">sudo apt-get install p7zip-full</pre>
<p>Now you should be able to extract the <i>zip</i> file:</p>
<pre class="code bash">7z e IE9_Win7.zip</pre>
<p>Once that's done, you should have a file named <i>IE9 - Win7.ova</i>.</p>

<h3>Install VirtualBox</h3>
<p>First you will need to <a href="https://www.virtualbox.org/wiki/Linux_Downloads">download VirtualBox</a>. Then you'll need to install it by running the <i>.deb</i>.</p>
<p>Alternatively, you can get any previous version of VirtualBox, for many different platforms, <a href="https://www.virtualbox.org/wiki/Downloads">here</a>.</p>
<h3>Set Up the Virtual Machine</h3>
<ol>
	<li>Start up VirtualBox</li>
	<li>Go to <i>File</i></li>
	<li>Then <i>Import Appliance</i></li>
	<li>Click <i>Open Appliance</i></li>
	<li>Select the <i>IE9 - Win7.ova</i> file</li>
	<li>Click <i>Next</i></li>
</ol>
<p>This should take a little while. Once it's done, start the Virtual Machine.</p>

<h3>Fitting the VM to Your Screen Resolution</h3>
<p>To make using the Virtual Machine bearable, you'll want to have it fit your screen resolution. To do this, you'll need to <a href="http://www.virtualbox.org/manual/ch04.html#additions-windows">install Guest Additions</a> in the Guest OS.</p>
<p>Download the guest additions ISO for your version of VirtualBox. It should be in the directory for your version. If you do not know what version of VirtualBox you're running:</p>
<ol>
	<li>Open VirtualBox</li>
	<li>Go to <i>Help</i></li>
	<li>Then <i>About VirtualBox</i></li>
</ol>

<h4>Installing Guest Additions</h4>
<ol>
	<li>In VirtualBox, click on the <i>IE9 - Win7</i> virtual machine.</li>
	<li>Go to <i>Settings</i></li>
	<li>Go to <i>Storage</i></li>
	<li>Under <i>Controller: IDE Controller</i>, click <i>Add CD/DVD Device</i></li>
	<li>Select the Guest Additions ISO you just downloaded</li>
	<li>Click <i>OK</i> to save the settings</li>
	<li>Start the Virtual Machine, if it isn't already</li>
	<li>In Windows, go to <i>Start</i></li>
	<li>Then, <i>Computer</i></li>
	<li>You should see the <i>VirtualBox Guest Additions</i> CD under <i>Devices with Removable Storage</i></li><li>Open the <i>VirtualBox Guest Additions</i> CD</li>
	<li>Run the <i>VBoxWindowsAdditions</i> installer</li>
	<li>Follow the instructions in the installer</li>
	<li>When the installation is complete, you'll need to restart the virtual machine</li>
</ol>

<p>After the virtual machine reboots, it should now utilize the full resolution of your screen when in full screen mode.</p>

<h3>Extras</h3>
<p>These are solutions for some edge use-cases that you may find useful.</p>

<h4>Disabling Mouse Integration Mode</h4>
<p>You may need to disable "Mouse Integration Mode" in order to use your mouse in the Virtual Machine. You can do this simply with your keyboard while in the VM window: <i>Host Key + I</i>. The Host Key is usually the right Ctrl key.</p>

<h4>Accessing Your Host Machine's Web Root</h4>
<p>If you've got a local environment set up for your projects, you'll probably want to be able to access those projects from within your Virtual Machine. To accomplish this, you will have to modify Windows 7's <i>hosts</i> file, which is located at:</p>
<pre>C:\Windows\System32\drivers\etc\hosts</pre>
<p>You'll have to open NotePad as Administrator in order to modify the hosts file.</p>
<p>If you only need to be able to access the host machine's <i>localhost</i>, just add the following line. The hostname <i>outer</i> is arbitrary (pick whatever you like), and <i>10.0.2.2</i> is the host machine's IP address as seen from the guest when using VirtualBox's default NAT networking:</p>
<pre>10.0.2.2   outer</pre>
<p>If you want to be able to use your virtual hosts too, follow the pattern below for each of your virtual hosts:</p>
<pre>10.0.2.2   domain-name.local</pre>
<p>That's it!</p></body></html>]]></description><link>https://degreesofzero.com/article/win7-plus-ie9-on-ubuntu.html</link><guid isPermaLink="true">https://degreesofzero.com/article/win7-plus-ie9-on-ubuntu.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Thu, 16 Aug 2012 18:29:48 GMT</pubDate></item><item><title><![CDATA[Prisoner's Dilemma]]></title><description><![CDATA[<html><head></head><body><p>If you're not familiar with <a href="http://en.wikipedia.org/wiki/Prisoner's_dilemma">The Prisoner's Dilemma</a>, it's the go-to example for describing <a href="http://en.wikipedia.org/wiki/Game_theory">game theory</a>. Here's a quick explanation of how it works:</p><p>Two individuals are each presented with a choice between two options: to <i>Defect</i> or to <i>Cooperate</i>.</p><p><strong>Defecting</strong> means an individual will betray the other in order to receive a beneficial outcome for themselves and a negative outcome for the other.</p><p><strong>Cooperating</strong> means an individual is hoping the other individual will choose to cooperate as well, in which case they would both get a slightly beneficial outcome.</p><p>Neither will know what choice the other has made until after they have both made a decision.</p><p>If you were to play this scenario out only once, the best course of action for either individual would be to <i>defect</i>. However, that's not all that interesting. What's more interesting is when this scenario is played out in a slightly different way dozens, or hundreds, or millions of times. When given the ability to remember at least the last few encounters you have with one individual, the rules of the game change dramatically. Knowing what prior choices your opponent has made opens up a world of possibilities and many different strategies. This leads us to the <a href="http://en.wikipedia.org/wiki/Prisoner's_dilemma#The_iterated_prisoners.27_dilemma">Iterated Prisoner's Dilemma</a>.</p>

<p>So what did I do with this information? <a href="https://github.com/chill117/prisoners-dilemma/blob/master/prisoners_dilemma.php">Coded it in PHP</a>, of course. I created over a dozen different strategies, and pitted them against one another in over two and a half million iterations of the Prisoner's Dilemma. For each iteration, the two opponents' strategies were chosen randomly. Here are the results:</p>
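<p>The simulation itself is written in PHP, but the mechanics are easy to sketch in a few lines of JavaScript. The following uses the classic payoff values (3 points each for mutual cooperation, 1 each for mutual defection, 5 for a lone defector, 0 for a betrayed cooperator); the scoring in the PHP simulation may differ:</p>

```javascript
// Classic Prisoner's Dilemma payoffs (assumed values; the PHP
// simulation may score matches differently).
const C = 'C', D = 'D';

function payoff(a, b) {
	if (a === C && b === C) return [3, 3]; // mutual cooperation
	if (a === D && b === D) return [1, 1]; // mutual defection
	return a === D ? [5, 0] : [0, 5];      // lone defector wins big
}

// A strategy is a function of the opponent's previous move
// (null on the first round).
const alwaysDefect = () => D;
const titForTat = (opponentLast) => (opponentLast === null ? C : opponentLast);

// Play an iterated match of `rounds` rounds; return [scoreA, scoreB].
function playMatch(stratA, stratB, rounds) {
	let lastA = null, lastB = null, scoreA = 0, scoreB = 0;
	for (let i = 0; i < rounds; i++) {
		const a = stratA(lastB), b = stratB(lastA);
		const [pa, pb] = payoff(a, b);
		scoreA += pa; scoreB += pb;
		lastA = a; lastB = b;
	}
	return [scoreA, scoreB];
}
```

<p>Over 10 rounds, Tit for Tat loses only the first round to Always Defect and then defects for the rest, finishing 9 points to 14, while two Tit for Tat players cooperate every round for 30 apiece.</p>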

<div class="prisoners-dilemma">
	<div class="graph" id="pd-averages" style="height: 480px"></div>
	<div class="graph" id="pd-matches" style="height: 480px"></div>
	<div class="graph" id="pd-points" style="height: 480px"></div>
</div>

<p>An interesting next step would be to extend this further to test the effect population composition has on the success of each individual strategy. In other words, if Tit for Tat made up only 2% of the population, and some more malicious strategies made up a large chunk of the population, would the Tit for Tat strategy fare better or worse?</p>
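<p>As a rough sketch of that experiment (the names and frequencies below are hypothetical), each match's opponents would be drawn from a weighted population instead of uniformly at random:</p>

```javascript
// Pick a strategy name at random, weighted by its share of the
// population. Weights are assumed to sum to 1.
function weightedPick(population, r) {
	// `r` is a uniform random number in [0, 1); pass Math.random()
	// in real use, or a fixed value for deterministic testing.
	for (const { name, weight } of population) {
		if ((r -= weight) < 0) return name;
	}
	return population[population.length - 1].name;
}

// Hypothetical mix: Tit for Tat is rare, Always Defect dominates.
const population = [
	{ name: 'Tit_for_tat', weight: 0.02 },
	{ name: 'Always_defect', weight: 0.68 },
	{ name: 'Random', weight: 0.30 },
];

// Each tournament iteration would then pair two weighted draws:
// [weightedPick(population, Math.random()),
//  weightedPick(population, Math.random())]
```

<p>Re-running the tournament with different weights would show whether a strategy's success is frequency-dependent, i.e. whether it only thrives when surrounded by enough like-minded players.</p>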

<p>If you're interested, the code is available <a href="https://github.com/chill117/prisoners-dilemma">in a GitHub repository.</a> If you are feeling inspired, please feel free to fork it or submit a pull request.</p>


		<script src="/js/third_party/canvasjs.min.js"></script>

		<script>

			$(document).ready(function() {

				var rawData = {
					avg: {"Always_defect":43.656,"Naive_prober":55.153,"Tit_for_tat_suspicious":55.389,"Random":57.407,"Remorseful_prober":59.297,"Tit_for_tat_and_random":59.911,"Jekyll_and_hyde":60.578,"Tit_for_two_tats_and_random":61.07,"Grudger_soft":61.996,"Grudger":63.529,"Adaptive":63.575,"Always_cooperate":64.505,"Tit_for_tat":65.391,"Pavlov":65.533,"True_peace_maker":66.008,"Tit_for_two_tats":66.138,"Naive_peace_maker":66.363},
					points: {"Always_defect":6484831,"Naive_prober":8109847,"Tit_for_tat_suspicious":8191105,"Random":8544724,"Remorseful_prober":8778899,"Tit_for_tat_and_random":8860862,"Jekyll_and_hyde":9095474,"Tit_for_two_tats_and_random":9105902,"Grudger_soft":9047731,"Grudger":9430195,"Adaptive":9481385,"Always_cooperate":9569642,"Tit_for_tat":9649395,"Pavlov":9759239,"True_peace_maker":9767129,"Tit_for_two_tats":9767175,"Naive_peace_maker":9853809},
					matches: {"Always_defect":148543,"Naive_prober":147042,"Tit_for_tat_suspicious":147882,"Random":148845,"Remorseful_prober":148049,"Tit_for_tat_and_random":147901,"Jekyll_and_hyde":150144,"Tit_for_two_tats_and_random":149105,"Grudger_soft":145941,"Grudger":148440,"Adaptive":149138,"Always_cooperate":148354,"Tit_for_tat":147565,"Pavlov":148920,"True_peace_maker":147969,"Tit_for_two_tats":147679,"Naive_peace_maker":148483}
				}

				var dataPoints = {}

				for (var type in rawData)
				{
					dataPoints[type] || (dataPoints[type] = [])

					for (var strategy in rawData[type])
					{
						var dataPoint = {}
						var label = strategy.replace(/_/g, ' ')

						dataPoint.y = rawData[type][strategy]

						switch (type)
						{
							case 'matches':
								dataPoint.legendText = label
								dataPoint.indexLabel = label
							break

							case 'avg':
							case 'points':
								dataPoint.label = label
							break
						}

						dataPoints[type].push( dataPoint )
					}
				}

				var charts = {}

				charts.avg = new CanvasJS.Chart("pd-averages", {
					fontColor: "#727272",
					backgroundColor: 'transparent',
					title:{
						fontColor: "#aaa",
						text: "Average Points Per Match",
						fontSize: 16
					},
					legend: {
						fontSize: 12
					},
					axisX: {
						labelFontSize: 12,
						gridColor: '#222',
						gridThickness: 1,
						lineColor: '#222',
						lineThickness: 1,
						tickColor: '#222',
						tickThickness: 1
					},
					axisY: {
						labelFontSize: 12,
						gridColor: '#222',
						gridThickness: 1,
						lineColor: '#222',
						lineThickness: 1,
						tickColor: '#222',
						tickThickness: 1
					},
					data: [
						{
							dataPoints: dataPoints.avg
						}
					]
				})

				charts.matches = new CanvasJS.Chart("pd-matches", {
					fontColor: "#727272",
					backgroundColor: 'transparent',
					title:{
						text: "Total Number of Matches",
						fontColor: "#aaa",
						fontSize: 16
					},
					legend: {
						fontColor: "#727272",
						fontSize: 12
					},
					data: [
						{
							type: "doughnut",
							indexLabelFontSize: 12,
							dataPoints: dataPoints.matches
						}
					]
				})

				charts.points = new CanvasJS.Chart("pd-points", {
					fontColor: "#727272",
					backgroundColor: 'transparent',
					title:{
						fontColor: "#aaa",
						text: "Total Number of Points",
						fontSize: 16
					},
					legend: {
						fontSize: 12
					},
					axisX: {
						labelFontSize: 12,
						gridColor: '#222',
						gridThickness: 1,
						lineColor: '#222',
						lineThickness: 1,
						tickColor: '#222',
						tickThickness: 1
					},
					axisY: {
						labelFontSize: 12,
						interval: 2500000,
						gridColor: '#222',
						gridThickness: 1,
						lineColor: '#222',
						lineThickness: 1,
						tickColor: '#222',
						tickThickness: 1
					},
					data: [
						{
							type: "bar",
							dataPoints: dataPoints.points
						}
					]
				})

				for (var type in charts)
					charts[type].render()

			})

		</script></body></html>]]></description><link>https://degreesofzero.com/article/prisoners-dilemma.html</link><guid isPermaLink="true">https://degreesofzero.com/article/prisoners-dilemma.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Thu, 28 Jun 2012 06:27:13 GMT</pubDate></item><item><title><![CDATA[Be the Gate Keeper of Your Personal Data]]></title><description><![CDATA[<html><head></head><body><p>Creeped out by just how much companies know about you? Maybe you heard about <a href="http://www.nytimes.com/2012/02/19/magazine/shopping-habits.html?_r=1&amp;pagewanted=all">how Target figured out a teenage girl was pregnant before her own father</a>. Or maybe you've found the ads on the websites you visit to be a little too specific. Or maybe you've heard about the seemingly endless stream of major security breaches involving hundreds of thousands of detailed customer records:</p>
<ul>
	<li><a href="http://arstechnica.com/business/news/2012/04/payment-processor-breach-hits-15-million-credit-card-holders.ars">1.5 million credit cards exported in hack of payments processor</a></li>
	<li><a href="http://www.nytimes.com/2012/03/31/business/mastercard-and-visa-look-into-possible-attack.html">MasterCard and Visa Investigate Data Breach</a></li>
	<li><a href="http://www.wired.com/threatlevel/2011/06/citi-credit-card-breach/">Citi Credit Card Data Breached for 200,000 Customers</a></li>
	<li><a href="http://www.wired.com/threatlevel/2009/10/walmart-hack/">Big-Box Breach: The Inside Story of Wal-Mart's Hacker Attack</a></li>
</ul>
<p>Think for a moment just how much information you give to these companies. How many companies know where you live? How many companies have your phone number? How many companies have your credit card or bank account information? Are those companies selling all or some of that data to other companies? And what are the odds any of those companies could suffer a security breach that could expose your personal information? Odds are some of your personal information has already fallen into the hands of criminals who could use it to do any number of nasty things:</p>
<ul>
	<li>Steal your identity</li>
	<li>Make fraudulent purchases on your credit cards</li>
	<li>Take funds directly from your bank accounts</li>
	<li>Target you with sophisticated phishing scams</li>
</ul>

<h3>What You Can Do</h3>
<p>All that being said, there are things you can do to protect yourself.</p>

<ol>
	<li><b>Avoid giving personal information to any website or service</b></li>
	<li><b>Use an ad-blocker</b></li>
	<li><b>Configure your browser to protect your privacy</b></li>
</ol>

<p>If you're using Firefox or Chrome, you can install uBlock Origin (<a href="https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/">for Firefox</a>, <a href="https://chrome.google.com/webstore/detail/ublock-origin/cjpalhdlnbpafiamejdnhcphjbkeiagm?hl=en">for Chrome</a>) to prevent third parties from tracking you while browsing the internet. The mobile version of Chrome doesn't allow you to install extensions, so it's probably a good idea to use Firefox mobile instead (with uBlock Origin installed).</p>

<p>Another thing you can do to make it more difficult for websites to track you is to disable <i>third party cookies</i>. In Chrome, go to your "Content Settings" (<a href="chrome://chrome/settings/content">chrome://chrome/settings/content</a>) and check the box next to "Block third-party cookies and site data."</p>

<p>In the end, the single most important thing you can do to protect yourself is to be mindful of the information you give to companies, whether online or off.</p>
</body></html>]]></description><link>https://degreesofzero.com/article/be-the-gate-keeper-of-your-personal-data.html</link><guid isPermaLink="true">https://degreesofzero.com/article/be-the-gate-keeper-of-your-personal-data.html</guid><dc:creator><![CDATA[Charles Hill]]></dc:creator><pubDate>Fri, 06 Apr 2012 07:44:09 GMT</pubDate></item></channel></rss>