{ "version": "https://jsonfeed.org/version/1", "title": "uechi.io", "description": "Random posts from uetchy", "home_page_url": "https://uechi.io", "items": [ { "id": "https://uechi.io/blog/building-mozc-for-macos/", "url": "https://uechi.io/blog/building-mozc-for-macos/", "title": "Building mozc for macOS", "date_published": "2022-05-19T15:00:00.000Z", "content_html": "

Mozc is the open-source counterpart of Google Japanese Input, a Japanese input method developed by Google.

\n

Build environment

\n
$ sw_vers\nProductName:\tmacOS\nProductVersion:\t12.6.1\nBuildVersion:\t21G217\n\n$ xcodebuild -version\nXcode 14.1\nBuild version 14B47b
\n

Build mozc

\n
# Install dependencies\nbrew install qt@5 bazel packages\n\n# Clone the repository\ngit clone https://github.com/google/mozc -b master --single-branch --recursive\n\n# Build\ncd mozc/src\nMOZC_QT_PATH=/usr/local/opt/qt@5 ANDROID_NDK_HOME= bazel build package --config macos -c opt\n\n# Install\nopen bazel-bin/mac/Mozc.pkg\nreboot
\n", "tags": [] }, { "id": "https://uechi.io/blog/root-ca-in-japan/", "url": "https://uechi.io/blog/root-ca-in-japan/", "title": "国内のルート認証局とACME対応状況", "date_published": "2022-03-04T06:00:00.000Z", "content_html": "
\n

Unrelated to this article, but I donated to the Embassy of Ukraine in Japan. The reason is that the developer of FSNotes, an app I depend on heavily, has been forced to suspend OSS activities indefinitely because of the invasion of Ukraine that has continued since the end of last month, and I wanted to offer at least some support. The Embassy of Ukraine in Japan's website has been unreachable for a while, so refer to their official tweet for where to donate.

\n
\n

Overview

\n

First, a disclaimer: apart from the cases of WoSign and TrustCor (TrustCor being a somewhat different story), root CAs have almost never caused operational problems for the SSL certificates descended from them. Rather than paying steep fees, it is perfectly fine to gratefully use Let's Encrypt (ISRG) or ZeroSSL (Sectigo), which offer certificates of the same security strength for free.

\n

The only domestic root CA is owned by SECOM

\n\n

Annual prices of wildcard certificates that individuals can obtain and that support automatic renewal

\n

Prices are as of March 2022.

\n\n

The root CA for both FujiSSL and JPRS is Security Communication RootCA2.

\n

FujiSSL offers its own automatic renewal tool, but it requires PHP and Apache to be set up on the host, which feels rather cumbersome compared to lego or certbot, both of which come with a built-in HTTP server. Since it is a proprietary scheme, ACME-compatible clients naturally cannot be used.

\n

ACME

\n

RFC 8555 - Automatic Certificate Management Environment (ACME) is the outcome of a series of efforts to standardize the automated issuance of domain-validated SSL certificates.

\n

JPRS now supports ACME

\n

With good timing, JPRS started supporting ACME on 2022/3/2.

\n\n

Go through the JPRS server certificate CA's certificate policy (JPRS サーバー証明書認証局証明書ポリシー) to check which validation methods are supported.

\n

3.2.2.4.7 DNS Change

\n

A method that verifies domain ownership by placing an _acme-challenge TXT record in the DNS zone. It is the simplest and has the fewest restrictions.
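For reference, the TXT record the CA looks up might look like this in a BIND-style zone file (example.jp and the token value are placeholders; the actual token is produced by your ACME client):

```dns
; DNS-01: prove control of example.jp by publishing the challenge token
_acme-challenge.example.jp. 300 IN TXT "<token-from-acme-client>"
```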

\n

3.2.2.4.18 Agreed-Upon Change to Website v2

\n

A method where you stand up an HTTP/HTTPS server and the CA verifies a challenge token placed under .well-known/pki-validation. However,

\n
\n

For certificates issued on or after November 18, 2021, this method shall not apply to FQDNs whose leftmost label is \"*\" (wildcard).

\n
\n

In other words, this method cannot be used for wildcard certificates.

\n

3.2.2.4.19 Agreed-Upon Change to Website - ACME

\n
\n

This method shall not apply to FQDNs whose leftmost label is \"*\" (wildcard)

\n
\n

Same as above.

\n

4.2.4 Checking CAA records

\n
\n

This CA checks CAA records when reviewing application information, in accordance with RFC 6844. The CA domain to be listed in CAA records is \"jprs.jp\".

\n
\n

You need to place a CAA record in your DNS zone with jprs.jp as its value.
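A minimal CAA entry in a BIND-style zone file might look like the following (example.jp is a placeholder; whether a separate issuewild entry is also required for wildcard issuance should be checked against the policy):

```dns
; Authorize only JPRS to issue certificates for this zone
example.jp. 3600 IN CAA 0 issue "jprs.jp"
```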

\n

ACME clients

\n

lego, an ACME client written in Go, supports DNS-01 validation, so there is a chance it can issue JPRS ACME certificates smoothly.

\n
Example command (not working as of this writing)
CLOUDFLARE_DNS_API_TOKEN=<token> \\\nlego \\\n --server 'https://acme.amecert.jprs.jp/DV/getDirectory' \\\n --eab --kid 'NUtfiBgcWr9oGCWmF8PQd2d499T7WrgqsnkxIOAPASE' --hmac 'Shn8-aYwhUw0esMLnqJL_o9Fg_BszfAgrRJjOtGQGGY' \\\n --accept-tos \\\n --path ./certs \\\n --dns cloudflare \\\n --email 'mail@example.jp' \\\n --domains '*.example.jp' \\\n run
\n

nginx-proxy/acme-companion can point at JPRS by setting the ACME_CA_URI environment variable, but since it relies on HTTP-01 validation, it presumably cannot issue wildcard certificates. It may gain support in the future.

\n
docker run -d \\\n  --name your-proxied-app \\\n  -e \"VIRTUAL_HOST=subdomain.example.jp\" \\\n  -e \"LETSENCRYPT_HOST=subdomain.example.jp\" \\\n  -e \"ACME_CA_URI=https://acme.amecert.jprs.jp/DV/getDirectory\" \\\n  nginx
\n

When I contacted JPRS, they said that using ACME requires support from both JPRS and a designated registrar. No designated registrar supports it yet, so unfortunately we have to wait a bit longer. I will update this article if I get a chance to try issuing a certificate via ACME.

\n", "tags": [] }, { "id": "https://uechi.io/blog/human-anatomy-atlas/", "url": "https://uechi.io/blog/human-anatomy-atlas/", "title": "解剖学アトラス", "date_published": "2021-06-07T15:00:00.000Z", "content_html": "

My usual breadth-first-search habit. The atlases are for artistic anatomy and for understanding the structure of the digestive system.

\n

Notes

\n

Gross anatomy is divided by approach into regional anatomy and systemic anatomy.

\n

Surface anatomy is the branch for visually understanding how anatomical structures relate to features of the body surface. Histology, also called microscopic anatomy, which observes at the microscopic level, stands opposite gross anatomy.

\n

Neighboring fields include embryology, which focuses on development from the embryo; physiology, which clarifies the functions of body parts; and biochemistry, which concerns itself with the mechanisms of metabolism.

\n

One textbook, one atlas, and one color atlas should be enough. Personally: 『グレイ解剖学』, 『ネッター解剖学アトラス』, and 『解剖学カラーアトラス』. If there is room, add 『プロメテウス解剖学アトラス 胸部/腹部・骨盤部』.

\n

Textbooks

\n

グレイ解剖学 原著第 4 版

\n

2019 (original ed. 2018)

\n\n

Atlases

\n

ネッター解剖学アトラス 原著第 6 版

\n

2016 (original ed. 2014)

\n\n

グラント解剖学図譜 第 7 版

\n

2016 (original ed. 2013)

\n\n

プロメテウス解剖学アトラス 解剖学総論/運動器系 第 3 版

\n

2017 (original ed. 2014)

\n\n

プロメテウス解剖学アトラス 口腔・頭頸部 第 2 版

\n

2018 (original ed. 2015)

\n\n

プロメテウス解剖学アトラス 胸部/腹部・骨盤部 第 3 版

\n

2020 (original ed. 2015)

\n\n

プロメテウス解剖学 コアアトラス 第 3 版

\n

2016

\n\n

Color atlases

\n

解剖学カラーアトラス 第 8 版 A Photographic Atlas

\n

2016 (original ed. 2016)

\n\n

人体解剖カラーアトラス 原著第 8 版 Clinical Atlas of Human Anatomy

\n\n

Apps

\n

Human Anatomy Atlas

\n

https://apps.apple.com/jp/app/human-anatomy-atlas-2021/id1117998129?l=en

\n\n

Websites

\n\n

Insights

\n\n

Terminology

\n\n", "tags": [] }, { "id": "https://uechi.io/blog/uco-oil-lantern/", "url": "https://uechi.io/blog/uco-oil-lantern/", "title": "手のひらサイズのオイルランタン", "date_published": "2021-06-07T15:00:00.000Z", "content_html": "

I modified a UCO candle lantern into a palm-sized oil lantern.

\n

\n

Features

\n\n

Materials

\n

Most of the parts can be found at drugstores and home improvement stores.

\n
  1. Taisho Kampo gastrointestinal medicine bottle (大正漢方胃腸薬, 30 ml)
  2. C-type anchor bolt, M8 (stainless steel)
  3. Cap nut, M8 (stainless steel)
  4. Hard Lock bolt, M8 (stainless steel)
  5. Rubber gasket, 28 mm OD x 23 mm ID x 2 mm thick (not heat-rated, but holding up so far)
  6. 3 mm fiberglass wick
  7. Gap-sealing tape (from Daiso)

Steps

\n
  1. Cut off the threaded section of the M8 anchor with a hacksaw and smooth the cut surface with a file
  2. Sand off the label printed on the cap (optional)
  3. Peel off the leak-prevention seal stuck to the back of the cap (it melts from the heat)
  4. Drill a hole from the inside of the cap and widen it with a round file until the bolt fits
  5. Insert the bolt into the cap
  6. Fit the rubber gasket onto the back of the cap
  7. Fasten both sides of the bolt with Hard Lock bolts (tighten firmly with a wrench or similar)
  8. Thread the wick through the bolt (taping the tip of the wick makes this easier)
  9. Seal the end with the cap nut
  10. Wrap the cut gap-sealing tape around the lantern's bottom cap
  11. Insert the bottle into the lantern and tighten the bottom cap

Usage

\n
  1. Fill the bottle about 4/5 full with paraffin oil, tighten the cap, and wait at least 15 minutes for the oil to soak up to the tip of the wick
  2. Expose roughly 1 mm of the charred part of the wick plus about 1 mm of fresh wick, and fluff the tip
  3. Light the wick and raise the glass chimney
  4. Drain the oil when not in use (if left filled, it seeps out through capillary action)

🔥 Burn time

\n\n

\n

Build notes

\n

Dimensions of the UCO candle lantern (mm)

\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Tube diameter | Tube length | Top opening | Base diameter | Cap inner diameter | Exposed tube length
35.0 | 92.45 | 20.0 | 32.5 | 41.7 | 9.0
\n

Things to try

\n\n

Failure case: M6 bolts fall short

\n

With the M6-bolt method widely covered online, the flame ended up as small as a candle's. I can't really recommend it.

\n

\n", "tags": [] }, { "id": "https://uechi.io/blog/affinity-thumbnail/", "url": "https://uechi.io/blog/affinity-thumbnail/", "title": "Distill Thumbnail from .afphoto and .afdesign", "date_published": "2021-02-14T04:30:00.000Z", "content_html": "

Nextcloud does not support generating thumbnails from Affinity Photo and Affinity Designer. Fine, I'll do it myself!

\n

Digging Binary

\n

Glancing at .afphoto and .afdesign files in Finder, I noticed they have QuickLook support and can show thumbnail images. That means these files likely contain pre-generated thumbnails somewhere inside their binaries, so I don't have to reverse-engineer the format from the ground up.

\n

To verify this, I wrote a Node.js script that seeks out a PNG blob inside an .afphoto/.afdesign file and saves it as a regular PNG file.

\n

Section 11.2.1 General of the PNG spec states that a valid PNG image must begin with a PNG signature and end with an IEND chunk.

\n
\n

A valid PNG datastream shall begin with a PNG signature, immediately followed by an IHDR chunk, then one or more IDAT chunks, and shall end with an IEND chunk. Only one IHDR chunk and one IEND chunk are allowed in a PNG datastream.

\n
\n

Conveniently, it is also guaranteed that there is only one IEND chunk in a PNG file, so a greedy search just works.

\n
gen_thumbnail.js
const fs = require(\"fs\");\n\n// png spec: https://www.w3.org/TR/PNG/\nconst PNG_SIG = Buffer.from([137, 80, 78, 71, 13, 10, 26, 10]);\nconst IEND_SIG = Buffer.from([73, 69, 78, 68]);\n\nfunction extractPngBlob(buf) {\n const start = buf.indexOf(PNG_SIG);\n const end = buf.indexOf(IEND_SIG, start) + IEND_SIG.length * 2; // IEND + CRC\n return buf.subarray(start, end);\n}\n\nfunction extractThumbnail(input, output) {\n const buf = fs.readFileSync(input);\n const pngBlob = extractPngBlob(buf);\n fs.writeFileSync(output, pngBlob);\n}\n\nextractThumbnail(process.argv[2], process.argv[3] || \"output.png\");
\n

That's right: this script just runs indexOf on a Buffer and extracts the portion that starts with the PNG signature and ends with IEND (plus the CRC checksum).

\n

CRC (Cyclic Redundancy Code)

\n

You may have wondered about the IEND_SIG.length * 2 part. It is there to include the 32-bit CRC (Cyclic Redundancy Code) of the IEND chunk in the resulting blob.

\n

Here, the byte lengths of the IEND chunk type and its CRC checksum happen to be the same (4 bytes each), so I just went with that code.
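As a sanity check on that layout: a PNG chunk's CRC covers its type and data fields, and since IEND carries no data, the four bytes after "IEND" are always the CRC-32 of the ASCII string "IEND" itself, the well-known constant 0xAE426082. A small Node.js sketch (this crc32 helper is my own addition, not part of the extraction script):

```javascript
// Bitwise CRC-32 as used by PNG (poly 0xEDB88320, init 0xFFFFFFFF, final XOR)
function crc32(buf) {
  let crc = 0xffffffff;
  for (const byte of buf) {
    crc ^= byte;
    for (let i = 0; i < 8; i++) {
      crc = (crc >>> 1) ^ (0xedb88320 & -(crc & 1));
    }
  }
  return (crc ^ 0xffffffff) >>> 0; // >>> 0 keeps the result unsigned
}

// IEND has a zero-length data field, so its CRC only covers the type bytes
console.log(crc32(Buffer.from("IEND")).toString(16)); // "ae426082"
```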

\n

Now I can generate a thumbnail image from an arbitrary .afphoto or .afdesign file. Let's move on to delving into the Nextcloud source code.

\n

Tweaking Nextcloud

\n

At this point, all I have to do is rewrite the above code in PHP and make it behave as a Nextcloud preview provider.

\n
lib/private/Preview/Affinity.php
<?php\n\nnamespace OC\\Preview;\n\nuse OCP\\Files\\File;\nuse OCP\\IImage;\nuse OCP\\ILogger;\n\nclass Affinity extends ProviderV2 {\n\tpublic function getMimeType(): string {\n\t\treturn '/application\\/x-affinity-(?:photo|design)/';\n\t}\n\n\tpublic function getThumbnail(File $file, int $maxX, int $maxY): ?IImage {\n\t\t$tmpPath = $this->getLocalFile($file);\n\n\t\t$handle = fopen($tmpPath, 'rb');\n\t\t$fsize = filesize($tmpPath);\n\t\t$contents = fread($handle, $fsize);\n\t\t$start = strrpos($contents, \"\\x89PNG\");\n\t\t$end = strrpos($contents, \"IEND\", $start);\n\t\t$subarr = substr($contents, $start, $end - $start + 8 );\n\n\t\tfclose($handle);\n\t\t$this->cleanTmpFiles();\n\n\t\t$image = new \\OC_Image();\n\t\t$image->loadFromData($subarr);\n\t\t$image->scaleDownToFit($maxX, $maxY);\n\n\t\treturn $image->valid() ? $image : null;\n\t}\n}
\n

Also, make sure the component is auto-loaded on startup.

\n
lib/private/PreviewManager.php
@@ -363,6 +365,8 @@\n \t\t$this->registerCoreProvider(Preview\\Krita::class, '/application\\/x-krita/');\n \t\t$this->registerCoreProvider(Preview\\MP3::class, '/audio\\/mpeg/');\n \t\t$this->registerCoreProvider(Preview\\OpenDocument::class, '/application\\/vnd.oasis.opendocument.*/');\n+\t\t$this->registerCoreProvider(Preview\\Affinity::class, '/application\\/x-affinity-(?:photo|design)/');\n\n \t\t// SVG, Office and Bitmap require imagick\n \t\tif (extension_loaded('imagick')) {
\n
lib/composer/composer/autoload_static.php
@@ -1226,6 +1226,7 @@\n 'OC\\\\OCS\\\\Result' => __DIR__ . '/../../..' . '/lib/private/OCS/Result.php',\n 'OC\\\\PreviewManager' => __DIR__ . '/../../..' . '/lib/private/PreviewManager.php',\n 'OC\\\\PreviewNotAvailableException' => __DIR__ . '/../../..' . '/lib/private/PreviewNotAvailableException.php',\n+ 'OC\\\\Preview\\\\Affinity' => __DIR__ . '/../../..' . '/lib/private/Preview/Affinity.php',\n 'OC\\\\Preview\\\\BMP' => __DIR__ . '/../../..' . '/lib/private/Preview/BMP.php',\n 'OC\\\\Preview\\\\BackgroundCleanupJob' => __DIR__ . '/../../..' . '/lib/private/Preview/BackgroundCleanupJob.php',\n 'OC\\\\Preview\\\\Bitmap' => __DIR__ . '/../../..' . '/lib/private/Preview/Bitmap.php',
\n
lib/composer/composer/autoload_classmap.php
@@ -1197,6 +1197,7 @@\n 'OC\\\\OCS\\\\Result' => $baseDir . '/lib/private/OCS/Result.php',\n 'OC\\\\PreviewManager' => $baseDir . '/lib/private/PreviewManager.php',\n 'OC\\\\PreviewNotAvailableException' => $baseDir . '/lib/private/PreviewNotAvailableException.php',\n+ 'OC\\\\Preview\\\\Affinity' => $baseDir . '/lib/private/Preview/Affinity.php',\n 'OC\\\\Preview\\\\BMP' => $baseDir . '/lib/private/Preview/BMP.php',\n 'OC\\\\Preview\\\\BackgroundCleanupJob' => $baseDir . '/lib/private/Preview/BackgroundCleanupJob.php',\n 'OC\\\\Preview\\\\Bitmap' => $baseDir . '/lib/private/Preview/Bitmap.php',
\n

\n

Voilà! Now I can see beautiful thumbnails for my drawings in Nextcloud web interface.

\n

This is exactly why I love FOSS. It lets me materialize whatever niche features I want without bothering the developers. That not only gives me confidence that I can control the software's functionality, it also deepens my trust in developers who grant me the freedom to change their software.

\n

Finalized Solution

\n

Enough talking; I've pushed my Nextcloud Docker setup with the above patches included to GitHub. You can see the actual patch here. Note that it also contains the patch for the PDF thumbnail generator described below, and that patch may have security implications because it runs Ghostscript against untrusted PDFs.

\n

Bonus: PDF thumbnail generator

\n

Install ghostscript on your server to make it work.

\n
lib/private/Preview/PDF.php
<?php\n\nnamespace OC\\Preview;\n\nuse OCP\\Files\\File;\nuse OCP\\IImage;\n\nclass PDF extends ProviderV2 {\n\tpublic function getMimeType(): string {\n\t\treturn '/application\\/pdf/';\n\t}\n\n\tpublic function getThumbnail(File $file, int $maxX, int $maxY): ?IImage {\n\t\t$tmpPath = $this->getLocalFile($file);\n\t\t$outputPath = \\OC::$server->getTempManager()->getTemporaryFile();\n\n\t\t$gsBin = \\OC_Helper::findBinaryPath('gs');\n\t\t$cmd = $gsBin . \" -o \" . escapeshellarg($outputPath) . \" -sDEVICE=jpeg -sPAPERSIZE=a4 -dLastPage=1 -dPDFFitPage -dJPEGQ=90 -r144 \" . escapeshellarg($tmpPath);\n\t\tshell_exec($cmd);\n\n\t\t$this->cleanTmpFiles();\n\n\t\t$image = new \\OC_Image();\n\t\t$image->loadFromFile($outputPath);\n\t\t$image->scaleDownToFit($maxX, $maxY);\n\n\t\tunlink($outputPath);\n\n\t\treturn $image->valid() ? $image : null;\n\t}\n}
\n", "tags": [] }, { "id": "https://uechi.io/blog/parseint-magic/", "url": "https://uechi.io/blog/parseint-magic/", "title": "[].map(parseInt)", "date_published": "2021-02-14T02:30:00.000Z", "content_html": "

Fun fact: [0xa, 0xa, 0xa].map(parseInt) yields [10, NaN, 2].

\n

Why

\n
parseInt(0xa, 0, [0xa, 0xa, 0xa]);
\n

Array.prototype.map invokes its callback with three arguments: (element, index, array). In the first call the second argument (the index) is 0, so parseInt falls back to base 10, and 0xa (coerced to the string \"10\") parses as the decimal number 10.

\n
parseInt(0xa, 1, [0xa, 0xa, 0xa]);
\n

In the second call the radix is 1, which is not a valid radix, so the result is NaN.

\n
parseInt(0xa, 2, [0xa, 0xa, 0xa]);
\n

In the third call the radix is 2, so the first argument is parsed as a binary number: 0xa coerces to the string \"10\", which reads as 2 in binary.
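The fix is to pin the radix yourself instead of letting map feed the index in; a quick sketch:

```javascript
const input = [0xa, 0xa, 0xa];

// map calls its callback with (element, index, array), so a bare parseInt
// receives the index as its radix. Wrap it to pass an explicit radix:
const asDecimal = input.map((n) => parseInt(n, 10));
console.log(asDecimal); // [10, 10, 10]

// Number takes a single argument, so it is safe to pass directly:
const viaNumber = input.map(Number);
console.log(viaNumber); // [10, 10, 10]
```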

\n", "tags": [] }, { "id": "https://uechi.io/blog/split-bill/", "url": "https://uechi.io/blog/split-bill/", "title": "最小送金回数で精算する割り勘アルゴリズム", "date_published": "2021-02-13T15:00:00.000Z", "content_html": "

What awaits after enjoying a trip with a large group is the unbearable work of settling up and sending money around. To make it easier next time, let's figure out how to build a settlement table under the constraint of minimizing the number of transfers.

\n

tl;dr

\n

The idea: the person who paid the least pays as much as they can to the person who paid the most, then recompute the balances and repeat.

\n
  1. Compute everyone's net balance (overpaid is positive, underpaid is negative)
  2. Sort in descending order (biggest overpayer first)
  3. The last person in the list (the biggest debtor, balance = L) pays min(F, |L|) to the first person (the biggest creditor, balance = F), then recompute the balances (debts)
  4. Repeat steps 2-3 until everyone's balance is 0

Experiment

\n

Let's actually write the code and verify that it really produces the desired result.

\n
split-bill.js
const history = [\n {\n amount: 121,\n payer: \"A\",\n involves: [\"A\", \"B\", \"C\"],\n },\n {\n amount: 98,\n payer: \"B\",\n involves: [\"A\", \"B\", \"C\"],\n },\n {\n amount: 10,\n payer: \"C\",\n involves: [\"A\", \"B\", \"C\"],\n },\n {\n amount: 10,\n payer: \"C\",\n involves: [\"A\", \"B\"],\n },\n {\n amount: 50,\n payer: \"C\",\n involves: [\"A\"], // meaning C lent A 50\n },\n];\n\n// calculate balance sheet\nconst init = { balance: 0, consumption: 0 };\nMap.prototype.fetch = function (id) {\n return (\n this.get(id) || this.set(id, Object.assign({ name: id }, init)).get(id)\n );\n};\n\nconst data = new Map();\n\nfor (const { payer, amount, involves } of history) {\n const record = data.fetch(payer);\n record.balance += amount;\n const debt = Math.ceil(amount / involves.length);\n // actual payer should not owe extra debt coming from rounded up numbers\n const payerDebt = amount - debt * (involves.length - 1);\n for (const debtor of involves.map((i) => data.fetch(i))) {\n const cost = Math.round(amount / involves.length);\n debtor.balance -= cost;\n debtor.consumption += cost;\n }\n}\n\nconsole.log(data);\n\n// calculate transaction table\nconst transaction = [];\nlet paidTooMuch, paidLess;\nwhile (true) {\n for (const [_, tbl] of data) {\n if (tbl.balance >= (paidTooMuch?.balance || 0)) {\n paidTooMuch = tbl;\n }\n if (tbl.balance <= (paidLess?.balance || 0)) {\n paidLess = tbl;\n }\n }\n\n if (paidLess.balance == 0 || paidTooMuch.balance == 0) break;\n\n const amount = Math.min(paidTooMuch.balance, Math.abs(paidLess.balance));\n\n transaction.push({\n sender: paidLess.name,\n receiver: paidTooMuch.name,\n amount,\n });\n\n paidTooMuch.balance -= amount;\n paidLess.balance += amount;\n}\n\nconsole.log(\"Settled\");\n\nconsole.log(\"\\n# Transaction table\");\nfor (const ev of transaction) {\n console.log(`${ev.sender} owes ${ev.receiver} ¥${ev.amount}`);\n}\n\nconsole.log(\"\\n# History\");\nfor (const { payer, amount, involves } of history) {\n if 
(involves.length === 1) {\n console.log(`${payer} lent ¥${amount} to ${involves[0]}`);\n } else {\n console.log(`${payer} paid ¥${amount} for ${involves.join(\", \")}`);\n }\n}\n\nconsole.log(\"\\n# Expenses\");\nfor (const [_, { name, consumption }] of data) {\n console.log(`${name} virtually paid ¥${consumption} in total`);\n}
\n

Write the payment log into history and run it to get the transfer table, the history, and the effective total each person paid.

\n
# Transaction table\n\nA owes B ¥10\nC owes B ¥6\n\n# History\n\nA paid ¥121 for A, B, C\nB paid ¥98 for A, B, C\nC paid ¥10 for A, B, C\nC paid ¥10 for A, B\nC lent ¥50 to A\n\n# Expenses\n\nA virtually paid ¥131 in total\nB virtually paid ¥81 in total\nC virtually paid ¥76 in total
\n

During the trip, A and B each fronted one payment and C fronted three. Three of those were ordinary even splits; the other two were \"C covered A and B's share\" and \"C covered A's share (i.e., C lent money to A)\".

\n

Settling these one by one naively would require 12 money transfers in total. But after optimizing by repeatedly canceling a debt against a credit of the same amount, it turned out that everyone can be fully settled with just 2 transfers.

\n

Once it's captured in a program, you're free to turn it into a spreadsheet macro or a phone app. Let the computer handle all the tedious work!

\n", "tags": [] }, { "id": "https://uechi.io/blog/braille/", "url": "https://uechi.io/blog/braille/", "title": "点字の表現力", "date_published": "2021-02-12T16:00:00.000Z", "content_html": "

\"What if I ever have to design a braille system that can represent n kinds of characters?\"

\n

As an example, here is a braille cell with 3 dots.
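A quick sketch of the combinatorics, assuming each dot is independently raised or flat and the all-flat cell is reserved as a blank (these helper names are mine):

```javascript
// With k dots, each either raised or flat, there are 2^k patterns in total.
const patterns = (k) => 2 ** k;

// Reserving the all-flat pattern as blank leaves 2^k - 1 usable characters.
const usable = (k) => 2 ** k - 1;

// Minimum number of dots needed to represent n characters (blank excluded):
const minDots = (n) => {
  let k = 0;
  while (2 ** k - 1 < n) k++;
  return k;
};

console.log(patterns(3)); // 8
console.log(usable(3)); // 7
console.log(minDots(63)); // 6 -- the standard 2x3 braille cell
```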

\n\n", "tags": [] }, { "id": "https://uechi.io/blog/server-2020/", "url": "https://uechi.io/blog/server-2020/", "title": "新しい自宅サーバーの構成", "date_published": "2021-02-12T15:00:00.000Z", "content_html": "

I updated my home server for the first time in 10 years. My first AMD, first DDR4, first NVM Express!

\n

Use cases

\n\n

Specs

\n

I want it to run heavy tasks in parallel, so CPU and memory come first. For memory I picked DDR4-3200 32GB x2; for the CPU, taking into account how well recent libraries use multiple cores, the Ryzen 9 3950X. For the CPU cooler, Noctua's NH-D15 Black, chosen for quietness.

\n
\n

Update: A system malfunction caused by faulty memory led to a disaster in which everything under /sbin was overwritten with zeroes and the kernel would no longer boot. Since replacing the memory with ECC modules, no memory-related problems have occurred to date. Choose ECC memory from the start for an always-on server.

\n
\n
\n

Update 2: The bottom line is that 64GB of memory was not enough. OOM kicked in every time I manipulated a huge Pandas data frame or scanned a MongoDB collection with over a billion records. I ended up having to expand to 128GB.

\n
\n
\n

Update 3: In the end, even 128GB hits OOM in some situations, but the slots are all full so no more can be added, and even if it could, the Ryzen series only supports up to 128GB, so it's a dead end either way.

\n

Readers planning to build a machine learning/OLAP server should consider, from the outset, combining a dual-socket motherboard (one that takes two CPUs) rich in DIMM slots with server CPUs (EPYC or Xeon). As for server CPUs, models from about five years ago that hit EOL and were released from data centers should be available cheaply on eBay.

\n
\n
\n

Update 4: Tips for finding good parts on eBay

\n
  1. The shipping lead time is not unusually long (the seller actually has the item on hand)
  2. There are photos of the actual unit showing the serial number (lower risk of counterfeits or engineering samples)
  3. The seller has verified normal operation, not just that it powers on (if anything is unclear, message the seller and get a firm answer in writing)
\n

For the GPU, I reused the NVIDIA GeForce GTX TITAN X (Maxwell) that was in the old server. It has only a bit over 12GB of VRAM, but even at peak workload 5GB remains free, so it's enough for now.

\n
\n

Update: The bottom line is that training or running inference on billion-parameter-class models such as GPT-J and Megatron-LM requires at least 16GB of VRAM, even with the help of DeepSpeed. For other examples, fine-tuning Stable Diffusion needs at least around 30GB of VRAM, and running OpenAI Whisper's large model calls for about 13GB.

\n

If buying a GPU today, the sound strategy would be to buy two RTX 3090s (24GB), which as of October 2022 hover around 100,000 yen on the used market. If you're rich, just buy an A100.

\n
\n
\n

Update 2: I managed to pick up an RTX 3090 cheaply, but it was so large that it interfered with the HDD bays of the Meshify 2 case. Relocating a few HDDs let me seat it properly, but three bays became unusable in exchange.

\n

Readers about to buy a case should skip straight past ATX cases and choose an E-ATX/EE-ATX-compatible case such as the Meshify 2 XL. You need such a case anyway to fit the dual-socket motherboards mentioned above, and to secure sufficient cooling.

\n
\n

For storage: two 3TB WD HDDs, a Samsung 970 EVO Plus 500GB M.2 PCIe, and a Samsung 870 EVO Plus 500GB SSD pulled out of the old server. The NVMe drive is for the OS and caches; the SSD/HDDs are for data.

\n

For the motherboard I went with the ASRock B550 Taichi, whose onboard capacitors and components struck me as more server-oriented than those on X570 boards.

\n

For the PSU, I chose the Seasonic PRIME TX-850 with future GPU additions in mind. Measuring power draw with the server running, it sat around 180W at idle and never exceeded 350W even at full tilt.

\n

The case is Fractal Design's Meshify 2.

\n

For the OS, I parted ways with Ubuntu after many years together and chose Arch Linux. It strikes a balance between minimal and practical that suits my taste.

\n

I wrote a separate article on setting up Arch Linux.

\n

I also published a Docker-based automated build/test tool on GitHub for anyone who wants to publish packages to the AUR (Arch User Repository).

\n

Points for selecting parts

\n\n

Assembly tips

\n\n", "tags": [] }, { "id": "https://uechi.io/blog/installing-arch-linux/", "url": "https://uechi.io/blog/installing-arch-linux/", "title": "Installing Arch Linux on Bare-metal Server", "date_published": "2021-02-11T15:00:00.000Z", "content_html": "

These are all the commands I typed when I set up Arch Linux on my new compute server.

\n
\n

PSA[1]: I published a toolchain for creating/testing PKGBUILD in a clean-room Docker container https://github.com/uetchy/archpkgs

\n
\n
\n

PSA[2]: Also published cfddns (AUR), a Cloudflare DDNS client written in Rust.

\n
\n

Table of Contents

\n\n

Goals

\n\n

Why XFS for analytical database storage? Refer to Production Notes — MongoDB Manual and Configure Scylla | Scylla Docs.

\n

Setup

\n

Wipe a disk

\n
# Erase file-system magic strings (insecure but super fast, suitable when reusing a disk)\nwipefs -a /dev/sdN\n\n# or\n\n# Write (random then zeroes) to the device (takes longer but more secure, suitable when selling a disk)\nshred -v -n 1 -z /dev/sdN
\n

Create partitions

\n

sgdisk calculates optimal sector alignment automatically. You can confirm this by running sfdisk -d /dev/sda.

\n
# Overwrite new GPT\nsgdisk -og /dev/sda\n\n# Create 1GiB EFI system partition\nsgdisk -n 1:0:+1G -c 1:boot -t 1:ef00 /dev/sda\n\n# Fill the rest with a LUKS partition\nsgdisk -n 2:0:0 -c 2:crypt -t 2:8308 /dev/sda\n\n# Data disks\nsgdisk -og /dev/sdb\nsgdisk -n 1:0:0 -c 1:vault -t 1:8308 /dev/sdb # LUKS\n\nsgdisk -og /dev/sdc\nsgdisk -n 1:0:0 -c 1:backups /dev/sdc\n\nsgdisk -og /dev/sde\nsgdisk -n 1:0:0 -c 1:analytics /dev/sde\n\n# Verify the result\nsgdisk -p /dev/sdN
\n
\n

NOTE: Since my server has 128GB of physical memory, I would rather let the OOM killer do its job than create a swap partition. Should the need for swap come up later, consider a swap file (generally no performance difference)

\n
\n\n

Write file systems

\n
# VFAT32 ESP\nmkfs.vfat -F 32 -n ESP /dev/sda1\n\n# LUKS2\ncryptsetup luksFormat /dev/sda2\ncryptsetup \\\n  --allow-discards \\\n  --perf-no_read_workqueue \\\n  --perf-no_write_workqueue \\\n  --persistent \\\n  open /dev/sda2 crypt\n\ncryptsetup luksFormat /dev/sdb1\ncryptsetup open /dev/sdb1 vault\n\n# Verify the LUKS devices\ncryptsetup luksDump /dev/sdN # Dump LUKS2 header\ndmsetup table # Show flags for the currently opened devices\n\n# Also, backup the LUKS headers to safe storage\ncryptsetup luksHeaderBackup /dev/sdN --header-backup-file /path/to/luks_header_sdN\n\n# Btrfs for root partition\nmkfs.btrfs -L crypt /dev/mapper/crypt\nmount /dev/mapper/crypt /mnt # Temporary mounted to create subvolumes\n# btrfs [su]bvolume [cr]eate\nbtrfs su cr /mnt/@\nbtrfs su cr /mnt/@home\nbtrfs su cr /mnt/@cache\nbtrfs su cr /mnt/@log\nbtrfs su cr /mnt/@srv # Home for Docker Compose stacks\nbtrfs su set-default 256 /mnt # Required for remote unlocking\numount /mnt\n\n# Btrfs\nmkfs.btrfs -L vault /dev/mapper/vault\nmkfs.btrfs -L backups /dev/sdc1\n\n# XFS\nmkfs.xfs -L analytics /dev/sde1
\n

See Discard/TRIM support for solid state drives (SSD) - Dm-crypt - ArchWiki for the reasoning behind these cryptsetup flags.

\n\n

Mount partitions

\n
# Root partition\nmount /dev/mapper/crypt /mnt\nmount -m -o subvol=@home /dev/mapper/crypt /mnt/home\nmount -m -o subvol=@cache /dev/mapper/crypt /mnt/var/cache\nmount -m -o subvol=@log /dev/mapper/crypt /mnt/var/log\nmount -m -o subvol=@srv /dev/mapper/crypt /mnt/srv\n\n# EFI system partition\nmount -m /dev/sda1 /mnt/boot\n\n# Extra disks\nmount -m /dev/mapper/vault /mnt/mnt/vault\nmount -m /dev/sde1 /mnt/mnt/analytics\nmount -m /dev/sdc1 /mnt/mnt/backups
\n

Install Linux kernel

\n
# This is necessary for older Arch ISO image\npacman -Sy archlinux-keyring\n\n# Choose between 'linux-lts' and 'linux'\npacstrap /mnt base linux-lts linux-firmware \\\n  btrfs-progs xfsprogs vim man-db man-pages
\n

Generate fstab

\n
# Generate fstab based on current /mnt structure\ngenfstab -U /mnt >> /mnt/etc/fstab
\n

Tweak pacman

\n
# Optimize mirrorlist (replace `country` params with your nearest countries)\npacman -S --needed pacman-contrib\ncurl -s 'https://archlinux.org/mirrorlist/?use_mirror_status=on&protocol=https&country=JP&country=KR&country=HK' | sed -e 's/#//' -e '/#/d' | rankmirrors -n 10 - > /mnt/etc/pacman.d/mirrorlist\n\n# Colorize output\nsed '/#Color/a Color' -i /mnt/etc/pacman.conf\n\n# Parallel downloads\nsed '/#ParallelDownloads/a ParallelDownloads = 5' -i /mnt/etc/pacman.conf\n\n# ILoveCandy\nsed '/# Misc/a ILoveCandy' -i /mnt/etc/pacman.conf
\n

Chroot into the installation

\n
# Chroot\narch-chroot /mnt\n\n# Change root password\npasswd
\n

Finish structuring file systems

\n
# Verify fstab entries\nfindmnt --verify --verbose
\n

crypttab

\n
echo \"crypt UUID=$(blkid /dev/sda2 -s UUID -o value) none luks\" >> /etc/crypttab\necho \"vault UUID=$(blkid /dev/sdb1 -s UUID -o value) none luks\" >> /etc/crypttab\n\ncat /etc/crypttab
\n

Remote unlocking

\n
pacman -S --needed mkinitcpio-systemd-tool openssh cryptsetup tinyssh busybox mc python3\n\n# crypttab for initramfs\necho \"crypt UUID=$(blkid /dev/sda2 -s UUID -o value) none luks\" >> /etc/mkinitcpio-systemd-tool/config/crypttab\n# [!] Add every other device whose password is different from `crypt` device\n#     to make sure that all the passwords will be asked during the remote unlocking\n\n# fstab for initramfs\necho \"UUID=$(blkid /dev/mapper/crypt -s UUID -o value) /sysroot auto x-systemd.device-timeout=9999h 0 1\" >> /etc/mkinitcpio-systemd-tool/config/fstab\n\n# Append 'systemd systemd-tool' to and remove 'udev' from mkinitcpio HOOKS\nsed -r '/^HOOKS=/s/^/#/' -i /etc/mkinitcpio.conf\nsed -r '/^#HOOKS=/a HOOKS=(base autodetect modconf block filesystems keyboard fsck systemd systemd-tool)' -i /etc/mkinitcpio.conf\n\n# Change SSH port\nmkdir -p /etc/systemd/system/initrd-tinysshd.service.d\ncat > /etc/systemd/system/initrd-tinysshd.service.d/override.conf <<EOD\n[Service]\nEnvironment=\nEnvironment=SSHD_PORT=12345\nEOD\n\n# Assign static IP because we are behind NAT\ncat > /etc/mkinitcpio-systemd-tool/network/initrd-network.network <<EOD\n[Match]\n# [!] use kernel interface name, not udev name\nName=eth0\n\n[Network]\nAddress=10.0.1.2\nGateway=10.0.1.1\nDNS=9.9.9.9\nEOD\n\n# Enable required services\nsystemctl enable initrd-cryptsetup.path\nsystemctl enable initrd-tinysshd\nsystemctl enable initrd-debug-progs\nsystemctl enable initrd-sysroot-mount\n\n# Generate host SSH key pair\nssh-keygen -A\n\n# Download SSH public keys to use ([!] tinysshd only supports ed25519)\ncurl -s https://github.com/<username>.keys >> /root/.ssh/authorized_keys\n\n# Build initramfs\nmkinitcpio -P\n\n# Verify initramfs contents\nlsinitcpio -l /boot/initramfs-linux-lts.img
\n\n

Periodic TRIM

\n
systemctl enable fstrim.timer
\n

Run lsblk --discard to see which devices support TRIM (a device does if both DISC-GRAN and DISC-MAX show non-zero values).

\n

Solid state drive - ArchWiki

\n

SSH

\n
vim /etc/ssh/sshd_config\n# Change port\nsed '/#Port /a Port 12345' -i /etc/ssh/sshd_config\n# Limit to pubkey auth\nsed '/#PasswordAuthentication /a PasswordAuthentication no' -i /etc/ssh/sshd_config\nsystemctl enable sshd
\n

Bootloader (systemd-boot)

\n

Because GRUB's LUKS2 support is still limited (it does not support cryptsetup's default Argon2id yet; I tested it in a VM and confirmed it doesn't work).

\n

In the end, I ended up liking systemd-boot (formerly Gummiboot) more. It's refreshingly simple and easy to understand. Doesn't that sound just like Arch Linux?

\n
# Install AMD microcode updates (pick `intel-ucode` for Intel CPU)\npacman -S amd-ucode\n\n# Install systemd-boot on /boot\nbootctl install\n\n# Add bootloader config\ncat > /boot/loader/loader.conf <<EOD\ndefault arch-lts.conf\ntimeout 3\nconsole-mode max\neditor no\nEOD\n\n# Add an entry for `linux-lts` (omit -lts for `linux`)\ncat > /boot/loader/entries/arch-lts.conf <<EOD\ntitle Arch Linux (LTS)\ninitrd /amd-ucode.img\ninitrd /initramfs-linux-lts.img\nlinux /vmlinuz-linux-lts\noptions root=/dev/mapper/crypt\nEOD
\n

The options line holds the kernel parameters.

\n\n

Network

\n

systemd-networkd

\n
/etc/systemd/network/wired.network
[Match]\n# `ip l` to find the right interface\nName=enp5s0\n\n[Network]\nAddress=10.0.1.2/24\nGateway=10.0.1.1\nMulticastDNS=yes\n#DHCP=yes
\n
systemctl enable systemd-networkd
\n

systemd-resolved

\n
mkdir /etc/systemd/resolved.conf.d\ncat > /etc/systemd/resolved.conf.d/dns.conf <<EOD\n[Resolve]\nDNS=1.1.1.1 1.0.0.1\nDNSOverTLS=yes\nEOD\nsystemctl enable systemd-resolved
\n\n

sysctl

\n
# Increase max map count for Elasticsearch on Docker\n# https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_linux\necho \"vm.max_map_count=262144\" > /etc/sysctl.d/96-map-count.conf\n\n# Increase inotify limit to avoid \"too many open files\"\necho \"fs.inotify.max_user_watches=1048576\" > /etc/sysctl.d/97-inotify.conf\n\n# Auto reboot after 60s of kernel panic\necho \"kernel.panic=60\" > /etc/sysctl.d/98-kernel-panic.conf\n\n# Tweak swappiness value for memory-rich servers\n# https://linuxhint.com/understanding_vm_swappiness/\necho \"vm.swappiness=10\" > /etc/sysctl.d/99-swappiness.conf
\n

faillock

\n

Change deny from 3 to 5:

\n
sed '/^# deny/a deny = 5' -i /etc/security/faillock.conf
\n

NVIDIA driver

\n
# 'nvidia' for 'linux'\npacman -S nvidia-lts
\n

Create operator user

\n
# Install ZSH and sudo\npacman -S zsh sudo\n\n# Add operator user (op) with wheel membership\nuseradd -m -s /bin/zsh -G wheel op\n\n# Change operator user password\npasswd op\n\n# Populate SSH public keys\nmkdir /home/op/.ssh\ncurl -s https://github.com/<username>.keys >> /home/op/.ssh/authorized_keys\nchown -R op:op /home/op/.ssh\n\n# [!] Don't put SSH key pairs on the server. Use SSH agent forwarding instead.\n\n# Grant wheel group sudo priv\n(umask 0337; echo \"%wheel ALL=(ALL) ALL\" > /etc/sudoers.d/wheel)\n\nvisudo -c # Verify sudoers\nuserdbctl # Verify users\nuserdbctl group # Verify groups
\n\n

Time and locales

\n
# Set time zone\nln -sf /usr/share/zoneinfo/Asia/Tokyo /etc/localtime\n\n# Enable NTP\nsystemctl enable systemd-timesyncd\n\n# Sync system time to hardware clock\nhwclock --systohc
\n
sed '/#en_US.UTF-8 UTF-8/s/^#//' -i /etc/locale.gen\nlocale-gen\necho \"LANG=en_US.UTF-8\" >> /etc/locale.conf
\n

Leave chroot and reboot

\n
exit # leave chroot\n\n# Symlink stub resolver config (non-chroot required)\nln -rsf /run/systemd/resolve/stub-resolv.conf /mnt/etc/resolv.conf\n\numount -R /mnt # unmount /mnt recursively\nreboot
\n

[!] From now on, run all commands as the operator user (use sudo if necessary)

\n

Set hostname

\n
hostnamectl set-hostname tako\nhostnamectl set-chassis server\necho \"127.0.0.1 tako\" >> /etc/hosts
\n\n

Check-ups

\n
# Check network status\nnetworkctl status\nresolvectl status\nresolvectl query uechi.io\nresolvectl query -p mdns tako.local\n\n# Verify time and NTP status\ntimedatectl status\n\n# Verify sysctl values\nsysctl --system
\n
\n

If networkctl keeps showing enp5s0 as degraded, run ip addr add 10.0.1.2/24 dev enp5s0 to manually assign the static IP address as a workaround.

\n
\n

S.M.A.R.T.

\n
pacman -S smartmontools\n\n# Needed for sending email\npacman -S s-nail
\n

Automated disk health check-ups and reporting

\n
/etc/smartd.conf
# Scan all but removable devices and notify on any test failures\n# Also, start a short self-test every day around 1-2am, and a long self-test every Saturday around 3-4am\nDEVICESCAN -a -o on -S on -n standby,q -s (S/../.././02|L/../../6/03) -m me@example.com
\n
\n

Tips: Add -M test immediately after DEVICESCAN to send a test mail

\n
\n
systemctl enable --now smartd
\n

Manual testing

\n
smartctl -t short /dev/sda\nsmartctl -l selftest /dev/sda
\n\n

AUR Helper (yay)

\n
pacman -S base-devel git\ngit clone https://aur.archlinux.org/yay.git\ncd yay\nmakepkg -si
\n

Docker

\n
pacman -S docker docker-compose\nyay -S nvidia-container-runtime
\n
/etc/docker/daemon.json
{\n \"log-driver\": \"json-file\",\n \"log-opts\": {\n \"max-size\": \"10m\",\n \"max-file\": \"3\"\n },\n \"runtimes\": {\n \"nvidia\": {\n \"path\": \"/usr/bin/nvidia-container-runtime\",\n \"runtimeArgs\": []\n }\n }\n}
\n
systemctl enable --now docker\n\n# Allow operator user to run docker command without sudo (less secure)\n#   Re-login for the changes to take effect\nusermod -aG docker op\n\n# Enable Swarm\ndocker swarm init --advertise-addr $(curl -s https://ip.seeip.org)\n# Create overlay network for Swarm stack\n# docker network create --attachable -d overlay --subnet 10.11.0.0/24 <network>\n\n# Verify installation\ndocker run --rm --gpus all nvidia/cuda:11.7.1-cudnn8-runtime-ubuntu20.04 nvidia-smi
\n\n

Cross-platform build support (BuildKit, QEMU)

\n
docker run --rm --privileged multiarch/qemu-user-static --reset --persistent yes\n\n# Verify\ndocker run --rm --platform linux/arm64/v8 -t arm64v8/ubuntu uname -m # => aarch64
\n

Tips: Use journald log driver in Docker Compose

\n

This is particularly useful when you want to feed container logs to fail2ban through journald.

\n
services:\n  web:\n    logging:\n      driver: \"journald\"\n      options:\n        tag: \"{{.ImageName}}/{{.Name}}/{{.ID}}\" # default: \"{{.ID}}\"
\n\n

DNS resolver (Pi-hole + unbound)

\n
git clone https://github.com/uetchy/docker-dns /srv/dns\ncd /srv/dns\nrm -rf .git\ncp .env.example .env\nvim .env\nmkdir -p data/unbound\ncp examples/unbound/forward-records.conf data/unbound/\nvim data/unbound/forward-records.conf # see below\ndocker compose up -d
\n
\n

For Quad9 resolver, I chose ECS-enabled resolver because their nearest anycast server from Tokyo is in another country (Singapore), which could confuse CDN server selection and result in higher latency.

\n

If your favorite DNS resolver DOES have their anycast servers near your city, you don't need ECS at all.

\n

If you are in Japan, I would recommend IIJ Public DNS. They offer secure DoT/DoH resolvers (in fact, they don't support plain unencrypted DNS queries at all, so there's no room for the \"accidental fallback to an unencrypted query in an Opportunistic TLS configuration\" scenario).

\n
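A forward-records.conf pointing at Quad9's ECS-enabled endpoints might look like the following sketch; the addresses and the TLS auth name are Quad9's published ECS-enabled resolvers, but treat this as an illustrative example rather than the repository's actual file:

```
forward-zone:
  name: \".\"
  forward-tls-upstream: yes
  # Quad9 ECS-enabled resolvers over DNS-over-TLS
  forward-addr: 9.9.9.11@853#dns11.quad9.net
  forward-addr: 149.112.112.11@853#dns11.quad9.net
```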
\n
/etc/systemd/network/dns-shim.netdev
# workaround to route local dns lookups to Docker managed MACVLAN interface\n[NetDev]\nName=dns-shim\nKind=macvlan\n\n[MACVLAN]\nMode=bridge
\n
/etc/systemd/network/dns-shim.network
# workaround to route local dns lookups to Docker managed MACVLAN interface\n[Match]\nName=dns-shim\n\n[Network]\nIPForward=yes\n\n[Address]\nAddress=10.0.1.103/32\nScope=link\n\n[Route]\nDestination=10.0.1.100/30
\n
cat >> /etc/systemd/network/wired.network <<EOD\n# workaround to route local dns lookups to Docker managed MACVLAN interface\nMACVLAN=dns-shim\nEOD\n\ncat > /etc/systemd/resolved.conf.d/dns.conf <<EOD\n[Resolve]\nDNS=10.0.1.100\nEOD
\n
\n

If you want to do the same thing but using ip:

\n
ip link add dns-shim link enp5s0 type macvlan mode bridge # add macvlan shim interface\nip a add 10.0.1.103/32 dev dns-shim # assign the interface an ip address\nip link set dns-shim up # enable the interface\nip route add 10.0.1.100/30 dev dns-shim # route macvlan subnet (.100 - .103) to the interface
\n
\n

DDNS (cfddns)

\n

Dynamic DNS for Cloudflare.

\n
\n

Star the GitHub repository if you like it :)

\n
\n
yay -S cfddns
\n
/etc/cfddns/cfddns.yml
token: <token>\nnotification:\n # You'll need local mail transfer agent such as Mailu/Mailcow\n enabled: true\n from: cfddns@localhost\n to: me@example.com\n server: localhost
\n
/etc/cfddns/domains
example.com\ndev.example.com\nexample.org
\n
systemctl enable --now cfddns
\n

Reverse proxy (nginx-proxy)

\n

nginx-proxy serves as an ingress gateway for ports 80 and 443, as well as a TLS terminator.

\n
git clone --recurse-submodules https://github.com/evertramos/nginx-proxy-automation.git /srv/proxy\ncd /srv/proxy/bin\n\n./fresh-start.sh --yes -e your_email@domain --skip-docker-image-check
\n\n

ACME CA (step-ca)

\n

Combined with nginx-proxy, step-ca lets you issue and auto-rotate certificates from your own private ACME CA for internal Docker containers.

\n
/srv/ca/docker-compose.yml
version: \"3\"\nservices:\n step-ca:\n image: smallstep/step-ca:0.22.1\n restart: unless-stopped\n ports:\n - \"9000:9000\"\n environment:\n DOCKER_STEPCA_INIT_NAME: ${DOCKER_STEPCA_INIT_NAME}\n DOCKER_STEPCA_INIT_DNS_NAMES: ${DOCKER_STEPCA_INIT_DNS_NAMES}\n volumes:\n - \"./data/step-ca:/home/step\"\n dns:\n # Split horizon DNS server for private web services (also point <domain> to the server)\n - 10.0.1.100
\n
/srv/ca/.env
DOCKER_STEPCA_INIT_NAME=MySign Root CA\nDOCKER_STEPCA_INIT_DNS_NAMES=localhost,<hostname>,<domain>
\n
pacman -S step-cli\n\n# Start step-ca\ndocker compose up -d\n\n# Show CA password\ndocker compose exec step-ca cat secrets/password\n\n# Enable ACME module\ndocker compose exec step-ca step ca provisioner add acme --type ACME\n\n# Download root cert and CA configuration\nCA_FINGERPRINT=$(docker compose exec step-ca step certificate fingerprint certs/root_ca.crt)\nstep-cli ca bootstrap --ca-url https://localhost:9000 --fingerprint $CA_FINGERPRINT\n\n# Test installation\nstep-cli certificate inspect $(step-cli path)/certs/root_ca.crt\nstep-cli certificate inspect https://<domain>:9000\n\n# Install root cert system-wide\nstep-cli certificate install $(step-cli path)/certs/root_ca.crt
\n

Auth gateway and identity provider (Authelia)

\n

Authelia acts as an authentication gateway (forward auth in front of proxied services) and as an OpenID Connect identity provider.

\n\n
/srv/authelia/docker-compose.yml
version: \"3.9\"\nsecrets:\n JWT_SECRET:\n file: ./data/authelia/secrets/JWT_SECRET\n SESSION_SECRET:\n file: ./data/authelia/secrets/SESSION_SECRET\n STORAGE_PASSWORD:\n file: ./data/authelia/secrets/STORAGE_PASSWORD\n STORAGE_ENCRYPTION_KEY:\n file: ./data/authelia/secrets/STORAGE_ENCRYPTION_KEY\n OIDC_HMAC_SECRET:\n file: ./data/authelia/secrets/OIDC_HMAC_SECRET\n PRIVATE_KEY:\n file: ./data/authelia/keys/private.pem\n\nservices:\n server:\n container_name: authelia\n image: authelia/authelia:4\n restart: unless-stopped\n networks:\n - default\n - webproxy\n secrets:\n - JWT_SECRET\n - SESSION_SECRET\n - STORAGE_PASSWORD\n - STORAGE_ENCRYPTION_KEY\n - OIDC_HMAC_SECRET\n - PRIVATE_KEY\n environment:\n AUTHELIA_JWT_SECRET_FILE: /run/secrets/JWT_SECRET\n AUTHELIA_SESSION_SECRET_FILE: /run/secrets/SESSION_SECRET\n AUTHELIA_STORAGE_POSTGRES_PASSWORD_FILE: /run/secrets/STORAGE_PASSWORD\n AUTHELIA_STORAGE_ENCRYPTION_KEY_FILE: /run/secrets/STORAGE_ENCRYPTION_KEY\n AUTHELIA_IDENTITY_PROVIDERS_OIDC_HMAC_SECRET: /run/secrets/OIDC_HMAC_SECRET\n AUTHELIA_IDENTITY_PROVIDERS_OIDC_ISSUER_PRIVATE_KEY_FILE: /run/secrets/PRIVATE_KEY\n VIRTUAL_PROTO: https\n VIRTUAL_HOST: ${VIRTUAL_HOST}\n LETSENCRYPT_HOST: ${VIRTUAL_HOST}\n volumes:\n - ./data/authelia/config:/config\n - ${AUTHELIA_CERTS}:/certs:ro\n depends_on:\n - redis\n - postgres\n\n redis:\n image: redis:7-alpine\n restart: unless-stopped\n volumes:\n - ./data/redis:/data\n\n postgres:\n image: postgres:11-alpine\n restart: unless-stopped\n secrets:\n - STORAGE_PASSWORD\n environment:\n POSTGRES_USER: authelia\n POSTGRES_PASSWORD_FILE: /run/secrets/STORAGE_PASSWORD\n POSTGRES_DB: authelia\n volumes:\n - ./data/postgres:/var/lib/postgresql/data\n\nnetworks:\n webproxy:\n external: true
\n
/srv/authelia/.env
VIRTUAL_HOST=auth.example.com\n# Use nginx-proxy managed TLS cert\nAUTHELIA_CERTS=/srv/proxy/data/certs/auth.example.com
\n

Mail server (Mailu)

\n

See Setup a new Mailu server — Mailu, Docker based mail server

\n

Nextcloud

\n
git clone https://github.com/uetchy/docker-nextcloud.git /srv/cloud\ncd /srv/cloud\ncp .env.example .env\nvim .env # fill in the blank variables\nmake # pull, build, start\nmake applypatches # apply custom patches (run once after each update)
\n

Monitor (Telegraf + InfluxDB + Grafana)

\n

Grafana + InfluxDB (Docker)

\n
git clone https://github.com/uetchy/docker-monitor.git /srv/monitor\ncd /srv/monitor\ndocker compose up -d
\n

Telegraf (Host)

\n
yay -S telegraf
\n
/etc/telegraf/telegraf.conf
# Global tags can be specified here in key=\"value\" format.\n[global_tags]\n\n# Configuration for telegraf agent\n[agent]\n interval = \"15s\"\n round_interval = true\n metric_batch_size = 1000\n metric_buffer_limit = 10000\n collection_jitter = \"0s\"\n flush_interval = \"10s\"\n flush_jitter = \"0s\"\n precision = \"\"\n hostname = \"tako\"\n omit_hostname = false\n\n# Read InfluxDB-formatted JSON metrics from one or more HTTP endpoints\n[[outputs.influxdb]]\n urls = [\"http://127.0.0.1:8086\"]\n database = \"<db>\"\n username = \"<user>\"\n password = \"<password>\"\n\n# Read metrics about cpu usage\n[[inputs.cpu]]\n percpu = true\n totalcpu = true\n collect_cpu_time = false\n report_active = false\n\n# Read metrics about disk usage by mount point\n[[inputs.disk]]\n ignore_fs = [\"tmpfs\", \"devtmpfs\", \"devfs\", \"iso9660\", \"overlay\", \"aufs\", \"squashfs\"]\n\n# Read metrics about disk IO by device\n[[inputs.diskio]]\n\n# Get kernel statistics from /proc/stat\n[[inputs.kernel]]\n\n# Read metrics about memory usage\n[[inputs.mem]]\n\n# Get the number of processes and group them by status\n[[inputs.processes]]\n\n# Read metrics about system load & uptime\n[[inputs.system]]\n\n# Read metrics about network interface usage\n[[inputs.net]]\n interfaces = [\"enp5s0\"]\n\n# Read metrics about docker containers, requires docker group membership for telegraf user\n[[inputs.docker]]\n endpoint = \"unix:///var/run/docker.sock\"\n perdevice = false\n total = true\n\n[[inputs.fail2ban]]\n interval = \"15m\"\n use_sudo = true\n\n# Pulls statistics from nvidia GPUs attached to the host\n[[inputs.nvidia_smi]]\n timeout = \"30s\"\n\n[[inputs.http_response]]\n interval = \"5m\"\n urls = [\n \"https://example.com\"\n ]\n\n# Monitor sensors, requires lm-sensors package\n[[inputs.sensors]]\n interval = \"60s\"\n remove_numbers = false
\n
/etc/sudoers.d/telegraf
Cmnd_Alias FAIL2BAN = /usr/bin/fail2ban-client status, /usr/bin/fail2ban-client status *\ntelegraf ALL=(root) NOEXEC: NOPASSWD: FAIL2BAN\nDefaults!FAIL2BAN !logfile, !syslog, !pam_session
\n
chmod 440 /etc/sudoers.d/telegraf\nchown -R telegraf /etc/telegraf\nusermod -aG docker telegraf\n\n# Verify config\ntelegraf -config /etc/telegraf/telegraf.conf -test\n\nsystemctl enable --now telegraf
\n

Brute-force attack mitigation (fail2ban)

\n
pacman -S fail2ban
\n
/etc/fail2ban/jail.local
[DEFAULT]\nignoreip = 127.0.0.1/8 10.0.1.0/24 10.0.10.0/24\n\n[sshd]\nenabled = true\nport = 12345\nbantime = 1h\nmode = aggressive\n\n# https://mailu.io/1.9/faq.html?highlight=fail2ban#do-you-support-fail2ban\n[mailu]\nenabled = true\nbackend = systemd\nfilter = mailu\naction = docker-action\nfindtime = 15m\nmaxretry = 10\nbantime = 1w\n\n[gitea]\nenabled = true\nbackend = systemd\nfilter = gitea\naction = docker-action\nfindtime = 30m\nmaxretry = 5\nbantime = 1w
\n
/etc/fail2ban/filter.d/mailu.conf
[INCLUDES]\nbefore = common.conf\n\n[Definition]\n__date = \\d{4}/\\d{2}/\\d{2} \\d{2}:\\d{2}:\\d{2}\n__mailu_prefix = ^%(__prefix_line)s%(__date)s \\[info\\] \\d+#\\d+: \\*\\d+ client login failed:\n__mailu_suffix = while in http auth state, client: <HOST>,\nfailregex =\n %(__mailu_prefix)s \"AUTH not supported\" %(__mailu_suffix)s\n %(__mailu_prefix)s \"Authentication credentials invalid\" %(__mailu_suffix)s\njournalmatch = CONTAINER_NAME=mail-front-1
\n
/etc/fail2ban/filter.d/gitea.conf
[INCLUDES]\nbefore = common.conf\n\n[Definition]\nfailregex = ^%(__prefix_line)sDisconnected from invalid user \\S+ <HOST> port \\d+ \\[preauth\\]\njournalmatch = CONTAINER_NAME=gitea
\n
/etc/fail2ban/action.d/docker-action.conf
[Definition]\n\nactionstart = iptables -N f2b-bad-auth\n iptables -A f2b-bad-auth -j RETURN\n iptables -I DOCKER-USER -p tcp -j f2b-bad-auth\n\nactionstop = iptables -D DOCKER-USER -p tcp -j f2b-bad-auth\n iptables -F f2b-bad-auth\n iptables -X f2b-bad-auth\n\nactioncheck = iptables -n -L DOCKER-USER | grep -q 'f2b-bad-auth[ \\t]'\nactionban = iptables -I f2b-bad-auth 1 -s <ip> -j DROP\nactionunban = iptables -D f2b-bad-auth -s <ip> -j DROP
\n
# Test regex pattern or specific filter against journald logs\nfail2ban-regex systemd-journal -m 'CONTAINER_NAME=gitea' ': Disconnected from invalid user .+ <HOST> port \\d+ \\[preauth\\]'\nfail2ban-regex systemd-journal -m 'CONTAINER_NAME=gitea' gitea --print-all-matched\n\n# Test config\nfail2ban-client --test\n\nsystemctl enable --now fail2ban\nfail2ban-client status
\n\n

Firewall (ufw)

\n
pacman -S ufw\n\n# [!] Allow SSH (port 12345 in this setup) before enabling, or risk locking yourself out\nufw allow 12345/tcp\n\nsystemctl enable --now ufw
\n\n

VPN (WireGuard)

\n
pacman -S wireguard-tools\n\n# gen private key\n(umask 0077; wg genkey > server.key)\n\n# gen public key\nwg pubkey < server.key > server.pub\n\n# gen preshared key for each client\n(umask 0077; wg genpsk > secret1.psk)\n(umask 0077; wg genpsk > secret2.psk)\n...
\n
/etc/wireguard/wg0.conf
[Interface]\nAddress = 10.0.10.1/24\nListenPort = 51821\nPrivateKey = <content of server.key>\n\nPostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o dns-shim -d 10.0.1.100/32 -j MASQUERADE; iptables -t nat -A POSTROUTING -o enp5s0 ! -d 10.0.1.100/32 -j MASQUERADE\nPostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o dns-shim -d 10.0.1.100/32 -j MASQUERADE; iptables -t nat -D POSTROUTING -o enp5s0 ! -d 10.0.1.100/32 -j MASQUERADE\n\n[Peer]\nPublicKey = <public key>\nPresharedKey = <content of secret1.psk>\nAllowedIPs = 10.0.10.2/32\n\n[Peer]\nPublicKey = <public key>\nPresharedKey = <content of secret2.psk>\nAllowedIPs = 10.0.10.3/32
\n
ufw allow 51821/udp # If ufw is running\n\nsysctl -w net.ipv4.ip_forward=1\n\nsystemctl enable --now wg-quick@wg0\n\n# Show active settings\nwg show
\n\n

Backup (restic)

\n
pacman -S restic
\n
/etc/restic/systemd/restic.service
[Unit]\nDescription=Daily Backup Service\n\n[Service]\nNice=19\nIOSchedulingClass=idle\nKillSignal=SIGINT\nExecStart=/etc/restic/cmd/run
\n
/etc/restic/systemd/restic.timer
[Unit]\nDescription=Daily Backup Timer\n\n[Timer]\nOnCalendar=*-*-* 0,6,12,18:0:0\nRandomizedDelaySec=15min\nPersistent=true\n\n[Install]\nWantedBy=timers.target
\n
/etc/restic/cmd/config
export RESTIC_REPOSITORY=/mnt/backups/restic\nexport RESTIC_PASSWORD_FILE=/etc/restic/key # a file containing the password\nexport RESTIC_CACHE_DIR=/var/cache/restic\nexport RESTIC_PROGRESS_FPS=1
\n
/etc/restic/cmd/run
#!/bin/bash -ue\n\n# https://restic.readthedocs.io/en/latest/040_backup.html#\n\nDIR=$(dirname \"$(readlink -f \"$0\")\")\nsource \"$DIR/config\"\n\ndate\n\n# system\necho \"> system\"\nrestic backup --tag system -v \\\n --one-file-system \\\n --exclude .cache \\\n --exclude .vscode-server \\\n --exclude TabNine \\\n --exclude /swapfile \\\n --exclude \"/lost+found\" \\\n --exclude \"/var/lib/docker/overlay2/*\" \\\n / /boot /home /srv\n\n# vault\necho \"> vault\"\nrestic backup --tag vault -v \\\n --one-file-system \\\n --exclude 'appdata_*/preview' \\\n --exclude 'appdata_*/dav-photocache' \\\n /mnt/vault\n\necho \"! prune\"\nrestic forget --prune --group-by tags \\\n --keep-last 4 \\\n --keep-within-daily 7d \\\n --keep-within-weekly 1m \\\n --keep-within-monthly 3m\n\necho \"! check\"\nrestic check
\n
/etc/restic/cmd/show
#!/bin/bash -ue\n\nDIR=$(dirname \"$(readlink -f \"$0\")\")\nsource \"$DIR/config\"\n\nTAG=${TAG:-system}\nID=$(restic snapshots --tag $TAG --json | jq -r \".[] | [.time, .short_id] | @tsv\" | fzy | awk '{print $2}')\n\nTARGET=${1:-$(pwd)}\nMODE=\"ls -l\"\nif [[ -f $TARGET ]]; then\n TARGET=$(realpath ${TARGET})\n MODE=dump\nfi\n>&2 echo \"Command: restic ${MODE} ${ID} ${TARGET}\"\n\nrestic $MODE $ID ${TARGET}
\n
/etc/restic/cmd/restore
#!/bin/bash -ue\n\n# https://restic.readthedocs.io/en/latest/050_restore.html\n\nDIR=$(dirname \"$(readlink -f \"$0\")\")\nsource \"$DIR/config\"\n\nTARGET=${1:?Specify TARGET}\nTARGET=$(realpath ${TARGET})\n\nTAG=$(restic snapshots --json | jq -r '[.[].tags[0]]|unique|.[]' | fzy)\nID=$(restic snapshots --tag $TAG --json | jq -r \".[] | [.time, .short_id] | @tsv\" | fzy | awk '{print $2}')\n\n>&2 echo \"Command: restic restore ${ID} -i ${TARGET} -t /\"\n\nread -p \"Press enter to continue\"\n\nrestic restore $ID -i ${TARGET} -t /
\n
(umask 0377; echo -n \"<password>\" > /etc/restic/key)\nchmod 700 /etc/restic/cmd/config\nln -sf /etc/restic/systemd/restic.{service,timer} /etc/systemd/system/\nsystemctl enable --now restic.timer\nsystemctl status restic.timer\nsystemctl status restic
\n\n

Miscellaneous stuff

\n

Kubernetes

\n
pacman -S minikube\n\n# see https://github.com/kubernetes/minikube/issues/4172#issuecomment-1267069635\n#   for the reason having `--kubernetes-version=v1.23.1`\nminikube start \\\n  --driver=docker \\\n  --cpus=max \\\n  --disable-metrics=true \\\n  --subnet=10.100.0.0/16 \\\n  --kubernetes-version=v1.23.1\n\nalias kubectl=\"minikube kubectl --\"\n\n# Allow the control plane to allocate pods to itself\nkubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-\n\n# NGINX Ingress\nminikube addons enable ingress\nminikube service list\n\n# Verify\ndocker network inspect minikube\nminikube ip # => should be 10.100.0.2\nkubectl cluster-info\nkubectl get cm -n kube-system kubeadm-config -o json | jq .data.ClusterConfiguration -r | yq\nkubectl get nodes\nkubectl get po -A\n\n# Hello world\nkubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0\nkubectl expose deployment web --type=NodePort --port=8080\nkubectl get service web\ncurl $(minikube service web --url)\n\n# Hello world through ingress\nkubectl apply -f https://k8s.io/examples/service/networking/example-ingress.yaml\nkubectl get ingress\ncurl -H \"Host: hello-world.info\" http://$(minikube ip)
\n\n

Install useful tools

\n
# Tips: to find packages that provide specific command, say `pygmentize`:\npacman -Fy pygmentize # => python-pygments\n\nyay -S --needed htop mosh tmux direnv ncdu fx jq yq fd ripgrep exa bat fzy peco fastmod rsync \\\n  antibody-bin hub lazygit git-lfs git-delta difftastic ghq-bin ghq-gst iperf gptfdisk lsof lshw lostfiles \\\n  ffmpeg yt-dlp prettier age gum pyenv neofetch pqrs tea
\n

Make SSH forwarding work with tmux + sudo

\n
/home/op/.ssh/rc
if [ ! -S ~/.ssh/ssh_auth_sock ] && [ -S \"$SSH_AUTH_SOCK\" ]; then\n ln -sf $SSH_AUTH_SOCK ~/.ssh/ssh_auth_sock\nfi
\n
/home/op/.tmux.conf
set -g update-environment -r\nsetenv -g SSH_AUTH_SOCK $HOME/.ssh/ssh_auth_sock
\n
(umask 0337; echo \"Defaults env_keep += SSH_AUTH_SOCK\" > /etc/sudoers.d/ssh)
\n

See also: Happy ssh agent forwarding for tmux/screen · Reboot and Shine

\n

Temperature sensors

\n
pacman -S lm_sensors\nsensors-detect\nsystemctl enable --now lm_sensors\n\n# Now you can configure htop to show the CPU temps\nhtop
\n\n

Telegram notifier

\n
/usr/local/bin/telegram-notifier
#!/bin/bash\n\nBOT_TOKEN=<your bot token>\nCHAT_ID=<your chat id>\nPAYLOAD=$(ruby -r json -e \"print ({text: ARGF.to_a.join, chat_id: $CHAT_ID}).to_json\" </dev/stdin)\n\nOK=$(curl -s -X \"POST\" \\\n -H \"Content-Type: application/json; charset=utf-8\" \\\n -d \"$PAYLOAD\" \\\n https://api.telegram.org/bot${BOT_TOKEN}/sendMessage | jq .ok)\n\nif [[ $OK == true ]]; then\n exit 0\nelse\n exit 1\nfi
\n

Audio

\n
pacman -S alsa-utils # may require rebooting system\n\n# Grant op user audio priv\nusermod -aG audio op\n\n# List devices as root\naplay -l\narecord -L\ncat /proc/asound/cards\n\n# Test speaker\nspeaker-test -c2\n\n# Test mic\narecord -vv -Dhw:2,0 -fS32_LE mic.wav\naplay mic.wav\n\n# GUI mixer\nalsamixer\n\n# For Mycroft.ai\npacman -S pulseaudio pulsemixer\npulseaudio --start\npacmd list-cards
\n
/etc/pulse/default.pa
# INPUT/RECORD\nload-module module-alsa-source device=\"default\" tsched=1\n# OUTPUT/PLAYBACK\nload-module module-alsa-sink device=\"default\" tsched=1\n# Accept clients -- very important\nload-module module-native-protocol-unix\nload-module module-native-protocol-tcp
\n
/etc/asound.conf
pcm.mic {\n type hw\n card M96k\n rate 44100\n format S32_LE\n}\n\npcm.speaker {\n type plug\n slave {\n pcm \"hw:1,0\"\n }\n}\n\npcm.!default {\n type asym\n capture.pcm \"mic\"\n playback.pcm \"speaker\"\n}\n\n#defaults.pcm.card 1\n#defaults.ctl.card 1
\n\n

Maintenance

\n

Quick checkups

\n
htop # show task overview\nsystemctl --failed # show failed units\nfree -h # show memory usage\nlsblk -f # show disk usage\nnetworkctl status # show network status\nuserdbctl # show users\nnvidia-smi # verify nvidia cards\nps aux | grep \"defunct\" # find zombie processes
\n

Delve into system logs

\n
journalctl -p err -b-1 -r # show error logs from the previous boot in reverse order\njournalctl -u sshd -f # tail logs from the sshd unit\njournalctl --no-pager -n 25 -k # show the latest 25 kernel log lines without a pager\njournalctl --since=\"6 hours ago\" --until \"2020-07-10 15:10:00\" # show logs within a specific time range\njournalctl CONTAINER_NAME=service_web_1 # show logs from the docker container named 'service_web_1'\njournalctl _PID=2434 -e # filter logs by PID and jump to the end of the logs\njournalctl -g 'timed out' # filter logs with a regular expression; an all-lowercase pattern matches case-insensitively
\n\n

Force overriding installation

\n
pacman -S <pkg> --overwrite '*'
\n

Check memory modules

\n
pacman -S lshw dmidecode\n\nlshw -short -C memory # lists installed mems\ndmidecode # shows configured clock speed
\n\n
smartctl -a /dev/sdN\n\n# via USB bridge\nsmartctl -a -d sat /dev/sdN
\n

Ext4

\n
# e2fsck with badblocks (non-destructive read-write test) and preen enabled\n# [!] Unmount the file system before this operation\n# [!] Never run this on a raw (locked) LUKS partition, as it may lead to data loss; open it and check the /dev/mapper device instead\ne2fsck -vcckp /dev/sdNn
\n\n

Fix broken file system headers

\n
testdisk /dev/sdN
\n

Troubleshooting

\n

Slow SSH login (D-Bus glitch)

\n
systemctl restart systemd-logind\nsystemctl restart polkit
\n\n

Annoying \"systemd-homed is not available\" messages flooding journald logs

\n

Move pam_unix before pam_systemd_home.

\n
/etc/pam.d/system-auth
#%PAM-1.0\n\nauth required pam_faillock.so preauth\n# Optionally use requisite above if you do not want to prompt for the password\n# on locked accounts.\nauth [success=2 default=ignore] pam_unix.so try_first_pass nullok\n-auth [success=1 default=ignore] pam_systemd_home.so\nauth [default=die] pam_faillock.so authfail\nauth optional pam_permit.so\nauth required pam_env.so\nauth required pam_faillock.so authsucc\n# If you drop the above call to pam_faillock.so the lock will be done also\n# on non-consecutive authentication failures.\n\naccount [success=1 default=ignore] pam_unix.so\n-account required pam_systemd_home.so\naccount optional pam_permit.so\naccount required pam_time.so\n\npassword [success=1 default=ignore] pam_unix.so try_first_pass nullok shadow\n-password required pam_systemd_home.so\npassword optional pam_permit.so\n\nsession required pam_limits.so\nsession required pam_unix.so\nsession optional pam_permit.so
\n\n

Annoying systemd-journald-audit logs

\n
/etc/systemd/journald.conf
[Journal]\nAudit=no
\n

Missing /dev/nvidia-{uvm*,modeset}

\n

This usually happens right after updating the Linux kernel. Rebooting so that the running kernel matches the installed modules usually fixes it; running nvidia-modprobe -u -c 0 can also recreate the device nodes without a reboot.

\n\n

[sudo] Incorrect password while password is correct

\n
faillock --reset
\n

Useful links

\n\n", "tags": [] }, { "id": "https://uechi.io/blog/oauth-jwt-rfcs/", "url": "https://uechi.io/blog/oauth-jwt-rfcs/", "title": "OAuth 2.0 と JWT 関連 RFC", "date_published": "2021-02-10T15:00:00.000Z", "content_html": "

For my own research, I arranged the OAuth 2.0 and JWT-related RFCs in order of publication date.

\n

RFC6749 — The OAuth 2.0 Authorization Framework

\n

October 2012

\n

It specifies the core of OAuth 2.0, the new authorization framework replacing OAuth 1.0a; several points are worth noting.

\n\n

Authorization Grant

\n

There are five grants that can be used to have an access_token issued at the token endpoint, including a proposed extension:

\n
1. Authorization Code Grant: RFC6749 – Section 1.3.1
    - grant_type=authorization_code
    - Authorization Code Grant with PKCE
2. Implicit Flow: RFC6749 – Section 1.3.2
    - Originally specified, before CORS (Cross-Origin Resource Sharing) existed, as a compromise that let SPAs obtain an access token while avoiding POST requests
    - Has no CSRF protection (RFC6819 – Section 4.4.2.5), so it should not be used
3. Resource Owner Password Credentials Grant: RFC6749 – Section 1.3.3
    - Authenticates directly with a password
4. Client Credentials Grant: RFC6749 – Section 1.3.4
    - Obtains a token with the client secret
5. Device Grant: RFC Draft — OAuth 2.0 Device Authorization Grant
    - An authorization flow for embedded devices that may lack input hardware
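As a concrete reminder of the flow most clients should use, this is what an Authorization Code Grant exchange at the token endpoint looks like; the values are the illustrative ones from RFC 6749, Section 4.1.3, with the client authenticating via HTTP Basic:

```
POST /token HTTP/1.1
Host: server.example.com
Authorization: Basic czZCaGRSa3F0MzpnWDFmQmF0M2JW
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&code=SplxlOBeZQQYbYS6WxSbIA
&redirect_uri=https%3A%2F%2Fclient%2Eexample%2Ecom%2Fcb
```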

RFC6750 — The OAuth 2.0 Authorization Framework: Bearer Token Usage

\n

October 2012

\n

Specifies how to pass the access_token to a resource server in OAuth 2.0. Not to be confused with the OAuth 2.0 JWT Bearer Token Flow.

\n

Three methods are listed:

\n
1. Bearer Token (SHOULD)
2. Form Encoded Parameters (SHOULD NOT)
3. URI Query Parameters (SHOULD NOT)
\n
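Of the three, only the Authorization header form is recommended; using the example token from RFC 6750 it looks like this:

```
GET /resource HTTP/1.1
Host: server.example.com
Authorization: Bearer mF_9.B5f-4.1JqM
```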

OIDC — OpenID Connect Core 1.0

\n

November 2014

\n

Built on top of OAuth 2.0 with several added specifications; a groundbreaking protocol that grafts authentication onto OAuth (authorization).

\n

RFC7515 — JSON Web Signature (JWS)

\n

May 2015

\n

A JSON-based signature protocol.

\n

RFC7516 — JSON Web Encryption (JWE)

\n

May 2015

\n

A JSON-based encryption protocol.

\n

RFC7517 — JSON Web Key (JWK)

\n

May 2015

\n

A protocol for distributing the public keys used to verify JWT signatures.

\n

RFC7518 — JSON Web Algorithms (JWA)

\n

May 2015

\n

Specifies the algorithms (alg) and other properties used by JWS, JWE, and JWK.

\n

RFC7519 — JSON Web Token (JWT)

\n

May 2015

\n

JWT is a specification for generating assertions using JSON.

\n
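Concretely, a JWT is three base64url-encoded segments joined by dots: a header, a claims set, and a signature. Decoded, the illustrative example from RFC 7519 looks like this:

```
// header
{\"typ\":\"JWT\",\"alg\":\"HS256\"}

// claims set
{\"iss\":\"joe\",\"exp\":1300819380,\"http://example.com/is_root\":true}

// compact form: base64url(header) . base64url(claims) . base64url(signature)
```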

RFC7521 — Assertion Framework for OAuth 2.0 Client Authentication and Authorization Grants

\n

May 2015

\n

A specification for using an arbitrary assertion as client credentials for OAuth 2.0 client authentication, or for exchanging it for an access token as an authorization grant.

\n

It adds hardened client authentication to the token endpoint. The subsequent RFCs specify the concrete patterns using SAML and JWT, respectively.

\n

Also known as the OAuth 2.0 JWT Bearer Token Flow.

\n\n

May 2015 — https://tools.ietf.org/html/rfc7523

\n

RFC Draft — JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens

\n

July 2019

\n

Specifies the use of JWT for access tokens passed to resource servers.

\n", "tags": [] }, { "id": "https://uechi.io/blog/secure-dev-server/", "url": "https://uechi.io/blog/secure-dev-server/", "title": "Securing Local Dev Server", "date_published": "2020-02-06T06:00:00.000Z", "content_html": "

Sometimes you want to interact with a local web server with https support, because some browser APIs are only available in an https environment.

\n

You can easily create a self-signed TLS cert for development purposes with mkcert.

\n
brew install mkcert\nmkcert -install # Install the local CA in the OS keychain
\n

After installing mkcert and generating the system-wide local CA cert, you can create a certificate for each project.

\n
cd awesome-website\nmkcert localhost # this will generate ./localhost.pem and ./localhost-key.pem\nnpm install -g serve\nserve --ssl-cert ./localhost.pem --ssl-key ./localhost-key.pem
\n", "tags": [] }, { "id": "https://uechi.io/blog/bose-noise-cancelling-headphones-700%E3%83%AC%E3%83%93%E3%83%A5%E3%83%BC/", "url": "https://uechi.io/blog/bose-noise-cancelling-headphones-700%E3%83%AC%E3%83%93%E3%83%A5%E3%83%BC/", "title": "Bose Noise Cancelling Headphones 700レビュー", "date_published": "2019-10-25T01:49:01.000Z", "content_html": "

A month has passed since I started using the Bose Noise Cancelling Headphones 700, so here is a summary of my impressions so far.

\n

Pros

\n\n

Cons

\n\n", "tags": [] }, { "id": "https://uechi.io/blog/welch-t-test/", "url": "https://uechi.io/blog/welch-t-test/", "title": "プログラムの速度改善が誤差かどうかを統計的に調べる", "date_published": "2019-10-02T23:21:00.000Z", "content_html": "

Using Welch's t-test, I want to determine whether the means of two benchmark distributions are equal (the speed difference is within the margin of error) or different (i.e., a significant speed improvement has been achieved).

\n

I prepared TypeScript programs for benchmarking.

\n
a.ts
function a() {\n const noise = Math.random() - 0.5;\n const offset = 1.0;\n const t = noise * 2 + offset;\n setTimeout(() => console.log(t), t * 1000);\n}\na();
\n
b.ts
function b() {\n const noise = Math.random() - 0.5;\n const offset = 2.0;\n const t = noise * 2 + offset;\n setTimeout(() => console.log(t), t * 1000);\n}\nb();
\n

First, benchmark the two programs with hyperfine and export the results to ab.json.

\n
hyperfine 'ts-node a.ts' 'ts-node b.ts' -r 50 --warmup 3 --export-json ab.json
\n

The contents of ab.json look like this:

\n
ab.json
{\n \"results\": [\n {\n \"command\": \"ts-node a.ts\",\n \"mean\": 1.9369869248950002,\n \"stddev\": 0.6074252496423262,\n \"median\": 2.005230080295,\n \"user\": 1.549546345,\n \"system\": 0.08031985000000001,\n \"min\": 0.8807363742950001,\n \"max\": 2.830435366295,\n \"times\": [\n 1.4010462692949999,\n 2.830435366295,\n 1.010024359295,\n 1.159667609295,\n 1.8311979602950001,\n ...\n ]\n },\n {\n \"command\": \"ts-node b.ts\",\n \"mean\": 2.833931665055,\n \"stddev\": 0.6505564501747996,\n \"median\": 2.7373719187950005,\n \"user\": 1.5474132649999999,\n \"system\": 0.07978893000000001,\n \"min\": 1.938184970295,\n \"max\": 3.946562622295,\n \"times\": [\n 2.2806011012950003,\n 2.0140897212950004,\n 2.1835023382950003,\n 2.304886362295,\n 3.8122057912950003,\n ...\n ]\n }\n ]\n}
\n
\n

Since the t-test assumes the samples follow a normal distribution, by the law of large numbers it would really be better to increase the number of runs.

\n
\n

The following takes the times arrays from this result JSON and determines whether there is a significant difference between the two distributions.

\n
import fs from \"fs\";\nimport { jStat } from \"jstat\";\n\nconst log = console.log;\n\nconst sum = (x: number[]) => x.reduce((a: number, b: number) => a + b); // 総和\nconst sqsum = (x: number[], mu: number) =>\n  x.reduce((a: number, b: number) => a + (b - mu) ** 2); // 自乗誤差の総和\n\nfunction ttest(X: number[], Y: number[]) {\n  const Xn = X.length; // サンプル数\n  const Yn = Y.length;\n  log(`Xn = ${Xn}`);\n  log(`Yn = ${Yn}`);\n\n  const X_mu = sum(X) / Xn; // 平均\n  const Y_mu = sum(Y) / Yn;\n  log(`X_mu = ${X_mu}`);\n  log(`Y_mu = ${Y_mu}`);\n\n  const X_sigma = sqsum(X, X_mu) / (Xn - 1); // 不偏分散\n  const Y_sigma = sqsum(Y, Y_mu) / (Yn - 1);\n  log(`X_sigma = ${X_sigma}`);\n  log(`Y_sigma = ${Y_sigma}`);\n  const t = (X_mu - Y_mu) / Math.sqrt(X_sigma / Xn + Y_sigma / Yn); // t値\n  log(`t = ${t}`);\n  const df =\n    (X_sigma + Y_sigma) ** 2 /\n    (X_sigma ** 2 / (Xn - 1) + Y_sigma ** 2 / (Yn - 1)); // 自由度\n  log(`df = ${df}`);\n  return jStat.studentt.cdf(-Math.abs(t), df) * 2.0; // p値\n}\n\nconst filename = process.argv.slice(2)[0];\nconst result = JSON.parse(fs.readFileSync(filename).toString());\nconst X = result.results[0].times;\nconst Y = result.results[1].times;\nconst p = ttest(X, Y);\nlog(`p = ${p}`);\nlog(`p < 0.05 = ${p < 0.05}`);\nlog(p < 0.05 ? \"Possibly some difference there\" : \"No difference\");
\n

Here X_mu is the mean of distribution X, i.e. X_mu = sum(X) / Xn, and X_sigma is its unbiased variance, X_sigma = sqsum(X, X_mu) / (Xn - 1).

\n

\n

Compute these for both X and Y, then compute the t statistic as t = (X_mu - Y_mu) / sqrt(X_sigma / Xn + Y_sigma / Yn).

\n

\n

Next we need the cumulative distribution function (CDF) of the t distribution. That was too tedious to implement, so I used jStat's studentt.cdf; looking at its code, the integral in the numerator is approximated with Simpson's rule.

\n

\n

Use the CDF to compute the p-value, multiplying by 2 for a two-tailed test. A t distribution over n samples has n - 1 degrees of freedom (df), and the code combines the two distributions' df as df = (X_sigma + Y_sigma)^2 / (X_sigma^2 / (Xn - 1) + Y_sigma^2 / (Yn - 1)). Strictly it should be computed with the Welch–Satterthwaite equation, df = (X_sigma / Xn + Y_sigma / Yn)^2 / ((X_sigma / Xn)^2 / (Xn - 1) + (Y_sigma / Yn)^2 / (Yn - 1)), but I cut a corner and used the approximation above.

\n

\n

Results

\n

Comparing programs a and b, which have different execution times, the test suggests that the means of the two distributions differ.

\n
❯ ts-node test.ts ab.json\nXn = 10\nYn = 10\nX_mu = 1.8022945422950003\nY_mu = 2.9619571628950006\nX_sigma = 0.6067285795623545\nY_sigma = 0.6593856215802901\nt = -3.2590814831310353\ndf = 17.968919419652778\n-0.0001571394779906754\np = 0.004364964634417297\np < 0.05 = true\nPossibly some difference there
\n

The p value fell below 0.05, so the null hypothesis that the two means are equal was rejected: the two distributions differ. So what happens if we benchmark the same program against itself?

\n
❯ ts-node test.ts aa.json\nXn = 10\nYn = 10\nX_mu = 1.7561671737900002\nY_mu = 1.9892996860899999\nX_sigma = 0.5127362786380443\nY_sigma = 0.442053230382934\nt = -0.754482245774979\ndf = 17.901889803947558\n-27359.526584574112\np = 0.4603702896905685\np < 0.05 = false\nNo difference
\n

Since the p value is not below 0.05, the null hypothesis is not rejected. Note that this does not prove the two means are equal; it only means we found no evidence of a difference.

\n

Unlike Student's t-test, Welch's t-test does not assume homoscedasticity (that the two distributions have equal variances), which makes it very easy to apply. It also works on distributions with equal variances, so defaulting to Welch's method seems like a sound policy.

\n

References

\n\n", "tags": [] }, { "id": "https://uechi.io/blog/give-your-app-slick-name/", "url": "https://uechi.io/blog/give-your-app-slick-name/", "title": "Give Your App Slick Name with namae.dev", "date_published": "2019-08-28T09:12:00.000Z", "content_html": "

Have you ever struggled with naming a new OSS project or web app? Choosing the best name while hoping no one has already claimed it on GitHub, npm, Homebrew, PyPI, domain registries, and so on is wearying work.

\n

That's why I created namae.

\n

namae

\n

namae is a name-availability checker for developers and entrepreneurs that covers many platforms at once.

\n

Once you enter the name you want to use, namae searches various registries and reports whether the name is already taken.

\n

\n

Supported Platforms

\n

namae supports 15 package registries and web platforms, and the list is growing.

\n\n

Additionally, the search results include a list of similarly named projects on GitHub and the App Store.

\n

Name Suggestion

\n

namae also has a unique feature called Name Suggestion. It suggests auto-generated names made up of common prefixes, suffixes, and synonyms. Take a look at some examples.

\n

\n

\n

Click a suggestion, and namae fills in the form with it and starts searching the registries.

\n

Open Source

\n

namae is completely open source, and the entire source code is available on GitHub. It consists of a Node.js Lambda function for the API and a React app for the web frontend, and runs on ZEIT Now.

\n

Conclusion

\n

namae saves you time when searching for a universally available name across hosting providers and package registries.

\n

Go to namae.dev and grab a report on the availability of your future product name. If you have any suggestions, poke me on Twitter ([@uechz](https://twitter.com/uechz)).

\n", "tags": [] }, { "id": "https://uechi.io/blog/sign-and-notarize-electron-app/", "url": "https://uechi.io/blog/sign-and-notarize-electron-app/", "title": "Electronアプリをコード署名してApple 公証 (Notary) を通過させる方法", "date_published": "2019-06-04T06:00:00.000Z", "content_html": "

This post covers code-signing a macOS Electron app with electron-builder and getting it through Apple's notarization.

\n
\n

tl;dr: The repository of Juno, a macOS app set up for code signing and notarization, is publicly available on GitHub.

\n
\n

Code Sign

\n

Code signing of the app is handled automatically by electron-builder, which uses electron-osx-sign internally.

\n

To code-sign an app for release, a valid Developer ID certificate must be stored in Keychain. A macOS Developer certificate can only be used for development code signing, so it is not sufficient for release.

\n

If you have not issued a certificate yet, go to the certificate-creation wizard on Apple Developer and select Developer ID Application to issue one.

\n

Notarize

\n

Submit the code-signed app to the Apple Notary Service using electron-notarize.

\n
const { notarize } = require(\"electron-notarize\");\nnotarize({\n  appBundleId,\n  appPath,\n  appleId,\n  appleIdPassword,\n  ascProvider,\n});
\n\n

electron-builder's afterSign Hook

\n

Use electron-builder's afterSign hook to submit the signed app to the Notary automatically once code signing finishes.

\n

Place the hook script at ./scripts/after-sign-mac.js.

\n
const path = require(\"path\");\nconst { notarize } = require(\"electron-notarize\");\n\nconst appleId = process.env.APPLE_ID;\nconst appleIdPassword = process.env.APPLE_PASSWORD;\nconst ascProvider = process.env.ASC_PROVIDER;\n\nconst configPath = path.resolve(__dirname, \"../package.json\");\nconst appPath = path.resolve(__dirname, \"../dist/mac/App.app\");\nconst config = require(configPath);\nconst appBundleId = config.build.appId;\n\nasync function notarizeApp() {\n  console.log(`afterSign: Notarizing ${appBundleId} in ${appPath}`);\n  await notarize({\n    appBundleId,\n    appPath,\n    appleId,\n    appleIdPassword,\n    ascProvider,\n  });\n  console.log(\"afterSign: Notarized\");\n}\n\nexports.default = async () => {\n  await notarizeApp();\n};
\n

Add afterSign to build in package.json so that the script runs after code signing completes.

\n
\"build\": {\n  \"afterSign\": \"./scripts/after-sign-mac.js\"\n}
\n

Hardened Runtime and Entitlements

\n

As it stands, notarization will fail, because binaries produced with the default settings do not have the security-hardened Hardened Runtime enabled. You will get an error message like the following.

\n
{\n  \"status\": \"Invalid\",\n  \"statusSummary\": \"Archive contains critical validation errors\",\n  \"statusCode\": 4000,\n  \"issues\": [\n    {\n      \"severity\": \"error\",\n      \"code\": null,\n      \"path\": \"App.zip/App.app/Contents/MacOS/App\",\n      \"message\": \"The executable does not have the hardened runtime enabled.\",\n      \"docUrl\": null,\n      \"architecture\": \"x86_64\"\n    },\n  }\n}
\n

So, set build.mac.hardenedRuntime to true in package.json to enable the Hardened Runtime.

\n
\"build\": {\n  \"mac\": {\n    \"hardenedRuntime\": true\n  }\n}
\n

Under the Hardened Runtime, entitlements must be specified as needed. Running Electron requires the allow-unsigned-executable-memory entitlement, so create an entitlement.plist file in the build folder and describe the plist as follows.

\n
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE plist PUBLIC \"-//Apple//DTD PLIST 1.0//EN\" \"http://www.apple.com/DTDs/PropertyList-1.0.dtd\">\n<plist version=\"1.0\">\n  <dict>\n    <key>com.apple.security.cs.allow-unsigned-executable-memory</key>\n    <true/>\n  </dict>\n</plist>
\n

Set entitlements and entitlementsInherit in package.json to the file path of the plist that declares the entitlements.

\n
\"build\": {\n  \"mac\": {\n    \"hardenedRuntime\": true,\n    \"entitlements\": \"./src/build/entitlement.plist\",\n    \"entitlementsInherit\": \"./src/build/entitlement.plist\"\n  }\n}
\n

Now that Electron can run under the Hardened Runtime, the app is ready to pass the Notary.

\n

Run electron-builder and make sure the whole process completes without a hitch.

\n

Verify Notary Status

\n

Whether notarization actually succeeded can be checked with altool.

\n

The email sent after notarization contains a Request Identifier; note it down.

\n
Dear uetchy,\n\nYour Mac software has been notarized. You can now export this software and distribute it directly to users.\n\nBundle Identifier: <Bundle ID>\nRequest Identifier: <UUID>\n\nFor details on exporting a notarized app, visit Xcode Help or the notarization guide.\nBest Regards,\nApple Developer Relations
\n

Check the notarization status by passing the UUID, your Apple ID, and the password to the xcrun altool --notarization-info command.

\n
xcrun altool --notarization-info <UUID> -u $APPLE_ID -p $APPLE_PASSWORD
\n

If notarization succeeded, you will see a message like the following. Congratulations!

\n
2019-06-05 13:51:18.236 altool[5944:261201] No errors getting notarization info.\n\n   RequestUUID: <UUID>\n          Date: 2019-06-05 04:45:54 +0000\n        Status: success\n    LogFileURL: https://osxapps-ssl.itunes.apple.com/itunes-assets/Enigma123/v4/<Log file identifier>\n   Status Code: 0\nStatus Message: Package Approved
\n

References

\n\n", "tags": [] }, { "id": "https://uechi.io/blog/english-note/", "url": "https://uechi.io/blog/english-note/", "title": "英語メモ", "date_published": "2019-01-17T01:31:00.000Z", "content_html": "

Miscellaneous notes.

\n\n
\n

And this is a Tech Forum, AND it's supposed to be a safe space. — そしてここは技術フォーラムであり、かつ心理的安全性が確保されているべき場所だ。 If you don't want to answer questions for \"n00b\", then don't answer. — \"初心者\"の質問に答えたくないなら、ただ口をつぐんでくれ

\n
\n
\n

Have you thought of what you’ll do after you retire? — 引退したらやりたいことについて考えたことある?

\n
\n\n
\n

me: it’s not that I mind freelancing, I love it. It’s just that the social interaction is pretty minimal and extremely uneven day-to-day and sometimes I wonder how that will affect me long term, you know? barista: ok are you going to order

\n
\n\n
\n

glad you’re happily situated elsewhere now. — あなたが他の場所で活躍しているようで嬉しい

\n
\n\n
\n

I would also half-expect the following sentences to contain further information on this particular group of students, like this:

\n
\n
\n

It uses the definite article before \"majority of the students\", as if referring to a known group.

\n
\n
\n

This sentence uses the indefinite article before the \"majority of the students\". This has the effect of introducing this group of students to the reader or the listener.

\n
\n
\n

It may give a hint that the exact number of the students that will vote is uncertain: it could be 51%, but then it could be 88%.

\n
\n
\n

Design is not just a visual thing, it's a thought process. it's a skill.

\n
\n\n
\n

Lots of bittersweet feelings over this

\n
\n\n
\n

learning foreign languages is for me lifelong hobby. knowledge likely to requires much time to acquiring but it never decays.

\n
\n
\n

sees client, whistles loudly

\n
\n
\n

weston comes out of nowhere running on all fours like a gorilla towards client

\n
\n
\n

I don't even understand how there can be HOURS of planned downtime for a service like this.

\n
\n
\n

And what could go wrong, in depending on the internet to keep the child warm

\n
\n
\n

the developer advocate in question understands the language barrier, it would seem; he’s not a native English speaker. regardless, his stance puts many who haven’t had a strong English education at a disadvantage. it prioritizes “getting things done” over people...

\n
\n
\n

save my dying cactus. what's wrong. cactus should have been easy stuff since they barely need water.

\n
\n
\n

higher form of something

\n
\n
\n

i hope it will inspire Americans to pursue a second language

\n
\n
\n

I love the fact that

\n
\n
\n

let's see if the result is the same (spoiler alert: it wasn't)

\n
\n
\n

Feature engineering is when we use existing features to create new features to determine if they provide new signals to predict our outcome.

\n
\n
\n

The happiest moment of my life was when I passed the entrance test.

\n
\n
\n

Controversial opinion: Every single opinion is controversial.

\n
\n\n
\n

i feel like i'm most productive when i'm avoiding some other kind of responsibility

\n
\n\n

Go

\n\n

Get

\n\n
\n

make sure you know what the author expects you to know.

\n
\n\n
\n

Thanks to his help, we managed to finish the work in time. 彼のおかげでなんとか時間内に仕事を終えることできた

\n
\n
\n

Thanks to the development of medicine, people have been able to live longer.

\n
\n
\n

Owing to the rise in oil prices, people are forced to use public transportation.

\n
\n
\n

The balloon is likely to burst at any time その風船は今にも爆発しそうだ

\n
\n
\n

It is because the air was polluted that the experiment was a failure. The experiment was a failure because the air was polluted.

\n
\n
\n

Thesis. Remote companies have less of an echo chamber since they are distributed. Thoughts?

\n
\n
\n

Obviously they should A, and B when C. Not D forever.

\n
\n\n
\n

I think people are angry because Slack A, B and is C. No D, no E.

\n
\n
\n

Because he's the kind of person who likes helping others and is always gentle and kind as you described

\n
\n
\n

it was more difficult to get it wrong than guess it right! XD

\n
\n
\n

I'm pretty sure there was a lot of uproar about the Equifax thing

\n
\n

Idioms

\n\n
\n

Hello, I just came across this wonderful library and was really intrigued by it :)

\n
\n
\n

When teaching, be careful not to mix up \"\" and \"\".

\n
\n\n
\n

Researchers spend a great deal of time doing something — 研究者は多くの時間を〜するのに割いています。

\n
\n\n
\n

The iPad already has external keyboard support and an external trackpad support would go a long way to making the iPad a Mac replacement.

\n
\n
\n

Countless wrong inferences could be drawn from those observation.

\n
\n\n
\n

I felt like a coward hiding this from people — それを隠すことで自分が臆病者に思えた

\n
\n
\n

While training for the marathon, she was relentless in following the same schedule — マラソンの練習の間、彼女は同じスケジュールをしつこいほど守った。

\n
\n\n
\n

A was quick to spot the potential of the Internet — A はインターネットの可能性をいち早く見出した

\n
\n\n

p.p. — the present progressive form (現在進行形)

\n\n

In general, preposition + who(m) is preferred when confirming known information, while who + preposition is preferred when asking for new information.

\n
\n

This is sure to satisfy those who are into computers. これはコンピュータにはまっている人をきっと満足させるだろう

\n
\n
\n

I've always wanted to study abroad someday but I haven't been able to yet.

\n
\n

Complex sentences and participial constructions

\n\n

when

\n
    \n
  1. Use the present tense for future conditions: When I arrive at the station, I'll call you.
  2. \n
  3. Use the past tense for past time: When I was a child, I used to like soccer.
  4. \n
  5. Use the future tense when talking about a future possibility: I don't know when he will come.
  6. \n
\n\n

An infinitive is a verb-based word that functions as a noun, adjective, or adverb.

\n

The to-infinitive expresses something unrealized; the -ing form expresses something realized.

\n\n
    \n
  1. She is studying English hard to study abroad.
  2. \n
  3. She studies English hard to study abroad.
  4. \n
\n

What's the difference?

\n\n

Inversion

\n\n
\n

\"We live in a world of churn, but we do not have to accept high churn rates as the only reality.\" from \"Subscription Marketing: Strategies for Nurturing Customers in a World of Churn (English Edition)\" by Anne Janzer 45 Ways To Avoid Using The Word 'Very' - Writers Write

\n
\n
\n

I still hate that \"atheist\" now is code for \"bigoted asshole.\"

\n
\n
\n

Life only comes around once. So do whatever makes you happy and be with whoever makes you smile!๑╹◡╹)ノ…

\n
\n
\n

This is exactly what I thought was gonna happen when I saw those tweets, thank you…

\n
\n
\n

@Nodejs and @the_jsf have merged to form the #OpenJSFoundation! The OpenJS Foundation is to become the central place to support collaborative development of #JavaScript and web technologies.

\n
\n
\n

I love critical thinking and I admire skepticism, but only within a framework that respects the evidence.

\n
\n
\n

In earlier versions of TypeScript, we generalized mapped types to operate differently on array-like types. This meant that a mapped type like Boxify could work on arrays and tuples alike.

\n
\n
\n

a cool thing about the last few years is that the U.S. became the leading exporter of the intellectual machinery of western fascism and one of the leading domestic debates about it is whether undergrads are treating the people behind it politely enough

\n
\n
\n

Perhaps it would be less of a debate if they were more circumspect in who/what they call fascism. The main criticism seems to be that they have a lot of false positives in their \"fascist-test\" and treat anyone who points this out as fascist fellow travellers

\n
\n
\n

Universities are filled with people who feel like they are competing against their colleagues. When you do this, you end up constantly looking in front of you, feeling bitter. Or looking behind you, becoming arrogant. Run your own race. — Marc Lamont Hill https://twitter.com/marclamonthill/status/1109500482500939776?s=12

\n
\n
\n

Much discussion about bias in ML due to the training dataset - this has been an active area of study for a long time in population genetics, and is still not fully resolved — Miriam Huntley https://twitter.com/iam_mir_iam/status/1108819635959418881?s=12

\n
\n
\n

Welcome to the FGO NA Twitter. If you look to your left, you'll see the salty people club that are currently sulling about not pulling Okita. If you look to your right, you'll see the angry weeb club still screeching about Emiya Alter. Please enjoy your stay.… — ℭ𝔦𝔫𝔡𝔢𝔯 𝔉𝔞𝔩𝔩 https://twitter.com/zettainverse/status/1109231751019278337?s=12

\n
\n
\n

Probably one of the most auto-bookmarkable post I've seen in a while, regardless of skill level with git: — Ben Halpern 🤗 https://twitter.com/bendhalpern/status/1135319291568562176?s=12

\n
\n
\n

Now announcing tsconfig-api 🎉 An experimental microservice for retrieving @typescript compiler option details 🔎 100% open source and hosted by @zeithq Now — Matterhorn https://twitter.com/matterhorndev/status/1138610398159147008?s=12

\n
\n
\n

Amid all of the chaos, an answer could be found. There’s a special kind of joy in UI libraries when you see small primitives working. Like maybe it’s a two rectangle demo. But you’ve already learned that thanks to composition, if it worked for two rectangles, you can make it work for a whole FB/Twitter-level app. — Dan Abramov https://twitter.com/dan_abramov/status/1143911059717263360?s=12

\n
\n
\n

Yes, assuming something will work like literally every other software that has ever been created will work is subjective. Super annoying that these people are so insecure about themselves that they have to do that kind of thing.… — Kent C. Dodds https://twitter.com/kentcdodds/status/1147142716280602629?s=12

\n
\n
\n

Honest question: how does banning people from an opportunity to build a professional presence and potentially escape an oppressive regime advance human rights?… — Dan Abramov https://twitter.com/dan_abramov/status/1154871232459956224?s=12

\n
\n
\n

The national flag of Japan is a rectangular white banner bearing a crimson-red disc at its center. This flag is officially called Nisshōki (日章旗, the \"sun-mark flag\"), but is more commonly known in Japan as Hinomaru (日の丸, the \"circle of the sun\"). It embodies the country's sobriquet: Land of the Rising Sun. reading this tweet is weird because at first you're like \"hahaha they're not used to how gacha works\" then you realize you've just been conditioned into being ok with predatory game models You're offering subjective value assessments, not facts.… — Levi Roach https://twitter.com/DrLRoach/status/1172907254892421120?s=17

\n
\n
\n

My takeaway is that I'm starting a support group for design systems engineers across the world. 😛 We're all going through different versions of the same challenges at each of our companies and it's always encouraging to share information about where we are in this journey. — Maja Wichrowska https://twitter.com/majapw/status/1187891828189589504?s=17

\n
\n", "tags": [] }, { "id": "https://uechi.io/blog/padsize/", "url": "https://uechi.io/blog/padsize/", "title": "padStartにおけるpadSizeの求め方", "date_published": "2019-01-13T06:00:00.000Z", "content_html": "

How to compute an appropriate padSize for padStart.

\n

\n
const padSize = Math.ceil(Math.log10(arr.length + 1));\n\narr.forEach((item, index) => {\n  // padStart is a string method and the labels are 1-based,\n  // so convert index + 1 to a string first.\n  console.log(`${String(index + 1).padStart(padSize, \"0\")}: ${item}`);\n});
\n

The result looks like this.

\n
01: item1\n02: item2\n03: item3\n04: item4\n05: item5\n06: item6\n07: item7\n08: item8\n09: item9\n10: item10
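Why arr.length + 1 works: the labels are 1-based, so the widest label is arr.length itself, and a positive integer n has ⌈log10(n + 1)⌉ digits. A quick check (the helper name padSizeFor is mine):

```typescript
// Digits needed to print the largest 1-based label n with zero padding.
const padSizeFor = (n: number): number => Math.ceil(Math.log10(n + 1));

console.log(padSizeFor(5)); // 1 → labels 1..5 need one digit
console.log(padSizeFor(10)); // 2 → "10" needs two digits
console.log(padSizeFor(100)); // 3
```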
\n", "tags": [] }, { "id": "https://uechi.io/blog/math-api/", "url": "https://uechi.io/blog/math-api/", "title": "Math API: LaTeX Math as SVG image", "date_published": "2018-10-22T09:19:00.000Z", "content_html": "

I've always wanted to put LaTeX math equations on web pages where MathJax is not allowed to run.

\n

After spending some time on it, I made Math API, which renders LaTeX math markup into an SVG image.

\n

So you can place your equation almost anywhere you can put <img> or a Markdown image (![]()), such as GitHub, Jupyter Notebook, or dev.to (here!).

\n
![](https://math.now.sh?from=\\LaTeX)
\n
\n\"Equation\"
Equation
\n
\n
![](https://math.now.sh?from=\\log\\prod^N_{i}x_{i}=\\sum^N_i\\log{x_i})
\n
\n\"Equation\"
Equation
\n
\n

Inline image

\n

\n

It is possible to generate an inline equation by changing the query parameter from from to inline.

\n
<img src=\"https://math.now.sh?inline=\\\\LaTeX\" />
\n

Online Editor

\n

There is also an online editor available at https://math.now.sh.

\n

\n

Conclusion

\n

The source code is available on GitHub. Give it a try and leave a comment/idea for a new feature.

\n", "tags": [] }, { "id": "https://uechi.io/blog/comparing-oss-on-github/", "url": "https://uechi.io/blog/comparing-oss-on-github/", "title": "Comparing OSS on GitHub", "date_published": "2018-09-22T09:21:00.000Z", "content_html": "

Say you are deciding which open-source project to adopt for the application you are developing.

\n

This time it is a bit difficult, because the candidates look almost identical from a functional perspective.

\n

So let's look at this from a different angle: contributor and user activity.

\n\n

I made a simple tool that covers the guidelines above.

\n

gh-compare

\n

\n

gh-compare is a simple terminal app that explores your candidates and aggregates the results into a nice-looking report.

\n
npm install -g gh-compare\ngh-compare facebook/react vuejs/vue riot/riot
\n

\n

You will see the GitHub activity for each candidate at a glance, which should help you decide which library to adopt!

\n

Comments and ideas to improve gh-compare are warmly welcome!

\n", "tags": [] } ] }