{ "version": "https://jsonfeed.org/version/1", "title": "dchest.com", "home_page_url": "https://dchest.com", "feed_url": "https://dchest.com/feed.json", "author": { "name": "Dmitry Chestnykh" }, "items": [ { "id": "https://dchest.com/2025-08-09-mlkem-webcrypto", "title": "ML-KEM in WebCrypto API", "url": "https://dchest.com/2025/08/09/mlkem-webcrypto/", "tags": ["Programming","Security","Cryptography"], "date_published": "2025-08-09T00:00:00Z", "content_html": "
The Web Cryptography API is not everyone’s favorite thing, but it’s the only way to do cryptography client-side in web browsers or web views if you want to avoid third-party libraries (e.g., for compliance reasons, or to save keys securely in IndexedDB).
\n\nOnly recently have we gotten wide X25519 and Ed25519 support in browsers.
\n\nThe next big thing is ML-KEM, a post-quantum key encapsulation mechanism standardized by NIST last year.
\n\nML-KEM protects against future quantum computers that could break current public key algorithms. Implementing it now is important for many use cases, because attackers can record data encrypted with classical algorithms today and decrypt it later when quantum computers that can break those algorithms become available (if this ever happens). Implementing quantum-secure signatures, such as ML-DSA and SLH-DSA, on the other hand, is less pressing for many applications (but not all), because most signatures are verified today, when there’s no quantum computer capable of forging them.
\n\nAs of August 2025, the WebCrypto API does not yet support ML-KEM. But there is a draft specification written by Daniel Huigens. Browsers already ship with ML-KEM support internally for TLS, so I expect they’ll implement this spec sometime soon. I personally hope for 2026, but who knows — X25519 was in TLS for a long time before it got into WebCrypto. There are some other signals that ML-KEM support everywhere is on the horizon, such as Apple adding it to CryptoKit and encouraging developers to use it in their WWDC 2025 session.
\n\nMeanwhile, if, like me, you are developing a web app that uses the WebCrypto API today, you are left with the following options:
\n\nDon’t implement post-quantum cryptography until it’s available, and then suffer the migration pain.
Use a third-party library that implements ML-KEM in JavaScript or WASM and then switch to the WebCrypto API when it becomes available, rewriting the code (don’t forget that WebCrypto is async, so you may have to make all the functions that use it async).
Use a third-party library that implements ML-KEM with a WebCrypto API-like interface, and then switch to the WebCrypto API when it becomes available, with almost no code changes.
I decided to go with the third option, but there was no such library available, so I wrote one! mlkem-wasm implements ML-KEM-768 in WebAssembly with an API from the Modern Algorithms in the Web Cryptography API draft spec. It’s a single 50 KB JavaScript file (16 KB gzipped), with WASM embedded in it, so it’s easy to use, and, in theory, it would be easy to switch to the WebCrypto API by removing the import and replacing mlkem with crypto.subtle in the code.
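To illustrate, here is a sketch (not runnable as-is) of what that usage looks like. The encapsulateBits/decapsulateBits method names, the key usages, and the { sharedKey, ciphertext } result shape are my reading of the draft spec and mlkem-wasm, and may not match the final API exactly:

```ts
// Sketch only: API names follow the draft spec as I understand it.
import { mlkem } from "mlkem-wasm"; // later: replace `mlkem` with `crypto.subtle`

// Recipient generates an ML-KEM-768 key pair.
const keyPair = await mlkem.generateKey(
  { name: "ML-KEM-768" },
  false, // non-extractable
  ["encapsulateBits", "decapsulateBits"]
);

// Sender: produce a shared secret plus a ciphertext for the recipient.
const { sharedKey, ciphertext } = await mlkem.encapsulateBits(
  { name: "ML-KEM-768" },
  keyPair.publicKey
);

// Recipient: recover the same shared secret from the ciphertext.
const sameSharedKey = await mlkem.decapsulateBits(
  { name: "ML-KEM-768" },
  keyPair.privateKey,
  ciphertext
);
```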
Unlike with my previous crypto libraries, I wrote none of the ML-KEM code myself. Instead, I compiled mlkem-native, which is a memory-safe, type-safe, high-performance C library used by AWS. I only wrote some build scripts and the TypeScript wrapper implementing the WebCrypto-like interface.
\n\nWhile it currently calls into WASM, it would not be hard to customize it to use a different ML-KEM implementation, for example, if you want to use Apple’s CryptoKit with a WKWebView-based native app. I will probably release bindings for it when the time comes (assuming the native WebCrypto API arrives later than that).
\n\nIt’s fast; here are benchmark results on my M1 MacBook Air in Chromium:
\n\nBenchmark Results (10000 iterations each):\n\nKeypair Generation:\n• Total: 360.70ms\n• Average: 0.04ms per operation\n• Throughput: 27724 ops/sec\n\nEncapsulation:\n• Total: 318.40ms\n• Average: 0.03ms per operation\n• Throughput: 31407 ops/sec\n\nDecapsulation:\n• Total: 358.60ms\n• Average: 0.04ms per operation\n• Throughput: 27886 ops/sec\n\n\nYou can try the demo here: https://dchest.github.io/mlkem-wasm/.
\n\nThe source code is at https://github.com/dchest/mlkem-wasm. Don’t forget to read the “Limitations” section in the README to see if it fits your use case.
\n\nNote that since ML-KEM is fairly new and less studied than elliptic curve algorithms, most deployments use it in hybrid schemes alongside classical algorithms such as X25519 or P-256, rather than replacing them.
\n\nSo, my plan is to use mlkem-wasm in production until the WebCrypto API with ML-KEM ships in all browsers, and then switch to it with minimal changes.
Update (2025-08-26): I also implemented the ML-DSA-65 post-quantum signature algorithm (previously known as Dilithium3) in a separate package: mldsa-wasm.
\n" }, { "id": "https://dchest.com/2025-06-17-how-to-store-web-data-in-keychain", "title": "How to store web app data in the system keychain", "url": "https://dchest.com/2025/06/17/how-to-store-web-data-in-keychain/", "tags": ["Programming","Security","Cryptography"], "date_published": "2025-06-17T00:00:00Z", "content_html": "While there are no APIs to store web app data in the system keychain, there is a simple method that allows you to do almost the same thing using the WebCrypto API. This also applies to native apps that use WKWebView.
\n\nGenerate a non-extractable AES CryptoKey using window.crypto.subtle.generateKey:
const key = await window.crypto.subtle.generateKey(\n {\n name: "AES-GCM",\n length: 256,\n },\n false, // non-extractable flag, important!\n ["encrypt", "decrypt"]\n );\n\n\nSave this key into IndexedDB.
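A browser-only sketch of saving and loading the key could look like this (the database and store names are arbitrary; the getEncryptionKey name matches the helper used in the code below):

```js
// Browser-only sketch: persist the non-extractable CryptoKey in IndexedDB.
function openDB() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('keystore', 1);
    req.onupgradeneeded = () => req.result.createObjectStore('keys');
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function saveKey(key) {
  const db = await openDB();
  return new Promise((resolve, reject) => {
    const tx = db.transaction('keys', 'readwrite');
    // Structured clone stores the CryptoKey object itself, not its bytes.
    tx.objectStore('keys').put(key, 'main');
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

async function getEncryptionKey() {
  const db = await openDB();
  return new Promise((resolve, reject) => {
    const req = db.transaction('keys').objectStore('keys').get('main');
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}
```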
Use this key to encrypt and decrypt data that you want to be tied to the keychain,\nfor example, using AES-GCM with a random nonce:
\n\nasync function encrypt(data: Uint8Array): Promise<Uint8Array> {\n // Retrieve the key from IndexedDB (implement this yourself).\n const key = await getEncryptionKey();\n // Generate a random nonce.\n const iv = window.crypto.getRandomValues(new Uint8Array(12));\n // Encrypt the data.\n const encrypted = await window.crypto.subtle.encrypt(\n {\n name: "AES-GCM",\n iv,\n tagLength: 128,\n },\n key,\n data\n );\n // Prepend nonce to the encrypted data.\n const result = new Uint8Array(iv.length + encrypted.byteLength);\n result.set(iv);\n result.set(new Uint8Array(encrypted), iv.length);\n return result;\n}\n\nasync function decrypt(data: Uint8Array): Promise<Uint8Array> {\n // Retrieve the key from IndexedDB.\n const key = await getEncryptionKey();\n // Extract the nonce and the encrypted data.\n const iv = data.slice(0, 12);\n const encrypted = data.slice(12);\n try {\n // Decrypt the data.\n return new Uint8Array(\n await window.crypto.subtle.decrypt(\n {\n name: "AES-GCM",\n iv,\n tagLength: 128,\n },\n key,\n encrypted\n )\n );\n } catch (e) {\n console.error("Failed to decrypt data", e);\n throw e;\n }\n}\n\n\nStore the encrypted data anywhere you like, for example, in IndexedDB or localStorage (base64-encoded). That’s it! The data is protected.
If you store a CryptoKey in IndexedDB, it will be encrypted with another key stored in the system keychain (or another mechanism that eventually uses the keychain). If you have an app that uses WKWebView, this encryption key will be specific to the app.

(The usual disclaimer for “browser crypto bad” people: none of this prevents cross-site scripting attacks from stealing data, or a ton of other attacks; that’s not the point of this post.)
\n" }, { "id": "https://dchest.com/2020-07-09-blurring-is-not-enough", "title": "Blurring is not enough", "url": "https://dchest.com/2020/07/09/blurring-is-not-enough/", "tags": ["Security"], "date_published": "2020-07-09T00:00:00Z", "content_html": "You’ve probably heard of that thing that restored (well, tried to restore) pixelated images.
\n\n
You may have heard about the criminal who got caught after he posted a swirled photo of himself. The police were able to undo the deformation to reveal his face.
\n\n
Turns out, blurring can also be undone in some cases:
\n\n
This is the result of the Restoration of defocused and blurred images project by Vladimir Yuzhikov. Of course, it won’t magically unblur any photo, but the results are impressive nonetheless.
\n\nIf you want to make something unrecognizable in a photo, just slap a big black rectangle on top. Make sure that the rectangle is opaque. Then take a screenshot of the censored image just to be safe and use it. To be completely sure, print and scan it back if you’re paranoid! Make sure your printer or scanner drivers don’t send pictures somewhere. Ah, screw it, just don’t post the picture!
\n\n* * *
\n\nSee also this article by Michał Zalewski.
\n" }, { "id": "https://dchest.com/2020-07-08-swiftui-is-the-future", "title": "SwiftUI is the future", "url": "https://dchest.com/2020/07/08/swiftui-is-the-future/", "tags": ["Programming","Swift"], "date_published": "2020-07-08T00:00:00Z", "content_html": "SwiftUI is Apple’s UI framework, which is quite similar to React. It lives on top of their other UI frameworks: you declare components, state, and some callbacks, and the system will figure out how to render everything. It was announced last year. This year Apple improved it, added many missing features, and began using it for new widgets, Apple Watch complications, etc.
\n\n
What’s interesting is that Apple is clearly going for the ease of cross-platform development. With the same UI code base, the same components adjust their behavior according to the target platform: watchOS, iOS, iPadOS, macOS, and tvOS. (glassOS in the future?)
\n\nIn the WWDC 2020 episode of The Talk Show, Craig Federighi said that they are not declaring a single framework the winner for the future; everyone can continue using UIKit and AppKit. This makes sense — for now — since you can do things with them that are not yet possible with SwiftUI (and vice versa since iOS 14). But to me, SwiftUI seems like the future of development for Apple’s platforms. It’s easier to write and understand, it can be more performant, and more importantly, Apple has more control over the final result due to its declarative nature.
\n\nI don’t expect them to abandon everything else quickly, but this day may come.
\n\nWhat do you think?
\n" }, { "id": "https://dchest.com/2020-06-27-platform-authenticators-for-web-authentication-in-safari-14", "title": "Platform authenticators for Web Authentication in Safari 14", "url": "https://dchest.com/2020/06/27/platform-authenticators-for-web-authentication-in-safari-14/", "tags": ["Security","Software"], "date_published": "2020-06-27T00:00:00Z", "content_html": "Safari 14 will support platform authenticators for Web Authentication API (also known as WebAuthn). Current versions of Safari already support WebAuthn for security keys, such as YubiKey, which are called roaming authenticators, but soon you will be able to authenticate using Touch or Face ID on supported devices without any external keys; this is called a platform authenticator.
\n\nThis is already supported by Chrome on Macs, but the importance of the new development is that millions of iOS and iPadOS users will be able to use WebAuthn without dongles.
\n\nHere’s how it works, briefly. You sign up normally with a username and password, and then add your device (iPhone, iPad, MacBook with Touch ID) for passwordless login. The next time you sign in, you don’t even have to enter your password — your device will ask you for your fingerprint or face, and you’re in. Since the cryptographic keys used for WebAuthn are stored securely on the device, if you want to sign in on a different device, you will have to enter your password for the first login.
\n\n
This flow is much better than the standard two-factor authentication flow, and I expect it to replace TOTP, 2FA with WebAuthn/U2F, and other multifactor authentication methods for most people, now that platform authenticators are becoming available on iOS, iPadOS, macOS, Android, and Windows (with Windows Hello). Which is great, because nobody wants 2FA unless they are forced to use it. (We still need a solution for the first sign in on device, though.)
\n\nIt looks like for now, desktop Linux users will have to figure out how to use their TPMs (the same modules that hardcore free software people have been opposing for ages) or stick to security keys. If you know about any developments regarding this at Red Hat or Canonical, please let me know in the comments below; I’d love to know.
\n" }, { "id": "https://dchest.com/2020-06-15-does-salt-need-to-be-random-for-password-hashing", "title": "Does salt need to be random for password hashing?", "url": "https://dchest.com/2020/06/15/does-salt-need-to-be-random-for-password-hashing/", "tags": ["Security","Cryptography"], "date_published": "2020-06-15T00:00:00Z", "content_html": "You probably know that salting is needed to make each password hash unique so that an attacker couldn’t crack multiple hashes at once.
\n\nThis was already known to the Unix creators, according to the paper written by Robert Morris and Ken Thompson in 1979:
\n\n\n\n\n3. Salted Passwords
\n\nThe key search technique is still likely to turn up a\nfew passwords when it is used on a large collection of\npasswords, and it seemed wise to make this task as\ndifficult as possible. To this end, when a password is first\nentered, the password program obtains a 12-bit random\nnumber (by reading the real-time clock) and appends\nthis to the password typed in by the user. The concatenated\nstring is encrypted and both the 12-bit random\nquantity (called the salt) and the 64-bit result of the\nencryption are entered into the password file.
\n\nWhen the user later logs in to the system, the 12-bit\nquantity is extracted from the password file and appended\nto the typed password. The encrypted result is\nrequired, as before, to be the same as the remaining 64\nbits in the password file. This modification does not\nincrease the task of finding any individual password\nstarting from scratch, but now the work of testing a given\ncharacter string against a large collection of encrypted\npasswords has been multiplied by 4,096 (2¹²). The reason\nfor this is that there are 4,096 encrypted versions of each\npassword and one of them has been picked more or less\nat random by the system.
\n\nWith this modification, it is likely that the bad guy\ncan spend days of computer time trying to find a password\non a system with hundreds of passwords, and find\nnone at all. More important is the fact that it becomes\nimpractical to prepare an encrypted dictionary in advance.\nSuch an encrypted dictionary could be used to\ncrack new passwords in milliseconds when they appear.
\n\nThere is a (not inadvertent) side effect of this modification.\nIt becomes nearly impossible to find out whether\na person with passwords on two or more systems has\nused the same password on all of them, unless you\nalready know that.
\n
An attacker who gets your leaked hashes verifies guesses by hashing a password guess and comparing the result with a leaked hash. If there is no salt, the attacker can compare the current guess against every hash, and thus has more chances of finding the correct password for at least one of the users (and they can even precompute hashes before your password database leaks). However, if each password has been hashed with a unique salt, the attacker cannot do it — they will have to do the hashing (which is the expensive part) for each of the leaked hashes to see if they’ve got the match.
\n\nOK, so salt should be unique for each user. What else? It also should be unique for each password. This means that if a user changes their password, their new password should be hashed with a new salt, not reuse the old one. This is needed to prevent an attacker that gets access to the historical hashes of a user from attacking them all at once and from learning whether the user changed the password to a previously used one.
\n\nWhen I say unique, I mean globally unique. A user on your system must have a different salt than the same user on my system, otherwise attackers will instantly see if the user reused their password, and won’t have to crack it twice to attack two systems.
\n\nWhat else? It should be unpredictable to the attacker, otherwise they can precompute their guesses even before they acquire the leaked hashes.
\n\nSounds good! We established that salt should be globally unique per password and unpredictable. But should it be random? Can’t we somehow derive unique and unpredictable salts instead of storing them? Modern programmers hate state (otherwise they wouldn’t use JWT for sessions). Today everything needs to be stateless or it’s not web scale! Let’s see.
\n\nCan we deterministically derive a unique unpredictable salt per user? Sure, let’s try. We need a fixed secret key that our server knows, let’s call it global salt, and a user identifier. We can, for example, use HMAC-SHA256(globalSalt, userID) to calculate 32 bytes, which are unique for each user, and unpredictable for attackers as long as our global salt stays secret. We can use these bytes or a part of them as a salt…
However, this doesn’t satisfy the other requirement: that the salt must be unique per password, not per user. What can we do to fix this? Simple — just introduce a global counter. After each hashed password, the server increments the counter, and does something like HMAC-SHA256(globalSalt, counter || userID) to derive salts. (Note that || here designates concatenation, not logical OR). Boom! We have a unique unpredictable salt per password. Again, as long as the global salt is not leaked (or the attacker will be able to predict future salts and precompute password hashes for them), and as long as our counter doesn’t repeat (no race conditions, safe against VM restores and crashes). But the counter is now a state which you all hate! Also, we’ll have to store the counter along with the hash, just like we store the salt. Well, at least, we saved some disk space.
Alternatively, instead of the counter and even userID, we can use system clock in our HMAC construction, provided that we can always generate a different timestamp, and store the timestamp next to the corresponding hash.
\n\nWhat have we done? Basically, we have invented our own inferior, complicated, high-maintenance pseudorandom number generator, which breaks in many cases! (Following in the UUID inventors’ footsteps, aren’t we?) In fact, we are not too far from reinventing the random byte generator that your operating system provides, except that our secret key is fixed, while the OS updates its key from time to time.
\n\nThis is why salts everywhere are random — it is much easier and more secure to just use the proper random number generator and store the generated salt next to the password hash.
\n\nPS Of course, you should use the proper password hashing function! For more details, check out my book.
\n\n
Many apps with client-side encryption that use passwords derive both encryption and server authentication keys from them.
\n\nOne such example is Bitwarden, a cross-platform password manager. It uses PBKDF2-HMAC-SHA-256 with 100,000 rounds to derive an encryption key from a user’s master password, and an additional 1-round PBKDF2 to derive a server authentication key from that key. Bitwarden additionally hashes the authentication key on the server with 100,000-iteration PBKDF2 “for a total of 200,001 iterations by default”. In this post I’ll show you that these additional iterations for the server-side hashing are useless if the database is leaked, and the actual strength of the hashing is only as good as the client-side PBKDF2 iterations plus one HKDF and one HMAC. I will also show you how to fix this.
\n\n(Note that this post is from 2020 and discusses Bitwarden’s authentication scheme as it was at that time. Since then, Bitwarden has been working on improving it.)
\n\nSince Bitwarden doesn’t have publicly available documentation (which is unfortunate for an open source project), I rely on the documentation provided by Joshua Stein, who did a great job of reverse-engineering the protocol for his Bitwarden-compatible server written in Ruby.
\n\nOn the client side, PBKDF2 is used with a user’s password and email to derive a master key. (The email address is used as the salt. I’m not a big fan of such salting, but we won’t discuss it today.) This master key is then used to encrypt a randomly generated 64-byte key (which encrypts the user’s data) — we’ll refer to the result as a protected key. The master key is then again put through PBKDF2 (with one iteration this time) to derive a master password hash, which is used to authenticate with the server. The server runs the master password hash through PBKDF2 and stores the result in order to authenticate the user. It also stores the protected key.
\n\n
To authenticate a user, the server receives the master password hash, hashes it with PBKDF2, and compares the result with the stored hash. If they are equal, authentication is successful, and the server sends the protected key to the client, which will decrypt it to get the encryption key.
\n\nThe additional hashing on the server has two purposes: to prevent attackers that get access to the stored hash from authenticating with the server (they need to undo the last hashing for that, which is only possible with a dictionary attack), and to improve resistance to dictionary attacks by adding a random salt and more rounds to the client-side hashed password.
\n\nThe last part does not work well in the described scheme. Have you noticed that the protected key is the random key encrypted with the master key (which is derived from the password)?
\n\nprotectedKey = AES-CBC(masterKey, key)\n\n\nThat random key is used to encrypt and authenticate user data, such as login information, passwords, and other information that the password manager deals with. One part of the key (let’s call it key1) is used for encryption with AES-CBC, another part (key2) is used with HMAC to authenticate the ciphertext:
\n\nencryptedData = AES-CBC(key1, data)\nencryptedAndAuthenticatedData = HMAC(key2, encryptedData)\n\n\n(I’m skipping initialization vectors and other details for clarity.)
\n\nThe result, encryptedAndAuthenticatedData, is stored on the server. To decrypt data, the Bitwarden client needs the user’s password, from which it derives the master key with PBKDF2, then decrypts the key from the protectedKey it fetched from the server, and then uses that key to decrypt the data.
\n\nIf Bitwarden’s server database is leaked, the attackers do not need to run dictionary attacks on the master password hash, which has additional PBKDF2 rounds. Instead, they can run the attack as follows:
\n\nmasterKey = PBKDF2(passwordGuess)\nkey = AES-CBC-decrypt(masterKey, protectedKey)\nthen verify the guess by checking HMAC(key2, encryptedData) against the stored data.\n\n(Update (January 2023): actually, we don’t even need AES decryption and verification, since it uses Encrypt-then-MAC — we only need HMAC, plus HKDF for deriving the MAC key.)
\n\nAs you can see, instead of PBKDF2(PBKDF2(passwordGuess, 100,001), 100,000), which is 200,001 iterations, attackers can run HMAC(HKDF(PBKDF2(passwordGuess, 100,000))), which is 100,000 iterations of PBKDF2, one HKDF, and one HMAC. This attack is not necessarily cheaper in Bitwarden’s case than the standard one on PBKDF2, since it includes the cost of the additional derivations and passing around a bit more data compared to just running an additional PBKDF2-HMAC-SHA-256 with 100,000 iterations. However, when this authentication scheme is used with a better server-side password hashing function, the attack cost can be significantly reduced.
The fix is simple. But let’s first generalize our authentication/encryption scheme:
\n\n
Derive two keys from a password using a password-based key derivation function: wrappingKey and authKey. The first one is used to encrypt random key material to get protectedKey, which is then sent to the server during the registration or re-keying, the second one is sent to the server for authentication.
\n\nIn Bitwarden’s case, authKey will be again hashed and stored, while protectedKey will be stored as is (allowing attackers to use it to verify password guesses without additional hashing).
\n\n
The improvement that I propose on the server side looks like this:
\n\n
Instead of deriving one key (aka hash, aka verifier) with the password hashing function, derive two keys: serverEncKey and verifier. (The verifier has the same purpose as in the original scheme.)\n(Note: don’t use the password hashing function twice! Instead, with a modern password hashing function derive a 64-byte output and split it into two 32-byte keys, or if you’re stuck with PBKDF2, use HKDF to derive two different keys from a single 32-byte output of PBKDF2).
Encrypt protectedKey received from the client with serverEncKey and store the result of this encryption (serverProtectedKey) instead of protectedKey.
That’s it. When a user logs in, perform the same key derivation, and if verifier is correct, decrypt serverProtectedKey to get the original protectedKey and send it to the client. (In fact, if we use authenticated encryption, we can just use the fact that serverProtectedKey is successfully decrypted to ensure that the user entered the correct password and not store the verifier, but I like the additional measure in case the authenticated encryption turns out to have side-channel or other vulnerabilities.)
\n\nWhy does it work? In the original scheme, an adversary that has a leaked database can run password guessing attacks against the verifier, which requires an additional KDF, or against the protectedKey (and in the case of Bitwarden, an additional piece of a user’s encrypted data, since the attacker cannot verify whether the decryption of the key was successful), which is easier. In the new scheme, the attacker would have to run the same KDF that is used on the server in any case, whether they want to verify guesses against verifier or serverProtectedKey. Thus, we successfully added additional protection against dictionary attacks on the server side.
\n" }, { "id": "https://dchest.com/2020-05-19-why-password-peppering-in-devise-library-for-rails-is-not-secure", "title": "Why password peppering in Devise library for Rails is not secure", "url": "https://dchest.com/2020/05/19/why-password-peppering-in-devise-library-for-rails-is-not-secure/", "tags": ["Security","Cryptography","Programming"], "date_published": "2020-05-19T00:00:00Z", "content_html": "
Devise is a popular authentication solution for Ruby on Rails. Most web apps need some kind of authentication system for user accounts, and Devise allows adding one with just a few lines of code. This is great for security — if all the developers need to do is plug in a third-party library, there are fewer chances to make a mistake. This, however, requires that the library itself is implemented correctly, which is, unfortunately, not the case for many of them.
\n\nPeppering is a technique for making password hashes useless without a secret key. It helps prevent a class of attacks where attackers get read-only access to the database (for example, via an SQL injection or a leaked backup dump), but don’t have access to the app server, where the secret key is stored. With peppering, it would be infeasible for attackers to perform a dictionary attack on leaked password hashes, because they don’t know and can’t guess the secret key.
\n\nOn the other hand, peppering adds another part to the system that can make it less secure by introducing bugs. Thankfully, Devise doesn’t seem to have such bugs; however, its peppering construction is badly designed and doesn’t provide the security guarantee that peppering should provide.
\n\nDevise concatenates the password with a secret key (pepper) and then feeds the result to bcrypt, which hashes it.
\n\nHere’s the code:
\n\ndef self.digest(klass, password)\n  if klass.pepper.present?\n    password = "#{password}#{klass.pepper}"\n  end\n  ::BCrypt::Password.create(password, cost: klass.stretches).to_s\nend\n\n\nIn theory, without knowing the pepper, it is infeasible to perform dictionary attacks on password hashes. However, Devise developers failed to take into account a design quirk of bcrypt: it only hashes the first 72 bytes of the password and ignores everything after that. This means that if the concatenation of the password and the pepper exceeds 72 bytes, the rest of the bytes are ignored. Since the password comes first, the longer the password, the fewer bytes of the pepper are available for hashing. If the password is 72 bytes or longer, no peppering is done at all.
\n\nHere’s a simple Ruby program to demonstrate that two different passwords that have the same 72-byte prefix produce the same hash:
\n\nrequire 'bcrypt'\n\npassword1 = 'a'*72 + '1'\npassword2 = 'a'*72 + '2'\n\nputs password1\nputs password2\nputs "Passwords equal? #{password1 == password2}"\n\nhash1 = ::BCrypt::Password.create(password1)\nputs hash1\n\np1 = ::BCrypt::Password.new(hash1)\nputs "Password hashes equal? #{p1 == password2}"\n\n\nOutput:
\n\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa1\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa2\nPasswords equal? false\n$2a$12$.NF.TYUDaWe0rVvMIWqb0OzhG6TrVQj7wLERUeeM4yJdALU4oi/Wq\nPassword hashes equal? true\n\n\nAnother mistake in Devise’s peppering scheme is that the pepper is added to the password without a separator, which makes it possible for attackers to guess the pepper value. They can register an account with a 71-byte password, and then keep trying to log in with a 72-byte password by appending a character. If they manage to log in, the character they guessed is the first character of the pepper. Then they can change the password to a 70-byte value and try to log in again, but use the guessed character as the penultimate one, and guess the next character, and so on.
\n\nI recommend using the following construction:
\n\nbcrypt(encode(HMAC-SHA-256(key=pepper, password)))\n\n\nwhere pepper is used as a key for HMAC-SHA-256 and encode is Base64 or hex encoding (which is needed to avoid a fatal mistake — bcrypt expects a NUL-terminated string, but we get plain bytes from HMAC, which may include NUL). In fact, I recommend this construction for prehashing before bcrypt even if you don’t use peppering — just set pepper to “com.myapp.passwordhash” or some other constant — this way you avoid the 72-byte limit: a password of any length will be hashed with HMAC-SHA-256 into 64 hex or 44 Base64 characters, which will then be used as the password input for bcrypt. (Of course, you should still limit passwords to some reasonable length, don’t blindly accept megabytes of data.)
\n\nAs an alternative to peppering, you may consider encrypting password hashes, which has some advantages and disadvantages compared to peppering, but for bcrypt, correctly implemented peppering works well. I discuss this and many other related topics in my book Password authentication for web and mobile apps, which you should read if you want to avoid mistakes in your user authentication code or recognize them in third-party solutions.
\n" }, { "id": "https://dchest.com/2020-05-15-my-book-on-password-authentication-is-out", "title": "My book on password authentication is out", "url": "https://dchest.com/2020/05/15/my-book-on-password-authentication-is-out/", "tags": ["Books","Programming","Security","Cryptography"], "date_published": "2020-05-15T00:00:00Z", "content_html": "
I’m super excited to announce that my book, Password authentication for web and mobile apps, is out! I have a lot more to say in future blog posts about why I decided to write it and what the writing and publishing process was like. Meanwhile, if you’re a developer who wants to understand password authentication and implement it for your web site or your app, please check it out: https://dchest.com/authbook/
\n" }, { "id": "https://dchest.com/2019-01-16-how-to-use-chrome-securely", "title": "How to use Chrome securely", "url": "https://dchest.com/2019/01/16/how-to-use-chrome-securely/", "tags": ["Security","Software"], "date_published": "2019-01-16T00:00:00Z", "content_html": "That is all (for now).
\n" } ] }