Key Discussions https://keydiscussions.com: are tech companies delivering on basic promises? Thu, 12 Mar 2026 17:57:35 +0000

Hacker News moves toward restricting Show HN posts, amid the AI slop wave https://keydiscussions.com/2026/03/09/hacker-news-moves-toward-restricting-show-hn-posts-amid-the-ai-slop-wave/ Mon, 09 Mar 2026 15:42:59 +0000

A couple of weeks ago, there was a great blog post, Is Show HN Dead? No, But It's Drowning. And now HN's top moderator is discussing throttling Show HN somehow, responding in a thread with 600+ upvotes entitled Ask HN: Please restrict new accounts from posting.

I am working on a broader think piece on this that I plan to publish in a week or so (and have already blogged about increasing App Review times), but this thread deserved to be noted now.

Epic gagging: Industry activist Tim Sweeney agrees to be muzzled about Google as part of a settlement that opened up the Play Store https://keydiscussions.com/2026/03/05/epic-case-of-censorship-industry-activist-tim-sweeney-is-muzzled-about-google-as-part-of-a-settlement-that-opened-up-the-play-store/ Thu, 05 Mar 2026 15:11:15 +0000

One thing you could always confidently say about Sweeney was "he's going to speak his mind." Going back decades, he has consistently spoken out against powerful platforms and gatekeepers, including Windows, Apple's App Stores, and Google's Play Store.

Five years ago, Epic Games sued Apple and Google over their app store practices. Recently, the Play Store case settled with some very pro-consumer outcomes: Google agreed to lower its app store cut to 20% or below and to make sideloading less "scary" (Google had been using dark patterns to make all sideloaded software appear as malware, even though Google's own Play Store has hosted malware over the years).

Anyway, Sweeney's muzzling is extraordinary on multiple fronts: 1) it is a rare high-profile non-disparagement example affecting the party that ostensibly "won"; 2) the agreement lasts through 2032, much longer than comparable cases I could find; 3) it enforces a positive, advocacy tone in Sweeney's statements about Google, similar to his first tweet about the matter (so it goes beyond non-disparagement).

Take the closest analog I can find: Elon Musk being barred from disparaging Twitter until his acquisition closed. That covered a much shorter period, and most "quiet periods" run well under a year.

Needless to say, apply extra grains of salt to anything Sweeney says about the industry for the next five years. But I'll also give credit where it's due: thanks, Sweeney, for fighting to loosen the duopoly's stranglehold on app distribution and rent-seeking.

From Tahoe bugs to long app review wait times (even app processing delays), the Apple app developer experience is fraying https://keydiscussions.com/2026/02/26/from-tahoe-bugs-to-long-app-review-wait-times-even-app-processing-delays-the-apple-app-developer-experience-is-fraying/ Thu, 26 Feb 2026 16:04:31 +0000

This post was linked to by the legendary link blog mjtsai, which features additional discussion.

[Update: Since posting, I have continued to face delays, so I've decided to log them. My Mac app's most recent approval: submitted mid-Saturday, March 7; it sat in "Waiting for Review" until March 12 (the review itself was short).]

Average app review times for my Mac app (launched in 2019) have gone up 3-5x, and it appears the same is true for others (there are tons of posts on Apple's forums like this one). Mac app reviews now typically take 5 days for me, and I'm seeing lots of reports of 10+ day waits. While iOS app reviews have been faster for me, others have seen big delays there too. It varies, but it's clear that reviewers are underwater. This is very likely due to the rise of AI-assisted coding, which has in turn led to more app submissions. According to AppFigures, App Store submissions rose 24% in 2025 through November. According to Runway's data, January and February's average and maximum review times were significantly higher than in the last 4 months of 2025.

Apple is failing to meet the moment. Stating the obvious: they need to invest more heavily in app review and scale it up to match the new conditions.

This isn't slowing down: there will be more vibe coders next month, not fewer, putting even more strain on an app review process that is barely tenable as-is. I wonder if/when app reviewers will start leaning on agentic AI tools to speed up their own processes.

5+ day wait times for Mac apps cannot become the new normal: that would be a clear and major regression for Apple's app ecosystem. It will likely get worse before it gets better, but in terms of wait times, we are already basically back to where we were before Phil Schiller became the App Store czar in 2015.

My complaint isn't directed at app reviewers; it's aimed at their execs: allocate way more resources to this!

A couple of notes: the "Request Expedited Review" process can still lead to a snappy review in a pinch, if you're willing to burn one. (And sometimes subsequent submissions remain in the expedited queue.) But you only get a couple(?) per year, and more people are likely using them, so your odds of approval are likely lower, or your expedited wait time longer, than it would have been.

I and others have run into delays in app processing too (the step that must complete before a build can even be submitted for review); I've personally seen it take up to 10 hours! (I never thought this step was prone to delays.) I'm not alone; here are a few threads about it, beginning in January 2026. It only bit me once and was probably a glitch unrelated to the rise in app submissions, but it was extremely frustrating (especially because I knew the review would be long too), and the threads show it has affected developers over a several-week period in 2026.

Tahoe bugs

Scroll to the end if you want to see a couple of flashy Tahoe bugs, but I need to start with a few that have hit me harder, as a macOS user and developer.

First up: I have mitigated two Tahoe bugs where the OS essentially DDoSes my app, either with 1) phantom clicks on dropdown controls or 2) by sending my app hundreds of identical system notifications per minute (specifically NSApplication.didChangeScreenParametersNotification). The first occurred in the betas, but the second cropped up around 26.2. I developed workarounds for these issues, but my guess is that plenty of other apps haven't. My app has been around since 2019 and had never encountered this kind of erratic system behavior before. Perhaps the periodic lagginess I notice in Tahoe is caused by other user-space apps trying to process similar floods of messages. 🤷‍♂️
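My actual workaround isn't shown here, but the general mitigation for a notification flood like this is to coalesce: restart a short timer on each arrival and handle only the last event once the burst goes quiet. A minimal, language-agnostic sketch (shown in Python rather than AppKit; the class name and quiet-period value are my own hypothetical choices):

```python
import threading

class NotificationCoalescer:
    """Collapse a burst of identical notifications into one callback.

    Each arrival restarts a short timer; the handler runs only once the
    burst has gone quiet, so hundreds of duplicates cost one invocation.
    """
    def __init__(self, handler, quiet_period=0.25):
        self.handler = handler
        self.quiet_period = quiet_period
        self._timer = None
        self._lock = threading.Lock()

    def post(self, payload):
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()  # burst still in progress: reset the clock
            self._timer = threading.Timer(
                self.quiet_period, self.handler, args=(payload,)
            )
            self._timer.start()
```

The trade-off is a small delay on every legitimate notification, which is usually fine for something like screen-parameter changes.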

I have also encountered a Tahoe bug (one that's impossible to reproduce and that I first noticed after the betas had concluded) where the OS will stop displaying an NSPopover window when show() is called, leading to frustrated customers who weren't seeing their popover, and then to a massive effort on my part to develop a fail-detection and fallback system. The bug is still active in 26.3. Again, none of these bugs occur on pre-Tahoe macOS versions (my main Mac is still running Sequoia).
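For the curious, a fail-detection-and-fallback system of the kind described can be sketched generically (this is a hypothetical Python sketch, not the AppKit code in my app): request the primary UI, verify shortly afterward that it actually appeared, and fall back if it didn't.

```python
import time

def show_with_fallback(show, is_visible, fallback, verify_after=0.3):
    """Request the primary UI, verify it actually appeared, else fall back.

    show:       callable that asks the system to present the UI (e.g. a popover)
    is_visible: callable reporting whether the UI really showed up
    fallback:   callable used when the primary path silently failed
    """
    show()
    time.sleep(verify_after)  # give the system a beat to present the window
    if is_visible():
        return "primary"
    fallback()                # e.g. present a plain window instead
    return "fallback"
```

In AppKit terms, the visibility check would be something like the popover's shown state, and the fallback might be an ordinary borderless window; the point is simply that the app no longer trusts the OS call to succeed.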

While I share in the common complaints about ugly border radii, awful accessibility, and more, the bugs have been Tahoe's big red flag for me. Tahoe should be read as a clear sign that critical processes within Apple have broken down. Should Apple acknowledge the mess with a "Back to the Mac"-style pow wow, or declare macOS 27 a Snow Leopard/Mountain Lion-style year?

Simply trying to create an App Preview video in 2026…

This is a minor aside, but while I’m at it, I have to mention how rough it is to create a new App Preview movie using Apple-provided QuickTime and iMovie.

Even if you use iMovie's "New App Preview" and "Share -> App Preview" options, the resulting video will be rejected by App Store Connect for multiple errors. I had to resort to two old-school hacks to get a valid App Preview movie into App Store Connect. These workarounds have been around for at least 8 years (because that's when I remember first using them!) and have likely existed for over a decade.

First, I had to add a silent audio track to the project to avoid a "corrupted audio" error. Second, the only way to get correct dimensions for the App Preview was to add an image as the first frame before importing the video. One would think that iMovie, which sports a "New App Preview" menu option and an "App Preview" export option, would offer a better experience.

Maybe there’s some other app I should be using to package my App Preview?

Ending with a couple of flashy Tahoe bugs

This is what it has been like to try to grab a column resize control (with scrollbars set to hidden) for about 5 months (this was fixed on February 11, 2026, with 26.3):

And 🤪:

Apple listens more to press than they do to their own Feedbacks/Radars, so hopefully this will get through.

Don't even "Dismiss" the "How is Claude doing this session?" prompt, as it may compromise your chat session's privacy https://keydiscussions.com/2025/09/29/dont-even-dismiss-the-how-is-claude-doing-this-session-prompt-as-it-may-compromise-your-chats-privacy/ Mon, 29 Sep 2025 20:22:55 +0000

You have to be obsessive-compulsive to avoid the privacy landmines that companies like OpenAI and Anthropic spread throughout their products.

Yesterday I wrote a blog post about how the "How is Claude doing this session?" prompt seemed like a feature designed to sneak more data out of paying Claude users who had opted out of sharing data to improve and train Anthropic's models. But in that post, I could only theorize that even tapping "0" to "Dismiss" the prompt might be considered "feedback" and therefore hand the session chat over to the company for model training.

I can now confirm that tapping "0" to Dismiss is considered "feedback" (a very important word when it comes to privacy policies). When you do, Claude says "thanks for the feedback … and thanks for helping to improve Anthropic's models". (I'm paraphrasing, because the message lasted about 2 seconds before vanishing.) Obviously this is NOT what I or others are trying to accomplish by tapping "Dismiss". I assume this is NOT a typo on the company's part, but I'd be interested in a clarification from the company either way. I'd wager a fair case could be made that classifying this response as (privacy-defeating) "feedback" runs afoul of contract law (but I am not a lawyer).

Anyway, I clicked it so you won't have to: I would not interact with that prompt at all. Just ignore it.

Original post:

Assume that “How is Claude doing this session?” is a privacy loophole

I am a power user of AI models, who pays a premium for plans claiming to better-respect the privacy of users. (Btw, I am not a lawyer.)

With OpenAI, I pay $50/month (2 seats) for a business account vs a $20/month individual plan because of stronger privacy promises, and I don’t even need the extra seat, so I’m paying $30 more!

Yet with OpenAI, there is this caveat: “If you choose to provide feedback, the entire conversation associated with that feedback may be used to train our models (for instance, by selecting thumbs up or thumbs down on a model response).”

So I never click the thumbs up/down.

But I’m nervous… Notice how that language is kept open-ended? What else constitutes “feedback”?
Let’s say I’m happy with a prompt response, and my next prompt starts with “Good job. Now…” Is that feedback? YES! Does OpenAI consider it an excuse to train on that conversation? 🤷 Can I get something in writing or should I assume zero privacy and just save my $30/month?

I was initially drawn to Anthropic’s product because it had much stronger privacy guarantees out of the gate. Recent changes to that privacy policy made me suspicious (including some of the ways they’ve handled the change).

But recently I’ve seen this very annoying prompt in Claude Code, which I shouldn’t even see because I’ve opted OUT of helping “improve Anthropic AI models”:

What are its privacy implications? Here’s what the privacy policy says:

“When you provide us feedback via our thumbs up/down button, we will store the entire related conversation, including any content, custom styles or conversation preferences, in our secured back-end for up to 5 years. Feedback data does not include raw content from connectors (e.g. Google Drive), including remote and local MCP servers, though data may be included if it’s directly copied into your conversation with Claude…. We may use your feedback to analyze the effectiveness of our Services, conduct research, study user behavior, and train our AI models as permitted under applicable laws. We do not combine your feedback with your other conversations with Claude.”

This new prompt seems like "feedback" to me, which would mean typing 1, 2, or 3 (or maybe even 0) could compromise the privacy of the entire session. All we can do is speculate, and I'll say it: shame on the product people for not helping users make a more informed choice about what they are sacrificing, especially those who opted out of helping to "improve Anthropic AI models".

It’s a slap in the face for users paying hundreds of dollars/month to use your service.

As AI startups keep burning through unprecedented amounts of cash, I expect whatever "principles" their founders may have had, including about privacy, to continue to erode.

Be careful out there, folks.

Assume that "How is Claude doing this session?" is a privacy loophole https://keydiscussions.com/2025/09/28/how-is-claude-doing-this-session-and-the-feedback-privacy-loophole/ Sun, 28 Sep 2025 21:25:23 +0000

I am a power user of AI models who pays a premium for plans claiming to better respect the privacy of users. (Btw, I am not a lawyer.)

With OpenAI, I pay $50/month (2 seats) for a business account vs a $20/month individual plan because of stronger privacy promises, and I don’t even need the extra seat, so I’m paying $30 more!

Yet with OpenAI, there is this caveat: “If you choose to provide feedback, the entire conversation associated with that feedback may be used to train our models (for instance, by selecting thumbs up or thumbs down on a model response).”

So I never click the thumbs up/down.

But I’m nervous… Notice how that language is kept open-ended? What else constitutes “feedback”?
Let’s say I’m happy with a prompt response, and my next prompt starts with “Good job. Now…” Is that feedback? YES! Does OpenAI consider it an excuse to train on that conversation? 🤷 Can I get something in writing or should I assume zero privacy and just save my $30/month?

I was initially drawn to Anthropic’s product because it had much stronger privacy guarantees out of the gate. Recent changes to that privacy policy made me suspicious (including some of the ways they’ve handled the change).

But recently I’ve seen this very annoying prompt in Claude Code, which I shouldn’t even see because I’ve opted OUT of helping “improve Anthropic AI models”:

What are its privacy implications? Here’s what the privacy policy says:

“When you provide us feedback via our thumbs up/down button, we will store the entire related conversation, including any content, custom styles or conversation preferences, in our secured back-end for up to 5 years. Feedback data does not include raw content from connectors (e.g. Google Drive), including remote and local MCP servers, though data may be included if it’s directly copied into your conversation with Claude…. We may use your feedback to analyze the effectiveness of our Services, conduct research, study user behavior, and train our AI models as permitted under applicable laws. We do not combine your feedback with your other conversations with Claude.”

This new prompt seems like "feedback" to me, which would mean typing 1, 2, or 3 (or maybe even 0) could compromise the privacy of the entire session. All we can do is speculate, and I'll say it: shame on the product people for not helping users make a more informed choice about what they are sacrificing, especially those who opted out of helping to "improve Anthropic AI models".

It’s a slap in the face for users paying hundreds of dollars/month to use your service.

As AI startups keep burning through unprecedented amounts of cash, I expect whatever "principles" their founders may have had, including about privacy, to continue to erode.

Be careful out there, folks.

Google AI Overviews and ChatGPT can get it wrong (or very wrong) about your product https://keydiscussions.com/2025/02/05/when-google-ai-overviews-and-chatgpt-get-it-very-wrong-about-your-product/ Wed, 05 Feb 2025 20:15:17 +0000

[If you're here for the worst AI fail, I'll spoil it: ChatGPT hallucinated my app's purpose and basically described it as some sort of keylogger, which it absolutely isn't 🤷‍♂️. Google AI Overviews also failed both of my tests. There is some discussion of it here. Now, the article:]

Google search isn’t what it used to be.

Google now displays AI Overviews, short AI-generated snippets that attempt to satisfy your query, above its search results for over 20% of informational queries in over 100 countries, reaching 1B+ users.

The feature quietly delivers subtle inaccuracies to Google users every day, undermining Google's own search results. Here are a few examples others have noted, including one study from October claiming that AI Overviews provided misleading or inaccurate responses in 43% of finance-related searches. Here's a distressed business owner complaining about Google AI Overviews misrepresenting their product. The feature had a rocky debut last year, when several blatant inaccuracies led Google to pull it down temporarily.

If you are a creator building something new, one essential thing has been true for ~20 years: you need users to be able to find your thing via Google. If AI Overviews feed users misinformation that prevents that, that's bad. Similarly, when potential customers ask AI assistants like OpenAI's ChatGPT about your product, you don't want them to be misinformed. Ideally, users could even discover your product through these tools.

I wanted to see how potential users might find out about my app these days, so I ran these tests. I found that Google and OpenAI confidently pass along total inaccuracies to potential customers. As you'll see below, the AI search tools failed 5 out of 6 tests, and failed egregiously once (via ChatGPT's default prompt box).

Example #1 and 2, AI Overviews:

A few months ago, my app gained a simple new feature: being able to display a custom image in your Mac’s menu bar. As far as I can tell, it’s the first app that lets you do that.

Here it is showing up in Google search results, as one might expect 👍:

On pages 2 and 4 of results, that’s great for a new feature!
Ok, so what’s the problem?

This is the AI Overview sitting above all the search results for the same query:

So if I’m a person Googling for a product that does X, and Google says matter-of-factly “that doesn’t exist” (even when it does): What are the odds I’ll push past that misinformation and see that it’s actually present in the search results? Pretty low, I’d imagine.

(The second part of the AI Overview is true, but that doesn't make up for the fact that the first, highlighted part is false.)

As far as I can tell, Google’s AI Overviews feature is undermining search results and dramatically hurting the discovery of long tail information.

(BTW, here’s the feature in question)

When asked to generally describe my app, Google AI Overviews provided an accurate but incomplete description — one that missed the app’s primary function that it’s known for (the ability to jump between specific Spaces on a Mac and assign names/icons to them in the menu bar). 🤷‍♂️ Not great.

Examples #3 and 4, ChatGPT free tier (very wrong):

Meanwhile, the highest-use, free tier of ChatGPT fails the same test and spectacularly fails a separate one:

First, when asked the same question as above ("how can I put a custom picture in the menu bar of my mac"), it suggests some apps that do other things, but none that "put a custom image in the menu bar". Whatever; they're cool apps, but the exercise is largely a waste of time given the query.

The much bigger fail: when asked more generally about my app, it gives a flat-out wrong, hallucinated description of what my app is and does, one that paints it in a negative light.

So it not only fails to describe my app; ChatGPT says the app's purpose is to keep stats on your keystrokes(?!). That's slander as far as I'm concerned. It sounds like it hallucinated that description because my app's name is CurrentKey Stats, and it basically made an incorrect guess off the name alone. Like all of these AI search tools, its output reads as totally confident.

There is fine print along the bottom of the page that says "ChatGPT can make mistakes. Check important info." Right … and that raises some obvious questions about its value as a search tool.

[Some background: my app isn't new, and the chatbots should know about it (some do). It has been around for 6 years, has 120+ ratings globally in the Mac App Store with a 4.5 average, has been written about by bloggers, has popular reddit posts, etc. (it is definitely in ChatGPT's training data). One would hope an AI assistant like ChatGPT could get a basic description of a years-old app correct.]

Examples #5 and 6, ChatGPT “Search”

If you log into ChatGPT, specifically select “Search”, and ask it “how can I add a custom image to my menu bar mac”: it performs the same as the free tier and suggests some apps that don’t accomplish the task. That’s a fail but whatever.

However, with the query "i own a mac, would currentkey stats be good for me?" (the same as in example #4), it actually delivers a useful, accurate, and adequately complete description of the app, unlike the basic free tier and unlike AI Overviews. It pulls from three authoritative sources and provides links; here it is:

So how many people click “Search” in the logged in experience vs. simply using the basic ChatGPT prompt as a search engine? Only OpenAI employees know, but I’d guess far more people use the basic ChatGPT prompt.

Conclusions

How has all of this impacted the business side of my app? It's impossible to say, because you can't easily prove a negative. There has definitely been some impact, though: AI tools are taking huge bites out of the search market. It's easy to forget just how popular ChatGPT remains (even beyond its record-breaking launch that got a bunch of press): the app has consistently ranked among the top 5 most-downloaded apps in the US for about three years.

Of course, search engines have always had an incomplete picture of the world's information and are prone to missing things. But whereas search engines used to simply omit info that had yet to be indexed, they now very clearly and confidently offer wrong information much of the time. The latter is far worse, especially for discovering new things. Over time, one has to wonder whether the average person will lose the skill of finding non-obvious information. Given how widespread Google AI Overviews are, the topic appears not to have been extensively covered in academia (at least based on a few arxiv.org and scholar.google.com searches).

So what can be done if you find AI search tools passing along wrong info about your creation? Maybe contact Google Support. But I think the best thing you can do is publish more correct info to the web [say, in a blog post ;)] and hope the models correctly train on it in their next pass.

Apple starts pushing AirPods Pro 2 and AirPods 4 owners into Transparency or Noise Cancellation modes repeatedly, without an easy opt-out https://keydiscussions.com/2025/01/14/apple-opts-airpods-pro-2-and-airpods-4-owners-into-loud-sound-reduction-which-sounds-great-but-forces-users-into-transparency-or-noise-cancellation-modes-with-no-easy-way-to-opt-out/ Tue, 14 Jan 2025 13:55:00 +0000

(If you want to skip directly to the fixes, click here. If you want to skip to some genuine praise of Apple, jump to the next section. There's a lively discussion on HN here and on 9to5Mac here.)

A couple of weeks ago I noticed my pair of AirPods Pro 2 aggressively switching me into Transparency mode. It seemed like a bug. Again and again I would have to manually switch back out of Transparency mode. Annoying.

Then, a few days later, Apple removed the ability for me to switch out of Transparency mode altogether!

There are ways to reverse each of these changes (the forced switching and the removal of Off), but the whole process was a major pain to figure out as a user, it wasn't simple to reverse even once I knew how, and there was no heads-up that I remember getting from Apple explaining the changes. This left me, and a lot of other people, confused.

Well over 100M people own AirPods. Here are some reddit posts (1, 2, 3, 4, 5) made by users frustrated over these specific AirPods changes. Notably, none of these reddit posts contain in their comments all of the steps needed to revert the changes.

Quick summary of Noise Control modes: Transparency, Adaptive, Noise Cancellation, and Off:

If it’s Off, that means your AirPods pipe audio into your ears without any extra processing or special sound alteration. This conserves battery and sounds better to me than Transparency. Great! I prefer this.

When Transparency mode is enabled, it “passes through” some of the noise around you, so you can have higher awareness of your surroundings. Neat! But this means that, whenever it’s enabled, I hear a hissing sound (at a minimum) that I otherwise wouldn’t. It also burns more battery.

Noise Cancellation is self explanatory — it cancels out annoying sound. I use ANC sparingly because it makes my inner ear feel different, but I think it’s great on a plane or train. Adaptive tries to intelligently switch between Transparency or Noise Cancellation. All of these active modes burn through your AirPods’ battery at a faster rate.

To recap: my AirPods aggressively started switching me into Transparency, and then the "Off" option was removed by Apple altogether (on both iOS and tvOS). With no warning — just, poof!

After Googling about it, I learned that others were hitting this issue. Apple had indeed removed “Off” but buried the means of bringing it back. Some Googling, tap tap tap, and here’s the first buried setting I had to find:

the first buried setting you need to find

Now the Off option had returned… But not on tvOS? Anyway, I was just thankful to have fixed… oh wait…

The AirPods still kept switching me from Off to Transparency mode. So after all this Googling and research time, I was merely back to my initial problem! Here's how I was feeling at this point:

My Airpods have been glitching so hard it has turned me into an airpods hater. No, for the hundredth time, i don't want them in Transparency mode (this still happens to me even after the latest firmware update where you have to manually opt back into having an "Off" state).

Spencer Dailey (@spencerdailey.bsky.social) 2025-01-07T15:50:47.736Z

In frustration, I eventually Googled my way down a rabbit hole and learned that all of this is likely tied to a relatively new feature called Loud Sound Reduction, which only works when AirPods are in an active "Noise Control" mode. Perhaps Apple recently decided that everyone needed this feature enabled, and that's why they made all these annoying changes to Noise Control? I can only speculate.

Anyway, so surely Loud Sound Reduction can be disabled (so my AirPods would hopefully stop switching to Transparency mode)?

This was a dead end

There it is! But guess what? Nope: that's a read-only field that looks like a button but isn't one. Hm. So… back to more Googling!… And I found that to disable Loud Sound Reduction you must go to: Settings -> Accessibility -> AirPods -> <Name of your AirPods> -> disable Loud Sound Reduction.

Wow! That’s a lot of time, taps, and user frustration to merely get something back to how it originally was!

But you know what? tvOS still did not show an "Off" mode for my AirPods Pro 2! I ended up needing to hard-reset my AirPods, change all the settings mentioned above on iOS a second time, and then let tvOS rediscover them before "Off" would appear there. PHEW!

Finally! I was able to get things back to how they were before. What a journey!

Kind of random, but for those who enjoyed this tale, you may enjoy this email from Bill Gates to leaders at Microsoft, about how hard it was for him to install a piece of software from Microsoft’s website.

Why gripe about Apple products?

I love to write about Apple. Why?

For the most part, Apple productively listens to users, reporters, podcasters, and other creators. Obviously they can't please everyone: they have long-running disagreements with lots of developers over things like App Store cuts, and they don't always respond positively to what some consider reasonable suggestions (especially if you're a regulator). Apple is not without its own problems and hypocrisies, but it still does a better job of listening to feedback than many tech companies.

And Apple stands in stark contrast to a handful of childishly spiteful big tech companies. They shall remain nameless in this post, but they have been known to kick users off their platforms, brick users' products, shadowban users (while simultaneously preaching the evils of shadowbanning), or sic fanboys on you if you publish opinions (or facts) they don't like. This has been happening frequently since mid-2022. It's straight-up targeted censorship (despite the same companies' gaslighting on the issue).

Apple seems to be run by adults most of the time and it remains rewarding to write about them.

Case in point: here are a few examples of previous posts on Apple that got significant attention (1, 2, and 3). They generally lambasted Apple over product decisions, and these "negative" stories garnered huge traffic (up to the top of reddit). Nowadays, though, a similarly negative and popular story about certain other tech companies would come with serious potential downsides: it may turn a best-case scenario for a blogger (getting a lot of traffic) into a worst-case scenario (being targeted with retaliation). So the ROI of writing for free about Apple is comparatively much higher, especially for someone deeply invested in their platforms (as a user and app developer).

[The iOS UI/robocalls post may have effected some change: it got to the top of reddit and came out about a year before Apple fixed the issue. I've written some positive things about Apple too (1, 2).]

Furthermore, whether for business reasons or not, Apple has at least preached [Steve Jobs clip] a belief in a strong free press in the US for a long time. That stance is refreshing in an era where anti-press rhetoric and physical violence against reporters have hit a high in the US, and prevailing big-tech-supported political movements are categorically painting the press as an enemy of the people. It has become popular for leaders of some other tech companies to broadly bash members of the “legacy” media (a pejorative term for traditional journalists doing their work).

Apple has also historically preached restraint [Steve Jobs clip] in its responses to (what they feel are) biased stories, and the company is better for it.

The leaders at Apple have simply been adults acting like adults, most of the time. This simple fact has led to better products and was part of its journey to becoming the world’s most profitable business in 2021. It sounds obvious but bears repeating [in light of what’s become the rage in culty tech circles these days]: Acting like childish brats toward users, creators, and the press was not a part of Apple’s recipe to becoming the world’s most profitable business. Apple understands the value of keeping critics in their own feedback loop.

These are the reasons I still like to write about Apple in 2025.

Steps to revert the late-2024 changes to your AirPods Pro 2 or AirPods 4

In the last weeks of 2024, Apple changed its newer AirPods to remove the “Off” Noise Control option and (separately) push users into Transparency mode all the time. This seems to be a consequence of its forced rollout of the Reduce Loud Sounds feature. Here are the software steps needed to revert the changes. If the changes aren’t reflected on other devices like Apple TV, I found that I needed to hard-reset my AirPods Pro 2, perform these settings changes (in the video), and then have my Apple TV rediscover them.

]]>
Napkin math suggests Bitcoin will perish unless its mining incentives change https://keydiscussions.com/2024/08/13/napkin-math-says-bitcoin-will-perish-unless-it-dramatically-changes-its-incentive-structure-for-miners/ Tue, 13 Aug 2024 20:45:00 +0000 https://keydiscussions.com/?p=997 Continue reading Napkin math suggests Bitcoin will perish unless its mining incentives change]]> For years, analysts have gone on channels like CNBC calling Bitcoin “digital gold”, and many everyday crypto investors truly believe that. But gold has been a “store of value” for millennia. Could Bitcoin, as we know it today, retain its value long term, say even just fifty years? The simple math tells us it likely can’t. Let’s dig into why.

Bitcoin (as we know it) faces a major issue in the coming decades: its miner rewards are vanishing, essentially going to zero. This is because, written into its code, the number of bitcoins that miners receive per block mined drops by 50% every 4 years, aka the “halving”. (Put another way, well over 19.5M of the 21M bitcoins that will ever exist have already been mined. Investors gleefully scream “scarcity!”, but I think they miss the forest for the trees.) Miners are also rewarded via transaction fees, but not remotely to the extent that they are from base rewards. On average only about 3% of miner revenue comes from transaction fees, though on very rare (record-setting) volume days, transaction fees can account for more than base rewards[1].
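For the napkin-math-inclined, the halving schedule is easy to sketch. This is a back-of-the-envelope model, not consensus code (the real protocol counts integer satoshis and truncates), using the well-known parameters of a 50 BTC initial subsidy and a halving every 210,000 blocks, roughly every four years:

```python
# Back-of-the-envelope model of Bitcoin's halving schedule.
# Assumptions (well-known protocol parameters, not consensus code):
# 50 BTC initial subsidy, halving every 210,000 blocks (~4 years).
INITIAL_SUBSIDY = 50.0
BLOCKS_PER_HALVING = 210_000

def block_subsidy(halvings: int) -> float:
    """BTC paid to a miner per block after `halvings` halvings."""
    return INITIAL_SUBSIDY / (2 ** halvings)

# Total coins ever issued converges toward 21M ("scarcity!"):
total = sum(block_subsidy(h) * BLOCKS_PER_HALVING for h in range(64))
print(f"asymptotic supply: ~{total / 1e6:.1f}M BTC")

# Base rewards as a fraction of today's, with one halving every 4 years:
for years in (20, 40):
    frac = 100 / 2 ** (years // 4)
    print(f"{years} years out: base rewards are {frac}% of today's")
```

Running it reproduces the figures used below: ~21.0M BTC total, with base rewards falling to 3.125% of today’s in 20 years and 0.09765625% in 40.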

A basic primer: miners spend big to secure the Bitcoin network; it costs them a lot in electricity to “mine” new blocks. If mining profits drop, fewer miners will mine, and, if enough drop out, the network’s difficulty will decline to the point that the network is no longer secure. For this very reason, many “small cap” coins suffer “51% attacks” [many examples: 1,2,3,4,5]. They’re called “51% attacks” because they are “an attack on the blockchain, where a group controls more than 50% of the hashing power”. In other words: any attacker who can spend more on electricity and compute hardware than the “good” miners can overwhelm and compromise the network. These attacks are devastating and usually mean a cryptocurrency is doomed. So maintaining sufficient miner rewards is… everything in a “proof of work” system like Bitcoin’s.

A simple known truth: twenty years from now, miners will make exactly 3.125% of what they make today in base rewards per mined block. Forty years from now, that figure drops to 0.09765625%. But miner profits are what keep the network secure. So how will this turn out? Let’s dig further:

According to YCharts, on July 13th 2024, miners made a total of $30.1M USD worth of crypto for their work securing the network and processing transactions. I chose July 13th because it was a fairly average day for Bitcoin. $30.1M is actually not that much money. (I can imagine plenty of nation-backed attackers who might be willing to spend far more than $30M per day to kill Bitcoin. 🤷‍♂️) But, for the purposes of this article, let’s say that $30M+ is enough to secure the network. Of that ~$30.1M on July 13, only $1.167M (3.89%) was from transaction fees. The fees per transaction that day were fairly average on a 3-year view, about $1.50 per transaction. And I’m being charitable: that day’s $1.167M in transaction revenue was well above the average for the past month. (August 11th, for example, had $402K in transaction revenue on $31.27M in total revenue, or 1.3% of total miner revenue. But again, let’s be charitable and go with the July 13th numbers and 3.89%.)

So, let’s say miners need to make ~$30M/day to defend against 51% attacks over the coming decades. There are precisely four ways miners could theoretically continue to make roughly this amount per day, but none of them reflect Bitcoin “as we know it”: 1) the price of Bitcoin continues to rise enough to offset the “halvings”; 2) transaction fees make up for the loss of base rewards by jumping ~26x to ~$37.50 per transaction ($30.1M / $1.167M, using the charitable numbers above), while transaction volume somehow neither drops nor turns volatile; 3) the volume of transactions goes up by ~26x; or 4) Bitcoin’s incentive structure, in its code, changes to no longer be deflationary.

Implausible Scenario #1: The price of Bitcoin continues to rise enough to offset its every-four-year “halving”. In order for that to happen, it would need to double in value every four years, which would double its market cap every four years too. Right now, Bitcoin has a value of about $60K and a market cap of about $1.1T. So, in 20 years, it would need to have a market cap of $35.2T to offset the “halving” effect. In 40 years, $1,126T, or $1.126 quadrillion, which is more than 10 times all the money in the world combined. Just for its base miner rewards to pay out $30M/day. Reaching a market cap higher than that is too ludicrous to consider. So basically, we can definitively say that, within 40 years, it’s impossible for Bitcoin’s possible rise in value to offset the “halving” effect.
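The compounding in Scenario #1 is easy to check with the post’s own figures (~$1.1T market cap today, a doubling required every four years to offset each halving):

```python
# Napkin check of Scenario #1, using the figures from the post:
# ~$1.1T market cap today, and a price/cap doubling needed every
# 4 years to offset each halving.
CAP_TODAY_T = 1.1  # market cap today, in trillions of USD

def required_cap(years: int) -> float:
    """Market cap (in $T) needed `years` from now under this scenario."""
    return CAP_TODAY_T * 2 ** (years // 4)

print(f"20 years: ${required_cap(20):.1f}T")   # $35.2T
print(f"40 years: ${required_cap(40):,.1f}T")  # $1,126.4T, over a quadrillion
```

Twenty years is 5 doublings (32x); forty years is 10 doublings (1,024x), which is where the quadrillion-dollar figure comes from.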

Implausible Scenario #2: As we know, base rewards will decrease by ~97% over the next 20 years, so this scenario involves transaction fees jumping ~26x to make up for that, without transaction volume dropping by a commensurate amount. That could theoretically cover the revenue shortfall from decreased base rewards, but it seems extremely unlikely. Instead of Bitcoin holders paying $1.50 on average to move their coins, they would need to pay ~$37.50, which would likely cause them to make fewer transactions, which reduces overall miner revenue. Currency is only useful if you can move it easily, or transfer it between people. In the early days of Bitcoin, people talked about using it to buy pizza or a cup of coffee. Can you imagine if Visa charged $37.50 per transaction? Businesses already gripe about the ~2% that credit cards take. This increase in transaction fees would be necessary to pay the miners, but it would also spell doom for Bitcoin as a useful currency. (The Lightning Network project addresses transaction speeds for end users but does nothing to directly address miners’ impending revenue shortfall.)

So the first issue with Scenario #2 is somehow sustaining $37.50 average transaction fees without killing transaction volume. But the second issue is even thornier: the historical volatility of transaction volume. Look at the chart of daily transaction fee revenue: on some days relatively few transactions happen, while on others a ton do. Whereas base rewards are perfectly consistent and predictable, transaction volume is anything but. So even if, 20 years from now, transaction fees somehow make up for the 97%+ decrease in base rewards, what happens on a day with far fewer transactions? Miners would have less incentive to mine, and a 51% attack would be cheap.

Implausible Scenario #3: Okay, but what if Bitcoin’s usage (transaction volume) went up 26x, making up the revenue that way? However unlikely, this is the only real path through the implausible scenarios. The Bitcoin network itself isn’t built to handle a lot of transactions (its blockchain is already huge and inefficient). But, theoretically, something like the Lightning Network (if tons of people adopt it) could enable far more transactions. A few issues with that particular route: a) I’ve found Lightning’s approach rather clunky (you have to transfer BTC into a Lightning node, a step I’m not sure many would take); b) it advertises “exceptionally low fees”, which defeats any chance of charging enough per transaction to make up the shortfall; c) it’s been live for about a year, and there are no outward-facing signs (that I know of) that it is seeing massive adoption.
And then, finally, even if transaction volume went up by 26x, via Lightning or some other project, Bitcoin’s complete reliance on transaction fees would still leave it saddled with the volatility issue (where a prolonged lull in transactions would leave the network vulnerable).

Plausible Scenario #4: Eventually acknowledging this issue, the Bitcoin core developers change the project’s code to produce a fixed rate of base rewards per block instead of a diminishing one (much as Ethereum did before its switch to proof of stake), or move to proof of stake or something else.

Conclusion: Scenario #4 could fix the issue. It would also mean all the big Bitcoin advocates who have talked up the “there will never be more than 21M bitcoins” aspect of the coin to retail investors would have to eat their words. That may be a bitter pill to swallow for some, but I honestly think it will be necessary.

Footnote: YCharts revenue per day, avg. transaction fee/day, and total transaction fee revenue.

Disclaimers: I primarily want this post here as an “I told you so” many years down the line. I’m not counting on it taking off on forums like Hacker News, as it will likely get quickly flagged by Bitcoin stans (as happened to me on this topic in 2021). But I also want to raise the issue now for Bitcoin’s own sake, as it is something the project still has time to address.

[1] On 2024’s peak day, April 20, 2024, miners received $107.76M. Rewards had just been cut in half, which coincided with an unusually high number of transactions and record TX fees that briefly spiked to $128.45/transaction (doubling the previous record, and 4x higher than at any point in the prior 3 years). That day, the network’s hash rate rose 74%. On a normal day, TX fees run $0.50–$4.50, averaging around $1.50.

]]>
Presenting CurrentKey Stats, my (and hopefully your new) Mac app! https://keydiscussions.com/2024/06/19/presenting-currentkey-stats-my-and-hopefully-your-new-mac-app/ Wed, 19 Jun 2024 18:16:00 +0000 https://keydiscussions.com/?p=1062 Continue reading Presenting CurrentKey Stats, my (and hopefully your new) Mac app!]]> Hey there! I post musings here and edit at Techmeme, but I also write software! Today I’m proud to present CurrentKey Stats, in the Mac App Store! It helps you organize and keep track of whatever you’re accomplishing on your Mac. The demo describes it pretty well:

Because it supports AppleScript, it enables some pretty cool advanced use cases too.

My hope is that it can help you use your computer more efficiently and deliberately. 💫 If you want to read more, check out everyone’s reactions over at the macOS community on reddit.

]]>
Companies embracing SMS for account logins should be blamed for SIM-swap attacks https://keydiscussions.com/2024/02/05/sim-swap-attacks-can-be-blamed-on-companies-embracing-sms-based-password-resets/ Mon, 05 Feb 2024 23:42:49 +0000 https://keydiscussions.com/?p=823 Continue reading Companies embracing SMS for account logins should be blamed for SIM-swap attacks]]> [UPDATE Since this was posted in 2024:
Major US telcos like AT&T, T-Mobile, and Verizon have suffered a months-long breach (the ongoing Salt Typhoon attack). These companies, of course, carry the unencrypted SMSes vital to countless login flows, account re-activations, and password resets. The carriers themselves are now known to be compromised.

With all US telecoms compromised, likely well into the future, and the basic, enduring fact that SIM-swap attacks are rampant and an essentially intractable problem in the short or medium term, ALL companies should fully dispel the notion that SMS used at any part of the login flow offers any real security to its users. It instead makes users vulnerable. I’m not a lawyer, but in my opinion courts and juries should recognize the state of the industry in 2025 and hold companies fully liable for hacks emanating from SMS abuse.

Some companies, like Google, are beginning to at least acknowledge SMS’ weaknesses and move away from it, in limited fashion, for products like Gmail. But MANY others, including companies that manage your money like Chase and Block/Square (and companies that manage your code, like Apple), still rely on SMS in their login processes, which is insane in the face of a never-ending onslaught of SIM-swap attacks. Meanwhile, the likelihood of new US regulation enacting any change on this front is very slim, and the feds’ ability to help in other ways is being dismantled. So it’s safe to say the situation has degraded since this article’s original posting, not improved.]

SIM-swap attacks continue year after year because companies (that know better) leaned into the awful idea of using SMS for password resets and account logins. These companies include Apple, Dropbox, PayPal, Block, Google, and many others.

What is a SIM-swap attack? It’s where a bad guy asks a carrier to port your cell-phone number to their phone. (Carriers are required to port your number easily because of pro-competition laws in the US.) Then, the crook triggers and receives account login info via SMSes from companies and proceeds to steal money and sensitive info from the victim. It happens all the time… Here are just a few of the higher profile instances:

Is there a way to stop SIM-swap attacks? Yes, and it’s simple: companies SHOULD NOT LET CUSTOMERS LOG IN via SMS, or allow SMS-based password resets. If SMS 2FA is offered at all, it should only be alongside more secure options like Authy or Google Authenticator (and SMS should never serve as a fallback for account recovery).
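For context on why authenticator apps are safer: the codes that apps like Authy and Google Authenticator generate come from TOTP (RFC 6238), which derives a code from a shared secret and the current time, with no phone network in the loop to hijack. Here is a minimal sketch using only Python’s standard library (the secret below is the RFC’s published test value, not a real credential):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, t=None, digits=6, period=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890"
# (base32-encoded below), time=59s -> 8-digit code "94287082".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))
```

Nothing here travels over a carrier network, so porting a phone number gets an attacker nothing.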

For many years, people in the industry have invariably said something like: “Well… offering SMS-based authentication is better *overall* for customer security, because of its convenience (despite its shortcomings) versus other methods” (such as the far more securable use of email for verification). To that I say: who are *YOU* to deprive your customers of security? Defending against targeted attacks must be an integral part of any company’s defense posture. It’s arrogant to say otherwise, and it makes my blood boil, it really does. Offering SMS-based logins is a bad idea, and it never had a chance of being a good one.

Sending an SMS to a customer is like sending a postcard through the mail. It’s plaintext (not encrypted), and anyone can open your mailbox and intercept/read it (which is what happens in a SIM-swap attack). The protocol was never designed to be secure.

Is SMS the best option for password resets? NO! Resetting passwords via email is far more secure. Is SMS a good 2FA option? No! Apps like Authy, or email, are better. Is logging your customer in via SMS ever acceptable? No! [After reading the Hacker News comments, let me be clear: I’m not just talking about SMS 2FA. I’m primarily talking about the ubiquitous state of SMS-based password resetting, user onboarding, and account recovery. All are varying degrees of weak. SMS-based 2FA, when offered as an option alongside stronger 2FA, is the least-bad of the weak-security scenarios, but it is not the focus of this post.]

Much of the ire relating to SIM-swap attacks has, understandably, been directed at carriers. Indeed, carriers do a terrible job of securing customers’ phone numbers, and may be liable for that shortcoming. But here’s the thing: carriers’ security has always been bad; it has even been legislated into being bad; and other companies have still chosen to build mission-critical systems on top of that weak link.

Despite it being commonplace, it is important to remember that baking SMS into authentication flows was an awful, shortsighted choice made by companies. Though it offers poor security, SMS provides a nearly frictionless way to sign up new customers (think of Uber’s onboarding) and handle password resets, and companies felt they had to match competitors’ adoption of this technique. They dug the hole, pushed us in, and now they must get us out.

Companies adopt the naive outlook that, somehow, crooks won’t try hard enough to SIM-swap individuals. Clearly the criminals will, even to the point of pretending to be customers at physical store locations. It’s time to call it on this experiment. It failed.

And I’m sorry, but after nearly a decade, we can call it: efforts to strengthen telephony protocols like SHAKEN/STIR will never reach the point of being fully adopted and strictly enforced, i.e. useful. If the willpower had existed in the industry, it would have happened 5 years ago. Promises of protocol upgrades never were (and certainly are not now) a satisfactory excuse to keep sending password reset codes over SMS. Nor would a stronger protocol even stop SIM-swap attacks. People are being harmed day in and day out while the industry equivocates. [Note: the EU’s “SIM Verify” initiative is worth a look.]

While SIM-swapping attacks are prevalent and headline-grabbing, SMSes are also vulnerable to man-in-the-middle attacks. These are likely carried out frequently by nation states. The fact that nation states can abuse SMS verification may even explain some of the overall inertia in allowing a broken system to remain.

If I sound heated, it’s because I’ve been banging this drum for over 7 years. Others wrote about it years ago, and yet SIM-swap attacks continue unabated. I’m frustrated because many of these companies talk a big game about putting their customers’ safety and security first. I’m mad because, of all the intractable problems facing tech nowadays, like deepfakes (including audio deepfakes that I wrote about here) and disinformation, this is one that can actually be solved, and yet nothing concrete is being done. We need a win, and here’s one for the taking!

To repeat: If some random person convinces T-Mobile, AT&T, Verizon, etc to port my number, MY DIGITAL SAFETY SHOULD NOT BE PORTED AS WELL.

How companies embraced this broken tech

Apple:

Apple helped seal SMS’ role in password resets and account logins via a keyboard feature announced in 2018: Automatically fill in SMS passcodes on iPhone. It also allows scenarios where SMS can be used to reset your Apple account, and it uses SMS/phone calls as second-factor authentication for developer accounts. Apple must dispel the notion of SMS adding any functionally reliable layer of “security”. Second-factor authentication is GREAT when it’s not SMS! Apple even has a slick implementation of native 2FA that is completely undermined by having an SMS fallback.

Google:

In 2019, Google followed Apple’s bad idea with the same thing for Android: SMS autofill for one-time codes.

Cloud providers like Twilio/Amazon/Microsoft/Google etc:

There is a large industrial complex behind SMS codes. Many companies (Azure, AWS, Twilio, Google, etc.) have profit incentives to continue offering SMS one-time codes to customers. Selling these services is unethical: it’s fundamentally broken technology sold as a secure solution.

Money management services

Unbelievably, SMS reset/login functionality is completely ubiquitous even when it comes to your money, along with SMS 2FA and account recovery: Wells Fargo, Cash App (Block), Robinhood, Schwab, PayPal, Bank of America, and so on. Again, these are SMS options offered as a way of “verifying that it’s you”, something SIM-swapping crooks love to hear. Also: never carelessly change your phone number, or you’ll be locked out of your PayPal!

Basically every other company at this point:

From food-ordering services to social networks and even data-storage firms like Dropbox: SMS is unfortunately, by default, a way to reset your account. If there’s even a way to turn it off, it’s incumbent upon you, the user, to go in and opt out, service by service, and disable the crappy tech. Many services don’t offer an opt-out.

Customers think they like SMS reset options

Customers don’t understand the broken nature of SMS resets, and it’s not their job to. They appreciate that it’s more convenient than resets via email (an actually securable option) or login codes via 2FA apps like Authy. iPhone’s SMS autofill is oftentimes (dubiously) heralded as one of the best things in iOS. The issue is: it’s not the customer’s job to understand whether systems are secure; it’s tech companies’.

And tech companies have failed, leaving all of their customers exposed in the process.

Hopefully a combination of lawsuits and legislation will eventually change the status quo. In the meantime, companies need to be brave and call the situation for what it is: a complete shit show. And then roll back their support of SMS verification services.

A few more things:

There have been really fantastic comments on Hacker News:

- Traveling in an area where you don’t get SMSes? Shucks, you are locked out.
- Are you a customer of a bank like Bank of America, which requires SMS 2FA be enabled for any 2FA to be enabled? That’s broken! It’s only your money, for crying out loud!
- Locked out of Viber because you changed numbers? Damn!
- Citibank “requiring SMS authentication to change the phone number on the account”? Not only is that silly, but that’s a bank that safeguards your hard-earned cash! (One that happens to have just been sued by the NY AG for taking inadequate precautions to safeguard users from fraud and online scams, by the way.)
- Does your carrier stick you with SMS roaming fees? You are paying for shit security.
- Are you in a place where, if you forget to refill your balance, your SIM gets blocked, denying you SMS? Too bad all of these companies force you to have a SIM :/
- Did you know that, as of Oct. 2023 guidance, NIST has harsh guidance when it comes to using SMS or phone calls for user identification? I did not; pass it on!
- Someone who works at banks in the EU notes that SMS there thankfully remained more expensive than in the US, so SMS 2FA (which is “liable to both security breaches and locking out users”) never caught on as much.

All of these commenters testify, in a way I never could (with any authority at least), to the myriad ways you can be screwed as a customer by companies’ misguided decisions regarding SMS. It all hammers home the same point from above: the state of SMS-based verification in the industry is truly a shit show, and companies must be brave, suck it up, and roll it back. SMS is not cool anymore. Love your customers, don’t hurt them. Pass it on!

Please share this article wherever you think it may make an impact.

It is insane to me that SIM-swapping attacks are entirely preventable and yet allowed to happen because of flawed choices in the “convenience vs security” tradeoff. Please drop a link in Slack/Teams, or post to Reddit or wherever you feel consumers or builders may be best informed. Seriously, I don’t have ads and don’t monetize this site at all; these words are literally passion spilling onto the page. Please share this with a buddy and let’s try to change things. SIM-swapping attacks are preventable.

Robust Identity services are more important than ever in the age of deepfakes.

Moving away from telephone-number-based identity services is a major and necessary step toward robust customer identification, which matters more than ever. The era of old-school KYC (Know Your Customer) enforcement is over, with fake-ID AI services going mainstream. Add to that “voice print” identification, used for years by financial companies, ISPs, and more: an awful trend that won’t hold up against deepfake audio and determined hackers. Check out my post on this from 2021. In any case, we should move away from unencrypted, SIM-swap-prone identity verification like SMS.

Many ransomware attacks are downstream of SIM-swap attacks

Another seemingly intractable problem facing IT around the world is ransomware. SIM-swapping attacks represent a significant vector for compromising a company’s network. Again, rolling back support for SMS logins could take a bite out of the ransomware scourge.

One HN commenter mentioned the “SIM Verify” initiative in the EU, where companies relying on SMS can at least check whether a SIM was recently ported. That’s something, and we’ll see if it goes anywhere, but if the SHAKEN/STIR rollout has taught me anything, changes like this may not reach the US for many years.

Finally, a dedicated home to this question

I created a site at a permanent URL that bluntly answers the question “Is using SMS for logins a good idea?”, for sharing with people in your industry.

]]>