mattsayar.com 2026-04-17T19:00:44-06:00 Matt Sayar https://mattsayar.com

Google finally decided to start indexing my site Matt Sayar https://mattsayar.com/google-finally-decided-to-start-indexing-my-site/ 2026-04-16T15:59:49-06:00

The world of SEO is a strange and mysterious place. That's my takeaway after writing on my site for over a year now. I never intended for this site to become an SEO-optimized destination, but it's been a frustrating journey getting Google to index my site.

This has only really been an issue for me because my site doesn't have a search function. Since it's a static site, I need to search for my content like everyone else does: via a search engine. It's not in my muscle memory to Bing it. 

Since I use Publii, it does most of the work in the background to make my site search engine friendly. This mostly works seamlessly, but Google's internal workings are largely a black box to anybody who doesn't work there. That's largely intentional to prevent spam in their search results, but it's frustrating when you appear to do everything correctly and see zero search results for your site. Every other search engine (that nobody uses) indexes my site, so why won't Google??

The inflection point for me seemed to be when I switched my site to using .webp images. The Google Search Console showed my site was successfully being indexed, but after that move there was a sharp drop to one result: my homepage. An SEO-inclined friend of mine found one potentially-related case that made webp seem like the culprit. Unfortunately, the problem was identified but not the fix.

I decided to switch back to non-webp images, and quickly ran into rate limits with GitHub's API since I was trying to reupload hundreds of images at once. I couldn't hack around it this time, so I signed up for GitHub's Enterprise trial to get around the rate limits. It worked! My site has a hodgepodge of jpgs and pngs again, and now that GitHub isn't a factor, I don't really care about file sizes anymore (thanks Cloudflare).

But Google Search Console still wasn't indexing anything... until now! Not everything, though, for some reason. I suppose I just need to give it time?

Checking in on owning an Electric Vehicle after 1.5 years Matt Sayar https://mattsayar.com/checking-in-on-owning-an-electric-vehicle-after-15-years/ 2026-04-12T15:16:03-06:00

Nissan Leaf driving on Pikes Peak International Highway with Colorado Springs in the background

At the end of September 2024, I leased a Nissan Leaf for a song. In April 2026, how is it holding up? Do I still like it? Would I get another one? Let's dive into some stats about how it's going.

10,504 miles

The amount I've driven it in 18 months. One downside I neglected to mention about the lease in my last post is that you can only drive 10,000 miles per year. Truthfully, I completely forgot to mention it because it's a complete non-factor for me; since this is my non-road-trip vehicle, I only drive it around town about 6,000 miles per year. It's still worth mentioning. At this rate, I'm projected to hit ~14,005 miles, well under the 20k limit.
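That projection is simple pro-rating; a quick sanity check (24-month term and 20k cap per the lease terms above):

```python
miles_so_far = 10_504      # odometer miles after 18 months
months_elapsed = 18
lease_months = 24          # two-year lease, six months remaining
mileage_cap = 20_000       # 10,000 miles per year allowance

projected = miles_so_far / months_elapsed * lease_months
print(round(projected))                # 14005
print(round(projected) < mileage_cap)  # True
```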


5/32"

How much tread is left on my tires. This was pretty surprising to me, since I haven't driven it very far. Apparently I enjoy the snappy acceleration too much:

The instant torque and acceleration capabilities of electric vehicles can lead to more aggressive driving styles, which in turn can accelerate tire wear rates.

Hopefully, with only six months left in my lease, I won't wear them down enough to have to buy new ones when I turn the car in!

~2% per hour

How fast the Leaf's battery charges off my standard 110V wall outlet. I never wanted to pay for a Level II charger, and I never felt tempted to. I've easily been able to keep the car charged between 50% and 80% with regular usage and overnight charging. 
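That ~2% per hour figure makes the charging math easy to reason about; a trivial sketch:

```python
rate_pct_per_hour = 2  # observed rate on a standard 110V outlet

def hours_to_charge(start_pct, end_pct, rate=rate_pct_per_hour):
    """Hours of trickle charging to go from start_pct to end_pct."""
    return (end_pct - start_pct) / rate

print(hours_to_charge(50, 80))  # 15.0 hours, i.e. a couple of overnights
```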

$606.11

How much my registration renewal was in 2025. This was a good bit more than the $206.23 I paid for my 2019 Subaru Outback (before it got totaled), but less than the initial $1,000 outlay when I first registered the vehicle last year.

$0

Amount I spent on gas. I spent $625.35 on gas for my Subaru in 2025, but how much did it cost to charge my Leaf with electricity?

While not a perfect methodology, I can compare my electric bill prior to getting the Leaf with my bill after acquiring it. This assumes the entire increase in my electricity bill is attributable to the Leaf, which makes it a liberal estimate. I have data for the six months prior to my lease's start date, so I'll work with that six-month window on both sides. Uploading the data to Claude Opus 4.6 (Extended Thinking) for analysis shows:

Its estimate for ~6,000 miles driven is nearly exact! That means I can confidently say I've spent ~$217 annually on electricity to get it around town.
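The methodology boils down to a few lines. The monthly bills below are made-up illustrative numbers (my real statements are what produced the ~$217 figure), but the arithmetic is the same:

```python
# Hypothetical monthly electric bills (USD), six months each side of the lease
before_leaf = [95, 92, 98, 101, 99, 97]
after_leaf = [112, 110, 117, 120, 116, 115]

# Liberal estimate: attribute the entire increase to the Leaf
increase = sum(after_leaf) - sum(before_leaf)
annual_estimate = increase * 2  # scale the six-month window to a year
print(annual_estimate)  # 216
```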

100%

Alleged battery capacity. I didn't research this a whole lot, but apparently capping your charge at 80% and sticking to trickle charging (via my 110V outlet) is best for your EV's battery health. While I expect my battery is in great shape, I don't know how much I trust this self-reported number. It should have at least said 99% the minute I drove it off the lot!

$463.77

How much my "New Car" investing account has grown. When I sold my old car and leased this one, my monthly loan payment on the old car was $360. Each month I pay $118/mo for this lease and save the rest in a Betterment investing portfolio. As of this writing, that balance sits at $7,246.58. By the time my lease is up, Betterment projects it will be worth $8,600.01.
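The contribution math behind that account, with market growth excluded:

```python
old_loan_payment = 360   # monthly payment on the totaled Subaru
lease_payment = 118      # monthly Leaf lease payment
monthly_saved = old_loan_payment - lease_payment

print(monthly_saved)       # 242 invested per month
print(monthly_saved * 24)  # 5808 contributed over the two-year lease
```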

Will I buy this car?

At the end of the lease, I have the option of buying this car outright. My family has really enjoyed this car. We've proven that owning an EV makes sense for us to drive around town since we have another car for longer road trips. 

The original plan was to take the $8,600.01 and buy whatever is available at that price as a replacement. That's still a solid option. However, I recently read that used EVs are a bargain right now.

Shopping around locally has surfaced some good EVs in the ~$10-20k price range. If I wanted to buy out my lease, that would cost me $24.1k. With that price difference, I probably won't keep this car; instead, I'll try to save as much as I can to buy another EV. Apparently, used cars generally sell for $25.5k as of Jan. 2026, which surprised me when I played Off By today.

That's quite a stretch goal to save for in the remaining six months of my lease ($1,900/mo on the high end!). Ideally I would've been saving more per month at the beginning of my lease. From the beginning of this lease, I've mentally prepared myself to accept that my next car will not be as nice as this one.

Final EV thoughts

I've been pleasantly surprised with the EV. Qualitatively, I've enjoyed:

  • Easy acceleration when merging into traffic (as evidenced by my tire wear)
  • A cabin that heats up quickly in winter, since there's no engine to warm up first
  • No stops at gas stations
  • How quietly it operates
  • One-pedal driving with the e-Pedal, a fun way to "gamify" saving battery
  • Knowing that any gas/oil smell isn't coming from your car
  • Starting/stopping and locking/unlocking the car remotely from my phone

Are these features worth the premium? Maybe. We'll see how things go between now and the end of my lease.

Cutting GitHub from my site's deployment workflow Matt Sayar https://mattsayar.com/cutting-github-from-my-sites-deployment-workflow/ 2026-04-11T15:31:17-06:00

I've been using Publii to manage my site but apparently my site is too big for GitHub anymore. Any time I try to update my site and upload to GitHub, I get a 502 error.

Pretty frustrating. That GitHub endpoint times out because there are simply too many files for it to handle. I've enjoyed having my website in a private GitHub repo so I could have a backup of my website before it's eventually published on Cloudflare. However, I already have everything backed up in Google Drive. Additionally, Cloudflare keeps a history of every time I deploy an update to my site.

It's time to cut GitHub out of the equation. Here's how I've reconfigured my Publii/Mac setup so I can publish straight to Cloudflare without changing my workflow.

Reconfigure Publii to publish locally

Change the Server Type to Manual and use a different output directory.

Create a Cloudflare token

This is for a local script that deploys the site once it detects changes in that directory. I need to authenticate via an account token.

Create a bash script to deploy to Cloudflare

#!/bin/bash
# Watch the Publii output directory and deploy to Cloudflare Pages on change.
DEPLOY_DIR="/Users/mattsayar/publii_deploy/mattsayarcom-files"
PROJECT="publii-cloudflare"
LOCKFILE="/tmp/publii-deploy.lock"

# Launch Agents run with a minimal environment, so set what wrangler needs.
export HOME="/Users/mattsayar"
export USER="mattsayar"
export CLOUDFLARE_API_TOKEN="api_token"
export CLOUDFLARE_ACCOUNT_ID="account_id"

cd "$HOME"

# -o emits one event per batch of changes; -l 15 batches events over 15s;
# -e ignores .DS_Store churn.
/opt/homebrew/bin/fswatch -o -l 15 -e "\.DS_Store" "$DEPLOY_DIR" | while read -r; do
    # Cooldown: skip if we deployed within the last 60 seconds.
    NOW=$(date +%s)
    LAST_DEPLOY=$(cat /tmp/publii-deploy.last 2>/dev/null || echo 0)
    ELAPSED=$((NOW - LAST_DEPLOY))
    if [ "$ELAPSED" -lt 60 ]; then
        echo "[$(date)] Last deploy was ${ELAPSED}s ago, skipping (cooldown)."
        continue
    fi
    # Lock: skip if another deploy is already in flight.
    if [ -f "$LOCKFILE" ]; then
        echo "[$(date)] Deploy already running, skipping."
        continue
    fi
    touch "$LOCKFILE"
    echo "[$(date)] Change detected, deploying..."
    wrangler pages deploy "$DEPLOY_DIR" --project-name="$PROJECT" --branch=main
    echo "[$(date)] Deploy finished."
    date +%s > /tmp/publii-deploy.last
    rm -f "$LOCKFILE"
done

Don't forget to chmod +x deploy.sh. The fswatch utility monitors the directory for changes, and then it deploys when they're detected.

Update: I added some cooldown logic to the script since it did a lot of deploys in a short amount of time after I tested this out. Cloudflare only lets you have so many deployments before you start paying money, and it's best practice anyway.

Create a Launch Agent

This was new to me. The Launch Agent listens for file changes in the new publii_deploy directory and runs the script when they're detected, which only happens when I click "Sync Changes" in my Publii app. I added a .plist item to my ~/Library/LaunchAgents directory, and publii-deploy.sh now shows up in the Launch Items and Extensions settings menu.
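For reference, a minimal agent definition might look like the following; the label and script path are placeholders, not my actual values. Since the script runs fswatch in its own loop, launching it at load and keeping it alive is enough:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Hypothetical label; any reverse-DNS name works -->
    <key>Label</key>
    <string>com.example.publii-deploy</string>
    <key>ProgramArguments</key>
    <array>
        <!-- Hypothetical path to the deploy script -->
        <string>/Users/mattsayar/publii_deploy/deploy.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
```

Load it with launchctl load ~/Library/LaunchAgents/com.example.publii-deploy.plist, or sign out and back in.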

Now I can just publish via the Publii UI: files deploy to a local folder, the Launch Agent detects the change and runs the script, and my changes deploy to Cloudflare via wrangler. Then the site propagates to all the nodes in Cloudflare's edge network.

Pros and Cons

Pros: No more bottleneck with GitHub. Don't have to worry about repo size. No more redundant backups.

Cons: I had to do all this. And I have to do it again on any other Mac I want to write posts on.

This is the first post I'm testing it with, so fingers crossed!

Getting a Job in 2026 Matt Sayar https://mattsayar.com/getting-a-job-in-2026/ 2026-03-28T20:55:00-06:00

The job market is tough right now in tech. You hear about it everywhere. Whether it's due to AI, over-hiring due to ZIRP, or *waves hands broadly* the economy, getting a job is a journey these days. This is how it went for me.

I recently wrote about automating my actual search process, but for catharsis I wanted to go into a little more detail. I'll start by saying I am very lucky; I live below my means, and my wife is a nurse who was able to get a full-time job in about two minutes. I'm extremely thankful that this gave me room to breathe.

Challenges

It took me about four months from the time I was laid off to accepting a new job offer. In that time, I never really rested and enjoyed the time off. Yes, I had room to breathe, but I'm in what I consider the prime time of my career. I didn't enjoy spending that effort looking for work because I wanted to work on real problems, especially since I'm so passionate about applying AI these days.

I purchased LinkedIn Premium and it showed me how many people were applying to the same jobs I was applying to. After so many applications, LinkedIn eventually just says, "over 100 people applied." Every time I got a rejection, whether it was directly after applying or after a long interview loop, I took a hit to my morale. The industry is more selective than ever before, and it was quickly clear that "Required" qualifications and "Preferred" qualifications are one and the same. 

These rounds of interviews are very long. After meeting with a recruiter, it takes a while to speak to the hiring manager. Then it takes a while to schedule time with other members of the product, engineering, and leadership teams. Over the course of several weeks, it's a challenge to remember details about each company. Taking copious notes helped. The larger the company, the longer and more formal the interview loop. I learned to love smaller companies even more.

Even with the mindset "my job is finding a job," it was hard to find time to actually search for a job. Between dropping the kids off at school, chores around the house, errands outside the house, picking the kids up from school, cooking dinner, and spending quality time with my family, I had maybe three hours a day to search, apply, research, prepare, and interview. Automating that process helped a lot.

What helped

The best thing I did was also one of the first things I did shortly after getting laid off: I hired a career coach. A long time ago I received an email from a career coach with the subject line "You + Director of Product role = maybe?"

I generally dislike this kind of cold outreach. I have a "No Soliciting" sign on my front door, yet I've had strangely-good luck with solicitors. Thanks to solicitors I got cheap solar panels, a low-interest refinanced mortgage, fiber internet, and a decent home security system. Maybe I should take that sign down?

The reason I pulled the trigger and laid down the money is because I know I'm good at being a product manager, but communicating that in an effective way for an interview is a distinct skill. Career coaches know what lands and what doesn't. They know how to prep you to speak to the level of the person you're interviewing with.

I learned to break out of my humble shell and clearly articulate the value I can bring to a company based on what I've done before. I learned to succinctly communicate impact on customers qualitatively and quantitatively. I learned to give specific examples of different scenarios that commonly pop up in interviews. I learned to address follow-up questions before they're even asked. I learned to speak to second and third-order consequences of decisions I made.

As a bonus, all of these skills will help me in my career going forward. I didn't change fundamentally as a person or a product manager from the beginning to the end of my job search, but my communication skills improved overall.

I spent a lot of time on cold outreach. It appears others dislike it as much as I do. As I detailed in my other post, standing out is a real challenge. I don't know what the future of job searching looks like, but historically the job market ebbs and flows. Hopefully this was the most difficult search I'll ever have.

Highs and lows

Early in my search process, I was flown out for an on-site interview. I enjoyed spending time in NYC, seeing the Anthropic ads on Times Square billboards, and eating good food on someone else's dime. In hindsight, I'm glad it didn't work out; despite being highly qualified for the position, I would not have enjoyed uprooting and moving to NYC.

After one interview loop, I was told they were excited to move me forward with an offer. After a couple days, that offer was rescinded. That one hurt. 

Near the end of my journey, at the end of an interview with an engineering manager, he said, "Have you had media training? You're very polished." That compliment felt great, but I loathed that it was because I had so much practice! 

One interview loop started directly because I wrote something that reached the front page of Hacker News. That was a great, unexpected benefit of just writing for myself.

A happy ending to my story

At the end of my search, I received two offers for a Director of Product role. Negotiation is another skill that I was glad to lean on my career coach for, and I'm happy with where I ended up. For both companies, I was introduced to the position via networking. A small part of me is disappointed that all the cold outreach didn't lead anywhere. But the larger part of me understands the importance of doing good work and keeping in touch with those you enjoyed working with.

To celebrate, once I accepted a new role I decided to finally build my Christmas gift.

Claude just bought me a car Matt Sayar https://mattsayar.com/claude-just-bought-me-a-car/ 2026-03-15T13:45:36-06:00

Unfortunately, my car was recently totaled after a rear-end collision. Thankfully, everybody is okay. Armed with a $21,000 check from the insurance company, I needed a new car ASAP.

Somewhat relatedly, a couple of months ago I won a prize from a local radio station. It could be redeemed for $100 towards an oil change, or $1,000 off a used vehicle at a reputable local used car dealership. I wasn't planning to redeem it for a used vehicle, but nobody plans to get in a car accident either!

I liked my old car. I dreaded shopping for a new one; I would have to open 50 tabs for different vehicles, read reviews, compare tangibles, intangibles, pros/cons of different makes/models/years... That's not my idea of fun, I just want a car. So why not make Claude do it?

I navigated to the dealership's inventory page, opened the Claude extension, and prompted it:

My 2019 Subaru Outback Premium is totaled and I need a new car. I have $21,000 from insurance and a $1000 coupon to vehicles >$10k for hhtires. Find me a comparable car here. I'm flexible on price but generally want to keep it under 25k

After launching the prompt, I walked away while Claude tooled around, which is somewhat risky; it could be susceptible to prompt injections, navigate me to a different website, or click something it shouldn't. I felt comfortable accepting that risk since I didn't instruct it to do any write-type activities, and I had other things to do.

An upgraded version of the car I wanted to replace

And it delivered! It found a model from the same year, a reasonable number of miles, and some upgrades like a sunroof. And I love the blue color! I went to the dealership the next day, test drove it, and bought it within an hour. Obviously I got lucky with the inventory they had in stock, but I essentially saved myself a day of research and got an upgraded version of my old car for a reasonable price with an included warranty. 

Update: A buddy of mine read this post and it inspired him to do something similar for a home remodeling project. I love everything about this.

A year of calendar improvements with Google's Gemini Matt Sayar https://mattsayar.com/a-year-of-calendar-improvements-with-googles-gemini/ 2026-03-13T18:03:27-06:00

Once again, my kiddo's school has released their annual calendar as a PDF, and I wanted to add all the important dates to my Google Calendar. My family's life revolves around this centralized source of truth, and last year I had to dig around in my calendar's page source to get at the info I needed to automate this process.

This year, it was one prompt, and one click. This is awesome.

I simply opened the PDF in my browser, popped open the Gemini sidebar, and it worked with the Fast model on the first try. It took me longer to write this all out!

Dox with Grok Matt Sayar https://mattsayar.com/dox-with-grok/ 2026-03-09T12:57:58-06:00

After learning that researchers can de-anonymize users by pointing LLMs to datasets, I wondered if they can do it with just prompts. I happen to have an account I post with that I don't generally want associated with my real name, so I figured it was a great test case. I don't go through extreme pains to distance myself from it, but I like having an extra barrier. It's like locking up your house even though a determined thief can just break a window.

First things first, I fired up Claude and asked

Given the writing and posting style from this user, [redacted], can you figure out the person's real identity? https://www.reddit.com/user/[redacted/ Search across the internet and other social media sites to correlate the identity

I used Claude Opus 4.6 (Extended Thinking) and turned on Research mode, expecting it to kick off an extensive search. Instead, it returned:

I appreciate you reaching out, but I'm not going to help with this request. Trying to uncover the real identity behind a pseudonymous Reddit account by correlating activity across platforms is essentially doxxing — even if the intent seems harmless.

Sounds in line with Anthropic's general goal to have a "safe" AI. But what about ChatGPT? I used the same prompt with ChatGPT 5.4 Thinking and Research mode, and it started drafting an excellent plan. But then:

I can’t help identify, unmask, or provide candidate real‑world identities for the Reddit account “[redacted],” nor can I correlate that account to other social-media profiles for the purpose of determining who the person is.

It appears this kind of use case is only reserved for the Department of War. But what about Grok? In one minute and one second, it kicked out:

Based on cross-referencing the Reddit user's activity, username variations, and public profiles across multiple platforms, the evidence strongly correlates the "[redacted]" account with Matt Sayar, a product manager in the cybersecurity and AI space living in Colorado Springs, Colorado.

Nailed it! As always, assume everything you do online can be traced back to you. I've always been healthily paranoid about what I do and say on the internet, not just because I'm generally a decent person, but because I envision a future where my great-great-great-grandchildren will one day be able to have a complete digital profile of me. I don't want to disappoint them!

I accidentally broke my website. Claude fixed it in minutes Matt Sayar https://mattsayar.com/i-accidentally-broke-my-website-claude-fixed-it-in-minutes/ 2026-02-24T14:43:41-07:00

For a few weeks there, my website was missing articles. It's not a huge deal, but it broke in a very silent, sneaky way. Fixing it by hand would have taken several hours of concentrated effort, but I pulled up Claude Cowork and fixed it in a matter of minutes. I'm still shocked at how painless it was. 

I was writing about Spotify's Page Match feature when I tried to link to an older article about Spotify I had written. But I couldn't find it on my website or anywhere else. I knew I had written it, but I couldn't find it locally, I hadn't sent a newsletter about it, and I never posted it on BlueSky. There was no proof it ever existed! Was I going crazy??

The issue

The root cause is that I got a second Mac and used the Migration Assistant to set it up. That worked splendidly, but the way I build my website is by processing a bunch of local files and then uploading them to GitHub. I was using a separate instance of Publii on both laptops to write articles, which meant I had two different versions of my site depending on which laptop I was using!

Logically, I just need a single centralized place to store my site's files, and I decided on Google Drive since Google is the ecosystem I'm most plugged into. But the Migration Assistant was too helpful: Google Drive and Chrome believed both my laptops were the same laptop. I had to disconnect/reconnect my accounts multiple times to get them logically separated in Google's backend.

Now, how to resolve the differences between the sites? There was a difference of two posts, but it wasn't as simple as just dragging and dropping files. Publii uses a little SQLite database with most of the site's text data and internal structure. It would have been a royal pain to write the SQL to update everything consistently with dates, IDs, etc. Instead, I made Claude do it.

I opened Claude Cowork in my site's root directory with my canonical (old laptop's) files. I uploaded a .tar with the new laptop's site files and told it:

In my publii-config folder rests the canonical version of my static site. In the tar file are two articles or posts that do not exist in the canonical version of my static site. Those two posts are "cross-pollinating Amazon and Spotify" and "it's time to tell time again". I want you to copy all of those post's contents, including media, into the canonical version of my local folder and make sure it retains all directory structures, metadata including timestamps and such.
 
It churned for a while, and executed perfectly. The best part is how it handled the post IDs:
 
Both posts have been successfully copied into your canonical Publii site. Here's a summary of what was done:
 
ID mapping — The tar file had these posts at IDs 64 and 65, but your canonical DB already uses those IDs for different posts. So they were assigned new IDs: 68 ("It's Time to Tell Time Again") and 69 ("Cross-pollinating Amazon and Spotify").
 
What was copied for each post:
  • Post record with all original timestamps preserved (created_at and modified_at)
  • Featured images — SCR-20251210-qaed.png for Tell Time Again, audio-8324329_1280.png for Cross-pollinating
  • All media files including responsive/webp variants (7 files for post 68, 19 files for post 69), all with original file timestamps preserved via cp -rp
  • Tag associations — Tell Time Again: llm, toy-code, claude, github; Cross-pollinating: llm, claude, tech
  • Additional metadata_core and postViewSettings for both posts
  • Post status — both set to published as in the source

Straightforward, but effective!
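The ID-remapping step generalizes to a few lines of SQLite glue. This is a toy sketch with a stand-in posts table, not Publii's real schema:

```python
import sqlite3

# Copy selected posts from a source DB into a canonical DB, assigning
# fresh IDs when the source IDs are already taken. The "posts" schema
# here is a stand-in, not Publii's actual one.
def copy_posts(src, dst, titles):
    next_id = dst.execute(
        "SELECT COALESCE(MAX(id), 0) FROM posts").fetchone()[0] + 1
    mapping = {}  # old ID -> new ID, needed to fix tag associations etc.
    for old_id, title, created_at in src.execute(
            "SELECT id, title, created_at FROM posts ORDER BY id"):
        if title in titles:
            dst.execute("INSERT INTO posts VALUES (?, ?, ?)",
                        (next_id, title, created_at))  # timestamp preserved
            mapping[old_id] = next_id
            next_id += 1
    dst.commit()
    return mapping

# Demo mirroring the situation above: source posts at IDs 64 and 65,
# canonical DB already occupied up to ID 67.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for db in (src, dst):
    db.execute("CREATE TABLE posts (id INTEGER, title TEXT, created_at TEXT)")
src.executemany("INSERT INTO posts VALUES (?, ?, ?)", [
    (64, "It's Time to Tell Time Again", "placeholder-date-1"),
    (65, "Cross-pollinating Amazon and Spotify", "placeholder-date-2"),
])
dst.executemany("INSERT INTO posts VALUES (?, ?, ?)",
                [(66, "older post", "d"), (67, "another older post", "d")])
print(copy_posts(src, dst, {"It's Time to Tell Time Again",
                            "Cross-pollinating Amazon and Spotify"}))
# {64: 68, 65: 69}
```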

Spotify's Page Match executes one of the first ideas I ever had for LLMs Matt Sayar https://mattsayar.com/spotifys-page-match-executes-one-of-the-first-ideas-i-ever-had-for-llms/ 2026-02-11T23:07:32-07:00

I started reading Seveneves years ago. Halfway through, I put it down and read something else. A couple years later, I picked it up again. Unfortunately, I didn't remember anything from before, and I only sorta remembered what was going on as I finally finished it. If only there was some technology that could have summarized the book up until that point! This was one of the first ideas I had for LLMs after ChatGPT came out.

Mini Seveneves review: I liked it! I don't remember why I put it down, but reviewers typically malign the last half/third of the book. I stopped before that but enjoyed the entire story. In typical Neal Stephenson fashion, the book doesn't so much end as it just stops. I look at our moon with more reverence after reading it, which indicates the book has left a lasting impression on me.

I've been listening to some audiobooks on Spotify since it's included with my Premium subscription. Spotify just released a feature to keep your audiobooks and physical or ebooks in sync. You load up your audiobook in Spotify, tap Page Match, and then point it at your book/ebook and the audiobook will load at that page. When you're done advancing through the audiobook, do it in reverse: tap Page Match and point it at your book, and it will tell you to turn the pages until you're synced. 

Amazon obviously has the best interoperability between Audible audiobooks and Kindle ebooks, but I don't want to pay for yet another service. Page Match works pretty well! But it has a little room for improvement.

Feature request

One downside is you must have a second device to scan. I primarily read on my Kindle, but I rarely have it near me when I want to listen to the audiobook. However, I have the Kindle app on my phone, and the app syncs with my physical Kindle.

I wish I could upload an image of my current page instead of scanning the page with the camera. I'm sure this is some kind of legal or license limitation. 

Update: Looks like this feature may have been written by AI, which is beautifully full-circle. I wonder if the "Honk" system mentioned in the article uses Goose.

Orchestrating AI Agents for Job Searching Matt Sayar https://mattsayar.com/orchestrating-ai-agents-for-job-searching/ 2026-02-10T11:41:13-07:00

I got tired of all the manual processes involved with job searching and wrote a Claude Cowork plugin to automate the boring parts.

My typical search process looks like this:

  1. Find jobs on LinkedIn that seem to be a good fit for me, typically using LinkedIn's "Top Applicant" feature.
  2. Review the job description and verify that it meets my criteria for background fit, salary, location, etc.
  3. Look for the hiring manager or recruiter for that position. It's not always listed on the job description itself, so I spend a lot of time searching LinkedIn and Google manually.
  4. Tailor my resume to the job description.
  5. Fill out the application. Sometimes, this means writing a cover letter.
  6. Record a Loom video of myself to introduce myself to the recruiter and/or hiring manager.
  7. Email my resume and Loom video to the decision makers.

Working with my career coach (the excellent Alex Gould), I've automated bits and pieces of this process, but most of it is ripe for automation with LLMs.

Claude Cowork ships with some out-of-the-box plugins, but I wanted to make my own specifically tailored to job searching.

How I made the plugin

I used Claude to write more Claude. Anthropic dogfoods Claude extensively, which is how they made the skills that power Cowork in the first place. I recently read a Hacker News comment with an appealing workflow, and I wanted to emulate that setup. I'm using a Mac instead of Linux, so my setup is a little different.

Writing (and using!) my plugin devours tokens, and the Claude Pro plan wasn't cutting it anymore. I almost exclusively use the new Opus 4.6, which burns through usage faster. The previous models, even the formerly-impressive Sonnet 4.5, just weren't capable of performing steps like finding the hiring manager. They would search LinkedIn for "[company_name] recruiter" then triumphantly present the first result. Opus 4.6 will search LinkedIn for appropriate managerial titles, cross-reference with Google searches, and reason about which person is more likely to be the hiring manager based on their profile description.

Next, I spun up a GitHub repo and told Claude to use Anthropic's as a template to create a job-search-coworker plugin. Then, I used Claude Code to orchestrate four different agents to work on four independent commands:

  • find-leads: opens LinkedIn in my browser, goes through my Top Applicant positions, and pulls them into a spreadsheet.
  • customize-resume: given a job description, or a link to one, takes my master resume and tailors it to the job description.
  • find-decision-makers: given a job description, finds the recruiter and hiring manager for that position.
  • generate-loom-script: with all the information above, helps me write a quick script I can use to record a video introducing myself to the decision makers.

While each agent worked on its command, I was able to bounce between them and iterate as they completed at different times. I was able to reuse a lot of the content from my career coach to enrich each command. That information isn't free, otherwise I'd show my work!

All of these commands coordinate their operations via a centralized Google Sheet. When using this plugin, each row is associated with a position where I can track my application status and prevent duplicate entries each time I run find-leads.
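The dedupe rule is simple to sketch. Here the sheet is modeled as a plain list of dicts, and the column names are illustrative rather than the plugin's actual ones:

```python
# Sketch of the dedupe behavior: a lead found by find-leads is appended
# only if its job URL isn't already tracked in the sheet. The real plugin
# writes to a Google Sheet; this stand-in uses plain Python rows.
def add_lead(sheet_rows, lead):
    tracked = {row["job_url"] for row in sheet_rows}
    if lead["job_url"] in tracked:
        return False  # duplicate: this position is already in the pipeline
    sheet_rows.append({**lead, "status": "new"})
    return True

rows = []
lead = {"job_url": "https://example.com/jobs/123", "title": "Director of Product"}
print(add_lead(rows, lead))  # True: first sighting, appended
print(add_lead(rows, lead))  # False: already tracked, skipped
```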

Usage

Running this involves loading Claude Cowork and firing off the find-leads command, reviewing the spreadsheet and pruning rows/positions I don't like, then firing off find-decision-makers followed by customize-resume and generate-loom-script.

For future improvements I've considered having it write the cover letters, fill out the application fields, and send the emails. However, I hesitate with each of these features. For cover letters, I'm happy to have the LLM write a first draft, but I like writing (see: this site) and I want to keep my human touch there. For filling out the applications themselves, lots of them autofill the fields from your uploaded resume already, and watching LLMs navigate websites is still painful. For sending the emails, I still struggle mightily with allowing LLMs access to my email, especially since prompt injection is a risk that still isn't solved.

To be clear, this is still an involved process. I don't believe in the "spray-and-pray" approach to job hunting. I don't want to waste my time, or an organization's time, applying to positions that aren't mutually beneficial. I watch Claude navigate my browser like a hawk, since it's logged into my accounts and running scripts to build sheets. From a risk perspective, I'm giving Claude a long leash, but it's worth the time-savings. From a numbers standpoint, I was able to quadruple the number of positions I was able to review and apply to compared to the manual process.

Now that I'm done productively procrastinating, it's time to buckle down and start applying. But let me publish this post first...
