An experiment was carried out
Matthew Plough
https://mplough.github.io/feed.xml
Feed generated by Jekyll on 2025-11-23

Godox X2T-N firmware upgrade notes
August 29, 2025
https://mplough.github.io/2025/08/29/godox-x2t-firmware-upgrade

Upgrading the firmware on my Godox X2T-N flash controller was quick but a little more complicated than I anticipated.

The upgrade requires running a firmware upgrade program. The X2T-N v1.4 instructions said to use Godox G3 macOS v1.1, but v1.1 is Windows-only. I grabbed Godox G3 v2.0 for macOS from Godox’s firmware launcher installer download page and it ended up working fine.

I also ran into issues with connecting the controller to my computer. I needed to connect my computer to my monitor (which acts as a USB hub) and connect the X2T-N to the monitor with a USB-A to USB-C cable. Connecting directly with a USB-C to USB-C cable did not work. I learned this trick from Careless_Sun2442 on Reddit:

I also had the same problem when trying to update all my Godox flashes V1s and N350s, and the XPro IIs using my Macbook Pro M2. At first I thought it was the faulty USB-C cable but when I tried using a few other USB-C cables, I also encountered the same issue which was not recognising any of my devices when I plug it in my USB-C port. So I took an Alogic Hub which I plug into my Macbook Pro USB-C and then I plug the cable from my Godox devices to the Alogic Hub and it magically recognised the device which allowed me to update all the firmware of all my devices successfully. So if you still have any such issue, please try using a Hub as an intermediary for your connection which may work for you too.

The firmware can be upgraded with the X2T-N off and the batteries removed.

Godox G3 can download and install firmware with no need to download firmware binaries separately.

Update - November 22, 2025

This technique also works with the XProII-N flash controller.

Digilent WaveForms and macOS app entitlements
May 2, 2025
https://mplough.github.io/2025/05/02/app-entitlements

The short version

Digilent makes a line of oscilloscopes that use a free application called WaveForms for control. WaveForms can also use any sound card or microphone as an oscilloscope input, albeit one without a calibrated voltage scale.

As snooping via the microphone is a privacy risk, modern operating systems require that applications request permission to use the microphone. The mechanism for doing so isn’t simple, and a few months ago I discovered an issue with the WaveForms application where attempting to use the sound card as an input resulted in no signal being acquired. I reported the issue, and to my surprise and delight a real person replied less than three hours later, thanking me for my report and saying that the issue had already been fixed in the latest beta. Kudos to Digilent’s support team.

How I diagnosed the issue

On macOS, capabilities are requested via entitlements. For example, the entitlement for use of the microphone in a sandboxed application is called com.apple.security.device.microphone. There is a similar entitlement for applications using the hardened runtime, called com.apple.security.device.audio-input. It’s especially confusing that the names imply similar if not completely redundant intents and reveal nothing about the different contexts (sandboxed vs. hardened runtime) in which each should be requested.

Back to that signal in WaveForms.

I suspected that something was wrong when I didn’t see a prompt asking if WaveForms could use the microphone. As someone who doesn’t write native Mac apps, I needed to do some digging to learn how entitlements worked, but found that I could check the app’s current entitlements by running:

$ codesign --display --entitlements - /Applications/WaveForms.app/
Executable=/Applications/WaveForms.app/Contents/MacOS/WaveForms
[Dict]
	[Key] com.apple.security.cs.allow-jit
	[Value]
		[Bool] true
	[Key] com.apple.security.cs.allow-unsigned-executable-memory
	[Value]
		[Bool] true
	[Key] com.apple.security.cs.disable-executable-page-protection
	[Value]
		[Bool] true
	[Key] com.apple.security.cs.disable-library-validation
	[Value]
		[Bool] true

It was clear at this point that the com.apple.security.device.microphone entitlement was missing, and if I were an experienced Mac developer I would have been confident enough to make a report. I dug further, though; not only am I not an experienced Mac developer, I also wanted to see if I could fix things quickly myself, as I was not yet aware of the Digilent support team’s skill and responsiveness.
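If you want to script this check rather than eyeball the dump, codesign’s [Key]/[Value] output is easy to pick apart. This little parser is my own convenience helper, not an official interface (the text format isn’t guaranteed to stay stable):

```python
def entitlement_keys(codesign_output: str) -> set[str]:
    """Collect entitlement names from codesign's [Key]/[Value] dump."""
    return {
        line.strip().removeprefix("[Key]").strip()
        for line in codesign_output.splitlines()
        if line.strip().startswith("[Key]")
    }
```

Run against the dump above, it yields the four com.apple.security.cs.* keys and, notably, no com.apple.security.device.microphone.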

I created the list of entitlements that the app should have, which entailed translating the output of codesign to XML and adding the microphone permission. The resulting entitlements.plist contained:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.security.cs.allow-jit</key>
    <true/>
    <key>com.apple.security.cs.allow-unsigned-executable-memory</key>
    <true/>
    <key>com.apple.security.cs.disable-executable-page-protection</key>
    <true/>
    <key>com.apple.security.cs.disable-library-validation</key>
    <true/>
    <key>com.apple.security.device.microphone</key>
    <true/>
</dict>
</plist>
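Before re-signing, the plist can be sanity-checked programmatically with Python’s standard plistlib. This helper is my own sketch, not part of Digilent’s or Apple’s tooling:

```python
import plistlib

def has_entitlement(plist_bytes: bytes, key: str) -> bool:
    """Return True if the entitlements plist enables `key`."""
    return plistlib.loads(plist_bytes).get(key) is True

# A trimmed version of the entitlements.plist above.
ENTITLEMENTS = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>com.apple.security.cs.allow-jit</key>
    <true/>
    <key>com.apple.security.device.microphone</key>
    <true/>
</dict>
</plist>"""
```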

I created a self-signed certificate in Keychain Access, made a copy of the app, and used the certificate and my entitlements file to sign the copy with the missing entitlement added:

codesign --force --deep \
    --sign "My Code Signing Certificate" \
    --entitlements entitlements.plist \
    WaveForms\ copy.app/

When I ran the new copy and selected the microphone, I got a prompt asking for permission to use it and the app worked as it should. At that point I knew I could submit a report.

I played with permissions a few times by using tccutil, a command-line utility that can reset previously granted permissions. Here I only wanted to reset permissions granted to WaveForms, so I needed the app’s bundle identifier. There are a number of ways to get the bundle identifier, but I found com.digilent.waveforms in the CFBundleIdentifier key inside WaveForms.app/Contents/Info.plist quickly enough that I didn’t bother to investigate any other method. Running osascript -e 'id of app "WaveForms"' would have been quicker, but that would have required knowing first that macOS’s Open Scripting Architecture even exists and second, how to use it. To reset WaveForms’s microphone permissions, I ran:

tccutil reset Microphone com.digilent.waveforms
Nikon MC-DC3 pinout
April 26, 2025
https://mplough.github.io/2025/04/26/mc-dc3

The Nikon Z50II uses a new remote release cord, the MC-DC3. This cord plugs into the 3.5mm TRRS jack on the side of the camera.

Pinout

The TRRS connection uses the CTIA standard. The ground is on ring 2, not on the sleeve.

Pin      Remote function   Audio function
Tip      Release           Left
Ring 1   Focus             Right
Ring 2   Ground            Ground
Sleeve   Not connected     Microphone

Circuit

A 22 ohm resistor is wired in series between the switches and ground. This resistor is not required; the camera will focus and release even without it, but the resistor may offer some protection to the camera if the TRRS jack is left in headphone mode when the remote is connected and used.

To focus (equivalent to half-pressing the shutter release), close SW2 on Ring 1. To release the shutter (equivalent to fully depressing the shutter release), close SW2 first, then close SW1 on Tip.
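The two-switch sequencing above can be made explicit with a small model. This is purely my own illustrative sketch of the remote’s logic, not Nikon’s specification:

```python
def remote_state(sw1_closed: bool, sw2_closed: bool) -> str:
    """Model the MC-DC3's two-switch sequencing.

    SW2 (Ring 1 to ground) is the half-press: focus.
    SW1 (Tip to ground) fires the shutter, but the remote closes
    SW2 before SW1 can close, so SW1 alone is treated as idle.
    """
    if sw2_closed and sw1_closed:
        return "release"
    if sw2_closed:
        return "focus"
    return "idle"
```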

Nikon MC-DC3 cord circuit

Measuring camera sensor readout speeds
January 30, 2025
https://mplough.github.io/2025/01/30/camera-sensor-readout-speeds

I measured the sensor readout speeds of a Z50, Z50II, and Z8 when shooting photographs with electronic shutter.

I did this because I haven’t seen any information online about the Z50II’s sensor readout speed. I found that it’s exactly the same as the Z30 and Z50, about 1/22 second. Interestingly, Thom Hogan, my go-to reviewer for Nikon, seems to have the wrong readout time listed for the Z50 on his sensor read-out speeds page.

I used an Arduino Uno R3 and the technique from https://github.com/horshack-dpreview/RollingShutter with slightly modified hardware and source code. Instead of blinking the onboard LED, which is kind of dim, I blinked a discrete LED connected to pin 3. The LED’s dome focuses the light, yielding a much more reasonable exposure.

I photographed a 500 Hz LED with a 105mm macro lens at 1/4000, f/2.8, ISO 12800, and sanity-checked its blink rate against the Z8, which has a widely known readout speed of about 1/270 second. These are the photos from each camera - the Z50, then the Z50II, then the Z8. It’s neat to have such a stark visual representation of how much faster a stacked sensor is read.
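The arithmetic behind the measurement is simple: the LED blinks at 500 Hz, each dark/light band pair in the photo is one blink cycle, and the readout time is the pair count divided by the blink frequency. A quick helper (the band count below is an illustrative round number, not one of my measurements):

```python
from fractions import Fraction

def readout_time(band_pairs: int, led_hz: int = 500) -> Fraction:
    """Sensor readout time in seconds: one dark/light band pair
    appears per LED blink cycle during the top-to-bottom read."""
    return Fraction(band_pairs, led_hz)

# A camera showing 23 band pairs under a 500 Hz LED reads out
# in 23/500 s, i.e. roughly 1/22 s.
```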

Z50 sensor readout
Z50 readout image
Z50II sensor readout
Z50II readout image
Z8 sensor readout
Z8 readout image
Forcing RGB output on an external monitor from an Apple Silicon machine
November 6, 2024
https://mplough.github.io/2024/11/06/rgb-output-on-apple-silicon

My Dell U2720Q monitor flickers when an Apple Silicon Mac drives it and certain patterns are displayed on the screen. Colors also look slightly weird compared with how my old Intel Mac drove the display. This happens because by default, Intel Macs output RGB color to external displays while Apple Silicon Macs output YPbPr.

On macOS Sonoma, I was able to use the code at https://github.com/dangh/force-rgb.fish to force RGB output. I cloned the repository to https://github.com/mplough/force-rgb.fish in case the original is deleted. However, this method does not work on Sequoia 15.1 as of commit 79a3128796f3738e3e3f09d7d39609650aad9c3c. The plist files are updated properly but the display reports YPbPr mode after a logout and a reboot.

On macOS Sequoia, BetterDisplay Pro works. It makes an external monitor’s color modes selectable, allowing selection of an RGB mode:

BetterDisplay U2720Q color modes

Working notes on EXIF tags for video files
March 17, 2024
https://mplough.github.io/2024/03/17/video-exif-tags-notes

I store my processed photos and videos in Apple Photos so I can view them from anywhere on my phone or computer. Viewing media in chronological order is fundamental, but I’d like to have other metadata available if at all possible. That’s possible for images, but as far as I can tell from reading scattered forum posts, EXIF data for video is a mess. I’d love to be able to cite some sources on this; please let me know if there is an authoritative source for this information.

Apple Photos parses dates inside video files but doesn’t seem to parse location, camera, or lens metadata. However, it does store original files in a recoverable fashion, so if I ensure that all metadata is included in video files, there is a chance that Apple Photos (or some other application) will be able to read that metadata in the future.

As verified by experiment, Exiftool 12.761 (and likely later versions) can read and write metadata in MOV and MP4 video files. MOV files are QuickTime containers while MP4 files are MPEG-4 containers.

EXIF data in video files is generally stored as QuickTime tags.

I use Handbrake and other tools to compress processed video to maximize space efficiency.

The situation:

  • DSLR and mirrorless cameras write metadata to video files that they create. Apple Photos does not parse this metadata.
  • Exiftool reads metadata just fine from camera video files.
  • Various ffmpeg and ffprobe incantations didn’t demonstrate an ability to read metadata from camera video files. Maybe this is possible but if it is, it’s not easy and it’s not documented in a way that’s discoverable by even expert software programmers.
  • Apple Photos does not read metadata (apart from timestamps) in either MOV or MP4 containers.
  • Handbrake can get video files down to much more reasonable sizes while keeping decent quality, especially when writing H.265, but it doesn’t pass through EXIF data such as camera make, model, lens, or timestamps.
  • A trial of ffworks showed that it didn’t pass through metadata either.
  • Handbrake can only output MP4 containers while fftools can also output MOV containers (due to underlying ffmpeg support for both container formats).

To at least retain video metadata, do the following:

After compressing with Handbrake, use Exiftool to copy tags from the source file to the output file:

exiftool -all= -tagsfromfile [source] -all [output]

Use Exiftool to geotag a video - HoudahGeo can’t geotag video files.

Either copy location tags from a source image…

exiftool -tagsfromfile [source] -location:all [output]

…or write the location tags explicitly.

exiftool -GPSLatitude=[decimal degrees] -GPSLatitudeRef=[N|S] -GPSLongitude=[decimal degrees] -GPSLongitudeRef=[E|W] [output]
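Getting the signs and refs right by hand is fiddly, since EXIF stores unsigned magnitudes plus hemisphere refs. A small helper of my own to turn signed decimal degrees into the arguments above (a sketch, not part of Exiftool):

```python
def gps_args(lat: float, lon: float) -> list[str]:
    """Build exiftool arguments from signed decimal degrees.

    EXIF stores unsigned magnitudes; the hemisphere lives in the
    *Ref tags (N/S for latitude, E/W for longitude).
    """
    return [
        f"-GPSLatitude={abs(lat)}",
        f"-GPSLatitudeRef={'N' if lat >= 0 else 'S'}",
        f"-GPSLongitude={abs(lon)}",
        f"-GPSLongitudeRef={'E' if lon >= 0 else 'W'}",
    ]
```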

The ffworks tool does have some nice GUI display of video quality metrics, which may prove useful for quick evaluation of quality loss when recompressing.

  1. Exiftool has weird nonstandard command-line arguments. Get the version of Exiftool by running exiftool -ver

Clearing the touch bar in macOS Sonoma
December 12, 2023
https://mplough.github.io/2023/12/12/touch-bar-server

In 2020 I finally bought a retina MacBook Pro. Unfortunately it has a touch bar. I don’t like the touch bar’s functionality or illumination so I keep it entirely blank until I press the Fn key.

Prior to macOS Sonoma I had this set up in System Settings > Keyboard > Touch Bar Settings as follows:

Touch Bar Settings dialog contents

However, in macOS Sonoma, the touch bar shows “Configure in: Extensions Settings” on startup rather than being entirely blank.

Running the following command from a terminal after startup will work around the issue.

sudo killall -9 TouchBarServer

After this runs, the touch bar is blank and the Fn key brings up the expanded control strip as expected - everything works normally.

Right now I just run this in a terminal after every (infrequent) boot. I’ll update this post if I decide that I care enough to make this into a login item.

iPhone 13 Pro over-sharpening
March 18, 2022
https://mplough.github.io/2022/03/18/iphone-sharpening

The iPhone 13 Pro’s camera application drastically over-sharpens photos during post-processing. The over-sharpening occurs whether shooting HEIF or ProRAW. With HEIF the over-sharpening is unrecoverable. With ProRAW, however, it is possible to defeat the over-sharpening by performing a null edit on the image in the Photos app on the phone.

To illustrate the severity of the issue and the possibility of defeating it, I took three pictures of my 2020 13-inch MacBook Pro screen displaying apple.com on the evening of March 18, 2022. The pictures look slightly orange because my screen is slightly orange; I use f.lux to lower the display’s color temperature at night.

I took these using the “3x” zoom setting from roughly the distance of the laptop’s lower case. Since the 3x camera hardware cannot focus that close, the phone uses the 1x camera and digitally zooms in. This digital zoom actually looks fine once the over-sharpening is defeated.

A screenshot of the web site:

apple site

An over-sharpened HEIF, converted to JPEG. Notice the weird ringing around the black text as well as the loss of color in the links:

oversharpened HEIF

An over-sharpened ProRAW, converted to JPEG. The ringing and loss of color are identical:

oversharpened ProRAW

A ProRAW that looks decent, converted to JPEG. The ringing is gone and the links show up in a nice blue:

decent ProRAW

Tap or click to compare the oversharpened and decent ProRAW images:

Oversharpened vs. decent ProRAW

In order to defeat the over-sharpening, I performed a null edit as follows:

  1. Take the photo.
  2. Open the Photos app on the phone.
  3. Tap “Edit”.
  4. Tap “Exposure”.
  5. Drag the slider off of zero and then back to zero.
  6. Tap “Done”.

It’s also possible to defeat the over-sharpening by allowing the photo to sync from the phone to a computer via iCloud Photos, and then perform a similar null edit in Photos on the computer.

This deficiency is exasperating; a software bug compromises the otherwise sensible defaults offered by the iPhone 13 Pro imaging pipeline.

Dark Mode PDF viewing
October 15, 2021
https://mplough.github.io/2021/10/15/dark-mode-pdf

Dark mode is great for using a laptop at night with minimal illumination from the screen. Support for dark mode is fortunately growing. E-book readers, text editors, terminal applications, and a growing contingent of web sites all support dark themes.

Viewing PDFs with a dark background is a little more difficult.

  • Apple’s Preview doesn’t support a dark background for viewing PDFs.
  • Apple’s Books application in Big Sur has a dark theme for e-books and can catalog PDFs, but opens PDFs in Preview.
  • The developers of Skim sanctimoniously rejected a feature request for a dark background mode back in 2018.
  • Inverting colors in the macOS accessibility settings results in a mouse cursor with garish glowing blue edges.
  • PDF Expert supposedly has a dedicated night mode, but the paid version is $50/year and using even the free version requires creating an account to use local desktop software. It’s as if The Right to Read was taken as a design brief rather than as a warning.

There are two halfway decent solutions:

Negative Reader is a minimum viable solution to the problem but it feels clunky. There is no way to view document thumbnails or a table of contents. It would be much nicer to use a tool that at least had feature parity with Preview.

A Hacker News comment contains the best solution I’ve seen; it modifies Firefox’s PDF viewer’s behavior in one line of JavaScript.

I made the code into bookmarklets to toggle dark mode. Drag the links to your bookmarks bar, open a PDF, and click your new bookmark to turn out the lights:

Dark Mode PDF Light Mode PDF

  • May 29, 2023 - This site now supports dark mode.
  • June 16, 2023 - Updated bookmarklet to handle pdf.js PR #15930. Added light mode bookmarklet to switch back.
Celestron C5: Flat frames
April 2, 2021
https://mplough.github.io/2021/04/02/c5-flat-frames

I wanted to understand how much vignetting I could expect in the best case when taking pictures through my Celestron C5. Known vignetting characteristics will help me better plan both terrestrial and sky photography. I shot flat frames in six different configurations at three different crops, covering all possible imaging configurations with my current collection of gear.

Setup

I placed my Celestron C5 on a table and placed the screen of my 13-inch 2020 MacBook Pro directly against the front of the telescope. To maintain a consistent, known lighting source, I set the screen to the default white balance and the maximum available brightness and pulled up a pure white test screen.

While acquiring images I needed to support the camera assembly attached to the rear of the telescope to prevent the unit from flipping over backwards.

setup

I took all shots with a Nikon D750 and exposed all frames with the histogram maximum showing about 80% to the right and no clipped highlights.

I shot raw files in FX (full frame), 1.2x, and DX (1.5x) crops in six configurations, as outlined below. I used Affinity Photo 1.9.2 to post-process each shot by developing with the default settings, converting to black and white using the default black-and-white adjustment layer, and using a levels adjustment layer to set the white level as in the table.

Barlow?  Reducer?  T-adapter  ISO  Shutter  White level
No       No        Universal  100  1/2.5    83%
No       No        SC         100  1/2.5    85%
No       Yes       Universal  100  1/5      81%
No       Yes       SC         100  1/5      84%
Yes      No        Universal  100  2”       74%
Yes      Yes       Universal  100  1.3”     74%
  • Reducer - Celestron #94175 0.63x reducer/corrector
  • Barlow - Meade Series 4000 #126 short-focus Barlow
  • Universal T-adapter - Celestron #93625 1.25” universal T-adapter in a Celestron #93653-A 1-1/4 inch visual back
  • SC T-adapter - Celestron #93633-A T-adapter for Schmidt-Cassegrain telescopes

I used a Celestron #93402 T-ring for Nikon to attach each T-adapter to the camera.

Results

The universal T adapter vignettes significantly in full frame. It’s simply not wide enough to pass all the light gathered by the telescope. However, it’s incredibly convenient for rapid switching between visual accessories and an imaging stack for both terrestrial and casual sky photography. Unlike the screw-on T-adapter, it can also hold a Barlow lens. Its vignetting performance with a Barlow installed is far more reasonable.

A 2-inch visual back and a 2-inch camera nosepiece are in my future. These tools will offer convenient accessory switching, better light passage, and improved collimation and stability, albeit at the cost of increased weight.

The reducer makes the scope 1 1/3 stops faster, allowing for shorter exposure times, but it does not appreciably widen the maximum usable field of view. In fact, it becomes necessary to shoot in DX mode because the image edges are totally black. This relationship is not surprising; the edges are nearly black in the full frame shot without the reducer, so no more light is available at the edges of the frame. The reducer then increases the maximum field of view past the black edges by a factor of 1/0.63, which is very nearly the reciprocal of a 1.5x DX crop.
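The arithmetic in this paragraph can be checked directly; the figures below assume the stock f/10 C5 and the 0.63x reducer:

```python
import math

FOCAL_RATIO = 10.0   # Celestron C5 native focal ratio (f/10)
REDUCER = 0.63       # Celestron #94175 reducer/corrector

reduced_ratio = FOCAL_RATIO * REDUCER                      # f/6.3
stops_faster = 2 * math.log2(FOCAL_RATIO / reduced_ratio)  # speed gain in stops
fov_gain = 1 / REDUCER                                     # linear field-of-view gain

# stops_faster comes out to about 1.33, the "1 1/3 stops" above;
# fov_gain is about 1.59, very nearly the 1.5x DX crop factor.
```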

A table of vignettes is presented below.

Barlow?  Reducer?  T-adapter  FX        1.2x      DX
No       No        Universal  vignette  vignette  vignette
No       No        SC         vignette  vignette  vignette
No       Yes       Universal  vignette  vignette  vignette
No       Yes       SC         vignette  vignette  vignette
Yes      No        Universal  vignette  vignette  vignette
Yes      Yes       Universal  vignette  vignette  vignette