MINOR: Standardize KRaft logging, thread names, and terminology#13390

Merged
cmccabe merged 2 commits into apache:trunk from cmccabe:cmccabe_2023-03-13_rename on Mar 16, 2023
Conversation


@cmccabe cmccabe commented Mar 13, 2023

Standardize KRaft thread names.

  • Always use kebab case. That is, "my-thread-name".

  • Thread prefixes are just strings, not Option[String] or Optional.
    If you don't want a prefix, use the empty string.

  • Thread prefixes end in a dash (except the empty prefix). Then you can
    calculate thread names as $prefix + "my-thread-name"

  • Broker-only components get "broker-$id-" as a thread name prefix. For example, "broker-1-"

  • Controller-only components get "controller-$id-" as a thread name prefix. For example, "controller-1-"

  • Shared components get "kafka-$id-" as a thread name prefix. For example, "kafka-0-"

  • Always pass a prefix to KafkaEventQueue, so that threads have names like
    "broker-0-metadata-loader-event-handler" rather than "event-handler". Prior to this PR, we had
    several threads just named "EventHandler" which was not helpful for debugging.

  • QuorumController thread name is "quorum-controller-123-event-handler"

  • Don't set a thread prefix for replication threads started by ReplicaManager. They run only on the
    broker, and already include the broker ID.
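The prefix convention above can be sketched roughly as follows. This is a hypothetical illustration of the naming scheme, not Kafka's actual code; the helper names are invented:

```java
// Hypothetical sketch of the KRaft thread-name convention described above.
// The class and method names here are invented for illustration only.
public class KRaftThreadNames {
    // Broker-only components get "broker-$id-", controller-only components
    // get "controller-$id-", and shared components get "kafka-$id-".
    // Prefixes always end in a dash (except the empty prefix).
    static String prefix(String role, int nodeId) {
        return role + "-" + nodeId + "-";
    }

    // Because prefixes end in a dash, a thread name is just prefix + base name.
    static String threadName(String prefix, String baseName) {
        return prefix + baseName;
    }

    public static void main(String[] args) {
        // Prints "broker-0-metadata-loader-event-handler"
        System.out.println(threadName(prefix("broker", 0), "metadata-loader-event-handler"));
        // The empty string serves as the "no prefix" case.
        System.out.println(threadName("", "my-thread-name"));
    }
}
```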

Standardize KRaft slf4j log prefixes.

  • Names should be of the form "[ComponentName id=$id] ". So for a ControllerServer with ID 123, we
    will have "[ControllerServer id=123] "

  • For the QuorumController class, use the prefix "[QuorumController id=$id] " rather than
    "[Controller $nodeId] ", to make it clearer that this is a KRaft controller.

  • In BrokerLifecycleManager, add isZkBroker=true to the log prefix for the migration case.
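The "[ComponentName id=$id] " log-prefix format can be sketched as below. Again a hypothetical illustration, not the real Kafka implementation:

```java
// Hypothetical sketch of the "[ComponentName id=$id] " slf4j log-prefix
// convention described above. The helper name is invented for illustration.
public class KRaftLogPrefix {
    // Note the trailing space, so the prefix can be directly prepended
    // to a log message.
    static String logPrefix(String component, int id) {
        return "[" + component + " id=" + id + "] ";
    }

    public static void main(String[] args) {
        // Prints "[ControllerServer id=123] Starting up."
        System.out.println(logPrefix("ControllerServer", 123) + "Starting up.");
    }
}
```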

Standardize KRaft terminology.

  • All synonyms of combined mode (colocated, coresident, etc.) should be replaced by "combined"

  • All synonyms of isolated mode (remote, non-colocated, distributed, etc.) should be replaced by
    "isolated".

@cmccabe cmccabe force-pushed the cmccabe_2023-03-13_rename branch from 9f8b554 to 8eec949 on March 13, 2023 23:46
@mumrah mumrah self-requested a review March 14, 2023 14:25
@ijuma ijuma (Member) commented Mar 14, 2023
We typically separate the node id via a dash. Have you tried to be consistent with what we do outside of kraft? That would help when debugging the system.

Member commented:

I fired up this branch and yea BrokerLifecycleManager1EventHandler is not pretty :)

I'm not sure we need node ID in the thread name, since we will presumably know which node a thread dump came from. It might actually be confusing, since the convention we have is {thread name}-{thread number in pool}, like "metrics-meter-tick-thread-1", "metrics-meter-tick-thread-2", "data-plane-kafka-request-handler-0", "data-plane-kafka-request-handler-1", etc. What about something like broker-lifecycle-manager-event-handler or BrokerLifecycleManager-EventHandler?

I also noticed somewhere we are not prefixing the event queue thread name, there is just an EventHandler thread.

@ijuma ijuma (Member) commented Mar 14, 2023

Check the replica fetcher threads for an example where we do include source id in the thread name:

val threadName = s"${prefix}ReplicaFetcherThread-$fetcherId-${sourceBroker.id}-${fetcherPool.name}"

Member commented:

What is name here and where is it supplied?


Member commented:

I think the idea of the LogContext stuff was to use key/value pairs so it's easy to enrich the log context. cc @jason for additional thoughts.

Contributor Author commented:

Yes, this use-case is a bit of a hack. (A hack that this PR didn't add!)

Basically we're forced to use the old Scala Logging trait unless we want to rewrite all the BrokerServer stuff. But we want a nice-looking log prefix.


cmccabe commented Mar 14, 2023

cc @ijuma @mumrah @hachikuji

So about the thread names thing. I’m open to changing the thread names to “kebab case” (i.e. my-thread-name)

I do think in the context of JUnit we definitely need to have broker-0-my-thread-name and controller-0-my-thread-name. I find myself looking at JUnit backtraces all too often, and having 6 different threads all named my-thread-name just doesn't work for me.

So then the big question becomes whether we would want the prefixes in prod or not. The “pro” case is that it simplifies the code to just unconditionally do that, and avoids cases where someone accidentally forgets to set the prefix. The “con” case is that we should know what node we’re on, so the information is redundant.

Although I’ve seen people do weird things like combine several process backtraces into one file or send ZK and Kafka logs all to the same file. So I don’t truly believe the “we’ll never need it” case. Maybe “we rarely need it” or “we won’t need it if people are reasonable”.

@mumrah mumrah added the kraft label Mar 14, 2023

mumrah commented Mar 14, 2023

@cmccabe, I see your point about the node ID when debugging tests -- it can be annoying to not know which broker instance a thread belongs to. Your kebab-case examples look good to me 👍

Did you find where the lone EventHandler is coming from?

@cmccabe cmccabe force-pushed the cmccabe_2023-03-13_rename branch 2 times, most recently from 4194393 to 88d25c8 on March 15, 2023 21:48
@mumrah mumrah assigned mumrah, abbccdda and cmccabe and unassigned cmccabe and abbccdda Mar 16, 2023
@cmccabe cmccabe force-pushed the cmccabe_2023-03-13_rename branch from 88d25c8 to 9978458 on March 16, 2023 21:00

mumrah commented Mar 16, 2023

With the latest code, I got the following threads in KRaft mode:

"metrics-meter-tick-thread-1" #19 daemon prio=5 os_prio=31 cpu=0.57ms elapsed=9.63s tid=0x000000012608ce00 nid=0x8403 waiting on condition  [0x00000001737aa000]
"metrics-meter-tick-thread-2" #24 daemon prio=5 os_prio=31 cpu=0.80ms elapsed=9.61s tid=0x0000000157c1c400 nid=0x8203 waiting on condition  [0x00000001739b6000]
"kafka-1-raft-scheduler0" #25 daemon prio=5 os_prio=31 cpu=0.07ms elapsed=9.54s tid=0x00000001576dfe00 nid=0x8003 waiting on condition  [0x0000000173bc2000]
"raft-expiration-reaper" #27 prio=5 os_prio=31 cpu=4.06ms elapsed=9.52s tid=0x00000001576e9600 nid=0x7f03 waiting on condition  [0x0000000173dce000]
"kafka-1-raft-outbound-request-thread" #26 prio=5 os_prio=31 cpu=3.38ms elapsed=9.42s tid=0x0000000157bfc400 nid=0x7e03 runnable  [0x0000000173fda000]
"kafka-1-raft-io-thread" #28 prio=5 os_prio=31 cpu=86.24ms elapsed=9.42s tid=0x00000001576e8c00 nid=0x7d03 waiting on condition  [0x00000001741e6000]
"kafka-1-metadata-loaderevent-handler" #29 prio=5 os_prio=31 cpu=62.78ms elapsed=9.42s tid=0x0000000156410800 nid=0x7b03 waiting on condition  [0x00000001743f2000]
"kafka-1-snapshot-generator-event-handler" #30 prio=5 os_prio=31 cpu=0.09ms elapsed=9.42s tid=0x000000012687f400 nid=0x6c03 waiting on condition  [0x00000001745fe000]
"quorum-controller-1-event-handler" #31 prio=5 os_prio=31 cpu=44.08ms elapsed=9.41s tid=0x0000000157c25600 nid=0x7a03 waiting on condition  [0x000000017480a000]
"controller-1-ThrottledChannelReaper-Fetch" #32 prio=5 os_prio=31 cpu=0.70ms elapsed=9.40s tid=0x0000000157c24a00 nid=0x6e03 waiting on condition  [0x0000000174a16000]
"controller-1-ThrottledChannelReaper-Produce" #33 prio=5 os_prio=31 cpu=1.02ms elapsed=9.40s tid=0x0000000157c25000 nid=0x6f03 waiting on condition  [0x0000000174c22000]
"controller-1-ThrottledChannelReaper-Request" #34 prio=5 os_prio=31 cpu=0.42ms elapsed=9.40s tid=0x00000001576df800 nid=0x7003 waiting on condition  [0x0000000174e2e000]
"controller-1-ThrottledChannelReaper-ControllerMutation" #35 prio=5 os_prio=31 cpu=0.56ms elapsed=9.40s tid=0x000000015770aa00 nid=0x7603 waiting on condition  [0x000000017503a000]
"ExpirationReaper-1-AlterAcls" #36 prio=5 os_prio=31 cpu=3.64ms elapsed=9.39s tid=0x0000000157759000 nid=0x7503 waiting on condition  [0x0000000175246000]
"data-plane-kafka-request-handler-0" #37 daemon prio=5 os_prio=31 cpu=4.57ms elapsed=9.39s tid=0x0000000157757200 nid=0x7303 waiting on condition  [0x0000000175452000]
"data-plane-kafka-request-handler-1" #38 daemon prio=5 os_prio=31 cpu=3.38ms elapsed=9.39s tid=0x000000015775b800 nid=0xab03 waiting on condition  [0x000000017565e000]
"data-plane-kafka-request-handler-2" #39 daemon prio=5 os_prio=31 cpu=3.45ms elapsed=9.39s tid=0x000000015775be00 nid=0x15303 waiting on condition  [0x000000017586a000]
"data-plane-kafka-request-handler-3" #40 daemon prio=5 os_prio=31 cpu=2.33ms elapsed=9.39s tid=0x0000000157757800 nid=0x15203 waiting on condition  [0x0000000175a76000]
"data-plane-kafka-request-handler-4" #41 daemon prio=5 os_prio=31 cpu=2.46ms elapsed=9.39s tid=0x0000000157757e00 nid=0xad03 waiting on condition  [0x0000000175c82000]
"data-plane-kafka-request-handler-5" #42 daemon prio=5 os_prio=31 cpu=2.01ms elapsed=9.39s tid=0x0000000157758400 nid=0xae03 waiting on condition  [0x0000000175e8e000]
"data-plane-kafka-request-handler-6" #43 daemon prio=5 os_prio=31 cpu=2.24ms elapsed=9.39s tid=0x0000000157754400 nid=0x14f03 waiting on condition  [0x000000017609a000]
"data-plane-kafka-request-handler-7" #44 daemon prio=5 os_prio=31 cpu=2.89ms elapsed=9.39s tid=0x0000000157754a00 nid=0xb003 waiting on condition  [0x00000001762a6000]
"data-plane-kafka-network-thread-1-ListenerName(CONTROLLER)-PLAINTEXT-0" #21 prio=5 os_prio=31 cpu=16.88ms elapsed=9.38s tid=0x0000000157c24000 nid=0xb203 runnable  [0x00000001764b2000]
"data-plane-kafka-network-thread-1-ListenerName(CONTROLLER)-PLAINTEXT-1" #22 prio=5 os_prio=31 cpu=4.53ms elapsed=9.38s tid=0x0000000157c28600 nid=0xb303 runnable  [0x00000001766be000]
"data-plane-kafka-network-thread-1-ListenerName(CONTROLLER)-PLAINTEXT-2" #23 prio=5 os_prio=31 cpu=4.60ms elapsed=9.38s tid=0x0000000157c28c00 nid=0x14c03 runnable  [0x00000001768ca000]
"data-plane-kafka-socket-acceptor-ListenerName(CONTROLLER)-PLAINTEXT-9093" #20 prio=5 os_prio=31 cpu=5.88ms elapsed=9.38s tid=0x0000000157ce7a00 nid=0xb503 runnable  [0x0000000176ad6000]
"broker-1-lifecycle-managerevent-handler" #45 prio=5 os_prio=31 cpu=3.50ms elapsed=9.37s tid=0x0000000157759600 nid=0x14903 waiting on condition  [0x0000000176ce2000]
"broker-1-ThrottledChannelReaper-Fetch" #46 prio=5 os_prio=31 cpu=0.58ms elapsed=9.37s tid=0x0000000156408e00 nid=0xb603 waiting on condition  [0x0000000176eee000]
"broker-1-ThrottledChannelReaper-Produce" #47 prio=5 os_prio=31 cpu=0.39ms elapsed=9.37s tid=0x000000015640f800 nid=0xb703 waiting on condition  [0x00000001770fa000]
"broker-1-ThrottledChannelReaper-Request" #48 prio=5 os_prio=31 cpu=0.51ms elapsed=9.37s tid=0x000000012388a400 nid=0x14603 waiting on condition  [0x0000000177306000]
"broker-1-ThrottledChannelReaper-ControllerMutation" #49 prio=5 os_prio=31 cpu=0.36ms elapsed=9.37s tid=0x0000000126881600 nid=0xb903 waiting on condition  [0x0000000177512000]
"broker-1--to-controller-forwarding-channel-manager" #50 prio=5 os_prio=31 cpu=0.53ms elapsed=9.35s tid=0x0000000157d05600 nid=0x14303 runnable  [0x000000017771e000]
"broker-1--to-controller-alter-partition-channel-manager" #55 prio=5 os_prio=31 cpu=0.38ms elapsed=9.33s tid=0x000000012701be00 nid=0x14103 runnable  [0x000000017792a000]
"ExpirationReaper-1-Produce" #56 prio=5 os_prio=31 cpu=2.05ms elapsed=9.33s tid=0x000000012633ba00 nid=0x13f03 waiting on condition  [0x0000000177b36000]
"ExpirationReaper-1-Fetch" #57 prio=5 os_prio=31 cpu=1.60ms elapsed=9.33s tid=0x0000000156416400 nid=0xba03 waiting on condition  [0x0000000177d42000]
"ExpirationReaper-1-DeleteRecords" #58 prio=5 os_prio=31 cpu=2.39ms elapsed=9.33s tid=0x00000001577ffa00 nid=0xbc03 waiting on condition  [0x0000000177f4e000]
"ExpirationReaper-1-ElectLeader" #59 prio=5 os_prio=31 cpu=3.01ms elapsed=9.33s tid=0x0000000157756c00 nid=0xbd03 waiting on condition  [0x0000000290206000]
"ExpirationReaper-1-Heartbeat" #60 prio=5 os_prio=31 cpu=1.83ms elapsed=9.32s tid=0x0000000123889400 nid=0xbe03 waiting on condition  [0x0000000290412000]
"ExpirationReaper-1-Rebalance" #61 prio=5 os_prio=31 cpu=2.03ms elapsed=9.32s tid=0x0000000123889a00 nid=0xc003 waiting on condition  [0x000000029061e000]
"broker-1--to-controller-heartbeat-channel-manager" #63 prio=5 os_prio=31 cpu=27.97ms elapsed=9.30s tid=0x0000000126450600 nid=0xc203 runnable  [0x000000029082a000]
"ExpirationReaper-1-AlterAcls" #64 prio=5 os_prio=31 cpu=3.32ms elapsed=9.29s tid=0x00000001262dee00 nid=0x13b03 waiting on condition  [0x0000000290a36000]
"data-plane-kafka-request-handler-0" #65 daemon prio=5 os_prio=31 cpu=1.67ms elapsed=9.29s tid=0x0000000126467c00 nid=0xc403 waiting on condition  [0x0000000290c42000]
"data-plane-kafka-request-handler-1" #66 daemon prio=5 os_prio=31 cpu=1.05ms elapsed=9.29s tid=0x0000000126468200 nid=0xc503 waiting on condition  [0x0000000290e4e000]
"data-plane-kafka-request-handler-2" #67 daemon prio=5 os_prio=31 cpu=2.19ms elapsed=9.29s tid=0x0000000126468800 nid=0x13703 waiting on condition  [0x000000029105a000]
"data-plane-kafka-request-handler-3" #68 daemon prio=5 os_prio=31 cpu=1.28ms elapsed=9.29s tid=0x0000000157d05c00 nid=0xc603 waiting on condition  [0x0000000291266000]
"data-plane-kafka-request-handler-4" #69 daemon prio=5 os_prio=31 cpu=1.10ms elapsed=9.29s tid=0x0000000157db3400 nid=0xc803 waiting on condition  [0x0000000291472000]
"data-plane-kafka-request-handler-5" #70 daemon prio=5 os_prio=31 cpu=1.37ms elapsed=9.29s tid=0x0000000157db3a00 nid=0x13503 waiting on condition  [0x000000029167e000]
"data-plane-kafka-request-handler-6" #71 daemon prio=5 os_prio=31 cpu=0.85ms elapsed=9.29s tid=0x0000000157db4000 nid=0x13403 waiting on condition  [0x000000029188a000]
"data-plane-kafka-request-handler-7" #72 daemon prio=5 os_prio=31 cpu=1.05ms elapsed=9.29s tid=0x0000000126462200 nid=0xcb03 waiting on condition  [0x0000000291a96000]
"kafka-scheduler-0" #73 daemon prio=5 os_prio=31 cpu=0.45ms elapsed=9.23s tid=0x00000001238d8a00 nid=0xcd03 waiting on condition  [0x0000000291ca2000]
"kafka-scheduler-1" #74 daemon prio=5 os_prio=31 cpu=0.07ms elapsed=9.23s tid=0x00000001238d9000 nid=0xce03 waiting on condition  [0x0000000291eae000]
"kafka-scheduler-2" #75 daemon prio=5 os_prio=31 cpu=0.61ms elapsed=9.23s tid=0x00000001238d9600 nid=0xcf03 waiting on condition  [0x00000002920ba000]
"kafka-scheduler-3" #76 daemon prio=5 os_prio=31 cpu=0.07ms elapsed=9.23s tid=0x0000000126461400 nid=0x12f03 waiting on condition  [0x00000002922c6000]
"kafka-scheduler-4" #77 daemon prio=5 os_prio=31 cpu=0.06ms elapsed=9.23s tid=0x00000001262d0c00 nid=0x12d03 waiting on condition  [0x00000002924d2000]
"kafka-log-cleaner-thread-0" #78 prio=5 os_prio=31 cpu=3.47ms elapsed=9.21s tid=0x0000000126881c00 nid=0x12b03 waiting on condition  [0x00000002926de000]
"kafka-scheduler-5" #79 daemon prio=5 os_prio=31 cpu=3.99ms elapsed=9.21s tid=0x00000001565c9200 nid=0xd103 waiting on condition  [0x00000002928ea000]
"kafka-scheduler-6" #80 daemon prio=5 os_prio=31 cpu=0.06ms elapsed=9.21s tid=0x00000001565f9800 nid=0xd303 waiting on condition  [0x0000000292af6000]
"kafka-scheduler-7" #81 daemon prio=5 os_prio=31 cpu=0.05ms elapsed=9.21s tid=0x00000001262e9a00 nid=0xd503 waiting on condition  [0x0000000292d02000]
"LogDirFailureHandler" #82 prio=5 os_prio=31 cpu=0.10ms elapsed=9.21s tid=0x000000012701d200 nid=0x12a03 waiting on condition  [0x0000000292f0e000]
"kafka-scheduler-8" #83 daemon prio=5 os_prio=31 cpu=0.04ms elapsed=9.21s tid=0x00000001268ab400 nid=0x12803 waiting on condition  [0x000000029311a000]
"group-metadata-manager-0" #84 daemon prio=5 os_prio=31 cpu=1.99ms elapsed=9.21s tid=0x00000001565f9e00 nid=0xd803 waiting on condition  [0x0000000293326000]
"transaction-log-manager-0" #85 daemon prio=5 os_prio=31 cpu=0.06ms elapsed=9.21s tid=0x0000000126473000 nid=0x12703 waiting on condition  [0x0000000293532000]
"TxnMarkerSenderThread-1" #62 prio=5 os_prio=31 cpu=2.18ms elapsed=9.21s tid=0x0000000126473600 nid=0xdb03 runnable  [0x000000029373e000]
"kafka-scheduler-9" #86 daemon prio=5 os_prio=31 cpu=0.06ms elapsed=9.21s tid=0x00000001238f1800 nid=0xdc03 waiting on condition  [0x000000029394a000]
"data-plane-kafka-network-thread-1-ListenerName(PLAINTEXT)-PLAINTEXT-0" #52 prio=5 os_prio=31 cpu=3.74ms elapsed=9.16s tid=0x00000001238d4800 nid=0xdd03 runnable  [0x0000000293b56000]
"data-plane-kafka-network-thread-1-ListenerName(PLAINTEXT)-PLAINTEXT-1" #53 prio=5 os_prio=31 cpu=1.72ms elapsed=9.16s tid=0x0000000157db8a00 nid=0x12403 runnable  [0x0000000293d62000]
"data-plane-kafka-network-thread-1-ListenerName(PLAINTEXT)-PLAINTEXT-2" #54 prio=5 os_prio=31 cpu=3.71ms elapsed=9.16s tid=0x00000001565fa400 nid=0xdf03 runnable  [0x0000000293f6e000]
"data-plane-kafka-socket-acceptor-ListenerName(PLAINTEXT)-PLAINTEXT-9092"

@mumrah mumrah (Member) left a comment

LGTM

cmccabe added 2 commits March 16, 2023 15:07
@cmccabe cmccabe force-pushed the cmccabe_2023-03-13_rename branch from 9978458 to 229bce6 on March 16, 2023 22:32
@cmccabe cmccabe merged commit ddd652c into apache:trunk Mar 16, 2023
@cmccabe cmccabe deleted the cmccabe_2023-03-13_rename branch March 16, 2023 22:33

cmccabe commented Mar 16, 2023

Thanks, @mumrah. I did another quick sweep of the thread names, like you did, and deleted the extra dash in one case, and added a dash in another. There are still a few ugly and/or unprefixed names, but this is at least a good start. I also spot-checked a log file. I do think the new way is more readable.


cmccabe commented Mar 16, 2023

Did you find where the lone EventHandler is coming from?

Sorry, I meant to respond to this comment earlier. But for completeness, the lone EventHandler is gone now, last I checked. Might have been from MetadataLoader, but I fixed a few so I can't recall.
