
Add _DEFAULT_VALUES to CustomCluster#1777

Draft
TheJulianJES wants to merge 3 commits intozigpy:devfrom
TheJulianJES:tjj/custom_cluster_default_values

Conversation

Contributor

@TheJulianJES commented Feb 20, 2026

DRAFT: Tests not committed yet + questions below (sorry for the wall of text again)

Proposed change

This adds _DEFAULT_VALUES to CustomCluster, essentially implementing this zha-quirks PR directly in zigpy.

This allows quirks to provide default values to be used by ZHA (i.e. using Cluster.get) when the device hasn't filled the attribute cache with a real value yet.
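The intended lookup order can be sketched roughly like this. This is a minimal stand-in, not zigpy's actual Cluster implementation; the class names and cache attribute are illustrative only:

```python
# Hedged sketch of the "default values" fallback in `get`: a real cached
# value always wins, and the quirk-defined default only fills the gap
# until the device reports/reads a real value.

class ClusterSketch:
    _DEFAULT_VALUES: dict = {}  # attr id -> default value, set by a quirk

    def __init__(self):
        self._attr_cache = {}  # filled by real reads/reports from the device

    def get(self, attr_id, default=None):
        # Prefer a real cached value; fall back to the quirk-defined default.
        if attr_id in self._attr_cache:
            return self._attr_cache[attr_id]
        if attr_id in self._DEFAULT_VALUES:
            return self._DEFAULT_VALUES[attr_id]
        return default


class QuirkCluster(ClusterSketch):
    _DEFAULT_VALUES = {0x0000: 42}  # hypothetical attribute id and value


cluster = QuirkCluster()
print(cluster.get(0x0000))  # -> 42 (cache is empty, default applies)
cluster._attr_cache[0x0000] = 7
print(cluster.get(0x0000))  # -> 7 (real value wins over the default)
```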

Additional information

read_attributes_raw override

The implementation for CustomCluster.get is fine, but read_attributes_raw is a bit more complicated, as we only want to use the defaults if the device doesn't support the attribute.
Now, I'm not sure whether we actually need this. AFAIK, we don't really use read_attributes output directly anywhere, but this implementation would be consistent with _CONSTANT_ATTRIBUTES.

I wonder if we could keep that read_attributes_raw implementation in CustomCluster only for _CONSTANT_ATTRIBUTES, and essentially revert this PR's read_attributes_raw changes for _DEFAULT_VALUES. Thoughts?

Or, should read_attributes populate the attribute cache normally, but instead of directly returning the results from the device read, we return them from the attribute cache via Cluster.get, so we can simplify some of the read attributes logic?
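To make the "only if the device doesn't support the attribute" condition concrete, here is a hedged sketch of that behavior. The read-result shape and the unsupported_attributes set are simplified stand-ins for zigpy's real structures:

```python
# Hedged sketch: defaults are substituted *only* for attributes the device
# has marked unsupported, never to mask a failed read of a supported one.

class ReadSketch:
    _DEFAULT_VALUES = {0x0001: "fallback"}  # hypothetical quirk defaults

    def __init__(self, device_values, unsupported):
        self._device_values = device_values        # what the device would return
        self.unsupported_attributes = set(unsupported)

    def _device_read(self, attr_ids):
        # Stand-in for the actual over-the-air read.
        return {a: self._device_values[a] for a in attr_ids if a in self._device_values}

    def read_attributes_raw(self, attr_ids):
        # Only ask the device for attributes it actually supports.
        to_read = [a for a in attr_ids if a not in self.unsupported_attributes]
        results = self._device_read(to_read)
        # Fill in quirk defaults only for the unsupported attributes.
        for a in attr_ids:
            if a in self.unsupported_attributes and a in self._DEFAULT_VALUES:
                results[a] = self._DEFAULT_VALUES[a]
        return results


cluster = ReadSketch(device_values={0x0000: 10}, unsupported=[0x0001])
print(cluster.read_attributes_raw([0x0000, 0x0001]))
# -> {0: 10, 1: 'fallback'}
```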

Attribute values leaking into zigpy DB

Also, when read_attributes reads attributes that are either in _CONSTANT_ATTRIBUTES or _DEFAULT_VALUES, those quirks-defined values will also leak into the zigpy DB. Ideally, this would not be the case at all. With how we override read_attributes_raw, this really isn't possible to do though.

But implementing support for this in read_attributes (or overriding read_attributes) also doesn't sound that great?
Maybe there could be a separate read_update_attribute method it calls per attribute, which we can override in CustomCluster to ignore updating attribute cache for reads from constant attributes (or possibly even for default values)?
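The hypothetical read_update_attribute hook floated above could look something like this. Everything here is speculative API design, not existing zigpy code:

```python
# Hedged sketch: read_attributes funnels each read result through one
# overridable hook, so CustomCluster can skip persisting quirk-defined
# values into the attribute cache (and thus the zigpy DB).

class BaseClusterSketch:
    _CONSTANT_ATTRIBUTES: dict = {}

    def __init__(self):
        self._attr_cache = {}  # stand-in for the cache backing the zigpy DB

    def read_update_attribute(self, attr_id, value):
        # Default behavior: every read result lands in the cache/DB.
        self._attr_cache[attr_id] = value


class CustomClusterSketch(BaseClusterSketch):
    _CONSTANT_ATTRIBUTES = {0x0002: "constant"}  # hypothetical constant

    def read_update_attribute(self, attr_id, value):
        # Don't leak quirk-defined constants back into the zigpy DB.
        if attr_id in self._CONSTANT_ATTRIBUTES:
            return
        super().read_update_attribute(attr_id, value)


cluster = CustomClusterSketch()
cluster.read_update_attribute(0x0002, "constant")  # ignored, not cached
cluster.read_update_attribute(0x0003, 5)           # cached normally
print(cluster._attr_cache)  # -> {3: 5}
```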

Future PRs

In a future PR, I likely also want to implement _UNSUPPORTED_ATTRIBUTES, as we have some quirks that currently call self.add_unsupported_attribute(attr.id) in Cluster.__init__.

In another PR, we should allow _CONSTANT_ATTRIBUTES and _DEFAULT_VALUES to also accept ZCLAttributeDef (and attribute names?) directly. For now, this isn't really needed, and accepting only attribute IDs is consistent with the existing _CONSTANT_ATTRIBUTES implementation.
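Accepting mixed key types would mostly be a normalization step at class-definition time. A hedged sketch, with an illustrative AttrDef stand-in rather than zigpy's real ZCLAttributeDef:

```python
# Hedged sketch: normalize _DEFAULT_VALUES / _CONSTANT_ATTRIBUTES keys so
# quirks could use ids, names, or attribute-def objects interchangeably.

from dataclasses import dataclass


@dataclass(frozen=True)
class AttrDef:
    id: int
    name: str


def normalize_keys(values, attrs_by_name):
    """Map every key (AttrDef, name, or id) down to a plain attribute id."""
    out = {}
    for key, value in values.items():
        if isinstance(key, AttrDef):
            out[key.id] = value
        elif isinstance(key, str):
            out[attrs_by_name[key].id] = value
        else:
            out[key] = value  # already a numeric id
    return out


mode = AttrDef(id=0x0010, name="mode")
print(normalize_keys({mode: 1, "mode": 1, 0x0011: 2}, {"mode": mode}))
# -> {16: 1, 17: 2}
```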

Side note: LocalDataCluster

The LocalDataCluster is also a bit weird in general, but maybe not for the reasons you think. It's crucial for complicated quirks and actually works really nicely. Obviously, it would be nice to have a better API that's separate from ZCL, but it does allow us to point quirks v2 entities at such a LocalDataCluster. When we have a better API – that could actually be really similar to Cluster/LocalDataCluster – we can just swap out the cluster logic then and keep the same quirks v2 entities.

Now, it's a bit weird in that the LocalDataCluster still allows attribute reports from the device to come through if you use a cluster_id that's already present on the device. Unfortunately, I believe some quirks rely on this behavior now...?

The LocalDataCluster also needs to re-implement read_attributes_raw somewhat to fully prevent device reads and only use _CONSTANT_ATTRIBUTES (and _DEFAULT_VALUES). I almost wonder if it makes sense to add constants to CustomCluster in a similar style, like _PREVENT_READS (and possibly _PREVENT_WRITES, ...)?
I'm not sure, but it would mean repeating less of that logic in LocalDataCluster. For now, I don't think it's a big deal, since the logic only lives in two places and we can easily rework it in the future.
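The hypothetical _PREVENT_READS idea could be sketched like this; the flag, class names, and result shape are all assumptions, not existing zigpy API:

```python
# Hedged sketch: a _PREVENT_READS class flag that makes read_attributes_raw
# answer purely from quirk-defined values, never touching the device.

class GuardedClusterSketch:
    _PREVENT_READS = False  # hypothetical flag, as floated above
    _CONSTANT_ATTRIBUTES: dict = {}
    _DEFAULT_VALUES: dict = {}

    def read_attributes_raw(self, attr_ids):
        if self._PREVENT_READS:
            # Never touch the device; answer from constants, then defaults.
            return {
                a: self._CONSTANT_ATTRIBUTES.get(a, self._DEFAULT_VALUES.get(a))
                for a in attr_ids
            }
        return self._device_read(attr_ids)

    def _device_read(self, attr_ids):
        # Stand-in for the real over-the-air read.
        raise RuntimeError("device read attempted")


class LocalSketch(GuardedClusterSketch):
    # A LocalDataCluster-style subclass would just flip the flag.
    _PREVENT_READS = True
    _CONSTANT_ATTRIBUTES = {0x0000: 1}
    _DEFAULT_VALUES = {0x0001: 2}


print(LocalSketch().read_attributes_raw([0x0000, 0x0001]))  # -> {0: 1, 1: 2}
```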

Unintended (breaking) zigpy change?

Also, zigpy was recently changed to drop _update_attribute calls in write_attributes (previous, now). Luckily, we've not noticed any quirks breaking, so far(?)
However, the LocalDataCluster still calls _update_attribute in its write_attributes here. I guess removing that could actually break some quirks? I'm not sure if any quirks really rely on that behavior though, or if they use/override write_attributes. At least Tuya quirks seem to use write_attributes for the (fake) ZCL -> Tuya DP conversion.
We should look at this in the future.
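For clarity, the behavioral difference described above amounts to this. A hedged sketch with illustrative class names, not the real cluster implementations:

```python
# Hedged sketch: plain clusters no longer echo written values into the
# attribute cache on write_attributes, while LocalDataCluster-style
# clusters still do (which some quirks may depend on).

class PlainWriteSketch:
    def __init__(self):
        self._attr_cache = {}

    def write_attributes(self, attrs):
        # Value goes to the device; the cache is only updated later,
        # by an attribute report or a read.
        pass


class LocalWriteSketch(PlainWriteSketch):
    def write_attributes(self, attrs):
        # Echo writes straight into the cache, as LocalDataCluster does today.
        self._attr_cache.update(attrs)


plain = PlainWriteSketch()
plain.write_attributes({0x0001: 5})
print(plain._attr_cache)  # -> {}

local = LocalWriteSketch()
local.write_attributes({0x0001: 5})
print(local._attr_cache)  # -> {1: 5}
```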

@codecov

codecov bot commented Feb 21, 2026

Codecov Report

❌ Patch coverage is 75.00000% with 4 lines in your changes missing coverage. Please review.
✅ Project coverage is 99.49%. Comparing base (70d15a5) to head (3f97acb).

Files with missing lines Patch % Lines
zigpy/quirks/__init__.py 75.00% 4 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##              dev    #1777      +/-   ##
==========================================
- Coverage   99.52%   99.49%   -0.04%     
==========================================
  Files          63       63              
  Lines       12715    12727      +12     
==========================================
+ Hits        12655    12663       +8     
- Misses         60       64       +4     


@TheJulianJES
Contributor Author

For reference, I've merged the quirks-only LocalDataCluster implementation for now, as that was "easier":

The quirks are all migrated as well.
