GPUOrigin/Extent: remove dictionary overload from the union#1081
kainino0x wants to merge 1 commit into gpuweb:main
Conversation
This overload adds complexity without providing much benefit of clarity. The semantics of the values in an array are clear according to context:

- 1d: [width]
- 1d-array: [width, arraysize]
- 2d: [width, height]
- 2d-array/cube/cube-array: [width, height, arraysize]
- 3d: [width, height, depth]

In order to implement this, this redefines GPUOrigin2D, GPUOrigin3D, and GPUExtent3D from scratch, as if they were specified in the WebIDL spec. These essentially implement a specific case of tuples with default values, which could be expressed in pseudo-WebIDL as:

```
typedef [GPUIntegerCoordinate = 0, GPUIntegerCoordinate = 0] GPUOrigin2D;
typedef [GPUIntegerCoordinate = 0, GPUIntegerCoordinate = 0, GPUIntegerCoordinate = 0] GPUOrigin3D;
typedef [GPUIntegerCoordinate, GPUIntegerCoordinate, GPUIntegerCoordinate] GPUExtent3D;
```

I previously proposed this in gpuweb#299. After discussion, we compromised on this union, and it was implemented in gpuweb#319. However, there were a number of voices who didn't like the added complexity (and the perceived loss of type expressivity), and since then I've come to agree that the unioned version is not the best choice.
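The tuple-with-defaults semantics described by the pseudo-WebIDL above could be modeled roughly as follows. This is a sketch, not spec text: the function names are illustrative, and real WebIDL conversion would also do integer coercion. GPUOrigin2D/3D default missing coordinates to 0, while GPUExtent3D has no defaults, so all three elements are required.

```typescript
// Sketch of the proposed tuple-with-defaults semantics (names illustrative).

// GPUOrigin3D: up to 3 elements, missing coordinates default to 0.
function toOrigin3D(seq: number[]): [number, number, number] {
  if (seq.length > 3) throw new TypeError("too many elements for GPUOrigin3D");
  const [x = 0, y = 0, z = 0] = seq;
  return [x, y, z];
}

// GPUExtent3D: no defaults in the pseudo-WebIDL, so exactly 3 elements.
function toExtent3D(seq: number[]): [number, number, number] {
  if (seq.length !== 3) throw new TypeError("GPUExtent3D requires exactly 3 elements");
  return [seq[0], seq[1], seq[2]];
}
```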
```
But if we did it explicitly (e.g. with a "typecheck" algorithm), it would infect
every dictionary that contains one of these members, and every method that takes
any of those dictionaries or this type.
-->
```
As noted in this comment, an alternative to custom-defining these types would be to define them as `sequence<GPUIntegerCoordinate>`, create "typecheck" algorithms, and call those in all the places these types are used, recursively.
This wouldn't actually be that terrible; I'm already expecting we'll write something like this for GPUBindGroupLayoutEntry, to write in all the "default" values that are currently handwavily defined.
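A minimal sketch of that "typecheck" alternative: the types stay plain `sequence<GPUIntegerCoordinate>` in WebIDL, and each dictionary or method that uses them invokes the typecheck recursively. The dictionary name below is hypothetical, not an actual spec name.

```typescript
// Sketch of the "typecheck algorithm" alternative (names are hypothetical).

// Stand-in for a dictionary that contains an origin member.
interface CopyDestSketch { origin?: number[]; }

// Typecheck for a sequence used as a GPUOrigin3D: at most 3 elements.
function typecheckOrigin3D(seq: number[]): void {
  if (seq.length > 3) throw new TypeError("origin has too many elements");
}

// A dictionary containing such a member recursively typechecks it,
// which is the "infects every dictionary" cost noted in the spec comment.
function typecheckCopyDest(dest: CopyDestSketch): void {
  if (dest.origin !== undefined) typecheckOrigin3D(dest.origin);
}
```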
As I explained, I think there are relatively obvious choices for the orders of all these arrays, and I think the ergonomics are much better with arrays. It's different from color, where the order can be ambiguous in some situations. However, your comment reminded me that I should make good on the idea of actually allowing [w] and [w, h] for 1d/2d extents. Right now I've carried over the old constraints, which still require 3 elements in all cases, making it no clearer than before.
Did we agree that

Looking at the diff, it seems that switching to using only a sequence doesn't make things clearer. How about keeping the dictionaries for these concepts and using them internally in the spec, but defining the conversions from sequences of numbers to them? Then make the conversion algorithm not require sequences of 3, and instead complete sequences that are too small with 1s (for
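The suggested conversion could be sketched like this: a short sequence is padded with 1s and converted to the internal dictionary form used by the spec. The names are illustrative (the actual spec dictionary is GPUExtent3DDict-like, but this is not spec text).

```typescript
// Sketch of the suggested sequence-to-dictionary conversion (names illustrative).
interface Extent3DDictSketch { width: number; height: number; depth: number; }

function toExtent3DDict(seq: number[]): Extent3DDictSketch {
  if (seq.length < 1 || seq.length > 3) throw new TypeError("invalid GPUExtent3D");
  // Sequences that are too small are completed with 1s.
  const [width, height = 1, depth = 1] = seq;
  return { width, height, depth };
}
```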
kvark left a comment
spec-wise, the indices seem to be less clear than "width", "height", etc
I guess the alternative is to rename `depth` to `depthOrLayerCount`?
```
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin3DDict) GPUOrigin3D;
</script>

## GPUOrigin2D ## {#gpuorigin2d}
```
do we still need separate GPUOrigin2D and GPUOrigin3D types, now that everything is a sequence?
They're more than just sequences; they're sequences of exactly 2 elements and 3 elements respectively. They would only be the same if we moved validation out of the types and into the methods.
Wait, isn't GPUOrigin3D, for example, subject to the same confusion as the extent? I.e., in the dictionary it's not clear whether z is the depth or the array layer offset.
Sort of yes, except calling the array layer count "depth" is kind of a contradiction, whereas calling it [2] is neutral.
We don't actually have 1d-array, so it doesn't matter right now. But the obvious thing would be to require the array to be exactly length 2 and have it be [width, arraySize]. Anyway, I need to make revisions to this.
I think we can solve this at the spec level. I don't think we should lose ergonomics just because we're having trouble writing the spec for it. Still thinking about how to do this.
I would kind of like to allow only arrays of size 1, 2, or 3 according to the context (i.e. the dimensionality of the relevant GPUTexture(s)). It requires implementations to track dimensionality content-side, but maybe it's worth it?
I think forcing arrays of numbers is less clear even for developers, at least in some contexts. Can't we just keep both as we do right now, but allow arrays of 1 or 2 elements and add defaulting to 1 or 0 depending on the type? (There's been no answer to my previous comment.)
This is essentially what we have now with
As long as the length of the array is strict, it doesn't seem unclear to me. But there's perhaps some ergonomics lost in being strict to begin with. The rules I had in mind (I spent a while yesterday trying to figure out how this would work in the spec, but didn't succeed yet):
This means that every time we consume one of these values, we need a context for what the GPUTextureDimension is. We have this already, except implementations have to track it in the content process. What I haven't figured out is whether this is impractically strict.
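The strict, dimension-dependent rule under discussion could be sketched as below. The length mapping follows the per-dimension list from the PR description; the function names are illustrative, and the `isArray` flag stands in for the dimensionality context the implementation would have to track content-side.

```typescript
// Sketch of dimension-dependent strict extent lengths (names illustrative).
type Dim = "1d" | "2d" | "3d";

// [w]; [w, arraySize]; [w, h]; [w, h, arraySize]; [w, h, d]
function requiredExtentLength(dimension: Dim, isArray: boolean): number {
  switch (dimension) {
    case "1d": return isArray ? 2 : 1;
    case "2d": return isArray ? 3 : 2;
    case "3d": return 3;
  }
}

// Strict validation: the array length must match the texture's dimensionality.
function validateExtent(extent: number[], dimension: Dim, isArray: boolean): void {
  const required = requiredExtentLength(dimension, isArray);
  if (extent.length !== required)
    throw new TypeError(`extent must have exactly ${required} elements`);
}
```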
For a

Basically all textures are 3D, except that you can specify how they are stored to optimize for a specific dimensionality. Copy operations then always act on conceptually 3D textures.
Yes, the strict validation of the number of arguments breaks the mental model of all textures being embedded in a 3D space: now you have to know that specifying [w, h] is the same as [w, h, 1] for 2D array textures but invalid for 3D, things like that. If anything I think
Thinking more about it, I like
Note to self: realized this doesn't matter that much, because we already require the format to be tracked for writeTexture (and copyImageBitmapToTexture, though that one could be changed to not need it).
After trying to come up with a good design with arrays, I may be leaning in this direction as well. I have to continue thinking about it. I'm also considering replacing the unions with helper functions, like
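The helper-function idea could look something like the sketch below: instead of accepting a union type, a small constructor builds the full dictionary form with defaults. The actual helper name suggested in the thread is not preserved in this copy, so the name here is purely hypothetical.

```typescript
// Hypothetical helper replacing the union: builds the full extent dictionary,
// defaulting unspecified dimensions to 1. Name is illustrative only.
function extent3D(width: number, height: number = 1, depth: number = 1) {
  return { width, height, depth };
}

// Hypothetical call site, e.g.: size: extent3D(256, 256)
const size = extent3D(256, 256);
```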
I feel that we're trying to solve an almost non-existent naming issue by piling on a ton more complexity with implicit conversions, helper functions, and union names (in the form "this OR that").