
GPUOrigin/Extent: remove dictionary overload from the union #1081

Closed

kainino0x wants to merge 1 commit into gpuweb:main from kainino0x:gpuoriginextent-remove-dictionary

Conversation

@kainino0x
Contributor

@kainino0x kainino0x commented Sep 16, 2020

This overload adds complexity without providing much benefit in clarity.
The semantics of the values in an array are clear from context:

  • 1d: [width]
  • 1d-array: [width, arraysize]
  • 2d: [width, height]
  • 2d-array/cube/cube-array: [width, height, arraysize]
  • 3d: [width, height, depth]

To implement this, the change redefines GPUOrigin2D, GPUOrigin3D, and
GPUExtent3D from scratch, as if they were specified in the WebIDL spec.
These essentially implement a specific case of tuples with default
values, which could be expressed in pseudo-WebIDL as:

```
typedef [GPUIntegerCoordinate = 0, GPUIntegerCoordinate = 0] GPUOrigin2D;
typedef [GPUIntegerCoordinate = 0, GPUIntegerCoordinate = 0, GPUIntegerCoordinate = 0] GPUOrigin3D;
typedef [GPUIntegerCoordinate, GPUIntegerCoordinate, GPUIntegerCoordinate] GPUExtent3D;
```
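As a rough analogy (not part of the proposal, and WebIDL has no such construct), the same tuple-with-defaults shape can be written with TypeScript optional tuple elements:

```typescript
// Illustrative TypeScript analogy for the pseudo-WebIDL above.
// Optional trailing elements stand in for "default = 0"; GPUExtent3D
// keeps all three elements required, matching the typedef with no defaults.
type GPUIntegerCoordinate = number;
type GPUOrigin2D = [GPUIntegerCoordinate?, GPUIntegerCoordinate?];
type GPUOrigin3D = [GPUIntegerCoordinate?, GPUIntegerCoordinate?, GPUIntegerCoordinate?];
type GPUExtent3D = [GPUIntegerCoordinate, GPUIntegerCoordinate, GPUIntegerCoordinate];

const origin: GPUOrigin2D = [1];            // y omitted, conceptually 0
const extent: GPUExtent3D = [256, 256, 1];  // all three components required
```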

I previously proposed this in #299. After discussion, we compromised on
this union, and it was implemented in #319. However, a number of voices
didn't like the added complexity (and perceived loss of type
expressivity), and since then I've come to agree that the unioned
version is not the best choice.


@kainino0x kainino0x requested a review from kvark September 16, 2020 00:47
```
But if we did it explicitly (e.g. with a "typecheck" algorithm), it would infect
every dictionary that contains one of these members, and every method that takes
any of those dictionaries or this type.
-->
```
kainino0x (Contributor Author)

As noted in this comment, an alternative to custom-defining these types would be to define them as sequence<GPUIntegerCoordinate>, create "typecheck" algorithms, and call those in all the places these types are used, recursively.

This wouldn't actually be that terrible; I'm already expecting we'll write something like this for GPUBindGroupLayoutEntry, to write in all the "default" values that are currently handwavily defined.
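Such a "typecheck" algorithm might look like the following sketch, assuming the type becomes a plain sequence<GPUIntegerCoordinate> (the function name and error messages are illustrative, not from the spec):

```typescript
// Hypothetical "typecheck" for a 2-element origin sequence: the type
// itself is just a sequence of integers, so each consuming algorithm
// calls a check like this (recursively through nested dictionaries).
function typecheckOrigin2D(s: number[]): void {
  if (s.length !== 2) {
    throw new TypeError("GPUOrigin2D must contain exactly 2 elements");
  }
  if (!s.every((n) => Number.isInteger(n) && n >= 0)) {
    throw new TypeError("GPUOrigin2D elements must be non-negative integers");
  }
}
```

A method taking a dictionary would then call typecheckOrigin2D on each relevant member before using it.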

@grovesNL
Contributor

If #1079 is approved, it seems slightly better to be consistent with #1079 and use a dictionary here too, for similar reasons (e.g. avoiding ambiguous ordering for sequences such as [width, arraysize] and [width, height, arraysize]).

@kainino0x
Contributor Author

As I explained, I think there are relatively obvious choices for the orders of all these arrays, and the ergonomics are much better with arrays. It's different from color, where the order can be ambiguous in some situations.

However, your comment reminded me that I should make good on the idea of actually allowing [w] and [w, h] for 1d/2d extents. Right now I've carried over the old constraints, which still require 3 elements in all cases, making it no clearer than before.

@kainino0x kainino0x marked this pull request as draft September 16, 2020 03:49
@Kangz
Contributor

Kangz commented Sep 16, 2020

1d-array: [width, arraysize]

Did we agree that arraySize would be the second one and not the third one? Either way seems fine but we need to make sure it is consistent with validation in copies.

Looking at the diff, it seems that switching to using only a sequence doesn't make things any clearer. How about keeping the dictionaries for these concepts and using them internally in the spec, but defining the conversions from sequences of numbers to them? Then make the conversion algorithm not require sequences of 3, and instead complete sequences that are too short with 1s (for GPUExtent3D) and 0s (for GPUOrigin3D and GPUOrigin2D).
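The completion step described above could be sketched as follows (names and shapes are illustrative, not spec text):

```typescript
// Hypothetical conversion: accept dictionary or sequence, and complete
// sequences shorter than 3 with 0s (origins) or 1s (extents).
type Triple = [number, number, number];

function toOrigin3D(v: number[] | { x?: number; y?: number; z?: number }): Triple {
  if (Array.isArray(v)) {
    if (v.length > 3) throw new TypeError("too many elements");
    return [v[0] ?? 0, v[1] ?? 0, v[2] ?? 0];
  }
  return [v.x ?? 0, v.y ?? 0, v.z ?? 0];
}

function toExtent3D(v: number[] | { width: number; height?: number; depth?: number }): Triple {
  if (Array.isArray(v)) {
    if (v.length < 1 || v.length > 3) throw new TypeError("expected 1 to 3 elements");
    return [v[0], v[1] ?? 1, v[2] ?? 1];
  }
  return [v.width, v.height ?? 1, v.depth ?? 1];
}
```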

Contributor

@kvark kvark left a comment

spec-wise, the indices seem to be less clear than "width", "height", etc
I guess the alternative is, rename depth to depthOrLayerCount?

```
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin3DDict) GPUOrigin3D;
</script>
## GPUOrigin2D ## {#gpuorigin2d}
```
Contributor

do we still need separate GPUOrigin2D and GPUOrigin3D types, now that everything is a sequence?

kainino0x (Contributor Author)

They're more than just sequences; they're sequences of exactly 2 elements and 3 elements respectively. They would only be the same if we moved validation out of the types and into the methods.

Contributor

Wait, isn't GPUOrigin3D, for example, subject to the same confusion as the extent? I.e. in the dictionary it's not clear whether z is the depth or the array layer offset.

kainino0x (Contributor Author)

Sort of, yes, except that calling the array layer count "depth" is kind of a contradiction, whereas calling it [2] is neutral.

@kainino0x
Contributor Author

Did we agree that arraySize would be the second one and not the third one? Either way seems fine but we need to make sure it is consistent with validation in copies.

We don't actually have 1d-array, so it doesn't matter right now. But the obvious thing is to require the array to be exactly length 2 and have it be [width, arraySize].

Anyway, I need to make revisions to this.

@kainino0x
Contributor Author

spec-wise, the indices seem to be less clear than "width", "height", etc
I guess the alternative is, rename depth to depthOrLayerCount?

I think we can solve this at the spec level. I don't think we should lose ergonomics just because we're having trouble writing the spec for it. Still thinking about how to do this.

@kainino0x
Contributor Author

I would kind of like to allow only arrays of size 1, 2, or 3 according to the context (i.e. the dimensionality of the relevant GPUTexture(s)). It requires implementations to track dimensionality content-side, but maybe it's worth it?

@Kangz
Contributor

Kangz commented Sep 17, 2020

spec-wise, the indices seem to be less clear than "width", "height", etc
I guess the alternative is, rename depth to depthOrLayerCount?

I think we can solve this at the spec level. I don't think we should lose ergonomics just because we're having trouble writing the spec for it. Still thinking about how to do this.

I think forcing arrays of numbers is less clear even for developers, at least in some contexts. Can't we just keep both as we do right now, but allow arrays of 1 and 2 elements and add defaulting to 1 or 0 depending on the type? (there's been no answer to my previous comment)

@kainino0x
Contributor Author

How about keeping the dictionaries for these concepts and using them internally in the spec, but defining the conversions from sequences of numbers to them? Then make the conversion algorithm not require sequences of 3, and instead complete sequences that are too short with 1s (for GPUExtent3D) and 0s (for GPUOrigin3D and GPUOrigin2D).

This is essentially what we have now with Extent3D. There are a number of problems I have with this approach (most of these problems already exist):

  • It doesn't make sense to specify an origin of {y: 1} or [x] for a 2d or 3d texture. I think all of the relevant dimensions should be specified, except maybe for arrayLayer which can default to 0.
  • It only makes sense to default extent components to 1 in certain cases. It makes sense for array size in 2d-array texture creation, but not for anything else IMO (including array size in 2d-array texture copies)
  • It only sort of makes sense to specify y/height/z/depth for 1d textures. (Or z/depth for 2d texture views, but I don't think that situation exists.)

I think forcing arrays of numbers is less clear even for developers, at least in some contexts.

As long as the length of the array is strict, it doesn't seem unclear to me. But there's perhaps some ergonomics lost in being strict to begin with.

The rules I had in mind (I spent a while yesterday trying to figure out how this would work in the spec, but haven't succeeded yet):

  • GPUTextureDimension "1d": Must be exactly 1: [x]; [width]
  • GPUTextureDimension "2d": Must be 2 or 3: [x, y, layer defaulting to 0]; [width, height, layers defaulting to 1]
  • GPUTextureDimension "3d": Must be 3: [x, y, z]; [width, height, depth]
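The rules above could be sketched as a dimension-aware interpretation step (a sketch only; the names are illustrative and none of this is spec text):

```typescript
// Hypothetical context-dependent interpretation of an extent array,
// following the per-dimension length rules listed above.
type GPUTextureDimension = "1d" | "2d" | "3d";

function interpretExtent(dim: GPUTextureDimension, a: number[]):
    { width: number; height: number; depthOrLayers: number } {
  switch (dim) {
    case "1d":
      if (a.length !== 1) throw new TypeError('"1d" extent must be [width]');
      return { width: a[0], height: 1, depthOrLayers: 1 };
    case "2d":
      if (a.length < 2 || a.length > 3) {
        throw new TypeError('"2d" extent must be [width, height] or [width, height, layers]');
      }
      return { width: a[0], height: a[1], depthOrLayers: a[2] ?? 1 };
    case "3d":
      if (a.length !== 3) throw new TypeError('"3d" extent must be [width, height, depth]');
      return { width: a[0], height: a[1], depthOrLayers: a[2] };
    default:
      throw new TypeError("unknown dimension");
  }
}
```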

This means every time we consume one of these values, we have to have a context for what the GPUTextureDimension is. We have this already, except implementations have to track it in the content process.

What I haven't figured out is whether this is impractically strict.

@Kangz
Contributor

Kangz commented Sep 18, 2020

It doesn't make sense to specify an origin of {y: 1} or [x] for a 2d or 3d texture. I think all of the relevant dimensions should be specified, except maybe for arrayLayer which can default to 0.

For a GPUExtent3D, if you specify arrayLayer/depth = 0 then you're copying an empty volume, which is a no-op, even for 1D textures. I think it's good to have the API's model of textures treat them as 3D things all the time. It makes the correspondence between 2D array and 3D clear, and even for 1D it keeps the mental model consistent (it is a line of texels embedded in a 3D space).

Basically, all textures are 3D, except that you can specify how they are stored to optimize for a specific dimensionality. Then copy operations always act on conceptually 3D textures.

As long as the length of the array is strict, it doesn't seem unclear to me. But there's perhaps some ergonomics lost in being strict to begin with.

Yes, the strict validation of the number of arguments breaks the mental model of all textures being embedded in a 3D space, and now you have to know that specifying [w, h] is the same as [w, h, 1] for 2D array textures but invalid for 3D, things like that. If anything, I think GPUExtent3D members should default to 1 to allow people to put only width and height and get the correct behavior (and no weird validation errors).

@kvark
Contributor

kvark commented Sep 21, 2020

Thinking more about it, I like depthOrLayerCount (in the dictionary). It's very explicit, and it's only needed when you use 3D or array textures, and otherwise just defaults to 1.

@kainino0x
Contributor Author

This means every time we consume one of these values, we have to have a context for what the GPUTextureDimension is. We have this already, except implementations have to track it in the content process.

note to self: realized this doesn't matter that much because we already require the format to be tracked for writeTexture (and copyImageBitmapToTexture, though that one could be changed to not need it).

@kainino0x
Contributor Author

kainino0x commented Sep 22, 2020

Thinking more about it, I like depthOrLayerCount (in the dictionary). It's very explicit, and it's only needed when you use 3D or array textures, and otherwise just defaults to 1.

After trying to come up with a good design with arrays, I may be leaning in this direction as well. Have to continue thinking about it.

Also considering replacing the unions with helper functions, like GPU.makeOrigin(x, y, depthOrArrayLayer) and GPU.makeExtent(width, height, depthOrLayerCount).
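The helper-function idea could be sketched like this (hypothetical: GPU.makeOrigin and GPU.makeExtent do not exist in the API; the free-function form and parameter names here just follow the suggestion above):

```typescript
// Hypothetical helpers that build the dictionary forms, as an
// alternative to keeping a union type at every call site.
function makeOrigin(x = 0, y = 0, depthOrArrayLayer = 0) {
  return { x, y, z: depthOrArrayLayer };
}

function makeExtent(width: number, height = 1, depthOrLayerCount = 1) {
  return { width, height, depth: depthOrLayerCount };
}
```

Call sites would then read makeExtent(256, 256) instead of [256, 256, 1] or a full dictionary literal.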

@Kangz
Contributor

Kangz commented Sep 22, 2020

depthOrLayerCount, ew. Can't we just say that depth is the array size for 2D textures, and height the array size for 1D textures, if we ever have them?

I feel that we're trying to solve an almost non-existent naming issue by piling on a ton more complexity: implicit conversions, helper functions, and union names (in the form "this OR that").

@kainino0x kainino0x closed this Dec 17, 2020
@kainino0x kainino0x deleted the gpuoriginextent-remove-dictionary branch December 17, 2020 00:32
ben-clayton pushed a commit to ben-clayton/gpuweb that referenced this pull request Sep 6, 2022
