JIT: Reconsider how to represent defs/uses of CPU flags (GTF_SET_FLAGS) #74867

@jakobbotsch

Description

When GTF_SET_FLAGS is set on a node it indicates that a later node may consume the CPU flags that this node sets. However, this dependency is outside the JIT's modelling of values in the IR: we do not track CPU flag dependencies at all. As a result, the interference checking we have today is not really sufficient, and it complicates potential features such as rematerialization, which could introduce flag-trashing nodes at arbitrary places.

Today this works out because we do very limited transformations on LIR, and because we use the GTF_SET_FLAGS capability only in limited situations: decomposed longs on 32-bit platforms and conditional branching.
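For illustration, a decomposed 64-bit add on a 32-bit target ends up with a hidden flag dependency like the following (illustrative pseudo-LIR, not an actual JIT dump; node names are approximations):

```
t0 = ADD_LO a.lo, b.lo   ; GTF_SET_FLAGS: defines the carry flag
t1 = ADD_HI a.hi, b.hi   ; emitted as adc: consumes the carry, but this
                         ; def/use edge is invisible to the IR, so nothing
                         ; stops a transformation from scheduling a
                         ; flag-trashing node between the two
```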

If we want rematerialization we are probably going to have to solve this in some way. Two solutions come to mind:

  • Actually model CPU flags at the IR level in a way that lets us reason about flag dependencies during interference checks and in LSRA. Probably quite a bit of work.
  • Reimplement it via containment, i.e. contain child nodes that produce values in CPU flags within parent nodes that can consume those flags. This may not be as powerful as the above, but it lets us rely on the existing def/use model and thus seems much simpler. Compare chains on ARM64 already work this way today.

One fairly simple first step might be to move all non-decomposition uses of GTF_SET_FLAGS to the containment model, which should make rematerialization possible in many contexts (e.g. always on 64-bit targets).

cc @dotnet/jit-contrib

category:proposal
theme:ir

Metadata

Labels

area-CodeGen-coreclr (CLR JIT compiler in src/coreclr/src/jit and related components such as SuperPMI)
