Hey, I was using neuralop components inside larger torch modules and there appears to be a bug in the overridden state_dict method that mishandles the '_metadata' attribute. This causes problems when I use torch.save and load_state_dict, though an easy workaround for now is to delete the "_metadata" entry before loading the saved model.
Here's an example:
import torch.nn as nn
from neuralop.models import FNO

this_fno = FNO((16,), 2, 5, 64)
that_fno = FNO((16,), 5, 1, 64)
total_model = nn.Sequential(this_fno, that_fno)
Produces:
UserWarning: Attempting to update metadata for a module with metadata already in self.state_dict()
warnings.warn("Attempting to update metadata for a module with metadata already in self.state_dict()")
If you check:
total_model.state_dict()['_metadata']
We get:
{'n_layers': 4,
'lifting_channel_ratio': 2,
'projection_channel_ratio': 2,
'positional_embedding': 'grid',
'non_linearity': <function gelu at 0x...>,
'norm': None,
'complex_data': False,
'use_channel_mlp': True,
'channel_mlp_dropout': 0,
'channel_mlp_expansion': 0.5,
'channel_mlp_skip': 'soft-gating',
'fno_skip': 'linear',
'resolution_scaling_factor': None,
'domain_padding': None,
'fno_block_precision': 'full',
'stabilizer': None,
'max_n_modes': None,
'factorization': None,
'rank': 1.0,
'fixed_rank_modes': False,
'implementation': 'factorized',
'decomposition_kwargs': None,
'separable': False,
'preactivation': False,
'conv_module': neuralop.layers.spectral_convolution.SpectralConv,
'_version': '0.1.0',
'args': ((16,), 2, 5, 64),
'_name': 'FNO'}
This is just the "_metadata" of "this_fno", not of the Sequential module. Note that the parameters in the state_dict otherwise look correct.
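For reference, the workaround I mentioned is just to drop the stray key before loading. Here's a sketch; the helper name is mine, not part of either library, and the dict below is a stand-in for a real saved state_dict (which would come from torch.load):

```python
def strip_metadata(state_dict):
    """Return a copy of the state_dict without the stray '_metadata'
    entry that the FNO state_dict override injects, so that
    load_state_dict doesn't choke on an unexpected key.
    (Helper name is mine, not part of torch or neuralop.)"""
    return {k: v for k, v in state_dict.items() if k != "_metadata"}

# Stand-in for a saved state_dict; in practice this would be
# torch.load("total_model.pt").
saved = {"_metadata": {"_name": "FNO"}, "0.lifting.weight": [0.0]}
cleaned = strip_metadata(saved)
# total_model.load_state_dict(cleaned) would then work as expected.
```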