
make rnn.fsx example device agnostic. #428

Open
nhirschey wants to merge 1 commit into DiffSharp:dev from nhirschey:rnn-nh

Conversation

@nhirschey (Contributor)

Some minor changes that allow rnn.fsx to run if dsharp.config is changed to use the GPU.

Thanks for your incredible work this past month to incorporate my change requests into the dev branch! I'm looking forward to using the new version once it's released on the nuget feed.

if File.Exists(modelFileName) then
printfn "Resuming training from existing model params found: %A" modelFileName
languageModel.state <- dsharp.load(modelFileName)
languageModel.move(Device.Default)
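To show the pattern in context, here is a minimal, hypothetical sketch of how the device-agnostic resume logic could look in a script like rnn.fsx. The `Linear` model, file name, and the `Backend.Torch`/`Device.CPU` configuration are stand-in assumptions for illustration, not the actual contents of rnn.fsx:

```fsharp
open System.IO
open DiffSharp
open DiffSharp.Model

// Assumption: switching this single line (e.g. device=Device.GPU) is the
// only edit needed; everything below follows Device.Default.
dsharp.config(backend=Backend.Torch, device=Device.CPU)

// Hypothetical stand-in for the RNN language model defined in rnn.fsx.
let languageModel = Linear(16, 16)

let modelFileName = "model.params"  // hypothetical file name

if File.Exists(modelFileName) then
    printfn "Resuming training from existing model params found: %A" modelFileName
    languageModel.state <- dsharp.load(modelFileName)
    // The saved state may have been written on a different device, so move
    // the model onto the configured default device after loading.
    languageModel.move(Device.Default)
```

The key point is the explicit `move(Device.Default)` after `dsharp.load`, which decouples the script from whichever device the parameters were originally saved on.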
@dsyme (Collaborator) Oct 11, 2022


Shouldn't the load happen on the default device in any case? Is this a problem with dsharp.load?

@nhirschey (Contributor, Author)


Yes, I think dsharp.load loading into the default device makes sense, and that's what seems to be the goal of PR #430.

PyTorch is a little different: torch.load restores tensors to whatever device they were saved from: "They are first deserialized on the CPU and are then moved to the device they were saved from" (link).

