
feat: allow variable tensor dimension #1007

Merged
samsja merged 17 commits into feat-rewrite-v2 from feat-871-tensor-vardim on Jan 16, 2023

Conversation

@Jackmin801
Contributor

@Jackmin801 Jackmin801 commented Jan 11, 2023

Goals:

@Jackmin801
Contributor Author

Do we want to add an example usage in the docstrings?

@Jackmin801 Jackmin801 linked an issue Jan 11, 2023 that may be closed by this pull request
Member

@JohannesMessner JohannesMessner left a comment


Can this logic be moved into the AbstractTensor class so that it "just works" when adding a new tensor type? Also, I'm guessing variables ('x') are not shared between tensors (or within the same Document), right?

@JohannesMessner
Member

Do we want to add an example usage in the docstrings?

Yes please!
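For context, a docstring example for this feature could look something like the sketch below. This is a hypothetical, plain-Python illustration (the function name and semantics are invented, not the project's API): a shape spec where an integer pins an axis size and a string names a variable axis whose size is free but must be consistent wherever that name recurs within one spec.

```python
def check_shape(shape: tuple, spec: tuple) -> bool:
    """Check a concrete shape against a spec such as (3, 'x', 'x').

    Integers must match exactly; a string names a variable axis whose
    size is free, but must be the same everywhere that name appears
    within this one spec (names are not shared across tensors).
    """
    if len(shape) != len(spec):
        return False
    bound = {}  # variable name -> the size it resolved to
    for size, want in zip(shape, spec):
        if isinstance(want, str):
            # First occurrence binds the name; later ones must agree.
            if bound.setdefault(want, size) != size:
                return False
        elif size != want:
            return False
    return True

print(check_shape((3, 5, 5), (3, 'x', 'x')))  # True: 'x' resolves to 5
print(check_shape((3, 5, 4), (3, 'x', 'x')))  # False: 'x' cannot be both 5 and 4
print(check_shape((2, 5, 5), (3, 'x', 'x')))  # False: fixed axis mismatch
```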

@Jackmin801
Contributor Author

Jackmin801 commented Jan 11, 2023

Can this logic be moved into the AbstractTensor class so that it "just works" when adding a new tensor type? Also, I'm guessing variables ('x') are not shared between tensors (or within the same Document), right?

First question: unfortunately not. NumPy and torch have slightly different APIs, particularly in this case with view vs reshape. And yep, the 'x' isn't shared.
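To illustrate the API difference being referred to — a sketch of the general NumPy/PyTorch behavior, not of this codebase:

```python
import numpy as np

# NumPy has a single entry point, `reshape`: it returns a view when the
# memory layout allows it, and silently falls back to a copy otherwise.
a = np.zeros((2, 3))
b = a.reshape(3, 2)
assert b.base is a           # contiguous input -> view, no copy made

c = a.T.reshape(6)           # non-contiguous input -> NumPy copies
c[0] = 99.0
assert a[0, 0] == 0.0        # the original array is untouched

# PyTorch splits the same operation into two methods:
#   t.view(3, 2)    -> never copies; raises if the input is non-contiguous
#   t.reshape(3, 2) -> view if possible, otherwise a copy (NumPy-like)
# which is why one shared implementation in AbstractTensor is awkward.
```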

Member

@samsja samsja left a comment


Would be nice to find a way to factorize the code between torch and numpy.

@samsja
Member

samsja commented Jan 11, 2023

First question: unfortunately not. NumPy and torch have slightly different APIs, particularly in this case with view vs reshape. And yep, the 'x' isn't shared.

Can't we abstract the reshaping part somewhere in the computational backend?

@JohannesMessner
Member

First question: unfortunately not. NumPy and torch have slightly different APIs, particularly in this case with view vs reshape. And yep, the 'x' isn't shared.

Can't we abstract the reshaping part somewhere in the computational backend?

I suggest the same.
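One way such a backend seam could look — a hypothetical sketch, not the project's actual API; `CompBackend` and `NumpyBackend` are invented names:

```python
from typing import Protocol, TypeVar

import numpy as np

T = TypeVar('T')


class CompBackend(Protocol[T]):
    """Hypothetical computational backend: each tensor family supplies
    its own reshape, so shared logic never touches framework APIs."""

    @staticmethod
    def reshape(tensor: T, shape: tuple) -> T: ...


class NumpyBackend:
    @staticmethod
    def reshape(tensor: np.ndarray, shape: tuple) -> np.ndarray:
        return np.reshape(tensor, shape)


# A TorchBackend would wrap `tensor.reshape(shape)` (or `view` where a
# no-copy guarantee is wanted). The shared shape-validation code in the
# abstract tensor class then only ever calls `backend.reshape(...)`.
x = NumpyBackend.reshape(np.arange(6), (2, 3))
print(x.shape)  # (2, 3)
```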

@Jackmin801 Jackmin801 force-pushed the feat-871-tensor-vardim branch from 01119e5 to 6f89780 on January 16, 2023 08:03
@Jackmin801 Jackmin801 force-pushed the feat-871-tensor-vardim branch from b07cc2f to 5c87da1 on January 16, 2023 12:35
@Jackmin801 Jackmin801 marked this pull request as draft January 16, 2023 12:40
@Jackmin801 Jackmin801 marked this pull request as ready for review January 16, 2023 13:16
"""
...

@overload
Member


Would be nice to automatically generate the overloads. But let's not do it in this PR unless you found a brilliant solution haha

Contributor Author


I don't think it works :( python/mypy#11488

Member


But we could generate the code, though.

Member


But yeah, let's see later.

Member


Nah, not code generation via our friend the bot.

But we could have a helper function that generates the 3 methods with the correct overloads.

Member


Could it be done via a decorator somehow?
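A decorator could at least remove the runtime duplication — though, as the linked mypy issue (python/mypy#11488) notes, dynamically generated overloads are invisible to the type checker, so the static `@overload` stubs would still need to be written out or code-generated. A hypothetical sketch with invented names:

```python
def parametrizable(cls):
    """Hypothetical decorator: attach a `__class_getitem__` that accepts
    an int, a string (a named variable axis), or a tuple mixing both, so
    each concrete tensor class need not repeat the three variants."""

    def class_getitem(kls, item):
        shape = item if isinstance(item, tuple) else (item,)
        name = f"{kls.__name__}[{', '.join(map(repr, shape))}]"
        # Build a lightweight parametrized subclass carrying the shape.
        return type(name, (kls,), {'__shape__': shape})

    # Assigned after class creation, so wrap in classmethod explicitly
    # (the automatic conversion of PEP 560 only applies in class bodies).
    cls.__class_getitem__ = classmethod(class_getitem)
    return cls


@parametrizable
class Tensor:
    __shape__ = None


print(Tensor[3, 'x'].__shape__)              # (3, 'x')
print(issubclass(Tensor[3, 'x'], Tensor))    # True
```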

@Jackmin801 Jackmin801 force-pushed the feat-871-tensor-vardim branch from 7ab4d2e to ecdf48a on January 16, 2023 14:24
@github-actions

📝 Docs are deployed on https://ft-feat-871-tensor-vardim--jina-docs.netlify.app 🎉

@samsja samsja merged commit 116bca9 into feat-rewrite-v2 Jan 16, 2023
@samsja samsja deleted the feat-871-tensor-vardim branch January 16, 2023 15:03


Development

Successfully merging this pull request may close these issues.

Add TorchTyping like features

3 participants