Wanted to follow up on #18 since I saw it was closed. I tried again on my 5090, and on Blackwell GPUs in general a minimum of CUDA 12.8 is required to run.
This mostly affects the following baselines:
- dpvo
- droidslam
- mast3rslam
- monogs
- vggt
- vggtslam
in particular because they require lietorch, torch-scatter, and other precompiled conda packages.
I tried out orbslam2/3 and pycuvslam, and those seemed to work on the 5090. All the others froze.
On top of that, they don't fail in an obvious way: they just get stuck and an error message never surfaces, which makes it hard to understand what's going on.
So I think the fix is bumping the minimum to CUDA 12.8 and recompiling all the underlying packages with 12.8 support (or 12.9? It looked like the conda-forge Linux pytorch package only had 12.9 builds when I checked: https://prefix.dev/channels/conda-forge/packages/pytorch).
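For context, here's a rough sketch of why 12.8 is the floor: each GPU generation has a compute capability, and a CUDA toolkit can only compile for capabilities it knows about. The mapping below is illustrative (the helper name is made up), with Blackwell being the relevant entry:

```python
# Rough mapping from GPU compute capability (major, minor) to the first
# CUDA toolkit release that can target it. Illustrative, not exhaustive.
MIN_CUDA_FOR_SM = {
    (8, 6): "11.1",   # Ampere (RTX 30xx)
    (8, 9): "11.8",   # Ada (RTX 40xx)
    (9, 0): "11.8",   # Hopper
    (12, 0): "12.8",  # Blackwell (RTX 5090) -- why 12.8 is the minimum
}

def min_cuda_for(capability):
    """Return the minimum CUDA toolkit for a (major, minor) capability."""
    return MIN_CUDA_FOR_SM.get(tuple(capability), "unknown")

print(min_cuda_for((12, 0)))  # Blackwell -> 12.8
```

So any package whose CUDA extensions were precompiled with a toolkit older than 12.8 simply has no kernels for sm_120, which would explain the silent hangs rather than a clean error.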
@Tobias-Fischer