Comments for Art, Tech and other Nonsense https://maxliani.wordpress.com Errands in uncharted territory, by Max Liani Wed, 16 Aug 2023 15:14:41 +0000 hourly 1 http://wordpress.com/ Comment on Offline to Realtime: Manipulators by Daniel Toloudis https://maxliani.wordpress.com/2021/06/29/offline-to-realtime-manipulators/comment-page-1/#comment-303 Wed, 16 Aug 2023 15:14:41 +0000 http://maxliani.wordpress.com/?p=612#comment-303 In reply to Max Liani.

Thank you! I was coming toward implementing the latter “if drag/release don’t reset” at least to try it out. And thank you for these posts in the first place. They give really great insight on how to do “immediate mode” interactivity.

Like

]]>
Comment on Offline to Realtime: Manipulators by Max Liani https://maxliani.wordpress.com/2021/06/29/offline-to-realtime-manipulators/comment-page-1/#comment-302 Wed, 16 Aug 2023 04:50:48 +0000 http://maxliani.wordpress.com/?p=612#comment-302 In reply to Daniel Toloudis.

You are right. The code I posted is a simplification, extracted from a more complex app.
In the simplification process I may have broken something.
In the app this code is different:
– gesture retains the selected code and gives access to it through a function int getCurrentSelectionCode()
– gesture pick returns a bool instead of the picking code: false if the picking result is Graphics::k_noSelectionCode, with an early return if clickEnabled || clickDrag. By early returning, the function does not update its last selection code during drag or button release events. In the early-return case it returns retainedCode != Graphics::k_noSelectionCode.
– the implementation uses a return value of true to call:
int selectionCode = gesture.getCurrentSelectionCode();
forEachTool([&](ManipulationTool* tool) { tool->setActiveCode(selectionCode); });

This has the effect of restoring the selection code of the manipulator. I am sure it can be done in other ways, such as “if drag/release, don’t call reset”…
I’ll try to amend the code in the blog post.
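In sketch form, the retained-code logic above looks roughly like this (Gesture, clickEnabled, clickDrag and the doPick callback are placeholder names for illustration, not the exact app code):

```cpp
#include <cassert>
#include <functional>

// Sentinel mirroring Graphics::k_noSelectionCode from the post.
namespace Graphics { constexpr int k_noSelectionCode = -1; }

// Minimal gesture sketch: pick() retains the last selection code and early
// returns during click/drag, so the code survives the mouse moving off the
// manipulator mid-drag.
struct Gesture {
    int  retainedCode = Graphics::k_noSelectionCode;
    bool clickEnabled = false;  // button press/release event this frame
    bool clickDrag    = false;  // button held while the mouse moves

    int getCurrentSelectionCode() const { return retainedCode; }

    // doPick stands in for the actual picking query against the framebuffer.
    bool pick(const std::function<int()>& doPick)
    {
        // During drag or button release, skip picking entirely and keep the
        // selection code that was retained when the interaction began.
        if (clickEnabled || clickDrag)
            return retainedCode != Graphics::k_noSelectionCode;

        retainedCode = doPick();
        return retainedCode != Graphics::k_noSelectionCode;
    }
};
```

When pick() returns true, the caller reads gesture.getCurrentSelectionCode() and pushes it to each tool via setActiveCode(), which re-establishes the manipulator's highlight even though per-frame picking was skipped.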

Liked by 1 person

]]>
Comment on Offline to Realtime: Manipulators by Daniel Toloudis https://maxliani.wordpress.com/2021/06/29/offline-to-realtime-manipulators/comment-page-1/#comment-301 Tue, 15 Aug 2023 14:19:39 +0000 http://maxliani.wordpress.com/?p=612#comment-301 I am actually trying to implement your system and I have a question! If you clear the active code on each frame, then when you try to drag a manipulator you need to re-select it on each pick, so what if the user initially selects to drag and then moves the mouse off of the manipulator while dragging?
It feels like there is something missing in how to handle dragging the MoveTool. Also, in the code example you gave for picking, it looks like there is some code that returns no selection id while dragging (it doesn’t even run the picking), but the tool doesn’t remember its active state. Are we supposed to remember the tool’s active state across frames while dragging?

Liked by 1 person

]]>
Comment on DNND 2: Tensors and Convolution by machineko https://maxliani.wordpress.com/2023/03/24/dnnd-2-tensors-and-convolution/comment-page-1/#comment-294 Fri, 14 Jul 2023 12:19:29 +0000 http://maxliani.wordpress.com/?p=913#comment-294 In reply to Max Liani.

Indeed, I understand that you didn’t state it as plainly as that, but it is implied that NCHW is faster (because it is faster in the SIMD CPU case?):

“SIMD instructions prefer reading SOA data because, with each individual load and store instructions, the processor can fill wide registers with several elements accessed consecutively and sequentially. This makes good use of the memory latency and bandwidth, resulting in faster processing speeds.”

And it’s not true for NVIDIA, Apple, or even Intel: on each of these GPUs, channels-last is faster and preferred for both FP32/FP16 and BF16/TF32 precision (if available). [I’m not sure about AMD, as I have never worked with it.]

Liked by 1 person

]]>
Comment on DNND 3: the U-Net Architecture by Jakub https://maxliani.wordpress.com/2023/04/07/dnnd-3-the-u-net-architecture/comment-page-1/#comment-293 Fri, 14 Jul 2023 05:12:26 +0000 http://maxliani.wordpress.com/?p=966#comment-293 In reply to Max Liani.

Sure, that’s understandable. Good luck and looking forward to the next chapter.

Like

]]>
Comment on DNND 2: Tensors and Convolution by Max Liani https://maxliani.wordpress.com/2023/03/24/dnnd-2-tensors-and-convolution/comment-page-1/#comment-292 Fri, 14 Jul 2023 00:23:51 +0000 http://maxliani.wordpress.com/?p=913#comment-292 In reply to machineko.

Thank you for the comment. It is correct, NVIDIA GPUs may favor other data layouts, depending on the precision required and the instruction set available.
In this post I am not looking at performance yet; instead I am showing that there are many options to consider, and I am trying to keep the text as simple as I can for the purpose of the narration. How the choice of tensor layout, data precision and the hardware come together is something I will write about in a future post.

Like

]]>
Comment on DNND 3: the U-Net Architecture by Max Liani https://maxliani.wordpress.com/2023/04/07/dnnd-3-the-u-net-architecture/comment-page-1/#comment-291 Fri, 14 Jul 2023 00:16:12 +0000 http://maxliani.wordpress.com/?p=966#comment-291 In reply to Jakub.

Hi Jakub, life got in the way. I will resume the series in the next couple of months. Thank you for the patience.

Like

]]>
Comment on DNND 2: Tensors and Convolution by machineko https://maxliani.wordpress.com/2023/03/24/dnnd-2-tensors-and-convolution/comment-page-1/#comment-290 Thu, 13 Jul 2023 10:02:12 +0000 http://maxliani.wordpress.com/?p=913#comment-290 The part about NCHW as the preferred format is somewhat wrong, as most modern (<5 years old) GPUs in fact prefer the NHWC data layout for most operations: https://docs.nvidia.com/deeplearning/performance/dl-performance-convolutional/index.html#tensor-layout

Liked by 1 person

]]>
Comment on DNND 3: the U-Net Architecture by Jakub https://maxliani.wordpress.com/2023/04/07/dnnd-3-the-u-net-architecture/comment-page-1/#comment-284 Fri, 26 May 2023 14:23:54 +0000 http://maxliani.wordpress.com/?p=966#comment-284 Hi Max, great series! Any idea when the next part will be ready? 😉

Like

]]>
Comment on DNND 2: Tensors and Convolution by DNND 2:テンソルとコンボリューション – 世界の話題を日本語でザックリ素早く確認! https://maxliani.wordpress.com/2023/03/24/dnnd-2-tensors-and-convolution/comment-page-1/#comment-275 Sat, 01 Apr 2023 12:52:42 +0000 http://maxliani.wordpress.com/?p=913#comment-275 […] This article was written based on the following article featured on Hacker News: DNND 2: Tensors and Convolution […]

Like

]]>