From 220d678ef4180c331763f6d9ae3d56b3fe525769 Mon Sep 17 00:00:00 2001
From: coolmian <36444522+coolmian@users.noreply.github.com>
Date: Wed, 23 Nov 2022 17:22:58 +0800
Subject: [PATCH] fix(docs): correction document description

Signed-off-by: coolmian <36444522+coolmian@users.noreply.github.com>
---
 docs/advanced/torch-support/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/advanced/torch-support/index.md b/docs/advanced/torch-support/index.md
index bf94f758833..6fadc51d815 100644
--- a/docs/advanced/torch-support/index.md
+++ b/docs/advanced/torch-support/index.md
@@ -44,7 +44,7 @@ for d in DocumentArray.load_binary('test.protobuf.gz'):
 
 ## Load, map, batch in one-shot
 
-There is a very common pattern in the deep learning engineering: loading big data, mapping it via some function for preprocessing on GPU, and batching it to GPU for intensive deep learning stuff.
+There is a very common pattern in the deep learning engineering: loading big data, mapping it via some function for preprocessing on CPU, and batching it to GPU for intensive deep learning stuff.
 There are many pitfalls in this pattern when not implemented correctly, to name a few:
 - data may not fit into memory;
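
The load → map-on-CPU → batch-to-GPU pattern that this patch corrects in the docs can be sketched in plain Python. This is a minimal illustration, not DocumentArray's actual API: `load_items`, `preprocess`, and `batched` are hypothetical names, and generators stand in for the lazy loading that keeps big data from being held in memory all at once.

```python
def load_items(n):
    # Lazily "load" items one at a time, so the whole dataset
    # never sits in memory at once (hypothetical loader).
    for i in range(n):
        yield i

def preprocess(item):
    # CPU-side preprocessing step -- the point of the patch is that
    # this mapping runs on CPU, not GPU (hypothetical function).
    return item * 2

def batched(iterable, batch_size):
    # Group preprocessed items into fixed-size batches, ready to be
    # moved to the GPU for the intensive deep-learning step.
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # final partial batch

batches = list(batched(map(preprocess, load_items(5)), batch_size=2))
# -> [[0, 2], [4, 6], [8]]
```

Because everything is generator-based, loading, mapping, and batching are interleaved in one pass, which is what avoids the "data may not fit into memory" pitfall the docs go on to list.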