Home on wrong.wang https://wrong.wang/ Recent content in Home on wrong.wang Hugo zh-CN Mon, 28 Oct 2024 00:42:16 +0800 OmniGen -- Unified Image Generation https://wrong.wang/paper/omnigen----unified-image-generation/ Sun, 27 Oct 2024 16:35:32 +0800 https://wrong.wang/paper/omnigen----unified-image-generation/ <p> <a href="https://arxiv.org/pdf/2409.11340" target="_blank" rel="noreferrer">OmniGen: Unified Image Generation</a> aims to unify common generation tasks (text-to-image, image editing, image restoration, controllable generation, and even understanding tasks such as pose estimation) into a single model. Unlike the recently popular unified understanding-and-generation models, OmniGen cares mainly about generation: it casts all of these tasks into one "interleaved image-text input, image output" framework, trains a single model that covers them all, and remains easy to fine-tune for new capabilities afterwards. The work comes from BAAI (智源), which has open-sourced the weights and training code: <a href="https://github.com/VectorSpaceLab/OmniGen" target="_blank" rel="noreferrer">VectorSpaceLab/OmniGen</a>.</p> EMMA https://wrong.wang/blog/20240512-emma/ Sun, 12 May 2024 22:50:26 +0800 https://wrong.wang/blog/20240512-emma/ <p>After finishing <a href="https://ella-diffusion.github.io/" target="_blank" rel="noreferrer">ELLA: Equip Diffusion Models with LLM for Enhanced Semantic Alignment</a>, my goal became to turn the Stable Diffusion family, cheaply and with little extra weight, into image generation models conditioned on interleaved image-text sequences. I tried several of the image-text fusion ideas from the MLLM field and eventually arrived at a first working design, which I named EMMA (Efficient Multi Modal Adapter).</p> What is EMMA https://wrong.wang/blog/20240512-what-is-emma/ Sun, 12 May 2024 22:50:26 +0800 https://wrong.wang/blog/20240512-what-is-emma/ <p>After completing the work on <a href="https://ella-diffusion.github.io/" target="_blank" rel="noreferrer">ELLA: Equip Diffusion Models with LLM for Enhanced Semantic Alignment</a>, my objective shifted towards the lightweight and cost-effective transformation of the Stable Diffusion series models into image generation models that are conditioned on cross-modal sequences of text and images.
I explored various approaches for integrating text and image information in the field of Multimodal Large Language Models (MLLM), and ultimately developed the first version of my solution, which I have named EMMA (Efficient Multi Modal Adapter).</p> Scaling Up to Excellence - Practicing Model Scaling for Photo-Realistic Image Restoration In the Wild https://wrong.wang/paper/scaling-up-to-excellence---practicing-model-scaling-for-photo-realistic-image-restoration-in-the-wild/ Sat, 27 Jan 2024 10:16:57 +0800 https://wrong.wang/paper/scaling-up-to-excellence---practicing-model-scaling-for-photo-realistic-image-restoration-in-the-wild/ <p> <figure> <img class=" mx-auto my-0 rounded-md " width="1696" height="1248" srcset="https://wrong.wang/paper/scaling-up-to-excellence---practicing-model-scaling-for-photo-realistic-image-restoration-in-the-wild/image_0_hu18237007907729778155.webp 330w,/paper/scaling-up-to-excellence---practicing-model-scaling-for-photo-realistic-image-restoration-in-the-wild/image_0_hu7729211832556238864.webp 660w,/paper/scaling-up-to-excellence---practicing-model-scaling-for-photo-realistic-image-restoration-in-the-wild/image_0_hu2914268044947408315.webp 1024w,/paper/scaling-up-to-excellence---practicing-model-scaling-for-photo-realistic-image-restoration-in-the-wild/image_0_hu14328117452264380246.webp 2x" src="https://wrong.wang/paper/scaling-up-to-excellence---practicing-model-scaling-for-photo-realistic-image-restoration-in-the-wild/image_0_hu7729211832556238864.webp" alt="" loading="lazy" /> </figure> </p> <p>SUPIR (Scaling-UP Image Restoration) 目标是基于预训练的文生图模型先验,20M 高质量图片数据, MLLM captioner 等技术,实现一个 scaling-up 的图片复原网络。</p> <p>SUPIR 训练时,整体的大思路是用高质量图片与其对应的降质图片形成 pair 对,降质图片对应的 MLLM 合成 caption 作为文本控制信号。这里的“降质”复用了王鑫涛的 RealESRGAN 中提出的模拟真实低质量图片的降质策略。</p> StyleDrop - Text-to-Image Generation in Any Style https://wrong.wang/paper/styledrop---text-to-image-generation-in-any-style/ Sat, 06 Jan 2024 11:34:43 +0800 
https://wrong.wang/paper/styledrop---text-to-image-generation-in-any-style/ <p> <figure> <img class=" mx-auto my-0 rounded-md " width="1692" height="1146" srcset="https://wrong.wang/paper/styledrop---text-to-image-generation-in-any-style/image_0_hu5557851837375853757.webp 330w,/paper/styledrop---text-to-image-generation-in-any-style/image_0_hu736686579879446106.webp 660w,/paper/styledrop---text-to-image-generation-in-any-style/image_0_hu17922952714207500564.webp 1024w,/paper/styledrop---text-to-image-generation-in-any-style/image_0_hu5742328058117881673.webp 2x" src="https://wrong.wang/paper/styledrop---text-to-image-generation-in-any-style/image_0_hu736686579879446106.webp" alt="" loading="lazy" /> </figure> </p> <p>StyleDrop 尝试解决图片生成领域一个非常经典的问题:给定一张图片作为风格参考,生成一张该风格的新内容图片。其效果相比之前一众 style transfer 算法有了飞跃。</p> Paragraph-to-Image Generation with Information-Enriched Diffusion Model https://wrong.wang/paper/paragraph-to-image-generation-with-information-enriched-diffusion-model/ Sat, 09 Dec 2023 23:17:33 +0800 https://wrong.wang/paper/paragraph-to-image-generation-with-information-enriched-diffusion-model/ <p> <figure> <img class=" mx-auto my-0 rounded-md " width="1720" height="1274" srcset="https://wrong.wang/paper/paragraph-to-image-generation-with-information-enriched-diffusion-model/image_0_hu3737021875979196455.webp 330w,/paper/paragraph-to-image-generation-with-information-enriched-diffusion-model/image_0_hu8579134691321205805.webp 660w,/paper/paragraph-to-image-generation-with-information-enriched-diffusion-model/image_0_hu14954315257528253198.webp 1024w,/paper/paragraph-to-image-generation-with-information-enriched-diffusion-model/image_0_hu10975187999409166366.webp 2x" src="https://wrong.wang/paper/paragraph-to-image-generation-with-information-enriched-diffusion-model/image_0_hu8579134691321205805.webp" alt="" loading="lazy" /> </figure> </p> <p>ParaDiffusion 尝试解决<em>paragraph-to-image generation</em>任务,即给定一个长达 400 词甚至更多的 prompt,生成对应的图片。 T2I 
模型要先能理解这么长的图片描述,然后把描述中涉及的关键物体都以一个合理的方式展示在图片中,难度很大。ParaDiffusion 认为之前的文生图模型做不了这样的任务既有数据上的原因,也有架构上的原因。之前的文生图模型基本上是基于 alt-text(平均甚至只有11 词)训练的,图片 caption 信息太少;提取文本 embedding 的网络要么是只接受 77token 的 CLIP,要么是只接受 128token 的 T5 Encoder。因此,ParaDiffusion 用 CogVLM 标注了4M LAION 子集,人工标注了600K 高质量图片,构建了两个 caption 长达 400 词的高质量数据集;使用 LLaMA 2 作为 text encoder。</p> Consistency is All You Need https://wrong.wang/blog/20231111-consistency-is-all-you-need/ Sat, 11 Nov 2023 01:12:34 +0800 https://wrong.wang/blog/20231111-consistency-is-all-you-need/ <p>最近一周内,<em>Consistency</em> 突然成了文生图领域的热点词: <a href="https://arxiv.org/abs/2310.04378" target="_blank" rel="noreferrer">Latent Consistency Model</a>、 <a href="https://arxiv.org/abs/2311.05556" target="_blank" rel="noreferrer">Latent Consistency Model LoRA (LCM-LoRA)</a> 、 <a href="https://github.com/openai/consistencydecoder" target="_blank" rel="noreferrer">Consistency Decoder</a> 接连出现。</p> <p> <figure> <img class=" mx-auto my-0 rounded-md " width="1882" height="959" srcset="https://wrong.wang/blog/20231111-consistency-is-all-you-need/image_0_hu7774037104096827349.webp 330w,/blog/20231111-consistency-is-all-you-need/image_0_hu9332106179657381514.webp 660w,/blog/20231111-consistency-is-all-you-need/image_0_hu11051329789320331124.webp 1024w,/blog/20231111-consistency-is-all-you-need/image_0_hu16471691456134308722.webp 2x" src="https://wrong.wang/blog/20231111-consistency-is-all-you-need/image_0_hu9332106179657381514.webp" alt="" loading="lazy" /> </figure> </p> <p>我先画了一个简单的示意图说明这些新东西和之前算法之间的关系:</p> De-Diffusion Makes Text a Strong Cross-Modal Interface https://wrong.wang/paper/de-diffusion-makes-text-a-strong-cross-modal-interface/ Sat, 04 Nov 2023 02:01:31 +0800 https://wrong.wang/paper/de-diffusion-makes-text-a-strong-cross-modal-interface/ <p> <figure> <img class=" mx-auto my-0 rounded-md " width="1730" height="1304" srcset="https://wrong.wang/paper/de-diffusion-makes-text-a-strong-cross-modal-interface/image_0_hu15610296311729639523.webp 
330w,/paper/de-diffusion-makes-text-a-strong-cross-modal-interface/image_0_hu15355270576669074083.webp 660w,/paper/de-diffusion-makes-text-a-strong-cross-modal-interface/image_0_hu7413347440972210355.webp 1024w,/paper/de-diffusion-makes-text-a-strong-cross-modal-interface/image_0_hu3282131600844891109.webp 2x" src="https://wrong.wang/paper/de-diffusion-makes-text-a-strong-cross-modal-interface/image_0_hu15355270576669074083.webp" alt="" loading="lazy" /> </figure> De-Diffusion 把一幅图片<em>编码</em>为一段描述非常精准全面的 caption,这段 caption 送入预训练的 T2I 模型后可以<em>解码</em>重建原图。De-Diffusion 试图证明,除了把图片转换成 CLIP embedding,直接把图片转换为一段有意义的纯文本,然后送入 NLP 大模型也能完成很多多模态任务,甚至取得了比用 embedding 当做图片表示更好的效果。</p> Improving Image Generation with Better Captions https://wrong.wang/paper/improving-image-generation-with-better-captions/ Fri, 20 Oct 2023 15:20:31 +0800 https://wrong.wang/paper/improving-image-generation-with-better-captions/ <p> <figure> <img class=" mx-auto my-0 rounded-md " width="2041" height="901" srcset="https://wrong.wang/paper/improving-image-generation-with-better-captions/image_0_hu17103956154903393345.webp 330w,/paper/improving-image-generation-with-better-captions/image_0_hu11958326879090657522.webp 660w,/paper/improving-image-generation-with-better-captions/image_0_hu14847924085302965073.webp 1024w,/paper/improving-image-generation-with-better-captions/image_0_hu16816072813864652691.webp 2x" src="https://wrong.wang/paper/improving-image-generation-with-better-captions/image_0_hu11958326879090657522.webp" alt="" loading="lazy" /> </figure> </p> <p>DALLE3的效果有多牛自不用说。OpenAI最终还是出了一篇简单的介绍DALLE3的论文,涉及到的模型细节很少,重点是讲如何构造训练数据。</p> Kosmos-G - Generating Images in Context with Multimodal Large Language Models https://wrong.wang/paper/kosmos-g---generating-images-in-context-with-multimodal-large-language-models/ Tue, 10 Oct 2023 19:06:32 +0800 https://wrong.wang/paper/kosmos-g---generating-images-in-context-with-multimodal-large-language-models/ <p> <figure> <img class=" mx-auto my-0 rounded-md " 
width="1347" height="1448" srcset="https://wrong.wang/paper/kosmos-g---generating-images-in-context-with-multimodal-large-language-models/image_0_hu14505624487665990197.webp 330w,/paper/kosmos-g---generating-images-in-context-with-multimodal-large-language-models/image_0_hu14882774773437642776.webp 660w,/paper/kosmos-g---generating-images-in-context-with-multimodal-large-language-models/image_0_hu17045735714502851605.webp 1024w,/paper/kosmos-g---generating-images-in-context-with-multimodal-large-language-models/image_0_hu10579227881221286467.webp 2x" src="https://wrong.wang/paper/kosmos-g---generating-images-in-context-with-multimodal-large-language-models/image_0_hu14882774773437642776.webp" alt="" loading="lazy" /> </figure> </p> <p>KOSMOS-G的目标是实现zero-shot personalized text to image。能够实现多Object组合文本的保ID生成。</p> <p>KOSMOS-G的训练流程分为三个阶段:</p> DiffBlender - Scalable and Composable Multimodal Text-to-Image Diffusion Models https://wrong.wang/paper/diffblender---scalable-and-composable-multimodal-text-to-image-diffusion-models/ Mon, 09 Oct 2023 19:05:40 +0800 https://wrong.wang/paper/diffblender---scalable-and-composable-multimodal-text-to-image-diffusion-models/ <p> <figure> <img class=" mx-auto my-0 rounded-md " width="1662" height="1449" srcset="https://wrong.wang/paper/diffblender---scalable-and-composable-multimodal-text-to-image-diffusion-models/image_0_hu11285560617418964618.webp 330w,/paper/diffblender---scalable-and-composable-multimodal-text-to-image-diffusion-models/image_0_hu3461950195752979000.webp 660w,/paper/diffblender---scalable-and-composable-multimodal-text-to-image-diffusion-models/image_0_hu10759838819934196428.webp 1024w,/paper/diffblender---scalable-and-composable-multimodal-text-to-image-diffusion-models/image_0_hu8698454754347122363.webp 2x" src="https://wrong.wang/paper/diffblender---scalable-and-composable-multimodal-text-to-image-diffusion-models/image_0_hu3461950195752979000.webp" alt="" loading="lazy" /> </figure> </p> 
<p>DiffBlender目标是能同时结合文本、图片、不带空间信息的token序列、带空间信息的token序列等多种不同模态的控制信号,通过高效地训练Adapter或者HyperNetwork之类的外挂组件,实现可扩展的多模态信号控制图片生成。 其主要对标的是Composer、ControlNet这一类的算法: <figure> <img class=" mx-auto my-0 rounded-md " width="1619" height="603" srcset="https://wrong.wang/paper/diffblender---scalable-and-composable-multimodal-text-to-image-diffusion-models/image_1_hu12691659957842279847.webp 330w,/paper/diffblender---scalable-and-composable-multimodal-text-to-image-diffusion-models/image_1_hu16030667860798630136.webp 660w,/paper/diffblender---scalable-and-composable-multimodal-text-to-image-diffusion-models/image_1_hu4602389736755251076.webp 1024w,/paper/diffblender---scalable-and-composable-multimodal-text-to-image-diffusion-models/image_1_hu12433860822880659787.webp 2x" src="https://wrong.wang/paper/diffblender---scalable-and-composable-multimodal-text-to-image-diffusion-models/image_1_hu16030667860798630136.webp" alt="" loading="lazy" /> </figure> </p> PixArt-alpha - Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis https://wrong.wang/paper/pixart-alpha---fast-training-of-diffusion-transformer-for-photorealistic-text-to-image-synthesis/ Tue, 03 Oct 2023 19:01:00 +0800 https://wrong.wang/paper/pixart-alpha---fast-training-of-diffusion-transformer-for-photorealistic-text-to-image-synthesis/ <p> <figure> <img class=" mx-auto my-0 rounded-md " width="1826" height="1286" srcset="https://wrong.wang/paper/pixart-alpha---fast-training-of-diffusion-transformer-for-photorealistic-text-to-image-synthesis/image_0_hu13295845490272877995.webp 330w,/paper/pixart-alpha---fast-training-of-diffusion-transformer-for-photorealistic-text-to-image-synthesis/image_0_hu11672442588125029876.webp 660w,/paper/pixart-alpha---fast-training-of-diffusion-transformer-for-photorealistic-text-to-image-synthesis/image_0_hu1764589581890719970.webp 
1024w,/paper/pixart-alpha---fast-training-of-diffusion-transformer-for-photorealistic-text-to-image-synthesis/image_0_hu13530668632992944081.webp 2x" src="https://wrong.wang/paper/pixart-alpha---fast-training-of-diffusion-transformer-for-photorealistic-text-to-image-synthesis/image_0_hu11672442588125029876.webp" alt="" loading="lazy" /> </figure> </p> <p>这篇论文把训练过程拆分成了3个阶段:</p> <ol> <li> <p>Capturing Pixel Dependency:这个阶段模型进行类指导的图片生成,目标是能生成合理的图片。这个阶段模型在ImageNet上预训练了一个class guided的图片生成模型。然后用这个模型当做预训练权重,接着后续的训练。(<em>估计就是直接挑选了一个模型,懒得从头训练了</em>)</p> Emu - Enhancing Image Generation Models Using Photogenic Needles in a Haystack https://wrong.wang/paper/emu---enhancing-image-generation-models-using-photogenic-needles-in-a-haystack/ Thu, 28 Sep 2023 18:12:19 +0800 https://wrong.wang/paper/emu---enhancing-image-generation-models-using-photogenic-needles-in-a-haystack/ <p> <figure> <img class=" mx-auto my-0 rounded-md " width="1829" height="1642" srcset="https://wrong.wang/paper/emu---enhancing-image-generation-models-using-photogenic-needles-in-a-haystack/image_0_hu10210010954060763247.webp 330w,/paper/emu---enhancing-image-generation-models-using-photogenic-needles-in-a-haystack/image_0_hu6791911013398114981.webp 660w,/paper/emu---enhancing-image-generation-models-using-photogenic-needles-in-a-haystack/image_0_hu6841551114071666784.webp 1024w,/paper/emu---enhancing-image-generation-models-using-photogenic-needles-in-a-haystack/image_0_hu714579908816122204.webp 2x" src="https://wrong.wang/paper/emu---enhancing-image-generation-models-using-photogenic-needles-in-a-haystack/image_0_hu6791911013398114981.webp" alt="" loading="lazy" /> </figure> 核心观点是用一小批(2000)张<strong>极高质量</strong>的图片finetune基础文生图模型就可以让模型输出质量极大提升,同时生成图片的语义贴合描述,过拟合不严重。</p> Common Diffusion Noise Schedules and Sample Steps are Flawed https://wrong.wang/paper/common-diffusion-noise-schedules-and-sample-steps-are-flawed/ Wed, 02 Aug 2023 14:43:15 +0800 
https://wrong.wang/paper/common-diffusion-noise-schedules-and-sample-steps-are-flawed/ <p>This paper argues that common diffusion noise schedules and sample steps have two major flaws:</p> <ol> <li>The noise schedule does not guarantee a signal-to-noise ratio of 0 at the last timestep. Low-frequency information such as the sample mean therefore leaks to the network during training, yet at inference the initial Gaussian noise is specified with zero mean, so the model cannot generate very bright or very dark images.</li> <li>Samplers such as DDIM do not start their steps from the last timestep, which further aggravates the problem above.</li> </ol> <h2 class="group" id="推理"> Inference<span class="invisible group-hover:visible ml-2"><a href="#%e6%8e%a8%e7%90%86" class="text-neutral-300">#</a></span> </h2> <h3 class="group" id="diffusion基础"> Diffusion basics<span class="invisible group-hover:visible ml-2"><a href="#diffusion%e5%9f%ba%e7%a1%80" class="text-neutral-300">#</a></span> </h3> <p>Pre-define the diffusion rate \(\beta_t\), and let \(\alpha_t=1-\beta_t\), \(\bar{\alpha}_t = \prod_{i=1}^t \alpha_i\)</p> 生成周刊·第五期 https://wrong.wang/blog/20230324-%E7%94%9F%E6%88%90%E5%91%A8%E5%88%8A%E7%AC%AC%E4%BA%94%E6%9C%9F/ Fri, 24 Mar 2023 00:58:36 +0800 https://wrong.wang/blog/20230324-%E7%94%9F%E6%88%90%E5%91%A8%E5%88%8A%E7%AC%AC%E4%BA%94%E6%9C%9F/ <blockquote> <p>This post is assembled automatically by a tool from the notes I keep on <a href="https://log.wrong.wang/explore" target="_blank" rel="noreferrer">Memos</a>. <a href="https://www.cursor.so/" target="_blank" rel="noreferrer">Cursor</a> helped implement the functions for loading data from the API, parsing time strings, and the content-processing regexes.</p> </blockquote> <ol> <li> <p> <a href="https://github.com/cleanlab/cleanvision" target="_blank" rel="noreferrer">cleanlab/cleanvision: Automatically find issues in image datasets and practice data-centric computer vision (github.com)</a> screens the images in a dataset and flags anomalous ones, including Low Information and Blurry images.</p> </li> <li> <p> <a href="https://arxiv.org/pdf/2303.12733v1.pdf" target="_blank" rel="noreferrer">On the De-duplication of LAION-2B</a> is another de-duplication effort for LAION. Unlike Meta's approach of clustering first and then computing pairwise similarities, it compresses the CLIP features directly and computes similarities in bulk on the compressed features. The code is at <a href="https://github.com/ryanwebster90/snip-dedup" target="_blank" rel="noreferrer">ryanwebster90/snip-dedup (github.com)</a>; reportedly it can search the whole of LAION-2B very quickly.</p> </li> </ol> 生成周刊·第四期 https://wrong.wang/blog/20230225-%E7%94%9F%E6%88%90%E5%91%A8%E5%88%8A%E7%AC%AC%E5%9B%9B%E6%9C%9F/
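The zero terminal-SNR flaw called out in "Common Diffusion Noise Schedules and Sample Steps are Flawed" above can be checked numerically with the \(\bar{\alpha}_t\) definition from that post. A minimal sketch; the concrete schedule used here (the plain DDPM linear beta schedule from 1e-4 to 0.02 over 1000 steps) is my own choice for illustration, not taken from the post:

```python
# Plain DDPM-style linear beta schedule -- an assumption for illustration.
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * i / (T - 1) for i in range(T)]

# alpha_bar_t = prod_{i<=t} (1 - beta_i), as defined in the post.
alpha_bars = []
acc = 1.0
for b in betas:
    acc *= 1.0 - b
    alpha_bars.append(acc)

# SNR(t) = alpha_bar_t / (1 - alpha_bar_t). The paper's point: this should
# reach 0 at the final timestep, but with this schedule it does not.
snr_T = alpha_bars[-1] / (1.0 - alpha_bars[-1])
print(snr_T)  # small but non-zero, so the sample mean leaks at t = T
```

Because `snr_T` stays strictly positive, the network always sees a faint trace of the clean sample's mean at the last training timestep, which is exactly the train/inference mismatch the paper describes.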
Sat, 25 Feb 2023 23:00:27 +0800 https://wrong.wang/blog/20230225-%E7%94%9F%E6%88%90%E5%91%A8%E5%88%8A%E7%AC%AC%E5%9B%9B%E6%9C%9F/ <h2 class="group" id="论文"> 论文<span class="invisible group-hover:visible ml-2"><a href="#%e8%ae%ba%e6%96%87" class="text-neutral-300">#</a></span> </h2> <h3 class="group" id="constitutional-ai-harmlessness-from-ai-feedbackhttpsarxivorgabs221208073v1"> <a href="https://arxiv.org/abs/2212.08073v1" target="_blank" rel="noreferrer">Constitutional AI: Harmlessness from AI Feedback</a><span class="invisible group-hover:visible ml-2"><a href="#constitutional-ai-harmlessness-from-ai-feedbackhttpsarxivorgabs221208073v1" class="text-neutral-300">#</a></span> </h3> <p>Anthropic AI由OpenAI前任领导人(包括兄弟姐妹Daniela 和Dario Amodei)创立,于2023年1月发布了对名为Claude的新聊天机器人的有限测试,以与ChatGPT竞争。这篇文章是Anthropic AI发表的关于Claude的一篇论文。不知道为什么,感觉社区好像没太在意的样子。</p> BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models https://wrong.wang/paper/bootstrapping-language-image-pre-training-with-frozen-image-encoders-and-large-language-models/ Thu, 09 Feb 2023 00:08:41 +0800 https://wrong.wang/paper/bootstrapping-language-image-pre-training-with-frozen-image-encoders-and-large-language-models/ <p> <figure> <img class=" mx-auto my-0 rounded-md " width="1114" height="522" srcset="https://wrong.wang/paper/bootstrapping-language-image-pre-training-with-frozen-image-encoders-and-large-language-models/image_0_hu1538000416471526059.webp 330w,/paper/bootstrapping-language-image-pre-training-with-frozen-image-encoders-and-large-language-models/image_0_hu11964974741973081979.webp 660w,/paper/bootstrapping-language-image-pre-training-with-frozen-image-encoders-and-large-language-models/image_0_hu14254250546620445525.webp 1024w,/paper/bootstrapping-language-image-pre-training-with-frozen-image-encoders-and-large-language-models/image_0_hu9202596071751047229.webp 2x" 
src="https://wrong.wang/paper/bootstrapping-language-image-pre-training-with-frozen-image-encoders-and-large-language-models/image_0_hu11964974741973081979.webp" alt="" loading="lazy" /> </figure> </p> <p>BLIP-2 is a new work from the BLIP team; its core contribution is showing how to exploit a pretrained vision model and a pretrained language model together for multimodal tasks. BLIP-2 uses a pretrained image encoder and an LLM at the same time, so it can reuse both the knowledge stored in the LLM and the feature-extraction ability of the image encoder. To bridge these two single-modality models, BLIP-2 designs a relatively lightweight Q-Former module.</p> 生成周刊·第三期 https://wrong.wang/blog/20230209-%E7%94%9F%E6%88%90%E5%91%A8%E5%88%8A%E7%AC%AC%E4%B8%89%E6%9C%9F/ Thu, 09 Feb 2023 00:00:27 +0800 https://wrong.wang/blog/20230209-%E7%94%9F%E6%88%90%E5%91%A8%E5%88%8A%E7%AC%AC%E4%B8%89%E6%9C%9F/ <h2 class="group" id="论文"> Papers<span class="invisible group-hover:visible ml-2"><a href="#%e8%ae%ba%e6%96%87" class="text-neutral-300">#</a></span> </h2> <h3 class="group" id="blip-2-bootstrapping-language-image-pre-training-with-frozen-image-encoders-and-large-language-modelshttpsarxivorgabs230112597v1"> <a href="https://arxiv.org/abs/2301.12597v1" target="_blank" rel="noreferrer">BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models</a><span class="invisible group-hover:visible ml-2"><a href="#blip-2-bootstrapping-language-image-pre-training-with-frozen-image-encoders-and-large-language-modelshttpsarxivorgabs230112597v1" class="text-neutral-300">#</a></span> </h3> <p>The details are written up in a separate post; see <a href="https://wrong.wang/blog/20230209-%E8%AE%BA%E6%96%87%E5%88%86%E4%BA%AB-blip2/">论文分享 - BLIP2</a>.</p> <h2 class="group" id="只言片语"> Snippets<span class="invisible group-hover:visible ml-2"><a href="#%e5%8f%aa%e8%a8%80%e7%89%87%e8%af%ad" class="text-neutral-300">#</a></span> </h2> <h3 class="group" id="midjourney背后的技术猜测"> Guesses about the tech behind Midjourney<span class="invisible group-hover:visible ml-2"><a href="#midjourney%e8%83%8c%e5%90%8e%e7%9a%84%e6%8a%80%e6%9c%af%e7%8c%9c%e6%b5%8b" class="text-neutral-300">#</a></span> </h3> <p>Reddit上有个 <a
href="https://www.reddit.com/r/MachineLearning/comments/xpb2c5/comment/iq315l6/" target="_blank" rel="noreferrer">帖子</a>介绍了一些他很久以前了解的Midjourney背后的技术。He says MJ at one point fine-tuned SD with <a href="https://github.com/crowsonkb/v-diffusion-pytorch" target="_blank" rel="noreferrer">v-diffusion</a>, and the earliest fine-tuning data was a subset of LAION-2B. That suggests LAION-2B still holds plenty of treasure; the dataset is so large that nobody has really analyzed it carefully. Could it be that the high-quality anime and artwork data we want is already in LAION-2B, and the real question is how to sample it out?</p> 生成周刊·第二期 https://wrong.wang/blog/20230117-%E7%94%9F%E6%88%90%E5%91%A8%E5%88%8A%E7%AC%AC%E4%BA%8C%E6%9C%9F/ Tue, 17 Jan 2023 10:18:27 +0800 https://wrong.wang/blog/20230117-%E7%94%9F%E6%88%90%E5%91%A8%E5%88%8A%E7%AC%AC%E4%BA%8C%E6%9C%9F/ <h2 class="group" id="论文"> Papers<span class="invisible group-hover:visible ml-2"><a href="#%e8%ae%ba%e6%96%87" class="text-neutral-300">#</a></span> </h2> <h3 class="group" id="mid-u-guidance-fast-classifier-guidance-for-latent-diffusion-modelshttpswandbaijohnowhitakermidu-guidancereports-mid-u-guidance-fast-classifier-guidance-for-latent-diffusion-models--vmlldzozmjg0nza1"> <a href="https://wandb.ai/johnowhitaker/midu-guidance/reports/-Mid-U-Guidance-Fast-Classifier-Guidance-for-Latent-Diffusion-Models--VmlldzozMjg0NzA1" target="_blank" rel="noreferrer">Mid-U Guidance: Fast Classifier Guidance for Latent Diffusion Models</a><span class="invisible group-hover:visible ml-2"><a href="#mid-u-guidance-fast-classifier-guidance-for-latent-diffusion-modelshttpswandbaijohnowhitakermidu-guidancereports-mid-u-guidance-fast-classifier-guidance-for-latent-diffusion-models--vmlldzozmjg0nza1" class="text-neutral-300">#</a></span> </h3> <h3 class="group" id="sketch-guided-text-to-image-diffusion-modelshttpssketch-guided-diffusiongithubio"> <a href="https://sketch-guided-diffusion.github.io/" target="_blank" rel="noreferrer">Sketch-Guided Text-to-Image Diffusion Models</a><span class="invisible group-hover:visible ml-2"><a href="#sketch-guided-text-to-image-diffusion-modelshttpssketch-guided-diffusiongithubio" class="text-neutral-300">#</a></span>
</h3> <p>Although today's SOTA text-to-image models mostly rely on <strong>CFG</strong> (Classifier-Free Guidance), classifier guidance itself can provide more flexible guidance: the classifier can be any pretrained network that yields a gradient, as long as a loss can be built on top of it. Using a classifier for guidance, however, has two inherent problems:</p> 生成周刊·第一期 https://wrong.wang/blog/20230107-%E7%94%9F%E6%88%90%E5%91%A8%E5%88%8A%E7%AC%AC%E4%B8%80%E6%9C%9F/ Sat, 07 Jan 2023 15:40:27 +0800 https://wrong.wang/blog/20230107-%E7%94%9F%E6%88%90%E5%91%A8%E5%88%8A%E7%AC%AC%E4%B8%80%E6%9C%9F/ <h2 class="group" id="有趣的论文"> Interesting papers<span class="invisible group-hover:visible ml-2"><a href="#%e6%9c%89%e8%b6%a3%e7%9a%84%e8%ae%ba%e6%96%87" class="text-neutral-300">#</a></span> </h2> <h3 class="group" id="maskgit-masked-generative-image-transformerhttpsarxivorgabs220204200v1"> <a href="https://arxiv.org/abs/2202.04200v1" target="_blank" rel="noreferrer">MaskGIT: Masked Generative Image Transformer</a><span class="invisible group-hover:visible ml-2"><a href="#maskgit-masked-generative-image-transformerhttpsarxivorgabs220204200v1" class="text-neutral-300">#</a></span> </h3> <p>The generative model underlying Muse, Google's new large text-to-image model. It introduces a new &quot;Masked Visual Token Modeling&quot; training strategy.</p> 什么是Diffusion模型?
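The classifier-guidance idea described above (any pretrained network that provides a gradient can steer sampling toward what it prefers) can be sketched in one dimension. Everything below is a toy illustration I made up: the quadratic "classifier" log-probability, the finite-difference gradient, and the step size are all assumptions, not code from the posts being discussed:

```python
# Toy 1-D illustration of classifier guidance: nudge a sample along the
# gradient of a differentiable "classifier" score log p(y|x).

def classifier_logp(x, target=2.0):
    # Pretend classifier: log-probability peaks when x equals the target.
    return -(x - target) ** 2

def classifier_grad(x, target=2.0, eps=1e-5):
    # Central finite-difference gradient; a real model would use autograd.
    return (classifier_logp(x + eps, target)
            - classifier_logp(x - eps, target)) / (2 * eps)

x = 0.0        # stand-in for the current noisy sample
scale = 0.1    # guidance scale
for _ in range(100):
    x += scale * classifier_grad(x)  # gradient ascent on log p(y|x)

print(round(x, 3))  # the sample is pulled toward the classifier's preferred 2.0
```

In a real diffusion sampler this gradient term is added to the denoising update at each timestep, which is also why the two problems mentioned above arise: the classifier must cope with noisy inputs, and every guidance step costs an extra backward pass.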
https://wrong.wang/blog/20220605-%E4%BB%80%E4%B9%88%E6%98%AFdiffusion%E6%A8%A1%E5%9E%8B/ Sun, 05 Jun 2022 17:12:07 +0800 https://wrong.wang/blog/20220605-%E4%BB%80%E4%B9%88%E6%98%AFdiffusion%E6%A8%A1%E5%9E%8B/ <h2 class="group" id="diffusion过程"> Diffusion过程<span class="invisible group-hover:visible ml-2"><a href="#diffusion%e8%bf%87%e7%a8%8b" class="text-neutral-300">#</a></span> </h2> <p>扩散(Diffusion)在热力学中指细小颗粒从高密度区域扩散至低密度区域,在统计领域,扩散则指将复杂的分布转换为一个简单的分布的过程。 Diffusion模型定义了一个概率分布转换模型\(\mathcal{T}\),能将原始数据\(x_0\)构成的复杂分布\(p_{\mathrm{complex}}\)转换为一个简单的已知参数的先验分布\(p_{\mathrm{prior}}\):</p> 利用torch.fx提取PyTorch网络结构信息绘制网络结构图 https://wrong.wang/blog/20220520-%E5%88%A9%E7%94%A8torch.fx%E6%8F%90%E5%8F%96pytorch%E7%BD%91%E7%BB%9C%E7%BB%93%E6%9E%84%E4%BF%A1%E6%81%AF%E7%BB%98%E5%88%B6%E7%BD%91%E7%BB%9C%E7%BB%93%E6%9E%84%E5%9B%BE/ Fri, 20 May 2022 12:54:20 +0800 https://wrong.wang/blog/20220520-%E5%88%A9%E7%94%A8torch.fx%E6%8F%90%E5%8F%96pytorch%E7%BD%91%E7%BB%9C%E7%BB%93%E6%9E%84%E4%BF%A1%E6%81%AF%E7%BB%98%E5%88%B6%E7%BD%91%E7%BB%9C%E7%BB%93%E6%9E%84%E5%9B%BE/ <p> <a href="https://pytorch.org/docs/stable/fx.html" target="_blank" rel="noreferrer"><code>torch.fx</code></a>是一个用于转换(transform)PyTorch模型(即<code>nn.Module</code>)的工具包。从<code>torch 1.10</code>开始,工具包不再处于beta阶段,<code>torch.fx</code>成为了PyTorch的稳定功能。</p> <p>最近我比较闲,按照文档随便试了一下<code>torch.fx</code>的功能,立马意识到,这玩意真的挺有用!<code>torch.fx</code>能将<code>nn.Module</code>转换为一个图结构,图的节点保存着当前网络节点前向时的输入,输出和参数,以及网络结构本身。这个图结构保存的信息足够多,api丰富。我一直苦于看不懂使用Tensorboard的<code>add_graph</code>方法得到的网络结构图,就尝试用graphviz可视化<code>torch.fx</code>得到的图,发现效果确实不错,比Tensorboard的结果清晰不少。</p> 初始化Svelte+TailwindCSS网站 https://wrong.wang/flight-rules/20211220-%E5%88%9D%E5%A7%8B%E5%8C%96svelte-tailwindcss%E7%BD%91%E7%AB%99/ Mon, 20 Dec 2021 13:36:06 +0800 https://wrong.wang/flight-rules/20211220-%E5%88%9D%E5%A7%8B%E5%8C%96svelte-tailwindcss%E7%BD%91%E7%AB%99/ <p>Svelte是一个为构建Web 
APP设计的JavaScript框架,类似于Vue或者React。TailwindCSS是一个用于网页设计的CSS框架。这两个框架在各自领域都是独树一帜的存在,本文介绍一下怎么同时使用这两个框架。</p> 一行命令查看ZeroTier网络中设备IP和在线情况 https://wrong.wang/flight-rules/20211205-%E4%B8%80%E8%A1%8C%E5%91%BD%E4%BB%A4%E6%9F%A5%E7%9C%8Bzerotier%E7%BD%91%E7%BB%9C%E4%B8%AD%E8%AE%BE%E5%A4%87ip%E5%92%8C%E5%9C%A8%E7%BA%BF%E6%83%85%E5%86%B5/ Sun, 05 Dec 2021 21:19:23 +0800 https://wrong.wang/flight-rules/20211205-%E4%B8%80%E8%A1%8C%E5%91%BD%E4%BB%A4%E6%9F%A5%E7%9C%8Bzerotier%E7%BD%91%E7%BB%9C%E4%B8%AD%E8%AE%BE%E5%A4%87ip%E5%92%8C%E5%9C%A8%E7%BA%BF%E6%83%85%E5%86%B5/ <p>最近我经常需要查看ZeroTier中某个网络的设备在线情况和网络中每个设备的公网IP。由于ZeroTier的Web管理端做的不是很易用,每次都登录<code>my.zerotier.com</code>查看信息特别不方便。调研了一下,发现ZeroTier官方提供的API接口 <a href="https://docs.zerotier.com/central/v1/#tag/network-member" target="_blank" rel="noreferrer">Returns a list of Members on the network</a>正好能提供我所需要的信息。所以写了一个one-line shell命令在终端用表格展示一下结果。</p> 我的2022年秋招 https://wrong.wang/blog/20211130-%E6%88%91%E7%9A%842022%E5%B9%B4%E7%A7%8B%E6%8B%9B/ Tue, 30 Nov 2021 15:11:56 +0800 https://wrong.wang/blog/20211130-%E6%88%91%E7%9A%842022%E5%B9%B4%E7%A7%8B%E6%8B%9B/ <p>我的秋招自7月初投递米哈游开始,到11月中旬最终确定接腾讯的offer结束。从去年年底,我就一直在腾讯实习。到了七月份,我一边实习一边投递了米哈游、商汤、网易、华为、快手、阿里、字节这些公司的算法岗。最终除了米哈游一面挂和阿里没有给我正经的面试机会以外,其它公司的offer都拿到了,都是SP以上,网易、商汤、腾讯还给我开了比SSP还高一些的offer。秋招期间,我总是因为各种失误和遗憾而痛苦,但现在再回顾,作为一个普通人,自己能拿到现在这些offer,已经非常非常幸运了。似乎自己在秋招阶段还是做对了很多事情的。现在,我想和你分享一下我的秋招,我的感悟。</p> About https://wrong.wang/about/ Mon, 29 Nov 2021 09:47:56 +0800 https://wrong.wang/about/ <div class='text-center'> <h2 class="group" id="我是谁"> 我是谁<span class="invisible group-hover:visible ml-2"><a href="#%e6%88%91%e6%98%af%e8%b0%81" class="text-neutral-300">#</a></span> </h2> </div> <p> <figure> <img class=" mx-auto my-0 rounded-md " src="https://wrong.wang/about/image_0.svg" alt="自拍" /> <figcaption class="text-center">自拍正在加载&hellip;</figcaption> </figure> </p> <p>我叫Ray,行走江湖也常用<em>不对</em>(Wrong Wang)这个绰号,出生于上个世纪,在本世纪初长大。</p>
<p>我享受创造。小时候想造机器人造飞机,初中时就决定学自动化这个似乎与机器人最相关的专业;到了大学便选了自动化专业,却逐渐沉迷于代码;后面读研时兜兜转转选了 GAN 做研究方向,开始做图片生成;第一份工作在腾讯,作为螺丝钉,给 QQ 做乱七八糟的图片生成活动。我能创造得越来越多, 想创造得却越来越小。</p> Ubuntu关闭GUI https://wrong.wang/flight-rules/20200725-ubuntu%E5%85%B3%E9%97%ADgui/ Sat, 25 Jul 2020 10:45:31 +0800 https://wrong.wang/flight-rules/20200725-ubuntu%E5%85%B3%E9%97%ADgui/ <h2 class="group" id="一持久关闭"> 一、持久关闭<span class="invisible group-hover:visible ml-2"><a href="#%e4%b8%80%e6%8c%81%e4%b9%85%e5%85%b3%e9%97%ad" class="text-neutral-300">#</a></span> </h2> <p>执行以下命令,持久关闭Ubuntu桌面版的GUI环境(通过Ctrl+Alt+F1-F6快捷键进入命令行界面):</p> <div class="text-sm border border-solid bg-white shadow-md py-0"><div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>sudo systemctl set-default multi-user.target</span></span></code></pre></div></div><p>执行以下命令,持久开启Ubuntu桌面版的GUI环境(通过Ctrl+Alt+F7快捷键进入GUI界面):</p> 输出shell脚本头部的注释做帮助信息 https://wrong.wang/flight-rules/20200717-%E8%BE%93%E5%87%BAshell%E8%84%9A%E6%9C%AC%E5%A4%B4%E9%83%A8%E7%9A%84%E6%B3%A8%E9%87%8A%E5%81%9A%E5%B8%AE%E5%8A%A9%E4%BF%A1%E6%81%AF/ Fri, 17 Jul 2020 12:47:47 +0800 https://wrong.wang/flight-rules/20200717-%E8%BE%93%E5%87%BAshell%E8%84%9A%E6%9C%AC%E5%A4%B4%E9%83%A8%E7%9A%84%E6%B3%A8%E9%87%8A%E5%81%9A%E5%B8%AE%E5%8A%A9%E4%BF%A1%E6%81%AF/ <p>博主 <a href="https://twitter.com/reconquestio" target="_blank" rel="noreferrer">@reconquestio</a>发了一篇文章 <a href="https://samizdat.dev/help-message-for-shell-scripts/" target="_blank" rel="noreferrer">Help message for shell scripts</a>展示一个技巧,将帮助信息写在 Bash 脚本脚本的头部,然后只要执行&quot;脚本名 + help&quot;,就能输出这段帮助信息。我翻译了一下,权做参考。</p> 使用官方库在golang中表示json的三种方法 https://wrong.wang/flight-rules/20200523-%E4%BD%BF%E7%94%A8%E5%AE%98%E6%96%B9%E5%BA%93%E5%9C%A8golang%E4%B8%AD%E8%A1%A8%E7%A4%BAjson%E7%9A%84%E4%B8%89%E7%A7%8D%E6%96%B9%E6%B3%95/ Sat, 23 May 2020 07:53:54 +0800 
https://wrong.wang/flight-rules/20200523-%E4%BD%BF%E7%94%A8%E5%AE%98%E6%96%B9%E5%BA%93%E5%9C%A8golang%E4%B8%AD%E8%A1%A8%E7%A4%BAjson%E7%9A%84%E4%B8%89%E7%A7%8D%E6%96%B9%E6%B3%95/ <p>以下方法均只使用了<code>encoding/json</code>这个库,但事实上业界还有很多很优秀的<code>JSON</code>解析库,也对应着有不同表示<code>JSON</code>的方法。本文只描述使用官方库时可用的3种方式。</p> 两个有趣的AI动作迁移(Motion Transfer)项目: Pose Animator,avatarify https://wrong.wang/blog/20200516-%E4%B8%A4%E4%B8%AA%E6%9C%89%E8%B6%A3%E7%9A%84ai%E5%8A%A8%E4%BD%9C%E8%BF%81%E7%A7%BBmotion-transfer%E9%A1%B9%E7%9B%AE-pose-animatoravatarify/ Sat, 16 May 2020 21:20:37 +0800 https://wrong.wang/blog/20200516-%E4%B8%A4%E4%B8%AA%E6%9C%89%E8%B6%A3%E7%9A%84ai%E5%8A%A8%E4%BD%9C%E8%BF%81%E7%A7%BBmotion-transfer%E9%A1%B9%E7%9B%AE-pose-animatoravatarify/ <p>疫情肆虐期间大家被限制在家中,只能远程工作,远程会议。散落在各地的程序员也同样如此,整天面对笔记本的摄像头工作,催生出两个利用笔记本摄像头和AI实现的小项目。这俩项目很有趣,同时和我研究生在做的课题比较接近,于是我粗略地研究了一下他们的方案,和大家分享一下这两个Motion Transfer项目。</p> 相爱千日碎碎念 https://wrong.wang/blog/20200214-%E7%9B%B8%E7%88%B1%E5%8D%83%E6%97%A5%E7%A2%8E%E7%A2%8E%E5%BF%B5/ Fri, 14 Feb 2020 00:05:20 +0800 https://wrong.wang/blog/20200214-%E7%9B%B8%E7%88%B1%E5%8D%83%E6%97%A5%E7%A2%8E%E7%A2%8E%E5%BF%B5/ <p>如果在2017年5月20日在一起,那么2017年8月28日(七夕)是在一起的第100天,今天(2020年2月14日)则是在一起的第1000天。 我和我的女朋友就是17年5月20号在一起的。</p> Cheatsheet https://wrong.wang/cheatsheet/ Mon, 27 Jan 2020 21:45:37 +0800 https://wrong.wang/cheatsheet/ <div class="mx-auto bg-zinc-100 m-4"> <details class="open:bg-white dark:open:bg-slate-900 open:ring-1 open:ring-black/5 dark:open:ring-white/10 open:shadow-md p-4" close> <summary class="text-sm leading-6 text-slate-900 dark:text-white font-semibold select-none"> <span class=""><code>pip</code>如何使用清华源</span> </summary> <div class="mt-3 text-sm leading-6 text-slate-900 dark:text-slate-400"> <h3 class="group" id="临时使用"> 临时使用<span class="invisible group-hover:visible ml-2"><a href="#%e4%b8%b4%e6%97%b6%e4%bd%bf%e7%94%a8" class="text-neutral-300">#</a></span> </h3> <div class="text-sm border border-solid bg-white shadow-md py-0"><div 
class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>pip install -i https://pypi.tuna.tsinghua.edu.cn/simple some-package</span></span></code></pre></div></div><p>注意,simple 不能少,且是 https 而不是 http。</p> 新建空git分支 https://wrong.wang/flight-rules/20191116-%E6%96%B0%E5%BB%BA%E7%A9%BAgit%E5%88%86%E6%94%AF/ Sat, 16 Nov 2019 09:23:12 +0800 https://wrong.wang/flight-rules/20191116-%E6%96%B0%E5%BB%BA%E7%A9%BAgit%E5%88%86%E6%94%AF/ <p>偶尔有这么一个奇怪的需求:新建一个不包含任何commit的git分支。比如你使用<em>GitHub Pages</em>,需要新增一个gh-pages分支,由于这个分支只需要一些HTML/CSS/JS,就需要新建一个不包含任何commit的新分支。</p> 更改Windows中文版默认英文字体 https://wrong.wang/flight-rules/20191113-%E6%9B%B4%E6%94%B9windows%E4%B8%AD%E6%96%87%E7%89%88%E9%BB%98%E8%AE%A4%E8%8B%B1%E6%96%87%E5%AD%97%E4%BD%93/ Wed, 13 Nov 2019 16:18:40 +0800 https://wrong.wang/flight-rules/20191113-%E6%9B%B4%E6%94%B9windows%E4%B8%AD%E6%96%87%E7%89%88%E9%BB%98%E8%AE%A4%E8%8B%B1%E6%96%87%E5%AD%97%E4%BD%93/ <p>Windows中文版默认英文字体为宋体,导致一些软件如Mendeley界面非常丑。</p> <p>替换Windows下默认字体的方法如下:</p> <ol> <li>按下Windows+R组合键打开<strong>运行</strong>;</li> <li>输入<code>regedit</code>并回车打开注册表编辑器;</li> <li>打开注册表中的<code>[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\GRE_Initialize]</code>项;</li> <li>将<code>GUIFont.Facename</code>项更改为你喜欢的字体,如<code>Tahoma</code>或<code>Arial</code>;</li> <li>重启电脑。</li> </ol> 抵制or不抵制notepad++?
https://wrong.wang/blog/20191030-%E6%8A%B5%E5%88%B6or%E4%B8%8D%E6%8A%B5%E5%88%B6notepad-/ Wed, 30 Oct 2019 17:20:03 +0800 https://wrong.wang/blog/20191030-%E6%8A%B5%E5%88%B6or%E4%B8%8D%E6%8A%B5%E5%88%B6notepad-/ <p>今年“政治”的存在感非常高,这不,Notepad++作者又发表了一些反华言论,接着<code>v2ex.com</code>上就出现很多个帖子讨论这件事情,部分帖子号召大家抵制这个软件,但我QQ空间里也有人开始暗搓搓地表达对这种抵制来抵制去的不满。</p> 安装标准的caffe和opencv https://wrong.wang/flight-rules/20191016-%E5%AE%89%E8%A3%85%E6%A0%87%E5%87%86%E7%9A%84caffe%E5%92%8Copencv/ Wed, 16 Oct 2019 16:09:16 +0800 https://wrong.wang/flight-rules/20191016-%E5%AE%89%E8%A3%85%E6%A0%87%E5%87%86%E7%9A%84caffe%E5%92%8Copencv/ <p>如果只是使用最标准的caffe,没有自定义layer的话,使用<code>conda</code>安装caffe很简单:</p> <div class="text-sm border border-solid bg-white shadow-md py-0"><div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#008000"># 创建一个虚拟环境,使用python2</span> </span></span><span style="display:flex;"><span>conda create -n caffe-python2 python=2.7 </span></span><span style="display:flex;"><span><span style="color:#008000"># 激活这个环境</span> </span></span><span style="display:flex;"><span>conda activate caffe-python2 </span></span><span style="display:flex;"><span><span style="color:#008000"># 安装GPU版本的caffe,如果需要,CUDNN等等依赖conda也会帮你装好</span> </span></span><span style="display:flex;"><span>conda install -c defaults caffe-gpu </span></span><span style="display:flex;"><span><span style="color:#008000"># 安装2.4版本的OpenCV</span> </span></span><span style="display:flex;"><span>conda install -c https://conda.binstar.org/menpo opencv</span></span></code></pre></div></div> 远离UGC https://wrong.wang/blog/20191011-%E8%BF%9C%E7%A6%BBugc/ Fri, 11 Oct 2019 10:34:30 +0800 https://wrong.wang/blog/20191011-%E8%BF%9C%E7%A6%BBugc/ <p>我现在非常认同一个我之前嗤之以鼻的观点:经常在deadline之前摸鱼反而不会快乐,会很抑郁。</p> 
<p>今天又一次在电脑前思考我该干啥,由于事情太多,deadline太赶,我被压垮了,啥都不想做,直接翻身上了床睡了一觉。起来下来照常先刷一波知乎、微博、v2ex、煎蛋、虎嗅、RSS,刷完后又面临同样问题:我该先干啥?</p> Manjaro I3连接蓝牙耳机 https://wrong.wang/flight-rules/20191001-manjaro-i3%E8%BF%9E%E6%8E%A5%E8%93%9D%E7%89%99%E8%80%B3%E6%9C%BA/ Tue, 01 Oct 2019 11:07:58 +0800 https://wrong.wang/flight-rules/20191001-manjaro-i3%E8%BF%9E%E6%8E%A5%E8%93%9D%E7%89%99%E8%80%B3%E6%9C%BA/ <p>manjaro i3版本默认安装了 blueman 系列软件。其中:</p> <ul> <li><code>blueman-manager</code> 可以选择连接蓝牙设备</li> <li><code>blueman-adapters</code> 可以修改蓝牙名等本机蓝牙设置</li> </ul> <p>但是,刚装好系统时,我尝试连接蓝牙耳机,一直提示错误:</p> 关于“混合虚实” https://wrong.wang/blog/20190925-%E5%85%B3%E4%BA%8E%E6%B7%B7%E5%90%88%E8%99%9A%E5%AE%9E/ Wed, 25 Sep 2019 23:27:33 +0800 https://wrong.wang/blog/20190925-%E5%85%B3%E4%BA%8E%E6%B7%B7%E5%90%88%E8%99%9A%E5%AE%9E/ <blockquote> <p>混合虚实,想了一段时间,决定把音视频制作,3D影像建构和老照片、老视频基于AI混合在一起的技术统称为混合虚实。混合虚实技术的发展在5G大规模部署后将会成为视频中的主流。这将进一步降低门槛,未来我们都不需要拍摄即可获得自己的任意指定场景的影像。这一天很快会到来。 摘自微博 <a href="https://weibo.com/1652867473/I7M7vF8tY" target="_blank" rel="noreferrer">新媒沈阳</a></p> 新闻致人郁闷 https://wrong.wang/blog/20190901-%E6%96%B0%E9%97%BB%E8%87%B4%E4%BA%BA%E9%83%81%E9%97%B7/ Sun, 01 Sep 2019 18:57:28 +0800 https://wrong.wang/blog/20190901-%E6%96%B0%E9%97%BB%E8%87%B4%E4%BA%BA%E9%83%81%E9%97%B7/ <p>不瞒各位,最近我做事效率奇低,很大一个原因是我总是抱着手机刷各种关于香港的新闻。</p> <p>从观察者网开始,到微博,到多维,到Twitter,到BBC,到Matters.news,到Hacker News,到Telegram中的国外新闻网站聚合频道。从光谱的一端走到另一端后,我每天总是打开这些网站翻来覆去地刷新,仿佛是笼子里的猴子,时不时在投喂口看看有没有新的食物落下。这种刷新的快感很是让我沉迷,但心中郁闷之情也集聚甚多,以至于不吐不快。</p> 第四次换域名感言 https://wrong.wang/blog/20190822-%E7%AC%AC%E5%9B%9B%E6%AC%A1%E6%8D%A2%E5%9F%9F%E5%90%8D%E6%84%9F%E8%A8%80/ Thu, 22 Aug 2019 21:14:41 +0800 https://wrong.wang/blog/20190822-%E7%AC%AC%E5%9B%9B%E6%AC%A1%E6%8D%A2%E5%9F%9F%E5%90%8D%E6%84%9F%E8%A8%80/ <p>本站目前域名是rayhy.com,这已经是我第四次换域名了。前两次只是在玩票,暂不说它。</p> <p>由于我的本名太过泯然众人,与名字相关的域名早就被注册一空(当然我还不是最惨的,相信我哥哥魏bo也是这么认为),所以我注册域名的原则一直是网名优先。但网名优先法的达摩克斯之剑就是:网名不稳定,所谓喜新厌旧是也。所以去年毅然决然地以志趣做域名:低熵(Low Entropy)。熵这个概念一谈起来就高大上。低熵,就是Bring Order to 
Chaos呀,我的小网站也算是世界朝着高熵发展到终局前绝望的挣扎了。正是世界的终局,个人的挣扎。lowentropy.me这个域名很优秀,我非常喜欢。</p> 如何方便地同时使用命令行参数和配置文件指定程序参数 https://wrong.wang/blog/20190816-%E5%A6%82%E4%BD%95%E6%96%B9%E4%BE%BF%E5%9C%B0%E5%90%8C%E6%97%B6%E4%BD%BF%E7%94%A8%E5%91%BD%E4%BB%A4%E8%A1%8C%E5%8F%82%E6%95%B0%E5%92%8C%E9%85%8D%E7%BD%AE%E6%96%87%E4%BB%B6%E6%8C%87%E5%AE%9A%E7%A8%8B%E5%BA%8F%E5%8F%82%E6%95%B0/ Fri, 16 Aug 2019 14:45:07 +0800 https://wrong.wang/blog/20190816-%E5%A6%82%E4%BD%95%E6%96%B9%E4%BE%BF%E5%9C%B0%E5%90%8C%E6%97%B6%E4%BD%BF%E7%94%A8%E5%91%BD%E4%BB%A4%E8%A1%8C%E5%8F%82%E6%95%B0%E5%92%8C%E9%85%8D%E7%BD%AE%E6%96%87%E4%BB%B6%E6%8C%87%E5%AE%9A%E7%A8%8B%E5%BA%8F%E5%8F%82%E6%95%B0/ <p>最近在写深度学习代码,很头疼的一个问题是:代码中有很多需要经常调整的超参数,要能通过配置修改这些超参数,不能直接写死。</p> <p>参数较少时,直接使用命令行参数指定就行了,灵活方便。但是,当参数量比较多时,命令行参数就不太合适了,主要有三个问题:</p> 查看linux服务器的开放端口 https://wrong.wang/flight-rules/20190730-%E6%9F%A5%E7%9C%8Blinux%E6%9C%8D%E5%8A%A1%E5%99%A8%E7%9A%84%E5%BC%80%E6%94%BE%E7%AB%AF%E5%8F%A3/ Tue, 30 Jul 2019 12:56:38 +0800 https://wrong.wang/flight-rules/20190730-%E6%9F%A5%E7%9C%8Blinux%E6%9C%8D%E5%8A%A1%E5%99%A8%E7%9A%84%E5%BC%80%E6%94%BE%E7%AB%AF%E5%8F%A3/ <p>很多命令都可以查看当前开放的端口。</p> <h2 class="group" id="netstat"> netstat<span class="invisible group-hover:visible ml-2"><a href="#netstat" class="text-neutral-300">#</a></span> </h2> <div class="text-sm border border-solid bg-white shadow-md py-0"><div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>sudo netstat -tulpn | grep LISTEN</span></span></code></pre></div></div><ul> <li><strong>-t</strong>: 所有TCP端口</li> <li><strong>-u</strong>: 所有UDP端口</li> <li><strong>-l</strong>: 显示正在监听中的socket</li> <li><strong>-p</strong>: 显示socket对应的程序名字、PID</li> <li><strong>-n</strong>: 不需要解析名字</li> </ul> <h2 class="group" id="ss"> ss<span class="invisible group-hover:visible ml-2"><a href="#ss" class="text-neutral-300">#</a></span> </h2> <div class="text-sm
border border-solid bg-white shadow-md py-0"><div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>sudo ss -tulpn</span></span></code></pre></div></div><h2 class="group" id="lsof"> lsof<span class="invisible group-hover:visible ml-2"><a href="#lsof" class="text-neutral-300">#</a></span> </h2> <div class="text-sm border border-solid bg-white shadow-md py-0"><div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>sudo lsof -i -P -n | grep LISTEN</span></span></code></pre></div></div><h2 class="group" id="nmap"> nmap<span class="invisible group-hover:visible ml-2"><a href="#nmap" class="text-neutral-300">#</a></span> </h2> <div class="text-sm border border-solid bg-white shadow-md py-0"><div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>sudo nmap -sT -O localhost</span></span></code></pre></div></div><p>实际用起来,感觉<code>ss</code>输出信息比较直观。</p> ssh内网穿透 https://wrong.wang/flight-rules/20190721-ssh%E5%86%85%E7%BD%91%E7%A9%BF%E9%80%8F/ Sun, 21 Jul 2019 18:48:36 +0800 https://wrong.wang/flight-rules/20190721-ssh%E5%86%85%E7%BD%91%E7%A9%BF%E9%80%8F/ <p>这种方法需要有一台公网中的VPS。</p> <p>将三台机器描述如下:</p> <table> <thead> <tr> <th style="text-align: center">机器代号</th> <th style="text-align: center">网络位置描述</th> <th style="text-align: center">地址</th> <th style="text-align: center">账户</th> <th style="text-align: center">端口</th> <th style="text-align: center">运行程序</th> </tr> </thead> <tbody> <tr> <td style="text-align: center">server</td> <td style="text-align: center">内网或者防火墙后,只能主动连外网</td> <td style="text-align: center">localhost</td> <td style="text-align: center">user</td> <td 
style="text-align: center">22</td> <td style="text-align: center">autossh</td> </tr> <tr> <td style="text-align: center">VPS</td> <td style="text-align: center">有一个公网IP,公网双向可用</td> <td style="text-align: center">lowentropy.me</td> <td style="text-align: center">rp</td> <td style="text-align: center">2201, 2200</td> <td style="text-align: center">sshd</td> </tr> <tr> <td style="text-align: center">PC</td> <td style="text-align: center">自己的电脑,能访问到VPS</td> <td style="text-align: center"></td> <td style="text-align: center"></td> <td style="text-align: center"></td> <td style="text-align: center">ssh</td> </tr> </tbody> </table> <p>我们的目标很简单:<strong>在PC端使用ssh,以VPS作为跳板,连接内网中的服务器server</strong>。</p> 还与韶光共憔悴,不堪看 https://wrong.wang/blog/20190720-%E8%BF%98%E4%B8%8E%E9%9F%B6%E5%85%89%E5%85%B1%E6%86%94%E6%82%B4%E4%B8%8D%E5%A0%AA%E7%9C%8B/ Sat, 20 Jul 2019 15:01:14 +0800 https://wrong.wang/blog/20190720-%E8%BF%98%E4%B8%8E%E9%9F%B6%E5%85%89%E5%85%B1%E6%86%94%E6%82%B4%E4%B8%8D%E5%A0%AA%E7%9C%8B/ <p>我关注了谢益辉的 <a href="https://yihui.name/" target="_blank" rel="noreferrer">博客</a>。他博客的RSS源很有意思,不知道什么原因,总是隔几个月才将他这几个月的所有博客一起加入RSS中。导致我每次注意到他的博客更新,都有几十篇未看。今天就是这样,我一下子看了他上半年的几十篇博客。</p> <p>经常看到有人问,除了程序员的技术博客,现在还有人认认真真地写博客吗?谢益辉就是一个认认真真写博客的人。第一次进入他的博客是因为他关于电子排版的文章,看了他很多关于网页排版、字体的文章后,我很认同他的理念,我的博客主题就是在他设计的主题基础上修修补补得来的。</p> 交换ctrl与Caps lock键 https://wrong.wang/flight-rules/20190718-%E4%BA%A4%E6%8D%A2ctrl%E4%B8%8Ecaps-lock%E9%94%AE/ Thu, 18 Jul 2019 19:18:03 +0800 https://wrong.wang/flight-rules/20190718-%E4%BA%A4%E6%8D%A2ctrl%E4%B8%8Ecaps-lock%E9%94%AE/ <p>最近准备尝试下交换CTRL与Caps lock键。已经有很多交换这两个按键的方式。但大都不太满足我的要求:最好不装软件,最好很容易恢复到原来的样子。</p> docker container新增开放端口 https://wrong.wang/flight-rules/20190718-docker-container%E6%96%B0%E5%A2%9E%E5%BC%80%E6%94%BE%E7%AB%AF%E5%8F%A3/ Thu, 18 Jul 2019 16:23:42 +0800 https://wrong.wang/flight-rules/20190718-docker-container%E6%96%B0%E5%A2%9E%E5%BC%80%E6%94%BE%E7%AB%AF%E5%8F%A3/ 
<p>实验室最近在拿docker当虚拟机用。这当然完全违逆了docker的使用准则,但是考虑到docker配合nvidia-docker能同时使用不同的深度学习环境,而且管理较为简单,所以最终我们还是决定拿docker的容器当作虚拟机来使用。</p> 可能是目前最简单方便的管理dotfiles的方法:使用裸Git仓库 https://wrong.wang/blog/20190708-%E5%8F%AF%E8%83%BD%E6%98%AF%E7%9B%AE%E5%89%8D%E6%9C%80%E7%AE%80%E5%8D%95%E6%96%B9%E4%BE%BF%E7%9A%84%E7%AE%A1%E7%90%86dotfiles%E7%9A%84%E6%96%B9%E6%B3%95%E4%BD%BF%E7%94%A8%E8%A3%B8git%E4%BB%93%E5%BA%93/ Mon, 08 Jul 2019 14:12:18 +0800 https://wrong.wang/blog/20190708-%E5%8F%AF%E8%83%BD%E6%98%AF%E7%9B%AE%E5%89%8D%E6%9C%80%E7%AE%80%E5%8D%95%E6%96%B9%E4%BE%BF%E7%9A%84%E7%AE%A1%E7%90%86dotfiles%E7%9A%84%E6%96%B9%E6%B3%95%E4%BD%BF%E7%94%A8%E8%A3%B8git%E4%BB%93%E5%BA%93/ <blockquote> <p>标题略有夸张,很多人都有独特的、适合自己的管理dotfiles的方案。本文无意诋毁其它方法,只是介绍一种仅仅依靠git就能优雅地管理dotfiles的方案。</p> 175天 https://wrong.wang/blog/20190315-175%E5%A4%A9/ Fri, 15 Mar 2019 16:20:53 +0800 https://wrong.wang/blog/20190315-175%E5%A4%A9/ <p>去年9月18号我保研了。今年3月12号晚上6点50,女朋友考研成功上岸。我比较弱,就留在本学院了;我女朋友强一些,最后去了浙大。</p> <p>结局是快乐的,但过程却波折。她初试成绩刚出来时,虽然蛮低于预期,但也有380+,我觉得也就差不多了吧,能上浙大。</p> 本站引用图片的“顺滑”流程 https://wrong.wang/blog/20190301-%E6%9C%AC%E7%AB%99%E5%BC%95%E7%94%A8%E5%9B%BE%E7%89%87%E7%9A%84%E9%A1%BA%E6%BB%91%E6%B5%81%E7%A8%8B/ Fri, 01 Mar 2019 23:07:39 +0800 https://wrong.wang/blog/20190301-%E6%9C%AC%E7%AB%99%E5%BC%95%E7%94%A8%E5%9B%BE%E7%89%87%E7%9A%84%E9%A1%BA%E6%BB%91%E6%B5%81%E7%A8%8B/ <p>对我来说,静态博客的图片管理需要满足以下几个需求:</p> <ol> <li>方便引用,插入图片时怎么简单怎么来;</li> <li>方便备份,图片最好存在多处,防止图片丢失;</li> <li>方便管理,可以用程序自动上传到多个地方;</li> <li>保证速度,图片最好有CDN,加载不要太慢。</li> </ol> <h2 class="group" id="文件组织"> 文件组织<span class="invisible group-hover:visible ml-2"><a href="#%e6%96%87%e4%bb%b6%e7%bb%84%e7%bb%87" class="text-neutral-300">#</a></span> </h2> <p>我的网站使用的生成器是hugo。为了方便地引用,保存图片,我现在采用如下的文件组织形式:</p> ReID任务中的CMC和mAP https://wrong.wang/blog/20190223-reid%E4%BB%BB%E5%8A%A1%E4%B8%AD%E7%9A%84cmc%E5%92%8Cmap/ Sat, 23 Feb 2019 16:33:24 +0800 https://wrong.wang/blog/20190223-reid%E4%BB%BB%E5%8A%A1%E4%B8%AD%E7%9A%84cmc%E5%92%8Cmap/ <h2 class="group" id="reid"> ReID<span 
class="invisible group-hover:visible ml-2"><a href="#reid" class="text-neutral-300">#</a></span> </h2> <p>ReID指Re-identification,常翻译为重识别。ReID任务本身分类很多,本文只讨论基于图片的ReID任务中<code>single-gallery-shot</code>这一最简单的情况。</p> Windows下获取连接过的WiFi的密码 https://wrong.wang/flight-rules/20190121-windows%E4%B8%8B%E8%8E%B7%E5%8F%96%E8%BF%9E%E6%8E%A5%E8%BF%87%E7%9A%84wifi%E7%9A%84%E5%AF%86%E7%A0%81/ Mon, 21 Jan 2019 20:18:03 +0800 https://wrong.wang/flight-rules/20190121-windows%E4%B8%8B%E8%8E%B7%E5%8F%96%E8%BF%9E%E6%8E%A5%E8%BF%87%E7%9A%84wifi%E7%9A%84%E5%AF%86%E7%A0%81/ <ol> <li>打开cmd(win+r输入cmd)</li> <li>输入以下命令:<code>netsh wlan show profile WiFi名字 key=clear</code>,注意把WiFi名字部分替换为你想知道密码的WiFi名。</li> <li>输出的内容中,<code>安全设置-&gt;关键内容</code> 就是WiFi密码。</li> </ol> <p>当然,当你因为记性不好,需要查看已经连过的WiFi的密码时,你可能同时因为记性,记不起正确的WiFi名字。你可以通过<code>netsh wlan show profile</code>这条命令查看本机连接过的所有WiFi。</p> 你我和互联网(一):自由与控制 https://wrong.wang/blog/20181216-%E4%BD%A0%E6%88%91%E5%92%8C%E4%BA%92%E8%81%94%E7%BD%91%E4%B8%80%E8%87%AA%E7%94%B1%E4%B8%8E%E6%8E%A7%E5%88%B6/ Sun, 16 Dec 2018 15:58:31 +0800 https://wrong.wang/blog/20181216-%E4%BD%A0%E6%88%91%E5%92%8C%E4%BA%92%E8%81%94%E7%BD%91%E4%B8%80%E8%87%AA%E7%94%B1%E4%B8%8E%E6%8E%A7%E5%88%B6/ <blockquote> <p>本文写在读胡泳(公众号:beingdigital)的 <a href="https://www.huxiu.com/article/268816.html" target="_blank" rel="noreferrer">《中国互联网二十年:自由的向往,信任的呼唤》</a>之后, 文章深意尚无理解, 已略有所感,故有此文。</p> </blockquote> <blockquote> <p>这篇文章10月27号就开始写了,后来攒了很久,一直写不完。太大的命题了,我究竟懂多少呢?文中按主题拆成了3个部分,就先把第一个部分的东西放上来。(仅仅过了1个月,这篇文章我就有些看不下去了,想法自大,行文幼稚。但毕竟是我自己当时认真思考后的产物,舍不得删掉。)</p> Golang 中 http.Get 过慢原因 https://wrong.wang/blog/20181206-golang-%E4%B8%AD-http.get-%E8%BF%87%E6%85%A2%E5%8E%9F%E5%9B%A0/ Thu, 06 Dec 2018 17:22:07 +0800 https://wrong.wang/blog/20181206-golang-%E4%B8%AD-http.get-%E8%BF%87%E6%85%A2%E5%8E%9F%E5%9B%A0/ <h2 class="group" id="背景"> 背景<span class="invisible group-hover:visible ml-2"><a href="#%e8%83%8c%e6%99%af" class="text-neutral-300">#</a></span> </h2> <p>我最近打算好好学一下Golang。翻开《The Go Programming 
Language》第一章<code>fetch</code>(page 16)单元,我拷贝了书上的 <a href="https://github.com/adonovan/gopl.io/blob/master/ch1/fetch/main.go" target="_blank" rel="noreferrer">代码</a>执行。发现下载百度首页都需要10s+,用<code>curl</code>下载则只需要不到1s,很奇怪。</p> 使用Pandoc和KaTeX为HUGO添加LaTeX支持 https://wrong.wang/flight-rules/20181130-%E4%BD%BF%E7%94%A8pandoc%E5%92%8Ckatex%E4%B8%BAhugo%E6%B7%BB%E5%8A%A0latex%E6%94%AF%E6%8C%81/ Fri, 30 Nov 2018 09:18:03 +0800 https://wrong.wang/flight-rules/20181130-%E4%BD%BF%E7%94%A8pandoc%E5%92%8Ckatex%E4%B8%BAhugo%E6%B7%BB%E5%8A%A0latex%E6%94%AF%E6%8C%81/ <p>最近在扫论文。写阅读笔记的时候,需要在Markdown中写公式。</p> <p>我一般用 <a href="https://code.visualstudio.com/" target="_blank" rel="noreferrer">Visual Studio Code</a>写Markdown文件,插件 <a href="https://marketplace.visualstudio.com/items?itemName=yzhang.markdown-all-in-one" target="_blank" rel="noreferrer">Markdown All in One</a>可以给VScode添加LaTeX公式支持,在本地写作很方便。然而到了要生成网页展示的时候,却发现因为Markdown标识符(如<code>_</code>)和LaTeX标识符含义冲突,hugo对公式的支持有很多 <a href="https://gohugo.io/content-management/formats/#issues-with-markdown" target="_blank" rel="noreferrer">问题</a>。</p> 穷折腾的无头苍蝇 https://wrong.wang/blog/20181126-%E7%A9%B7%E6%8A%98%E8%85%BE%E7%9A%84%E6%97%A0%E5%A4%B4%E8%8B%8D%E8%9D%87/ Mon, 26 Nov 2018 16:54:27 +0800 https://wrong.wang/blog/20181126-%E7%A9%B7%E6%8A%98%E8%85%BE%E7%9A%84%E6%97%A0%E5%A4%B4%E8%8B%8D%E8%9D%87/ <p>虽然明天晚上就要考试,但今天我还<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>是在刷v2ex,刷到一个 <a href="https://www.v2ex.com/t/511546" target="_blank" rel="noreferrer">帖子</a>。内容大概是一位专科辍学生描述了下自己会做什么以求一份工作,可惜他简历的内容却是:会翻墙,会刷机,会装系统之类的东西。这当然受到了v友们的围攻<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>,槽点无非就是以电脑(折腾)爱好者的水平谋求一份程序员的工作,没有搞清楚什么是竞争力。</p> <p>帖主的简历活生生一个<code>穷折腾</code>典范,<strong>配置不是编程</strong>。这里我想同时批判一下某个自翊为了解Python/Go/C++的程序员:</p> 什么是embedding features? 
https://wrong.wang/blog/20181123-%E4%BB%80%E4%B9%88%E6%98%AFembedding-features/ Fri, 23 Nov 2018 14:22:07 +0800 https://wrong.wang/blog/20181123-%E4%BB%80%E4%B9%88%E6%98%AFembedding-features/ <p>你也许会感到惊奇,我开始读论文时,花了很长时间才搞明白<code>embedding features</code>这个概念。尽管到最后我也没搞懂如何<strong>信达雅</strong>地翻译“embedding”这个词,但还是分享下现在<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>我对<code>embedding</code>的理解吧。</p> 怎么做到行文通顺? https://wrong.wang/blog/20181020-%E6%80%8E%E4%B9%88%E5%81%9A%E5%88%B0%E8%A1%8C%E6%96%87%E9%80%9A%E9%A1%BA/ Sat, 20 Oct 2018 11:34:29 +0800 https://wrong.wang/blog/20181020-%E6%80%8E%E4%B9%88%E5%81%9A%E5%88%B0%E8%A1%8C%E6%96%87%E9%80%9A%E9%A1%BA/ <p>首先说明,这不是什么教程博客,而是一篇描述问题的博客。</p> <p>最近发现我写博客的时候,连行文通顺都不能很好地做到。经常会有病句,缺少主语,读起来不自然。本来这篇博客标题准备用“怎么才能做到文笔好”,后来读了两篇自己的文章,觉得还是提出“怎么做到行文通顺?”这个问题更符合我现在的水平。</p> GO Web后端项目如何组织? https://wrong.wang/blog/20181003-go-web%E5%90%8E%E7%AB%AF%E9%A1%B9%E7%9B%AE%E5%A6%82%E4%BD%95%E7%BB%84%E7%BB%87/ Wed, 03 Oct 2018 15:36:29 +0800 https://wrong.wang/blog/20181003-go-web%E5%90%8E%E7%AB%AF%E9%A1%B9%E7%9B%AE%E5%A6%82%E4%BD%95%E7%BB%84%E7%BB%87/ <blockquote> <p>本文翻译自 <a href="http://matryer.com/" target="_blank" rel="noreferrer">Mat Ryer</a>的博文: <a href="https://medium.com/statuscode/how-i-write-go-http-services-after-seven-years-37c208122831" target="_blank" rel="noreferrer">How I write Go HTTP services after seven years</a>. 
有足够英语阅读能力的读者请直接阅读原文。看完后可以再看下本文最后的补充部分。</p> </blockquote> <p>我一直在改进我写HTTP服务的方法,在写了7年Go程序后,我是怎么设计Go Web后端程序的呢?</p> Golang http库路由机制 https://wrong.wang/blog/20181001-golang-http%E5%BA%93%E8%B7%AF%E7%94%B1%E6%9C%BA%E5%88%B6/ Mon, 01 Oct 2018 14:40:53 +0800 https://wrong.wang/blog/20181001-golang-http%E5%BA%93%E8%B7%AF%E7%94%B1%E6%9C%BA%E5%88%B6/ <h2 class="group" id="自带路由的使用"> 自带路由的使用<span class="invisible group-hover:visible ml-2"><a href="#%e8%87%aa%e5%b8%a6%e8%b7%af%e7%94%b1%e7%9a%84%e4%bd%bf%e7%94%a8" class="text-neutral-300">#</a></span> </h2> <p>首先我们来研究下<code>net/http</code>库自带的路由。只要用<code>HandleFunc</code>将请求URL模式和回调函数注册成一条路由,然后调用<code>http.ListenAndServe</code>,当请求路径匹配路由表的某一项时,就调用这一项对应的回调函数(这里的“调用”并不指直接调用,具体如何,接着往下看)。 举个例子:</p> 读研前的展望 https://wrong.wang/blog/20180926-%E8%AF%BB%E7%A0%94%E5%89%8D%E7%9A%84%E5%B1%95%E6%9C%9B/ Wed, 26 Sep 2018 08:47:43 +0800 https://wrong.wang/blog/20180926-%E8%AF%BB%E7%A0%94%E5%89%8D%E7%9A%84%E5%B1%95%E6%9C%9B/ <p>毫无疑问,我是非常幸运的。在我复习很久开始倦怠的时候,保研名单里突然有了我的名字。十几号的时候我提交了保研材料做最后一搏,等了一周,保研结果才出来。在这一周里,我依然在自习室从早坐到晚,没有放弃考研复习。显而易见,我这一周里没学多少东西。有个词能概况我那几天的心态:患得患失。一会儿安慰自己肯定在保研名额里,一会告诫自己保研多半没希望而现在进度已经很慢了,再不加快复习肯定考不上的。</p> c语言宏中的字符串化和合并操作符 https://wrong.wang/blog/20180825-c%E8%AF%AD%E8%A8%80%E5%AE%8F%E4%B8%AD%E7%9A%84%E5%AD%97%E7%AC%A6%E4%B8%B2%E5%8C%96%E5%92%8C%E5%90%88%E5%B9%B6%E6%93%8D%E4%BD%9C%E7%AC%A6/ Sat, 25 Aug 2018 14:20:45 +0000 https://wrong.wang/blog/20180825-c%E8%AF%AD%E8%A8%80%E5%AE%8F%E4%B8%AD%E7%9A%84%E5%AD%97%E7%AC%A6%E4%B8%B2%E5%8C%96%E5%92%8C%E5%90%88%E5%B9%B6%E6%93%8D%E4%BD%9C%E7%AC%A6/ <p>C语言中的宏是一个很简单粗暴的设计,主要功能就是<code>replace</code>。为了更方便地替换,引入了宏函数这一概念。宏函数用参数替换预先定义的标识符在宏定义中的每一次出现。配合#和##,可以用宏简单高效地完成一些复杂的操作。</p> FIFO存储器件空满标志产生探究 https://wrong.wang/blog/20180528-fifo%E5%AD%98%E5%82%A8%E5%99%A8%E4%BB%B6%E7%A9%BA%E6%BB%A1%E6%A0%87%E5%BF%97%E4%BA%A7%E7%94%9F%E6%8E%A2%E7%A9%B6/ Mon, 28 May 2018 10:00:00 +0000 
https://wrong.wang/blog/20180528-fifo%E5%AD%98%E5%82%A8%E5%99%A8%E4%BB%B6%E7%A9%BA%E6%BB%A1%E6%A0%87%E5%BF%97%E4%BA%A7%E7%94%9F%E6%8E%A2%E7%A9%B6/ <h2 class="group" id="设计难点"> 设计难点<span class="invisible group-hover:visible ml-2"><a href="#%e8%ae%be%e8%ae%a1%e9%9a%be%e7%82%b9" class="text-neutral-300">#</a></span> </h2> <p>在探究如何产生FIFO的空满标志前,先来解决一个问题:FIFO存储器件的空满标志产生有什么难点?</p> <ol> <li> <p>亚稳态问题</p> <p>在数字集成电路中,触发器要满足setup/hold的时间要求。当一个信号被寄存器锁存时,如果信号和时钟之间不满足这个要求,Q端的值是不确定的,并且在未知的时刻会固定到高电平或低电平,这个过程称为亚稳态。对于我们主要关注的异步FIFO器件,读写操作分别在两个时钟域中进行,自然,亚稳态问题对FIFO的空满标志产生有很大影响。</p>
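<p>针对上面的亚稳态问题,异步FIFO的常见做法是用格雷码在两个时钟域之间传递读写指针:相邻格雷码只有一位不同,即使恰好在跳变瞬间采样,同步器锁存到的也只会是新值或旧值之一,而不会是一个无关的值。下面用一小段 Python 示意二进制与格雷码的互相转换及这一单比特性质(仅为补充示意,函数名为自拟,并非原文代码):</p>

```python
def bin_to_gray(b: int) -> int:
    # 格雷码:每一位等于二进制相邻两位的异或
    return b ^ (b >> 1)

def gray_to_bin(g: int) -> int:
    # 逆变换:从高位到低位逐位累积异或
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# 相邻指针值的格雷码恰好只差一位,这正是跨时钟域采样安全的原因
for i in range(15):
    diff = bin_to_gray(i) ^ bin_to_gray(i + 1)
    assert bin(diff).count("1") == 1   # 恰好一位不同
    assert gray_to_bin(bin_to_gray(i)) == i  # 变换可逆
```

<p>实际硬件中,这两个变换由组合逻辑实现,指针再经过两级触发器同步到对端时钟域后参与空满判断。</p>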