<feed xmlns="http://www.w3.org/2005/Atom"> <id>https://blog.devploit.dev/</id><title>devploit / blog</title><subtitle>Real-world infosec, no hype.</subtitle> <updated>2026-04-04T01:20:04+00:00</updated> <author> <name>Daniel Púa</name> <uri>https://blog.devploit.dev/</uri> </author><link rel="self" type="application/atom+xml" href="https://blog.devploit.dev/feed.xml"/><link rel="alternate" type="text/html" hreflang="en" href="https://blog.devploit.dev/"/> <generator uri="https://jekyllrb.com/" version="4.2.2">Jekyll</generator> <rights> © 2026 Daniel Púa </rights> <icon>/assets/img/favicons/favicon.ico</icon> <logo>/assets/img/favicons/favicon-96x96.png</logo> <entry><title>DEFCON Quals 2025 - Memory Bank CTF Challenge Writeup</title><link href="https://blog.devploit.dev/posts/defcon-quals-2025-memorybank/" rel="alternate" type="text/html" title="DEFCON Quals 2025 - Memory Bank CTF Challenge Writeup" /><published>2025-04-14T00:00:00+00:00</published> <updated>2025-04-14T06:24:11+00:00</updated> <id>https://blog.devploit.dev/posts/defcon-quals-2025-memorybank/</id> <content src="https://blog.devploit.dev/posts/defcon-quals-2025-memorybank/" /> <author> <name>Daniel Púa</name> </author> <category term="Exploitation" /> <summary> Introduction As a web hacking enthusiast, I typically focus on web-based CTF challenges. However, since DEFCON Quals 2025 offered none, I decided to tackle the Memory Bank challenge, which, though not strictly web-based, was similar in nature. In this white-box CTF challenge, we had full access to the application code, allowing us to understand and exploit the ... 
</summary> </entry> <entry><title>Cracking Gandalf: Conquering the Lakera AI Security Challenge</title><link href="https://blog.devploit.dev/posts/cracking-gandalf-conquering-the-lakera-ai-security-challenge/" rel="alternate" type="text/html" title="Cracking Gandalf: Conquering the Lakera AI Security Challenge" /><published>2024-07-03T00:00:00+00:00</published> <updated>2024-07-04T07:13:29+00:00</updated> <id>https://blog.devploit.dev/posts/cracking-gandalf-conquering-the-lakera-ai-security-challenge/</id> <content src="https://blog.devploit.dev/posts/cracking-gandalf-conquering-the-lakera-ai-security-challenge/" /> <author> <name>Daniel Púa</name> </author> <category term="AI Security" /> <summary> Gandalf by Lakera is an engaging and educational online challenge designed to test and improve your skills in manipulating large language models (LLMs). Named after the wise wizard from “The Lord of the Rings”, this game presents progressively harder levels where you must craft clever prompts to make the AI reveal a secret password. The challenge is not just a fun exercise; it also serves to ... </summary> </entry> <entry><title>Hacking the Mind of AI: Pentesting Large Language Models</title><link href="https://blog.devploit.dev/posts/hacking-the-mind-of-ai-pentesting-large-language-models/" rel="alternate" type="text/html" title="Hacking the Mind of AI: Pentesting Large Language Models" /><published>2024-06-25T00:00:00+00:00</published> <updated>2024-06-25T20:27:01+00:00</updated> <id>https://blog.devploit.dev/posts/hacking-the-mind-of-ai-pentesting-large-language-models/</id> <content src="https://blog.devploit.dev/posts/hacking-the-mind-of-ai-pentesting-large-language-models/" /> <author> <name>Daniel Púa</name> </author> <category term="AI Security" /> <summary> Pentesting Large Language Models (LLMs) is crucial to ensuring they operate securely and do not expose exploitable vulnerabilities. 
Based on OWASP’s Top 10 vulnerabilities for LLM applications, this post details each vulnerability, examples of exploitation, and mitigation measures. What is an LLM? Large Language Models (LLMs) are AI algorithms designed to understand and... </summary> </entry> </feed>
