<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://dubrefjord.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://dubrefjord.com/" rel="alternate" type="text/html" /><updated>2026-04-29T20:15:51+00:00</updated><id>https://dubrefjord.com/feed.xml</id><title type="html">Dubrefjord Consulting</title><subtitle>AI-empowered application security and offensive security. Consulting and notes from Dennis Dubrefjord.</subtitle><author><name>Dennis Dubrefjord</name></author><entry><title type="html">What AI Found and What It Missed</title><link href="https://dubrefjord.com/general/2026/04/29/hello-world.html" rel="alternate" type="text/html" title="What AI Found and What It Missed" /><published>2026-04-29T00:00:00+00:00</published><updated>2026-04-29T00:00:00+00:00</updated><id>https://dubrefjord.com/general/2026/04/29/hello-world</id><content type="html" xml:base="https://dubrefjord.com/general/2026/04/29/hello-world.html"><![CDATA[<h1 id="intro">Intro</h1>
<p>Yesterday, the second of the Open Technology Fund pentests I have done was published. That means I can finally share some results from my exploration of the intersection of AI and security, namely using AI for pentesting!</p>

<h1 id="background">Background</h1>

<p>First, a brief background. During 2024, I conducted two pentests (manually, no AI) of open source applications via the Open Technology Fund. The first one was Uwazi (<a href="https://www.opentech.fund/security-safety-audits/uwazi-security-audit/">report available</a>), a security-critical system for managing eyewitness videos, testimonies, and other human rights documentation for human rights defenders, journalists, activists, and researchers. Uwazi had been pentested three times previously.</p>

<p>One of the discovered vulnerabilities was a zero-click account takeover via a password reset flaw. The fundamental issue was that the reset token was generated by hashing the email together with the current Unix timestamp. That meant an attacker only had to try resetting the password with every possible timestamp between sending the password reset request and receiving the server's response. In practice this means a few hundred attempts, which translates to a few seconds of work. Once the right token value was tried, the attacker could set the account's password to whatever they wanted.</p>
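<p>The flawed scheme can be sketched roughly like this. The function names and the millisecond timestamp resolution are my assumptions for illustration, not Uwazi's actual identifiers; the point is only that every input to the token is public or guessable:</p>

```typescript
import { createHash } from "crypto";

// Hypothetical sketch of the flawed pattern -- illustrative names, not
// Uwazi's actual code. The token is derived only from the victim's email
// (public) and the server's clock (guessable to within the request window).
function makeResetToken(email: string, timestampMs: number): string {
  return createHash("sha256").update(email + timestampMs).digest("hex");
}

// Attacker side: try every timestamp between sending the reset request and
// receiving the server's response. That window is typically a few hundred
// milliseconds, i.e. a few hundred candidate tokens.
function bruteForceToken(
  email: string,
  sentAtMs: number,
  receivedAtMs: number,
  isValid: (token: string) => boolean
): string | null {
  for (let t = sentAtMs; t <= receivedAtMs; t++) {
    const candidate = makeResetToken(email, t);
    if (isValid(candidate)) {
      return candidate;
    }
  }
  return null;
}
```

<p>In the real attack each candidate costs one HTTP request rather than a local check, but the search space stays tiny. The fix is to derive the token from a cryptographically secure random source (in Node, <code>crypto.randomBytes</code>) so there is nothing to enumerate.</p>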

<p><img src="/assets/images/posts/2026-04-30-what-ai-found/uwazi_crit.png" alt="The attack in action!" />
<em>The account takeover via password reset</em></p>

<p><img src="/assets/images/posts/2026-04-30-what-ai-found/uwazi_crit.png" alt="The vulnerable code. Note line 291 where the token is generated" />
<em>Always use a secure random function to generate password reset tokens!</em></p>

<p>The second system we pentested was CDR-Link (<a href="https://www.opentech.fund/security-safety-audits/cdr-link/">report available</a>), a secure, open source help desk application for organizations that run digital security help desks for communities facing authoritarian censorship and surveillance. The help desks run via Signal, Telegram, and WhatsApp channels and provide organizations with a dashboard from which they can streamline responses to support requests. Later in the post, I will dig into the most critical vulnerability we found: a zero-click account takeover via SSRF.</p>
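<p>As background on the vulnerability class: server-side request forgery (SSRF) arises when a server fetches a URL an attacker can influence, so the request originates from inside the trusted network and can reach internal-only services (a cloud metadata endpoint, for example). The sketch below is a hypothetical illustration of the pattern and a common guard against it, not CDR-Link's actual code:</p>

```typescript
import { lookup } from "dns/promises";
import { isIP } from "net";

// Hypothetical SSRF illustration -- not CDR-Link's actual code. The bug
// pattern: the server fetches a URL the attacker influences (e.g. media
// referenced in an incoming message), letting requests reach internal hosts.

// Loopback, link-local, and RFC 1918 ranges (IPv4 only, for brevity).
function isPrivateAddress(ip: string): boolean {
  return (
    /^10\./.test(ip) ||
    /^192\.168\./.test(ip) ||
    /^172\.(1[6-9]|2\d|3[01])\./.test(ip) ||
    /^127\./.test(ip) ||
    /^169\.254\./.test(ip)
  );
}

// Guarded fetch: resolve the hostname first and refuse internal targets.
// `doFetch` stands in for whatever HTTP client the application uses.
async function fetchUntrustedUrl(
  url: string,
  doFetch: (u: string) => Promise<string>
): Promise<string> {
  const host = new URL(url).hostname;
  const ip = isIP(host) ? host : (await lookup(host)).address;
  if (isPrivateAddress(ip)) {
    throw new Error(`refusing to fetch internal address ${ip}`);
  }
  return doFetch(url);
}
```

<p>Note that a denylist like this is only a first line of defense: DNS rebinding and HTTP redirects can still slip past it, so real deployments also pin resolved addresses and restrict redirects.</p>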

<h1 id="disclosure-the-vulnerability-that-i-missed">Disclosure: The vulnerability that I missed</h1>

<p>Part of my exploration into using AI for security has focused on vulnerability research, or pentesting. Specifically, I have been interested in where the limit is for frontier AI models in discovering vulnerabilities in code. To really dig in, I decided to scan open source applications that I have pentested (specifically, the commits I pentested), since I know the code and which vulnerabilities it holds. One of those systems was Uwazi.</p>

<p>I scanned the system using a pentesting harness I built, and it found most of the vulnerabilities from the pentest. I must say, I was surprised and really impressed! Looking specifically at the findings related to the password reset functionality, the harness had found the same critical issue with the predictable reset token. But on top of that, it had discovered one more critical issue.</p>

<p>Looking back at the code:</p>

<p><img src="/assets/images/posts/2026-04-30-what-ai-found/uwazi_crit.png" alt="The vulnerable code. Where does the domain come from?" />
<em>The vulnerable code again. Where does the domain come from?</em></p>
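<p>The question in the screenshot, where the domain comes from, hints at a classic pitfall: building the reset link from a request-controlled value such as the Host header, so an attacker who triggers a reset for a victim can have the victim's token delivered to a domain they control. The sketch below shows that general pattern; it is a hypothetical illustration, not necessarily the exact issue in Uwazi's code:</p>

```typescript
// Hypothetical illustration of password-reset-link poisoning -- not
// necessarily Uwazi's exact issue, just the classic pattern where the
// link's domain is taken from the incoming request.

interface ResetRequest {
  hostHeader: string; // attacker-controllable in many deployments
}

// Vulnerable: the Host header decides where the victim's token points.
function buildResetLink(req: ResetRequest, token: string): string {
  return `https://${req.hostHeader}/reset-password?token=${token}`;
}

// Safer: pin the domain in server-side configuration.
const CANONICAL_HOST = "app.example.org"; // from config, not the request

function buildResetLinkSafely(token: string): string {
  return `https://${CANONICAL_HOST}/reset-password?token=${token}`;
}
```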

<h1 id="the-vulnerability-that-ai-missed">The vulnerability that AI missed</h1>
<h1 id="conclusion">Conclusion</h1>
<h1 id="thanks">Thanks</h1>
<ul>
  <li>Uwazi</li>
  <li>Assured</li>
</ul>]]></content><author><name>Dennis Dubrefjord</name></author><category term="general" /><category term="intro" /><summary type="html"><![CDATA[Intro Yesterday, the second Open Technology Fund pentest I have done was published. That means I can finally share some results of my exploration in the intersection of AI and security, namely using AI for pentesting!]]></summary></entry></feed>