Coordinated Vulnerability Disclosure

Why Responsible Vulnerability Disclosure is crucial for digital resilience

Cybersecurity is no longer just about technology. It's about people, processes and the willingness to work together on a safer digital ecosystem. Yet in practice, we see that many organizations are still surprised when an external security researcher reports a vulnerability.

Vulnerabilities are always found. The question is not if, but by whom: someone with good intentions, or a malicious actor who exploits them. The Coordinated Vulnerability Disclosure (CVD) process shows how organizations can tip the balance between the two.

What is Coordinated Vulnerability Disclosure and why does it matter?

Coordinated Vulnerability Disclosure is a formal process in which organizations lay down how security researchers can report vulnerabilities responsibly. The goal is clear: to resolve vulnerabilities safely before they are exploited.

A possible outcome of a CVD process is a Wall of Fame or Hall of Fame: public recognition for security researchers who have acted responsibly. Other outcomes can include a financial reward, a thank-you or merchandise. The Wall of Fame is therefore not an end in itself, but a consequence of a well-designed disclosure process.

Organizations that have a CVD process:

  • Recognize that they cannot see everything on their own.

  • Actively invite responsible disclosure.

  • Show that safety is a joint responsibility.

In the security community, such an approach is anything but an empty gesture: it means that vulnerabilities actually get found, validated and resolved. For the outside world, it is a clear signal: this organization takes digital security seriously.

Responsible Disclosure & CVD: this is how it works in practice

Responsible reporting of vulnerabilities is done through a Coordinated Vulnerability Disclosure (CVD) process. This is a formal arrangement in which an organization lays down:

  • Where vulnerabilities can be reported.

  • Under what conditions research is permitted.

  • And that reporters do not risk legal consequences as long as they comply with those rules.

A common first step is to set up a security.txt file. This is an international standard that tells security researchers how to contact you safely.
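
As an illustration, a minimal security.txt could look like the sketch below. Per the underlying standard (RFC 9116), the file is served over HTTPS at /.well-known/security.txt; the addresses and URLs here are placeholders, not an actual contact point:

Contact: mailto:security@example.com
Expires: 2026-12-31T23:00:00.000Z
Preferred-Languages: en, nl
Policy: https://example.com/responsible-disclosure
Acknowledgments: https://example.com/hall-of-fame
Canonical: https://example.com/.well-known/security.txt

The Contact and Expires fields are the two the standard requires; the Acknowledgments field is where a Wall of Fame page would be linked.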

Guus Verbeek, security expert at Wortell:
"Whether an organization has this in place says something about its maturity."
Without a CVD process, security research can legally qualify as computer trespass (unauthorized access). With a CVD process, you explicitly define the boundaries within which vulnerabilities may be investigated and reported, so that you can fix them before a malicious actor exploits them.

From discovery to solution: how this works in practice

In the daily practice of security monitoring and research, new vulnerabilities are constantly being discovered. Sometimes these are known issues, sometimes new vulnerabilities that are not yet automatically recognized by tooling.

When such a vulnerability is found:

  • It is validated: is it actually exploitable?

  • The impact is determined: what can an attacker do with it?

  • Customers are informed and advised on mitigation.

  • And, if relevant, other organizations using the same software component are also alerted via their own CVD processes.

In recent cases, this led to Guus being included in the Wall of Fame of several large Dutch organizations. Not because of the 'hacking' itself, but because of the responsible reporting and the help in resolving the vulnerabilities.

Why CVD is also culturally important

Responsible disclosure is not a technical box to tick, but a mindset. Organizations that deal with this in a mature way:

  • Do not react in panic, but professionally.

  • See reports as additional intelligence.

  • Learn structurally from external signals.

  • And build trust within the security community.

Unfortunately, we also see the opposite: organizations that respond to reports with legal threats, deter researchers or simply do not have a clear CVD process. In those cases, signals often do not even reach the organization, because security researchers have no clear channel to report them. In practice, this does not lead to more security, but to less visibility.

As Guus aptly puts it:

"Malicious parties operate without restrictions. Security researchers work within agreed frameworks. That is precisely why responsible disclosure is essential."

What can you do as an organization?

Do you want to demonstrably work on digital resilience as an organization? Then these are the most important steps:

  1. Implement a security.txt
    Make sure researchers know where they can safely report vulnerabilities (a quick check is sketched after this list).

  2. Set up a CVD process
    Determine what is allowed, what is not allowed and how reports are handled.

  3. Map your external attack surface
    Many vulnerabilities arise in forgotten test, demo or legacy environments. 

  4. Test structurally, not incidentally
    Pen tests reveal not only technical, but also logical vulnerabilities.

  5. Be open to collaboration
    Security is not a solo action, but an ecosystem.
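
For step 1, a small script can confirm that your security.txt is actually reachable. The sketch below, with example.com as a placeholder hostname, fetches /.well-known/security.txt and checks for the two fields RFC 9116 requires (Contact and Expires):

import urllib.request

# Placeholder: replace with your own domain.
URL = "https://example.com/.well-known/security.txt"

def check_security_txt(url: str) -> None:
    # RFC 9116 says the file must be served over HTTPS at this well-known path.
    with urllib.request.urlopen(url, timeout=10) as response:
        body = response.read().decode("utf-8")

    # Collect the field names that are present ("Field: value" lines).
    fields = {line.split(":", 1)[0].strip() for line in body.splitlines() if ":" in line}

    # Contact and Expires are the two fields the standard requires.
    for required in ("Contact", "Expires"):
        print(required, "present" if required in fields else "MISSING")

if __name__ == "__main__":
    check_security_txt(URL)

This only verifies reachability and the required fields; it says nothing about whether the reporting channel behind the Contact address is actually monitored.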

Shared responsibility

At Wortell, we don't see security as a product, but as a shared responsibility. What we learn in customer environments, within legitimate frameworks of course, helps us to recognize patterns and protect others as well. First our customers. Then the wider landscape.

That attitude is deeply embedded in our culture. Not because you have to, but because it works. And because it demonstrably makes the world a little safer.

Our author

Guus Verbeek