AWS Teams with OSTIF on Open Source Security Audits

We are excited to announce that AWS is sponsoring open source software security audits by the Open Source Technology Improvement Fund (OSTIF), a non-profit dedicated to securing open source. This funding is part of a broader initiative at Amazon Web Services (AWS) to support open source software supply chain security.

Last year, AWS committed to investing $10 million over three years alongside the Open Source Security Foundation (OpenSSF) to fund supply chain security. AWS will directly contribute $500,000 to OSTIF as a portion of our ongoing initiative with OpenSSF. OSTIF has played a critical role in open source supply chain security by providing security audits and reviews to projects through their work as a pre-existing partner of the OpenSSF. Their broad experience with auditing open source projects has already provided significant benefits. This month the group completed a significant security audit of Git that uncovered 35 issues, including two critical-severity and one high-severity finding. In July, the group helped find and fix a critical vulnerability in sigstore, a new open source technology for signing and verifying software.

Many of the tools and services provided by AWS are built on open source software. Through our OSTIF sponsorship, we can proactively mitigate software supply chain risk further up the supply chain by improving the health and security of the foundational open source libraries that AWS and our customers rely on. Our investment helps support upstream security and provides customers and the broader open source community with more secure open source software.

Supporting open source supply chain security is akin to supporting electrical grid maintenance. We all need the grid to continue working, and to be in good repair, because nothing gets powered without it. The same is true of open source software. Virtually everything of importance in the modern IT world is built atop open source. We need open source software to be well maintained and secure.

We look forward to working with OSTIF and continuing to make investments in open source supply chain security.


Why Bad Bugs in DNS (And Other Open Source Code) Just Won’t Go Away

Earlier this year, security researchers at Nozomi Networks discovered a DNS vulnerability in two C standard libraries used widely in embedded systems. The bug leaves the libraries vulnerable to cache poisoning – a DNS flaw Dan Kaminsky discovered in 2008.

Paul Vixie, a contributor to the Domain Name System (DNS) and distinguished engineer and VP at AWS Security, underscored this DNS issue during his keynote address at Open Source Summit Europe in Dublin.

Vixie stated that the real fix is Domain Name System Security Extensions (DNSSEC), released in 2010; however, DNSSEC is still not deployed widely enough to have prevented this problem when it turned up again in embedded systems some 13 years later.

These two libraries are open source software — anyone can inspect them, Vixie concluded. “So, this should be embarrassing that in 2022 there’s still widely used open source software that has this vulnerability in it.”

As an original author and eventual patcher of some of these DNS bugs, Vixie took us on a journey through the history of the early days of the internet to examine how we got to where we are today. He then provided best practices that consumers and producers can do now to help reduce vulnerabilities and mitigate security risks in the future.

How it all started: Vixie takes us back to 1986

Did you know that all devices that use DNS are using a fork of a fork of a fork of 4.3 BSD code from 1986?

It all started when the publishers of the 4.3 Berkeley Software Distribution (BSD) of UNIX added support for the (then) new DNS protocol in a novel way. Getting a new release out on magnetic tapes and shipping it out in containers was a lot of work. So, they published it as a patch by posting it to Usenet (newsgroups), on an FTP server, and via a mailing list. If users were interested, they could download the patch. And at that time, in 1986, the internet was still small. “This was pre-commercialization, pre-privatization; the whole world was not using the internet,” Vixie said.

When Vixie began working on DNS shortly after the patch was issued, it was considered abandonware – no one was maintaining it. The people at Berkeley who had been working on it had all graduated. However, anyone who had a network device needed DNS. They needed to do DNS lookups, but the names of the APIs that Berkeley published were not standardized. Different embedded systems vendors had their own domain naming conventions, and they copied the 4.3 BSD code and changed it to suit their local engineering considerations.

“Then Linux came along… and right after that, we commercialized the internet,” said Vixie. In other words, things got big! “All of our friends and relations started to get email addresses, and it was wonderful and creepy all at the same time.”

Where things get a little complicated

Once the internet “got big,” every distro had built its own C library and copied some version of the old Berkeley code. “So, they might know that it came from Berkeley and get the latest one, or just copied what some other distro used and made a local version of it that was divorced from the upstream,” said Vixie. “Then we got embedded systems, and now IoT is everywhere, and all the DNS code in all of the billions of devices are running some fork of a fork of a fork of code that Berkeley published in 1986, so DNS is almost never independently reimplemented.”

It’s truly amazing that we are literally standing on the shoulders of giants. And this is just one example.

Vixie takes responsibility

Vixie surprised me and garnered my respect even more by taking responsibility for his past actions. He showed that he was accountable and transparent by stating, “All of those bugs and vulnerabilities that I showed you earlier…all of the bugs that are mentioned in that RFC are bugs that I wrote or at least bugs that I shipped when I shouldn’t have. And there are bugs that I fixed. I fixed those bugs in the 1990s. And so, for an embedded system today to still have that problem or any of those problems means that whatever I did to fix it, wasn’t enough. I didn’t have a way of telling people.”

Ok, what can we do now?

Once Vixie took the audience through the pages of internet history, he asked, “So, what can we learn?” I giggled a bit when he said, “Well, it sure would have been nice to already have an internet when we were building one, because then there would have been something like GitHub instead of an FTP server and a mailing list and a Usenet newsgroup. But in any era, you use what you have and try to anticipate what you are going to have.”

Best Practices for Producers

Vixie’s advice for what we could do now started with recommendations for steps that producers could take to mitigate and reduce code vulnerabilities.

Producers should take the following proactive approach.

Presume that all software has bugs. “It is the safe position to take,” stated Vixie.

Slide from Vixie’s Keynote at OSS EU: Findings and Recommendations for Producers

Have a way of shipping changes that are machine-readable. “You can’t depend on a human to monitor a mailing list. It has to be some automation to get the scale necessary to operate.”
Include version numbers. Version numbers are must-haves, not nice-to-haves. “People who are depending on you need to know something more than what you thought worked on Tuesday,” Vixie said. “They need an indicator, and that indicator often takes the form of the date – in a year, month, day format. And it doesn’t matter what it is; it just has to uniquely identify the bug (and feature) level of any given piece of software. So, we have put these version numbers in, even if they serve no purpose for us as developers locally.”
Say where you got code. “It should be in your README files. It should be in your source code comments because you want it to be that if somebody is chasing a bug and they reach that bit of local source code, they’ll understand, ah… this is a local fork, there is an upstream, let’s see if they have fixed this,” Vixie said.
Automate your own monitoring of these upstream projects. “If there’s a change, then you need to look at it, decide what it means to you. Is it a bug that you also have, or is that part of the code base that you didn’t import? Is that a part that you have completely rewritten? Do you have the same bug but in a different function name or some other local variation of it? This is not optional,” Vixie added.
Give your downstreams some way to know when you have made a change.
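The producer-side practices above — machine-readable change announcements, unique version identifiers, and recorded upstream provenance — can be sketched as a small release manifest. This is a minimal illustration, not any specific ecosystem's format; the project name, field names, and values are all hypothetical:

```python
import json

# Hypothetical machine-readable release manifest: it gives downstreams a
# unique version identifier, says where the code came from, and can be
# polled by automation instead of a human reading a mailing list.
manifest = {
    "name": "example-resolver",
    # Date-based version: uniquely identifies the bug/feature level.
    "version": "2022.09.16",
    # Say where you got the code: upstream origin and the fork point.
    "upstream": {
        "origin": "https://github.com/example/upstream-resolver",
        "forked_at": "4f2a9c1",
    },
    "changes": ["Fix cache-poisoning issue in response validation"],
}

print(json.dumps(manifest, indent=2))
```

Downstream automation can then fetch this file on a schedule, compare the `version` field to what it last saw, and open a review ticket on any change — which is exactly the monitoring loop Vixie says is "not optional."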

Best Practices for Consumers

Next, Vixie continued by identifying best practices that consumers can follow to help mitigate or reduce vulnerabilities.

Consumers can take the following proactive approach:

Slide from Vixie’s Keynote at OSS EU: Findings and Recommendations for Consumers

Understand the risks of your external dependencies. To dig a little deeper into the pockets of the problem, Vixie stated, “As a consumer, when you import something, remember that you are also importing everything it depends on. So, when you check your dependencies, you have to do it recursively. You have to go all the way up. Uncontracted dependencies are a dangerous thing. If you are taking free software from somebody, you are hoping that team doesn’t disband, doesn’t go on vacation, doesn’t maybe have a big blow-up and make a fork, and there are two forks, but the one you are using is dead. We have no other choice; we need the software that everybody else is writing, but we have to recognize that such dependencies are an operating risk,” not merely a free benefit.

Side Note: How free software gets expensive

Vixie said, “Orphaned dependencies become things you have to maintain locally, and that is a much higher cost than monitoring the developments that are coming out of other teams. But it is a cost that you will have as these dependencies eventually become outdated. Somebody moves from version 2 to version 3, and you really liked version 2, but it’s dead code. Well, you have to maintain version 2 yourself. That’s expensive. It’s either expensive because you hire enough people and build enough automation, or it’s expensive because you don’t. That’s the choice.”

“So mostly, we should automatically import the next version of whatever, but it can’t be fully automated; sometimes the license will change from one you could live with to one you can’t. You may have an uncontracted dependency with somebody who at some point decides that they want to get paid. So that is another risk.”

Depend on known version numbers or version number ranges; say what version number you need. “So, as you become aware that only the version from this one or higher has the fix that you now know that you’ve got to have, you can make sure that you don’t accidentally get an older one. Or it might be a specific version. Only that version is suitable for you, in which case someday that TAR file is going to disappear. And you’re going to worry about whether you have a local copy and what you’re going to do,” Vixie said.
Avoid creating local forks. “It’s usually better not to have a local fork of something, so you don’t have to maintain it yourself.”
Monitor and review releases and decide the level of urgency to dedicate to updates. “Every time somebody releases something, open a ticket and make it some engineer’s job to go look at it, see if it’s safe, see if it’s necessary, see if it’s absolutely vital, set-my-hair-on-fire, work-over-the-weekend urgent, or something we’ll just get to when we get to it,” Vixie said.
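The consumer-side advice on pinning known version numbers or ranges can be sketched with a few lines of version comparison. This is an illustrative stand-in for what real dependency managers do; the version strings and the acceptable range are assumptions for the example:

```python
# Minimal sketch of consumer-side version pinning: parse dotted version
# strings into tuples and check a dependency against a known-good range.

def parse_version(v):
    """Turn '1.4.2' into (1, 4, 2) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def in_range(version, minimum, maximum=None):
    """True if `version` >= minimum and (optionally) < maximum."""
    v = parse_version(version)
    if v < parse_version(minimum):
        return False
    if maximum is not None and v >= parse_version(maximum):
        return False
    return True

# Suppose only 1.4.1 or later has the fix we now know we need,
# and we have not yet reviewed the 2.x line.
print(in_range("1.4.2", "1.4.1", "2.0.0"))  # True: has the fix
print(in_range("1.3.9", "1.4.1", "2.0.0"))  # False: predates the fix
```

Expressing the requirement as a range rather than a single pinned version also hedges against the scenario Vixie describes, where one exact tarball eventually disappears.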

In conclusion, Vixie stated, “If you can’t afford to do these things, then free software is too expensive for you.” And with that, Vixie dropped the mic and walked off the stage. Ok, that didn’t happen, but that would have been the perfect ending to such an impactful statement.

As an “OS Newbie,” a badge ribbon I wore proudly at Open Source Summit EU, I had my first opportunity to get immersed in open source. Attending this conference in my first few months as an Amazonian helped me better understand our industry’s shared successes and challenges. I gained a lot of perspective from several resources and sessions; however, this important lesson taught by Paul Vixie really stood out for me: free software comes at a cost. How much does it cost? The answer depends on the proactive or reactive choices you make as a consumer or producer of open source code.

Open source software is critical to our future. As we continue to use it to innovate, we must remember to follow the best practices Vixie outlined. And taking advice from a living legend is worth the expense.



Problems with online user authentication when using self-sovereign identity

Using self-sovereign identity (SSI), there is no standardized solution for online user authentication when using verifiable credentials and verifying the identity of the user. All solutions involve further compromises and introduce new problems. To understand the problems, we need to understand how this works. The following diagram shows the verifiable credential (VC) relationship between the Issuer, Holders (behind a software wallet), and the Verifier. Trust between the actors, and whether you are required to authenticate the user behind the wallet, are key requirements for some systems. Verifiable credentials are not issued to a user but to a wallet representing the user.

Image src: Verifiable Credentials Data Model 1.0 specification 

Use case definition: The user must be authenticated and verified by the “verifier” application using the verifiable credential. Is the person presenting the verifiable credential the same user described in its data, or a user or application allowed to use the VC on that person's behalf? So how can this be solved?

Solution 1: User Authentication is on the wallet.

The wallet application implements the authentication of the user and binds the user to all credentials issued to the wallet through the agents, which are then sent to verifier applications. With BBS+ verifiable credentials, it is possible to do this. The wallet is responsible for authenticating the user, but this is not standardized, and no two wallets do it the same way. If the wallet is responsible for user authentication, then applications only need to authorize the verifiable credentials, not authenticate the user behind the wallet who is represented in the credential bound to it. The VC is invalid if a different wallet sends it. So the verifier application only validates that the sender of the VC has possession of the credential, nothing else; it trusts that the wallet authenticates the user correctly and also trusts that the wallet prevents misuse. The verifier cannot validate whether the application or person using the credential is allowed to use it. The verifier must trust that the wallet enforces this correctly.
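The "proof of possession" check that is all the verifier really performs here can be illustrated with a toy challenge-response. This is a deliberately simplified sketch: it uses a symmetric HMAC from the standard library in place of the asymmetric signatures real SSI wallets use, and every name in it is hypothetical. The point it demonstrates is that the check proves key possession only, not who the human behind the wallet is:

```python
import hashlib
import hmac
import secrets

# Toy stand-in for the key bound to the wallet's credential. Real wallets
# hold an asymmetric key pair; HMAC keeps this sketch stdlib-only.
wallet_key = secrets.token_bytes(32)

def wallet_sign(nonce: bytes) -> bytes:
    """Wallet side: prove possession by signing the verifier's nonce."""
    return hmac.new(wallet_key, nonce, hashlib.sha256).digest()

def verifier_check(nonce: bytes, proof: bytes, key: bytes) -> bool:
    """Verifier side: confirm the sender holds the credential's key.
    Note what is NOT checked: the identity of the person using the wallet."""
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

nonce = secrets.token_bytes(16)   # fresh challenge from the verifier
proof = wallet_sign(nonce)        # wallet's response
print(verifier_check(nonce, proof, wallet_key))  # True: possession only
```

Everything beyond this check — that the wallet authenticated the right person, and that the person may use this credential — is trust placed in the wallet, which is exactly the weakness discussed below.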

Problems with this solution:

Wallet software monopoly: If a state body pushes this solution, it has effectively created a monopoly for the producer of the wallet software, because with existing wallets the required authentication is specific to the wallet application, its definition of how authentication must be done, and the hardware device the wallet runs on. No standards exist for how a user is authenticated in the wallet or what level of initial user authentication is required. This could be improved by creating a new standard, which the state body could mandate, defining how wallets must authenticate their users. Then any wallet which fulfills the standard could be used for state-issued verifiable credentials.

Backup and recovery of wallets becomes complicated because the user is bound to the software wallet. If I lose my wallet, or would like to switch wallets, a safe, secure, standardized way would be required of proving that the new wallet has authenticated the same person as the initial wallet, or a person of trust. All issued credentials would probably need to be re-issued. The user of the wallet and the wallet instance are tightly coupled.

Verifier authorization only, not authentication: The verifier does not authenticate the user behind the wallet; it just accepts that this was done correctly. This creates a closed system between the verifier and the wallet, even though it is distributed. The verifier is tightly coupled to this relationship if it blindly trusts verifiable credentials from wallets outside its system scope. If the verifier needs to verify the identity itself, then FIDO2, OIDC, OAuth2, PKI, or existing online verification flows could be used as a second factor.

Single point of failure: If the credential issuer's VCs can no longer be trusted, then all verifiers using the credentials need to revoke everything created from them. This is not a problem if the verifier authenticated its users' identities directly.

Solution 2: Use the OIDC SIOP standard to authenticate when issuing and verifying the verifiable credentials

A second way of solving user authentication in SSI is to use OpenID Connect and SIOP. The credential issuer uses its own OpenID Connect server with pre-registered users whose identities have been correctly verified. The credential issuer is responsible for identifying and authenticating the identity, i.e. the user plus the application. Each credential type which is issued requires a specific OpenID Connect client. When the user tries to add a new verifiable credential using the SIOP agent from his or her wallet, the user is required to authenticate against the identity provider (IDP). This can also be used when verifying credentials. With this approach, any wallet which supports the SIOP agent and the correct verifiable credential type can work. Strong authentication is not required on the wallet because it is part of the flows, and the user does not need to be bound to the wallet. If the verifier does not authenticate the user or application sending the verifiable credentials, however, strong authentication would still be required on the wallet.

Problems with this solution:

Requires an OIDC server: All credential issuers require an OpenID Connect server and a separate client per credential type.

Verifier authorization only, not authentication: Only proof of possession is checked at the verifier. Verifiers need to start a SIOP verification and must trust the OIDC server used for the client; the OIDC server authenticates the user, not the verifier.

Single point of failure: If the credential issuer's VCs can no longer be trusted, then all verifiers using the credentials need to revoke everything created from them. This is not a problem if the verifier authenticated its users directly.

Solution 3: Verifiers authenticate the user correctly before trusting the verifiable credentials sent from an unspecific wallet.

Another way of solving this is that all credential issuers and all verifiers authenticate the user behind a verifiable credential using their own process. This avoids the single point of failure. Each sign-in would require an extra authentication step; for example, a FIDO2 key, PKI authentication, or an OIDC flow could be used. SSI could be used as the first factor in the application authentication. This solution works really well, but many of the advantages of SSI are lost.
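The acceptance logic of this third solution can be sketched in a few lines: the verifier accepts a presentation only if both the credential itself checks out and its own, independent user authentication succeeds. This is an illustrative sketch, not a real SSI library; the dataclass, the stubbed second factor, and the user names are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class Presentation:
    subject_id: str
    credential_valid: bool  # stands in for signature/revocation/issuer checks

def second_factor_ok(user_id: str) -> bool:
    """Stub for the verifier's own authentication step (e.g. FIDO2,
    PKI, or an OIDC flow). Here: a hypothetical in-memory session store."""
    authenticated_users = {"alice"}
    return user_id in authenticated_users

def accept(presentation: Presentation) -> bool:
    # 1. Validate the credential itself (possession, issuer, revocation).
    if not presentation.credential_valid:
        return False
    # 2. Independently authenticate the user; never trust the wallet alone.
    return second_factor_ok(presentation.subject_id)

print(accept(Presentation("alice", True)))  # True: VC valid + second factor
print(accept(Presentation("bob", True)))    # False: VC valid, no second factor
```

Because step 2 never depends on the wallet or the issuer, a compromised issuer does not force the verifier to revoke its own user identities — which is why this design avoids the single point of failure noted for the other two solutions.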

Problems with this solution:

All applications require authentication. This is more effort if implementing a closed system, but all applications need to do this anyway, so it’s not really a disadvantage. If you control both the issuer and the verifier, then the verifier application could just do authorization of the verifiable credential.

SSI adds little value. Because a second authentication method is used everywhere, this would also work without SSI, so why add SSI in the first place?


User authentication is not an easy problem to solve, and SSI at present does not solve it in a standard way. All existing solutions do something vendor-specific or solve it in a closed system. As soon as interoperability is required, the problems begin. It is important to solve this in a way which does not require a vendor-specific solution or create a monopoly for one vendor. At present, SSI solutions still have very little convergence. We have different ledgers which only work with specific agents. We have different agents (SIOP, DIDComm V1, V2) which are only supported by certain wallets. We have different verifiable credential standards which do not work together. We have no authentication standards on the wallets and no standard for backup and recovery. It is still not clear how the trust register for credential issuers will work: as an application verifier, I need an easy way to validate the quality of a credential issuer; otherwise, how can I know whether the credential was issued in a good way without doing my own security check? Guardianship will further complicate the user authentication process.
