Gluu – Open Source Identity and Access Management (https://gluu.org) – Tue, 11 Mar 2025

Janssen Project is a Digital Public Good
https://gluu.org/janssen-project-is-a-digital-public-good/ – Wed, 01 May 2024

The Janssen Project has been included in the Digital Public Goods Alliance (DPG) Registry. The goal of the DPG Registry is to promote digital public goods and contribute to creating a more equitable world.

The post Janssen Project is a Digital Public Good first appeared on Gluu.

Helping Nations Build Trusted Digital Identity

Citizens want to use the Internet to connect to their government for a myriad of reasons. Wouldn’t it be great if you could use the Web to pay your taxes, vote, obtain a driver’s license, or apply for a birth certificate? To a large extent, governments are welcoming this digital transformation. Moving online improves the efficiency and accessibility of citizen-facing services.

But citizens also want to assert their government identities to third parties–not just to government websites. National governments are uniquely positioned to combine foundational and digital identities.  Some technologically advanced countries, like Singapore, offer a national identity provider or “IDP” that both the public and private sectors can leverage. A national IDP is not meant to replace consumer social login (e.g. Google or Facebook). It provides a higher assurance alternative when the website needs a trusted verification of the person in front of the browser–for example when you open a bank account.

Our migration to a more digital society is both exciting and scary. From a security perspective, a national IDP with the personal details of citizens is a honeypot for hackers, who persistently improve their capabilities. Governments need advancements in IDP software that mitigate the latest risks, like phone-based authentication using biometrics, and machine learning to detect risk.

Since we founded Gluu in 2009, we’ve helped national, state, and city governments to roll out digital identity services. There are both functional and operational challenges. An example of a functional challenge is onboarding third-party websites to use the national IDP. An operational challenge is scaling up the infrastructure just in time to handle very high loads, and scaling it down to reduce cost when traffic is low.

The community is the key to sustainable open-source software projects. This is the reason the Janssen Project has invested so heavily in building the Agama programming language, and why Gluu launched Agama Lab, a developer tool. Low-code tools like Agama make it easier to launch a purpose-built IDP with the citizen identity journeys that are required. Low code also makes it easier to hand off code, through visual whiteboard representations of the flows. And it’s fun to build and share Agama projects.

When Gluu contributed our code to the Linux Foundation Janssen Project in 2020, we hadn’t heard the term “Digital Public Good” or “DPG”. But the philosophy and principles behind the movement align perfectly with the Janssen Project’s mission to build the world’s leading community-governed digital identity platform. We strongly believe that the Janssen Project software can enable countries to build safe, trusted, and inclusive public digital identity infrastructure, and that Janssen software addresses more challenges than any other existing commercial or open-source platform.

At Gluu, we’re excited that the Digital Public Goods Alliance has recognized the Janssen Project as a Digital Public Good (DPG). It doesn’t change what we’re doing, why, or how. DPG or not… Gluu will continue to contribute to the Janssen Project because we think it’s the right way to build the core technology for our commercial product.  But at the same time, it’s nice to know that the knowledge that we’ve acquired over the last fourteen years, much of which is reflected in our software, can benefit low and middle-income nations who are struggling to build this existentially important new component of our digital society.

If you’re interested in learning more about the Linux Foundation and Janssen Project being recognized for their digital public goods, be sure to check out this press release on LinkedIn.

Multi-Master Multi-Cluster LDAP (OpenDJ) replication in Kubernetes? A controversial view
https://gluu.org/opendj-is-a-lightweight-directory-access-protocol-ldap-compliant-distributed-directory-written-in-java-many-organizations-use-it-as-a-persistence-mechanism-for-their-iam-systems/ – Tue, 30 Apr 2024

The post Multi Master Multi-Cluster LDAP (OpenDJ) replication in Kubernetes? A controversial view first appeared on Gluu.


OpenDJ is a Lightweight Directory Access Protocol (LDAP)-compliant distributed directory written in Java. Many organizations use it as the persistence mechanism for their IAM systems. But should organizations use it in a multi-cluster or multi-regional Kubernetes setup? That is exactly what we want to discuss and dive into.

Beware the initial YES

Gluu Federation had great results with OpenDJ working in a multi-AZ, single Kubernetes cluster. As a result, Gluu Federation decided that the engineering team should take on the task of supporting multi-regional OpenDJ setups in Kubernetes. We realized that OpenDJ needed a lot of work. The containerization team looked at the most probable issue that would arise: replication. Looking at the code, OpenDJ lacked much of the networking smarts that would ease the addition and removal of regional pods. The solution was to pair Serf with our OpenDJ container image and use it as the intelligence layer for pod membership, OpenDJ pod failure detection, and orchestration. That layer would communicate with OpenDJ's replication process through NodePorts or static addresses assigned per pod.
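To make the membership layer concrete, here is a minimal, hypothetical sketch (not our production code) of how alive Serf members could be turned into a replication peer list. The "serf members" output format shown and the host naming scheme are illustrative only:

```python
# Illustrative sketch: deriving an OpenDJ replication peer list from Serf
# membership output. Member names and the host/port scheme are hypothetical.

def parse_serf_members(output: str) -> list[str]:
    """Parse `serf members`-style text output into a list of alive member names."""
    alive = []
    for line in output.strip().splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[2] == "alive":
            alive.append(parts[0])
    return alive

def replication_peers(members: list[str], self_name: str, ldap_port: int = 30440) -> list[str]:
    """Map alive members (excluding ourselves) to host:port replication targets."""
    return [f"{m}.gluu.org:{ldap_port}" for m in members if m != self_name]

# Example with mock membership output: the failed west pod is excluded,
# and we never list ourselves as our own peer.
output = """\
gluu-opendj-east-regional-0  10.0.1.5:7946   alive  role=opendj
gluu-opendj-east-regional-1  10.0.1.6:7946   alive  role=opendj
gluu-opendj-west-regional-0  10.0.2.5:7946   failed role=opendj
"""
alive = parse_serf_members(output)
peers = replication_peers(alive, "gluu-opendj-east-regional-0")
print(peers)  # ['gluu-opendj-east-regional-1.gluu.org:30440']
```

In the real container, an intelligence layer like this is what reacts to pods joining, leaving, or failing across regions.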

OpenDJ communication

The Gluu Federation Helm charts holding the OpenDJ sub-chart also took care of creating the Kubernetes resources needed to support a multi-regional setup. After several rounds of testing, Gluu rolled it out in an alpha stage to some customers. Although the solution worked, we started noticing several issues:

Replication! Depending on how fast replication occurs and the user login rate, users may face login issues. Yes, this may occur in a single-region Kubernetes cluster too, but it is much less observable there. To solve this, the user needed to be pinned to a session, with replication reconciled later. Unless the user travels to another region before replication fully rebalances, that user won’t notice.

Missing Changes (M.C.)! This was the main reason Gluu Federation decided to withdraw the alpha support and remove the layers supporting multi-regional, multi-master OpenDJ setups in Kubernetes. In the long run, several missing changes started appearing that would not sync. This required manual intervention to force the sync and activate the replication correctly–something many don’t want to be handling in a production environment. Below is one such replication status:

					Suffix DN : Server                                                       : Entries : Replication enabled : DS ID : RS ID : RS Port (1) : M.C. (2) : A.O.M.C. (3) : Security (4)
----------:--------------------------------------------------------------:---------:---------------------:-------:-------:-------------:----------:--------------:-------------
o=gluu    : gluu-opendj-east-regional-0-regional.gluu.org:30410 : 68816   : true                : 30177 : 21360 : 30910       : 0        :              : true
o=gluu    : gluu-opendj-east-regional-1-regional.gluu.org:30440 : 66444   : true                : 13115 : 26513 : 30940       : 0        :              : true
o=gluu    : gluu-opendj-west-regional-0-regional.gluu.org:30441 : 67166   : true                : 16317 : 17418 : 30941       : 0        :              : true
o=metric  : gluu-opendj-east-regional-0-regional.gluu.org:30410 : 30441   : true                : 30271 : 21360 : 30910       : 0        :              : true
o=metric  : gluu-opendj-east-regional-1-regional.gluu.org:30440 : 24966   : true                : 30969 : 26513 : 30940       : 0        :              : true
o=metric  : gluu-opendj-west-regional-0-regional.gluu.org:30441 : 30437   : true                : 1052  : 17418 : 30941       : 0        :              : true
o=site    : gluu-opendj-east-regional-0-regional.gluu.org:30410 : 133579  : true                : 1683  : 21360 : 30910       : 0        :              : true
o=site    : gluu-opendj-east-regional-1-regional.gluu.org:30440 : 133425  : true                : 15095 : 26513 : 30940       : 0        :              : true
o=site    : gluu-opendj-west-regional-0-regional.gluu.org:30441 : 133390  : true                : 26248 : 17418 : 30941       : 0        :              : true

Maintenance! No matter how good the solution was, it required constant care, unlike modern centrally managed NoSQL or even RDBMS solutions.

Recovery! Not as easy as one might think: recovery can easily go wrong, since the operations occur in an ephemeral environment. Sometimes this requires shutting down one region’s persistence, performing the recovery on one side, and then bringing up the other region afterwards. Not so cloud-friendly, right?!

Cost! The cost organizations spend just maintaining a setup like the one described above is actually higher than simply using a cloud-managed solution–provided, of course, that a cloud-managed solution is viable, as some organizations require all services to live in their own data centers.

Scale! No matter how smart the solution was, scaling up and down under higher rates of authentication was an issue. Going back to the first issue mentioned: if a high surge occurred and replication was a bit behind, some authentications would fail. What would happen if autoscaling was enabled and a high surge occurred? Obviously, the number of pods would increase, and hence the number of members needing to join the OpenDJ topology would increase.

While pods become ready to accept traffic, users will be directed to different pods depending on the routing strategy, creating a bigger window for replication differences to arise. If the surge subsides before replication fully balances, two scenarios may occur. The first is the normal, desired behavior: the pod checks that replication with its peers is perfect and there are no missing changes, in which case the pod de-registers itself from the cluster membership and then gracefully shuts down.

The other behavior is when a missing change arises and the pod can’t scale down, holding up resources until an engineer can step in, look at the issue, fix it, and possibly forcefully scale down. I’m not saying a solution can’t be made here, but the price of making it available doesn’t make sense compared with more viable options such as RDBMS and NoSQL solutions.
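As an illustration of the graceful path described above, the scale-down gate reduces to a simple predicate over the replication status rows (the M.C. column in the status output shown earlier). This is a hypothetical sketch, not the actual container logic:

```python
# Hypothetical scale-down gate: a pod should only de-register from cluster
# membership and shut down when every suffix reports replication enabled
# and zero missing changes (the M.C. column in the replication status).

def safe_to_scale_down(status_rows: list[dict]) -> bool:
    """Return True only if every suffix has replication enabled and no
    missing changes; any pending change must block graceful shutdown."""
    return all(
        row["replication_enabled"] and row["missing_changes"] == 0
        for row in status_rows
    )

rows = [
    {"suffix": "o=gluu",   "replication_enabled": True, "missing_changes": 0},
    {"suffix": "o=metric", "replication_enabled": True, "missing_changes": 0},
    {"suffix": "o=site",   "replication_enabled": True, "missing_changes": 3},
]
print(safe_to_scale_down(rows))       # False: o=site still has missing changes
print(safe_to_scale_down(rows[:2]))   # True: all clean, safe to de-register
```

The hard part, as described above, is what happens when this predicate stays False: the pod lingers until a human intervenes.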

Performance! In general, we noticed that any organization requiring more than 200–300 authentications per second should avoid using OpenDJ as the persistence layer. You do not want to be handling replication issues with 200 authentications per second coming in.

Conclusion

I personally still love OpenDJ and the LDAP protocol in general, but is it a cloud-friendly product? Not really. I would still recommend it in a single multi-AZ Kubernetes cluster for small-to-medium organizations that are trying to save money and have a light load. Focusing on what matters for your organization is very important. If you can implement a centrally managed solution, and the cost for you is relatively low, you should go with a managed solution that can scale and replicate with minimal effort across regions.

Gluu Flex Roadmap
https://gluu.org/gluu-flex-roadmap/ – Mon, 29 Apr 2024

The post Gluu Flex Roadmap first appeared on Gluu.

As Flex is a commercial distribution of the Janssen Project, check the Janssen Nightly Build changelog and issues. You can also check the Nightly Build changelog for the Admin UI. Gluu 4 is considered a stable release–meaning no major features are planned.

Gluu Flex – Q3 2023

  • New endpoint for “OAuth 2.0 for First-Party Native Applications” draft
  • Support for new parameter for “Use of Attestation in OAuth2 Dynamic Client Registration” draft
  • New component: Jans Link Synchronization Server (LDAP or Keycloak as identity source)
  • New Component: Keycloak SAML IDP
  • Ecommerce SMB License for 1600 MAU or less
  • Support for FIPS profile using Jans RHEL 8

Gluu 4 – Q3 2023

  • Bug fixes for oxTrust Admin UI update
  • MAU report in oxTrust

Gluu Flex – Q4 2023

  • Bi-directional Keycloak Synchronization
  • Support for one-time use SSA statements
  • Support for the “Grant Management for OAuth 2.0” draft

Gluu 4 – Q4 2023

  • No new features planned

Enhancing Secure Mobile Authentication with OAuth, Dynamic Client Registration, and DPoP
https://gluu.org/enhancing-secure-mobile-authentication-with-oauth-dynamic-client-registration-and-dpop/ – Fri, 08 Mar 2024

Discover the latest insights from Mike Schwartz on authentication protocols, including OAuth, Dynamic Client Registration, and DPoP, in this thought-provoking blog post.

The post Enhancing Secure Mobile Authentication with OAuth, Dynamic Client Registration, and DPoP first appeared on Gluu.


OpenID is a federated identity system designed to support a third party that needs to verify a person’s identity within your domain. For example, an e-commerce website may wish to offer social login, leveraging the identity provider (“IDP”) of Google or Apple. However, third-party authentication raises significant security concerns; we cannot allow the third party to access the user’s password. This is why the home IDP displays the login page via a TLS connection, a process that the RP (Relying Party) cannot intercept.

For first-party websites, using a federated identity protocol is very convenient, even though it was primarily designed for third-party authentication. It’s natural for first-party websites to utilize browser redirection via TLS, enabling the IDP to perform authentication and centralize other domain-specific business logic. However, for first-party mobile applications, OpenID is a square peg in a round hole. The browser redirect experience is subpar, with confusing popups that bewilder end-users, and domains cannot customize these system messages. Most mobile developers prefer a backchannel authentication mechanism, allowing them to keep the entire login experience within the app. Yes, they can access the password, but since it’s a first-party application, security concerns are somewhat reduced.

If passwords were the sole solution for human authentication, the OAuth password grant would suffice. However, most modern authentication technologies involve a series of requests and responses, i.e. they are multi-step. What we truly need is something akin to a backchannel OAuth Code Flow grant, where an authentication workflow can occur, and upon completion, the client can reference it with a code while requesting a token. A new IETF draft, known as OAuth 2.0 for First-Party Native Applications, addresses this need.
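To make the flow concrete, here is a rough, hypothetical sketch of the request shapes involved. Endpoint names and any parameters beyond the standard authorization code grant are placeholders, since the draft is still evolving:

```python
# Hypothetical sketch of the request shapes in the "OAuth 2.0 for
# First-Party Native Applications" draft: the app drives a multi-step
# backchannel authentication, then exchanges the resulting code for a
# token just as in the standard authorization code grant.

def authorization_challenge_request(client_id: str, scope: str, **auth_params) -> dict:
    """First backchannel request from the app; auth_params carries whatever
    the current authentication step needs (e.g. a username, then an OTP)."""
    return {"client_id": client_id, "scope": scope, **auth_params}

def token_request(client_id: str, authorization_code: str) -> dict:
    """Once the workflow completes and the client holds a code, the token
    exchange looks like any authorization code grant."""
    return {
        "grant_type": "authorization_code",
        "code": authorization_code,
        "client_id": client_id,
    }

challenge = authorization_challenge_request("mobile-app", "openid", username="alice")
token = token_request("mobile-app", "abc123")
print(token["grant_type"])  # authorization_code
```

The key point is that the entire login experience stays inside the app, with no browser redirect, while the token issuance remains standard OAuth.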

There are a few other measures that can enhance security: (1) using “proof of possession tokens” to prevent unauthorized usage in case a token leaks; (2) employing app attestation to mitigate the risk of app tampering; and (3) utilizing FIDO authentication to leverage hardware biometrics for end-user authentication — currently the best alternative to traditional passwords available.
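As an illustration of point (1), a DPoP proof (RFC 9449) is a short-lived JWT that binds a token to one HTTP method and URI. The sketch below builds only the unsigned header and payload; a real client must append a signature made with its private key, and the JWK coordinates shown are placeholders:

```python
# Structural sketch of a DPoP proof JWT (RFC 9449). The signature step is
# deliberately omitted: a real proof is signed with the client's private
# key (e.g. ES256), and the "x"/"y" values below are placeholders.
import base64, json, time, uuid

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def dpop_proof_unsigned(public_jwk: dict, method: str, url: str) -> str:
    """Build the header.payload portion of a DPoP proof; a real client
    appends ".<signature>" computed with its private key."""
    header = {"typ": "dpop+jwt", "alg": "ES256", "jwk": public_jwk}
    claims = {
        "htm": method,             # HTTP method the proof is bound to
        "htu": url,                # target URI
        "jti": str(uuid.uuid4()),  # unique ID so the proof cannot be replayed
        "iat": int(time.time()),   # issued-at, checked for freshness
    }
    return b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())

proof = dpop_proof_unsigned({"kty": "EC", "crv": "P-256", "x": "...", "y": "..."},
                            "POST", "https://idp.example/token")
header = json.loads(b64url_decode(proof.split(".")[0]))
print(header["typ"])  # dpop+jwt
```

Because the proof names the exact method and URI and carries a one-time identifier, a leaked access token is useless without the client's private key.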

The Ten Buts of Govstack’s Identity Building Block
https://gluu.org/the-ten-buts-of-govstacks-identity-building-block/ – Thu, 29 Feb 2024

The post The Ten Buts of Govstack’s Identity Building Block first appeared on Gluu.

Each Govstack specification offers a blueprint of a digital service landscape. Assuming you think this is possible, among the various Govstack specs, the most important is the GovStack Identity Building Block specification–because most governments that participate in the 50-in-5 initiative will start their digital public infrastructure projects with “identity”.

But the Govstack identity initiative is not going to work. This article will raise five “buts” or reasons that will highlight some of the challenges. But I will also offer five possible course corrections which may help the effort achieve more success–or to at least accelerate momentum toward a different solution.

This approach to Momentum Thinking was developed by John Wolpert in The Two But Rule. The format is simple. The first “but” is “but that will never work because…” The second “but” is, “but it would work if…” The idea behind the book is that “buts” should always come in pairs–two “buts” are better than one. The first “but” identifies the challenge; the second “but” suggests a solution. Even if the second “but” is ridiculous, it still maintains momentum. You always want an even number of “buts”! Unless that even number of “buts” is zero! Wolpert warns that a “no buts” environment results in a culture of toxic positivity clinging to blind optimism and prone to creating a “cacophony of unexamined nonsense.”  My hope is that Ten Buts should create a huge amount of momentum! Are you still with me? I hope so! Here’s the first But…

But The APIs!

But it won’t work because they shouldn’t have defined an API in the first place!

In The Two But Rule, Wolpert advises to “move fast, but don’t break things.” Nick Schrock (a co-creator of GraphQL) says it well in my podcast, Open Source Underdogs, Episode 65:

“Know when to go slow and know when to go fast, especially when you’re talking about so-called “one-way doors” in Jeff Bezos speak, where you’re making decisions that are either extremely costly or impossible to undo… API decisions, especially in open source, last forever. You need to be deliberate on that.”

There are two kinds of endpoints in the current Identity BB “Service APIs” definitions: (1) not sufficiently specified; (2) not sufficiently considered. Simultaneously the Identity BB Service API gives too much detail and not enough!

Insufficiently specified

The Govstack Identity BB copies several API definitions from elsewhere, but does a bad job. Why copy and not just reference? Good question–I don’t know. For example, the authorize endpoint… they make a vague mention that it is the “authorize endpoint of Open ID Connect.” If you’re wondering… that’s not how “OpenID” is spelled–this lack of attention to detail is typical. There is no mention of which version of OpenID Connect was used (probably 1.0 incorporating errata set 2). There are no footnotes that specify where the referenced documents are located. Note that the authorize endpoint is defined in not one, but several OpenID and OAuth specifications. See this well-written section in the OpenID Connect Core spec–that’s probably the one they were thinking about. But what about RFC 7636, which defines “PKCE”–a critical enhancement to the authorize endpoint for JavaScript clients to prevent a code interception attack? No mention of it anywhere. I could go on and on here… but I’m trying to keep this as short as possible…
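For readers unfamiliar with PKCE, the whole mechanism fits in a few lines: the client sends a SHA-256 hash of a random secret with the authorize request ("code_challenge", method "S256") and reveals the secret ("code_verifier") only at the token endpoint, so an intercepted authorization code is useless on its own. A minimal sketch:

```python
# PKCE (RFC 7636): generate a code_verifier and its S256 code_challenge.
import base64, hashlib, os

def make_pkce_pair() -> tuple[str, str]:
    # 32 random bytes, base64url-encoded without padding -> 43-char verifier
    verifier = base64.urlsafe_b64encode(os.urandom(32)).rstrip(b"=").decode()
    # challenge = BASE64URL(SHA256(ASCII(verifier))), per the S256 method
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# authorize request carries: code_challenge=<challenge>&code_challenge_method=S256
# token request later carries: code_verifier=<verifier>
print(len(verifier), len(challenge))  # 43 43
```

An attacker who intercepts the code cannot produce the matching verifier, which is exactly the attack the Identity BB spec never mentions.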

Not sufficiently considered

Let’s just look at the “client-management” endpoint, which the Identity BB spec defines as an endpoint “to add a new open ID connect client”–yes, yet another creative spelling of “OpenID Connect”. Spelling aside… I have to ask, why is this endpoint necessary? There was no such requirement from OpenID Connect software vendors in the past ten years–each vendor has their own config control plane. If standardization was needed, it could have been requested by universities or governments, but no one is asking for it. What is defined in OpenID Connect and OAuth is client “registration”–this is a way for a software client to register asymmetric secrets–passwords are bad for both people and software! For more information, you can see the OpenID Connect Dynamic Client Registration spec, which was finalized around 2014. You should also look at an excellent profile of OpenID Connect Dynamic Client Registration by the UK Open Banking Implementation Entity (“OBIE”). Why didn’t the Identity BB study this profile? Well, they didn’t know about it. I asked… they had never heard of it.

Specifying a bad API can cause governments harm. What harm? There is the cost of migrating off the wrong API. And perhaps most significant are the “opportunity costs”–the benefits lost by going in the wrong direction. For example, the incremental improvements that never happen, lost engineering team experience, and lost new capabilities in systems that are never developed because of API decision mistakes. As Nick Schrock says, “the API lasts forever.” So how can we fix it?

But it could work if Govstack refrains from designing APIs until a later time, when the need for standardization is clear and multiple implementations can collaborate in a consensus-based process, preferably governed by a professional standards body like OASIS, Kantara, or the OpenID Foundation (maybe in the iGov workgroup?).

Bonus But: it could work if Govstack profiled available specs, rather than defining new ones. For example, in the open banking ecosystem, the Financial API (“FAPI”) profile defines acceptable cryptographic algorithm strengths, and which OpenID Flows provide acceptable protection for consumers.

Double Bonus But: It could work if Govstack leveraged some of the contributors who are domain experts to give critical feedback on major decisions, creating a technical advisory committee to ensure the technical integrity of the deliverables and to contribute to conversations about strategic direction.

But Trust!

But it won’t work because we don’t live in a one IDP world!

Ok, now I’m going to go a little identity geek on you. Quick definition: IDP = “identity provider” – it’s an organizational software service that displays the web login pages, issues digital access tokens and mints digitally signed identity “assertions” that describe the “who, what, where, and when” about the authentication event.

I’ll start with an observation from a recent market engagement webinar published by the great state of Texas, a gov’t of 30M people that is pretty good at building infrastructure. I might like more public transportation, but I can normally drive 80 miles an hour past the Tesla factory when I’m headed up north. And we also have some experience building digital infrastructure–although we’re far from perfect, and probably not even the best in the US. If you watch the above-referenced webinar, what’s interesting is that you can see that the Identity BB blueprint is repeating the mistakes of Texas. Initially, Texas rolled out a “citizen identity” component as part of the ecommerce website. See this diagram:

But over the years, they realized that this approach fell short. One citizen identity provider would never rule them all. And this one-off identity solution was expensive and slow to innovate. Adding insult to injury, it hindered, not accelerated, digital transformation in the state. Other governments should learn from the real world experiences of states like Texas. They should build what Texas wants now, not what failed them more than ten years ago. The new architecture looks like this:

You need to listen to the webinar to appreciate the nuance. Texas doesn’t need a citizen IDP–remember that boondoggle is going to cost them a fortune to fix. Texas needs a platform to host IDPs for departments and jurisdictions. They need to become Okta–and host IDPs like they host compute, network, and persistence. Just to give you an example, Gluu serves the state of Maine. They have an IDP just for their election workers. Another example in the US–most state police departments have their own IDP, because each department has their own way to vet people, i.e. their own identity management business process for onboarding. Any officer with access to such an IDP in the US can access the FBI National Data Exchange (N-DEx). Identity is not just a citizen challenge.

This brings me to one of my major criticisms of the Identity BB–the Identity BB should start by thinking about how to establish TRUST. The return on investment (“ROI”) for any infrastructure is based on the economic value derived from public and private sector use. For example, a person gets value when they use their digital identity to obtain government-subsidized health services, open a bank account, or get hired by a private company. How can we make it easier for all the parties involved to trust each other?

To build trust, we need both “tools and rules”. The tools are the technical standards and open source software. The rules are the legal agreements that protect domains that consume digital identity, protect individuals who need to control their digital identity data, and protect the government agencies that share data about its workforce and citizens. Through federations, trust marks, and other tools in the digital identity space, the Identity BB can provide useful templates that can jump start trust.
If you want to consider the “infrastructure” analogy, the “identity” roads that governments need to build will enable the government departments and the private sector to connect to each other. The more “cars,” the higher the ROI. It’s not the government’s job to drive the trucks, but to set the traffic laws and enforce them.

The impending adoption of decentralized identity credentials makes trust even more challenging. While two-party federated trust ecosystems based on OpenID Connect offer the fastest current ROI–countries should all be studying Singpass, supported by more than 200 Singaporean organizations–verifiable credentials (“VCs”) solve some interesting new challenges. But VCs won’t make trust any easier! Without the muscle memory of federated identity trust, countries will find it more difficult to adopt the next generation of digital identity infrastructure. You can’t leapfrog here–you need to build a solid foundation.

The whole idea of trust management is unaddressed in the Identity BB blueprint. In fact, just considering the lack of detail around client registration betrays this fact. Clients aren’t “added” by admins in the government–i.e., you don’t need to POST to the client-management endpoint as the Identity BB imagines. Client registrations should be approved according to a normalized business process defined in the federation for the applicable department (health, justice, defense, elections, etc.). There is a fundamental lack of understanding betrayed in the current documents about how modern federated trust ecosystems are constructed.

But it would work if Govstack focused on creating templates for governments to build trust ecosystems that enable business value creation from identity infrastructure.

Bonus But it would work if we establish patterns and guidelines for governments to create identity platforms instead of identity providers.

But The Scope!

But it won’t work because the IT landscape is actually a bit more complex than Govstack envisions!

I have to admit, I think the idea of a blueprint for a state digital identity service is optimistic. I would argue that national identity systems are by definition bespoke. Obviously, scale varies with each country, as does technical debt. Each government’s current “platform” can also vary quite a bit. I brain-dumped some of the platform systems that interact with, or are required to launch, a digital identity service in the diagram below, just to give you an idea of what your average state IT director needs to consider.

Typical IT Platform

The scope of the Govstack identity blueprint is a subset of citizen “Authn” (i.e. authentication) and citizen “IDM” (i.e. identity management)–the green boxes in the diagram above. Authentication includes issuing citizens some kind of identifier and an authenticator–normally a password, but it could be anything, e.g. a mobile phone or passkey. Identity management includes providing a way for citizens to register and recover their identity.

Right out of the box, I wonder why we are focusing on citizens and not government workforce identity. If the government workforce is going to build and operate this citizen service, how can they even help citizens if we don’t know who they are and what government agency they work for? This goes back to the idea that governments don’t need a citizen identity solution; they need an identity platform that provides identity for many segments: citizen, workforce, healthcare, public safety, defense, etc.

The current Govstack Identity BB is too “in the weeds”. This may sound obvious, but Govstack should help governments govern IT, not tell them how to assemble the various components. If you listen to the Texas Digital Identity Solution webinar, it’s clear that they are not going to operate this infrastructure–the operation of the service is a big part of what they are bidding out. I suggest the Govstack Identity BB spec follows Texas’ lead and focuses on governing IT. Ultimately, governments will need to make a buy-versus-build decision with regard to design, build, and operate. They will need to engage with the market–let commercial entities propose different technical solutions and operational contracts. The total budget for a digital identity service needs to consider the totality of licensing, cloud infrastructure, staffing, legal, marketing, QA, and many other aspects of the service. Ultimately the license is a small part of the total cost of ownership, and whether to build up from an open source base or license a commercial product is a business decision. Open source is only free if you don’t value your time. It may save money to license software if it’s more productive or efficient. Thus, Govstack should not be overly prescriptive with regard to the specifics of the solution. And let’s not kid ourselves, any “lab” we create is purely academic, because there will be so many differences with a government’s real world IT landscape.  Personally, I think a lab is a waste of money because there is no technical risk here–digital identity is a mature technology with well-known best practices, documented particularly well by IDPro.

Also, there is no “right” answer to the buy-versus-build question. Govstack should not advocate for building. In Texas they want to buy–they don’t have the internal resources to design or operate this technically complex, mission critical infrastructure. In Ethiopia, maybe they want to build because they see positive externalities to catalyze the tech industry in Addis Ababa.

I know diving in head first and enrolling citizens may sound like a great idea–people are registering… progress! But that doesn’t mean we’re delivering value. It’s like having a driver’s license in a country with no roads! Let’s lay the groundwork for trust in our digital society and take our time making the big decisions that impact end-user citizen identity–like how we perform identity proofing, and which credentials we enroll. Make valuable content and services available before we make a huge push for citizen adoption.

But we can fix the scope if we limit ourselves initially to the governance of the service, rather than nuts and bolts of a specific implementation!

Bonus But we can focus on building trust between the public and private sector organizations that will create and consume identity and consider more carefully the recommendations regarding citizen identity!

But the Team!

But it won’t work because the Identity BB team doesn’t have the right people on the bus.

Jim Collins, in his seminal book Good to Great, says the first challenge is “who”–GET THE RIGHT PEOPLE ON THE BUS! The right people are not necessarily the most famous people. I have spoken with or reached out to almost everyone listed as an author or contributor on the Identity BB. If you are one of those people and you want to talk, contact me on LinkedIn! The pattern I see is that there were some super smart people who just didn’t have enough time to participate. There were also some people who were clearly unqualified to contribute–but no one either knew better or felt safe enough to voice concerns about their contributions. This has led to some pretty shoddy work. Nor does there seem to be any desire to undertake a review or consider feedback–and I offered plenty of the latter myself. In fact, Jaume Dubois, one of the leading authors of the spec, said he had no regrets about the work, and he is advising national governments to adopt it. I don’t doubt the good intentions, but the potential results are uncertain.

Who is on the team? The authors and contributors are listed in the spec. As mentioned above, the contributors are probably either minor participants or altruistic technologists who want to lend a helping hand. So let’s consider the authors. The Identity BB spec was written by three organizations: MOSIP, Technoforte, and ID30. This was not originally apparent. The original authors page had a mistake–the OpenID Foundation was listed as another contributing organization, but it turns out the listed individual did not contribute any text, attended only “one or two meetings,” and asked to have their name removed. This is consistent with the “shoddy work” theme… you should ask people before you list them as an author, and find out what organization they are representing.

After doing some personal investigation and speaking with the real authors, I’ve noticed that the current authors are either uninvolved in the conversation, have material gaps in their knowledge, or both. Leaders in the group had never heard of well-known identity standards, like SCIM or XACML. Some had no knowledge of popular identity federations, like eduGAIN in the higher education space, which has thousands of participants, or the UK Open Banking federation. When I asked about the plan to establish trust between identity providers (i.e. between domains), there was no answer. This kind of industry knowledge is essential to advise nations on how to build their identity infrastructure. For example, when I asked why the group defined an enrollment endpoint instead of adopting SCIM, one author asked “What is SCIM?” And after I explained it, instead of saying something like “oh, we should look into that”… the SCIM standard was dismissed as “commercial.” It’s clear the current team needs different leadership and needs to be held technically accountable.

There are many excellent federation experts out there. I know at least a dozen who would be amazing. We need to get them in here, or what we’re going to get is “a cacophony of nonsense” with lots of spelling mistakes.

But we can get the right people on the bus if we find the people with the right skills, experience, attitudes, habits, and values, and pay them the rate they ask–the right design has a huge ROI!

Bonus But: we can minimize the team size by joining existing standards efforts where possible.

But we want a competitive vendor landscape, and they wrote the spec in such a way that only one product can satisfy the requirements

A competitive ecosystem of vendors–who offer both commercial and open source software solutions–is desirable. If the Govstack Identity BB writes a spec in a way that only one vendor’s product can satisfy it, then it has failed.

The Govstack Identity spec seems to double as a product design document for MOSIP. The client management endpoint–the one I criticized above–makes perfect sense if you think of the Identity BB spec as a MOSIP product design document. And coincidentally, all the authors of the Identity BB are closely associated with MOSIP.

MOSIP is funded by donations from the Gates Foundation and EK Step (a foundation funded by Nandan Nilekani, the co-founder of Infosys). They are doing some interesting work in citizen identity management, although one has to wonder whether a donations-based funding model will result in sufficient long-term maintenance and innovation. Ultimately, they need a monetization strategy to fund the boring stuff (updates) and the fun stuff (new research and development). Deploying MOSIP into production given the uncertainties of this funding model is not without risk. Will the Gates Foundation and EK Step continue to fund it? If not, how will MOSIP monetize? Ultimately, this “open source” effort is starting to look more and more like a business. When I heard Gabi Adotevi speak about MOSIP at the DPGA meeting in November, he sounded awfully concerned about new funding commitments from donors. In my opinion, he should have been more concerned about revenues and less concerned about raising more capital. As Bill Gates knows, in the software business, hostages are better than customers. It’s free now, but are current MOSIP customers future MOSIP hostages, because no other solution supports this “standard”? Only time will tell!

The DPGA, a UN initiative, and Govstack should not be the marketing arm of MOSIP. And although we all want to see synergies between the altruistic investments of the various donors, in this case the stakes are simply too high to put your fingers on the scales to favor one very risky solution. MOSIP should have to compete in the marketplace of ideas, and on economics, just like all the other commercial and open source solutions. If the Govstack Identity BB spec stands as is, it has failed completely to coalesce an ecosystem of vendors. In layman’s terms, no vendor or open source project wants to implement a deficient API that is not really a standard.

But we can fix the Identity BB spec to make sure that it only uses open standards that have multiple competitors who can satisfy the requirements!

Wow, if you made it to the end of this article, congratulations! I know it’s a lot. Frankly, I have more, but I’ve run out of time to share the rest of my thoughts! Please reach out to me on LinkedIn if you are DPGA or Govstack leadership–this article is for you. It’s not too late to maintain momentum and help achieve the goals of SDG 9: to build resilient infrastructure, promote sustainable industrialization, and foster innovation. But wishful thinking is not going to enable us to help countries build digital public infrastructure. But we can act now, address the problems, and make sure we don’t make things worse!

The post The Ten Buts of Govstack’s Identity Building Block first appeared on Gluu.

]]>
“Workload Identity”: It’s SPIFFY, but Central Policy Management? https://gluu.org/workload-identity-its-spiffy-but-central-policy-management/ Thu, 08 Feb 2024 19:55:45 +0000 https://u38m26irx0.onrocket.site/?p=31174 SPIFFE-based mutual TLS (mTLS) is a way to secure workload identity and communication in a distributed system using the SPIFFE and SPIRE standards (see also: https://spiffe.io/). SPIFFE stands for Secure Production Identity Framework for Everyone, and SPIRE stands for SPIFFE Runtime Environment.

The post “Workload Identity”: It’s SPIFFY, but Central Policy Management? first appeared on Gluu.

]]>
SPIFFE-based mutual TLS (mTLS) is a way to secure workload identity and communication in a distributed system using the SPIFFE and SPIRE standards (see also: https://spiffe.io/). SPIFFE stands for Secure Production Identity Framework for Everyone, and SPIRE stands for SPIFFE Runtime Environment. SPIFFE defines a platform-agnostic identity format and API for workloads, and SPIRE provides a system for issuing and verifying SPIFFE identities.
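For concreteness, a SPIFFE ID is just a URI of the form spiffe://&lt;trust-domain&gt;/&lt;workload-path&gt;. A minimal sketch of pulling one apart (the trust domain and path below are made-up examples, and real validation involves more rules than this):

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id: str) -> tuple[str, str]:
    """Split a SPIFFE ID into (trust_domain, workload_path)."""
    parts = urlparse(spiffe_id)
    if parts.scheme != "spiffe" or not parts.netloc:
        raise ValueError(f"not a SPIFFE ID: {spiffe_id!r}")
    return parts.netloc, parts.path

trust_domain, path = parse_spiffe_id("spiffe://example.org/billing/payments")
print(trust_domain, path)  # example.org /billing/payments
```

In a real deployment the workload never constructs this string itself; SPIRE issues it inside an X.509 SVID, and peers extract it from the certificate during the mTLS handshake.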

An East-West service mesh like Istio or Cilium could be a good place to enforce policies based on “workload identities” derived from the claims of the X.509 client certificate. Let’s say the policy is “version2 clients can’t call version1 endpoints.” Thanks to Christian Posta for the solo.io webinar showing how we could define a policy like this in Cilium:

[Screenshot: Cilium network policy YAML]

From an enterprise policy management standpoint, do I want network policies defined in my service mesh? It means I need to learn a one-off policy language. For example, Cilium has its own way of expressing policies (see above), but KrakenD uses the CEL policy language, and there are a hundred other systems that might need to consume enterprise policies. Wouldn’t it be better to follow the lead of Kubernetes access control and use OPA to manage policies–or some other PDP, picked from an increasingly vast array like Axiomatics, Aserto, Permit.io, or SGNL, just to name a few?
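For example, the same “version2 clients can’t call version1 endpoints” rule could live in a central PDP as an OPA Rego policy. This is a rough sketch–the input attribute names and SPIFFE IDs are hypothetical, standing in for whatever the mesh passes to the PDP:

```rego
package workload.authz

# Deny-by-default: a request is allowed only if no version-skew rule matches.
default allow = false

allow {
    not v2_calling_v1
}

# Hypothetical attributes: the mesh supplies the caller's SPIFFE ID and the
# destination path in the input document.
v2_calling_v1 {
    startswith(input.source_spiffe_id, "spiffe://example.org/frontend/v2")
    startswith(input.path, "/v1/")
}
```

The point is that the policy is authored and audited once, centrally, while Cilium, KrakenD, and everything else simply query the PDP for a decision.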

We want the speed of decentralization, but the consistency and operational leverage of central control. eBPF enables kernel-level optimizations and leaps forward in performance. But metaphorically speaking, that doesn’t mean a bunch of ACLs in the router is the right design, even if we can evaluate those ACLs faster than ever!

As an OAuth person, I see a lot of overlap between an mTLS workload identity (e.g. a Kubernetes service account) and an OAuth “client.” You can connect these two worlds–RFC 8705 defines a mechanism to bind OAuth access tokens to a client’s mutual-TLS certificate, enabling endpoints to verify that the access token presented was actually issued to the client presenting it. You could also achieve this with DPoP (RFC 9449), which might be better, because mTLS doesn’t work well for mobile clients. The real edge is the supercomputer in people’s pockets.
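To make RFC 8705 concrete: the authorization server puts a hash of the client certificate in the token’s cnf (confirmation) claim, and the resource server recomputes that hash from the certificate presented on the mTLS connection. A rough sketch of the resource-server check (the cert bytes below are a stand-in, not a real DER certificate):

```python
import base64
import hashlib

def x5t_s256(cert_der: bytes) -> str:
    """Base64url-encoded (unpadded) SHA-256 thumbprint of a DER certificate,
    as carried in the RFC 8705 cnf claim."""
    digest = hashlib.sha256(cert_der).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def token_bound_to_cert(claims: dict, cert_der: bytes) -> bool:
    """True if the access token's cnf claim matches the presented client cert."""
    expected = claims.get("cnf", {}).get("x5t#S256")
    return expected is not None and expected == x5t_s256(cert_der)

# Stand-in for the DER bytes of the client certificate on the TLS connection
cert = b"fake-der-bytes-for-illustration"
claims = {"cnf": {"x5t#S256": x5t_s256(cert)}}
print(token_bound_to_cert(claims, cert))           # True
print(token_bound_to_cert(claims, b"other-cert"))  # False
```

A stolen bearer token fails this check unless the thief also holds the client’s private key, which is the whole point of sender-constrained tokens.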

The post “Workload Identity”: It’s SPIFFY, but Central Policy Management? first appeared on Gluu.

]]>
4 Learnings: DPGA Meeting 2023 https://gluu.org/4-learnings-dpga-meeting-2023/ Thu, 08 Feb 2024 19:46:12 +0000 https://u38m26irx0.onrocket.site/?p=31170 I attended the DPGA annual meeting in Addis Ababa, Ethiopia. It was my first time meeting in person many of the people in that community and learning about the laudable goals of the DPG Alliance initiatives.

The post 4 Learnings: DPGA Meeting 2023 first appeared on Gluu.

]]>
I attended the DPGA annual meeting in Addis Ababa, Ethiopia. It was my first time meeting in person many of the people in that community and learning about the laudable goals of the DPG Alliance initiatives. The meeting was opened by Yodahe Zemichael, who leads the National ID Program Office in Ethiopia and shared his insights based on their recent successful rollout. The plenary was followed by tracks on digital public infrastructure, global challenges, sustainability, and safeguards. At the conclusion of the meeting, participants were asked to jot down some key takeaways. This list is not comprehensive, but it reflects a few things I was thinking about at the end of our time together.

One: Open Source Software ecosystems

As a DPG product owner, my biggest concern was a misalignment on how to fund the continued maintenance and innovation of DPG open source software. The pervasive sentiment among DPGA members is that open source software should be free of cost, and this sentiment is promulgated by some DPGA leadership. For example, at the closing plenary, Lucy Harris from the DPGA Secretariat said that open source software should be “freely available,” by which she meant that it should not cost anything. It’s important to align the incentives with the desired outcomes. For DPG software, the activities we need to incentivize are continued maintenance (the boring stuff that still needs to be done) and innovation. If the software is free, the result we’ll get is less innovation, less security, and less of everything else that comprises the product (documentation, training, deployment assets, community engagement, QA, etc.). Donations are helpful but not sustainable in the long term, and not necessarily proportional to the usage of the software. Donations also rely on the technical enlightenment of the funders to make the right investments. A better solution is for each DPGA project to publish a metric for intrinsic value measurement in the DPG Registry–funding should flow back to projects in proportion to the size of the deployment, for as long as the software is in use. The failure to fund DPG maintenance and innovation is already a problem, and it is only getting worse.

Two: Digital Identity Accessibility

When rolling out citizen identity infrastructure, governments should try to meet people where they are. Not everyone has a smartphone or laptop; in some cases, a smartphone is a shared resource in a family or community. A piece of paper might be the only affordable way to connect a person to the digital economy.

But digital identity is a moving target. Governments need to “skate to where the puck is going” (to use a hockey analogy…). Changing the culture of digital identity takes years. Governments need to make investments to take advantage of technology that is on the cusp of mainstream adoption. And even if these capabilities are not widespread at the moment, early experience can provide valuable feedback to inform the technology roadmap.

And finally, we need to remember that digital identity by itself is useless. Value is derived from the presentation of digital identity to obtain services, and consumption of the digital identity by local and national governments and the private sector. Creating a trusted digital ecosystem requires a deliberate outreach effort. More countries should look to Singapore’s effort to sign up hundreds of businesses to accept Singpass, their national digital identity federation. If you want more adoption of citizen identity, make it more valuable to citizens!

Three: Architecture Maturity

Western governments are not exactly paragons of successful digital transformation. As an American, I sometimes wonder why Google can authenticate me, but my own government cannot. Open banking in the United States is unlikely anytime soon. Renewing a driver’s license online in my home state of Texas reminds me of what ecommerce was like in the ’90s. Are we really in a position to advise on how to build a government technical stack?

For this reason, I think we need to approach the challenge with a bit more humility. Some of the experience we have from the private sector is relevant. But at the same time, there are still many unknowns–stuff we haven’t done, and stuff that might be different in a government context–especially a government that serves people with different cultures.

I like the idea of a “GovStack,” but it has to be flexible enough to adapt to changing local contexts, lessons learned, capacity constraints, and continued innovation. The DPGA can help catalyze the community to share best practices and lessons learned. We’re all in learning mode, dealing with an unprecedented rate of change in the information technology space.

Four: Schedule

One point Yodahe Zemichael made in his introductory keynote was that to build a digital economy, there was a missing meta-infrastructure: a need to bring all the small pieces together. Because I hang out with a lot of engineers, I have a great appreciation for the “little things” that normal people take for granted. It’s really hard to build a digital service that is high-performance, high-integrity, and highly available. For sovereignty reasons, many countries want to self-host digital public infrastructure. Yet even most US state and federal governments rely on cloud providers for much of the technical capability–not just compute, persistence, and networking, but higher-level services like streaming, caching, key management, and even authorization. And the biggest challenge is not technology–it’s people. It takes years for engineers to get the breadth of experience required to build and operate digital public infrastructure at scale. This is not to say it’s unimportant to overcome the activation energy to get started–goals like 50-in-5 can help here. But things may take longer than expected–they normally do in information technology projects. We need to focus on long-term sustainability. Perhaps our kids will enjoy the benefits of our work, even if we do not. From a historical perspective, change in one generation is still fast.

The post 4 Learnings: DPGA Meeting 2023 first appeared on Gluu.

]]>
Keycloak Roadmap https://gluu.org/keycloak-roadmap/ Fri, 02 Jun 2023 20:24:45 +0000 https://u38m26irx0.onrocket.site/?p=29189 The recent announcement that Keycloak is joining the CNCF as an incubating project was welcome news!!! It resolved two important questions. How would Red Hat transfer governance of the project?

The post Keycloak Roadmap first appeared on Gluu.

]]>
by Michael Schwartz, CEO of Gluu

The recent announcement that Keycloak is joining the CNCF as an incubating project was welcome news!!! It resolved two important questions: How would Red Hat transfer governance of the project? And who owns the Keycloak trademark? Consequently, Gluu is working to integrate Keycloak into both the Janssen Project and the commercial Gluu Flex distribution.

You may be wondering: if Jans Auth Server and Keycloak are both identity providers, how do they connect? And why are we doing this?

The answer is that there is room in the Janssen Project for lots of identity tools. While Jans Auth Server currently provides a lot of the core functionality, other components are also important, like the FIDO and SCIM servers. The modular Jans Config API and tools provide a single API management plane for all the components. And finally, the Janssen Project includes a setup script that bootstraps new deployments, plus cloud-native assets like Helm charts and a Terraform provider.

At Gluu, we realize that no one open source IDP will rule them all. There are lots of different IDPs that were written to solve specific problems, and people are still writing new open source identity providers, like Zitadel. It would be cool if there were a Rust IDP (maybe one day at the Janssen Project?). Janssen Auth Server was designed for FIPS, high-concurrency, multi-datacenter, database-agnostic, auto-scaling deployments–customizable with reusable low-code technology. There is no way we could have done that and also solved the myriad other design objectives that various open source IDPs pursue. Keycloak is a “complete, ready-to-run IAM service in a single lightweight container image.” It supports SAML and “Realms”–features primarily required for enterprise workforce applications and access control–that some in the Janssen community want.

Plus Keycloak users are our kind of people–they believe that enterprise IAM infrastructure should leverage code developed through an open source community. In other words… the enlightened.  We want to create a bridge for collaboration.

There are seven integrations we’re undertaking to leverage the new capabilities from Keycloak:

  1. Add Keycloak to Jans Setup, as an optional component
  2. Write a Keycloak authentication provider to achieve SSO between Jans Auth Server and Keycloak client
  3. Update Cache Refresh to sync the Keycloak database
  4. Add a Jans Config API endpoint to manage SAML Trust Relations and attribute release policies in Keycloak
  5. Add SAML trust relationship management in the Jans Text UI and command line interface
  6. Add SAML trust relationship management in the Gluu Flex Admin UI
  7. Add Agama Engine support directly into Keycloak

Hopefully, we’ll see an early release by the end of June that covers at least the first four of these items.

The post Keycloak Roadmap first appeared on Gluu.

]]>
Inter-Operable Identity Journeys with Agama https://gluu.org/inter-operable-identity-journeys-with-agama/ Mon, 08 May 2023 16:15:43 +0000 https://u38m26irx0.onrocket.site/?p=28688 Whiteboard your consumer, citizen, or workforce identity journeys with Agama Lab.

The post Inter-Operable Identity Journeys with Agama first appeared on Gluu.

]]>

As developers, we love the convenience of cloud identity, but we also want the flexibility to meet our exact business requirements for login, registration, and account recovery. Most good cloud identity platforms provide a way to customize the user experience by implementing code. But frequently this code requires us to learn some vendor-specific black magic, which can lead to lock-in.

And login is not getting any easier–identity journeys are getting more complex. For example, with the help of a risk score, we can add just the right amount of security friction to keep things safe, while not bothering the end-user too much.

At Gluu, our black magic is “interception scripts”, with which you can implement any multi-step authentication flow asynchronously. But over the years we saw how hard it is to write, maintain and transfer these interception scripts. We wanted something more re-usable.

So in early 2021, Gluu introduced Agama, a low-code programming language to simplify the development of identity journeys. Developed at the Linux Foundation Janssen Project, Agama defines a standard way to build web identity journeys in a vendor-neutral way. It’s both a programming language and a project archive format.

In February 2023, at the State of Open conference in London, Gluu launched Agama Lab, a new developer tool to author and release Agama projects. Agama Lab takes low code to a new level by enabling developers and architects to graphically whiteboard identity journeys and release deployable Agama archives directly to a GitHub repository.

You can learn Agama programming in 18 minutes. With Agama Lab you can whiteboard the consumer, citizen, or workforce identity journeys of your dreams!

Help us build a shared public catalog of re-usable authentication, registration, and credential management flows!

Watch the video on YouTube:

Low Code Orchestration: Learn Agama Lab in 18 minutes!

The post Inter-Operable Identity Journeys with Agama first appeared on Gluu.

]]>
APAC Digital Identity Unconference 2023: Notes from Session 1 https://gluu.org/apac-digital-identity-unconference-2023-notes-from-session-1/ Mon, 13 Mar 2023 17:49:34 +0000 https://u38m26irx0.onrocket.site/?p=27928 "Building Identity Journeys with Low Code" with Mike Schwartz, Founder and CEO of Gluu Inc. at the APAC Digital Identity Unconference 2023

The post APAC Digital Identity Unconference 2023: Notes from Session 1 first appeared on Gluu.

]]>
“Building Identity Journeys with Low Code”
with Mike Schwartz at APAC Digital Identity Unconference 2023

Modern authentication has evolved significantly. In the past, a single web page, typically a username/password form, would be presented to authenticate a person, and access to the website would either be granted or denied based on the outcome. Nowadays, modern identity services use a sequence of web pages to authenticate a person via a web browser. The web pages displayed can vary based on security assessment. For instance, if the authentication process seems unusual, such as when a person is using an unrecognized browser or logging in from a new location, additional steps may be necessary to ensure the person’s identity.

Authentication is just one part of the “identity journey,” which includes social login, registration, “forgot password,” two-factor credential enrollment, and consent, among other things. At a high level, the three most typical identity journeys are registration, sign-in (authentication), and forgot password (credential enrollment). This process of building identity journeys is also known as “identity orchestration” in the security industry.

As modern authentication is increasingly based on open standards, especially OpenID Connect, the question arises of how websites can enable the building of identity journeys as part of the OpenID Connect flow. In the past, Gluu exposed a mechanism called “interception scripts,” written in Java or Python syntax, which were highly powerful, allowing websites to build any type of authentication flow. However, creating such scripts required at least intermediate-level programming skills, and for more complex flows, it was better to be an experienced Java developer.

One significant challenge regarding the maintenance of these scripts is knowledge transfer. The original author of the scripts has deep knowledge of how they work, but how can they transfer the script to the next team? As the scripts are asynchronous, there is no sequential way to read them. Basically, the new team needs to invest a lot of time to study and understand the code.

In recent years, “low code” has emerged as a strategy to let developers use graphical tools to create programs. Some platforms even promise “no code” by providing pre-built components that perform all the tasks needed to create a working solution. However, “no-code” solutions suffer from one disadvantage: if the platform designers haven’t provided the functionality, there is no way to accomplish the task. Given the diverse requirements for building identity journeys, a low-code approach is better.

The Janssen Project, a Linux Foundation chartered group, was formed with the purpose of building a world-class OpenID Connect provider. In 2020, Gluu contributed the open-source code that it had developed since 2009 to provide an advanced starting point. The Gluu Server was the most certified OpenID Provider and was used in production by hundreds of companies, including a significant number of governments, financial institutions, large enterprises, and global security companies. Building on this, the project has continued to innovate rapidly, not only in functionality but also in tools that enable easy administration and cloud-native deployment, and that increase the transparency and quality of the CI/CD process. The GitHub URL for the Janssen Project is https://jans.io.

In 2021, the Janssen Project senior developers decided to build a low-code solution for identity journeys, following the success of similar platforms from large commercial, proprietary OpenID Connect vendors like ForgeRock and Ping Identity. There was no open-source, low-code way to build these identity journeys, and the Janssen team projected that enabling developers to use this approach would attract a large community, especially relative to Red Hat Keycloak, another popular (but difficult to customize) open-source identity platform.

After surveying several possible approaches to building a low-code identity orchestration solution, Jose Gonzalez, one of the lead developers at the Janssen Project, proposed an interesting idea: why not design a domain-specific language, or DSL, that specifically addresses the requirements for building web flows for identity orchestration? This approach was ultimately selected, and Agama was born. (Mike Schwartz picked the name Agama after a visit to the San Antonio Zoo’s reptile collection, where he saw some cute shield-tailed agamas.)

You can find an introduction to the Agama DSL in the Janssen documentation: https://docs.jans.io/head/admin/developer/agama/
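For flavor, a toy username/password flow in the style of the Agama DSL might look something like the sketch below. This is illustrative only–the class, method, and template names are hypothetical, and the snippet has not been checked against the current Agama grammar, so treat the documentation linked above as the authoritative syntax reference:

```
Flow co.acme.flows.basicLogin
     Basepath "basicLogin"

// Render a login form template and collect the submitted credentials
creds = RRF "login.ftlh"

// Delegate credential validation to a (hypothetical) Java helper class
valid = Call co.acme.AuthService#validate creds.username creds.password

When valid is true
     Finish creds.username

// Falling through means authentication failed
Finish false
```

The appeal is that the flow reads top-to-bottom like the journey itself, which is exactly the knowledge-transfer problem the interception scripts suffered from.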

The post APAC Digital Identity Unconference 2023: Notes from Session 1 first appeared on Gluu.

]]>