Leveraging a Semantic Layer for Research Curation and Conversational Experiences
A global philanthropic organization focused on health programs struggled to fully leverage knowledge from semi-structured and unstructured documents. Specifically, within a health-related funding program, researchers lacked access to key qualitative data from end-user surveys and transcripts. Consequently, they were unable to fully incorporate end-user experiences into their decision-making processes. This blind spot cost the organization significant time and resources, as researchers manually generated artifacts that lacked critical context.
At an enterprise level, multiple teams faced difficulties ingesting the complex, unstructured knowledge assets required for research or investment strategy. Without a standardized process, the organization wasted resources developing disconnected projects across programs. In addition, the enterprise sought guidance on how to utilize a newly acquired suite of advanced AI and semantic tooling, as well as how to govern and apply best practices to their knowledge and data management efforts.

EK partnered with the organization to develop end-user capabilities for the health program and establish a repeatable pattern to scale these capabilities across enterprise programs and applications. To demonstrate the pattern’s effectiveness, EK first piloted the capabilities within a prioritized program.
Discovery & Use Case Backlog:
EK initiated a series of modeling discovery workshops with the health program department to identify a starting point, key data challenges, and a repeatable use case. The team developed personas to map the data and process challenges facing researchers, translating these requirements into a backlog of potential AI use cases to pilot. After narrowing the focus with the health program team to qualitative product research insights, EK formalized a set of business outcomes and requirements that would guide the overall solution.
Domain Taxonomies & Ontology:
A core part of the solution involved codifying health program knowledge into semantic models (taxonomies and ontologies) for graph ingestion and retrieval. Custom taxonomies categorize end-user responses and enrich survey questions with metadata, enabling efficient retrieval for AI conversational experiences and agentic loops. Developed in collaboration with subject matter experts, these taxonomies include alternative labels and terminology to reflect how researchers search for information.
Custom ontologies support data ingestion, classification, and knowledge domain connectivity. They provide a graph schema and structure for data extracted from surveys and transcripts, facilitating insight retrieval. Built using the Resource Description Framework (RDF) and Semantic Web Standards, these ontologies are extensible to other organizational areas and future datasets.
Solution Build:
1. Data Enrichment & Graph Curation:
EK developed a reusable pipeline that transforms unstructured surveys and transcripts into a populated graph of connected data. The pipeline uses a large language model (LLM) to automatically classify questions and responses against the custom taxonomies. Using the ontology as its schema, the LLM then converts tagged statements and inferred relationships into the triples required for the RDF knowledge graph.
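As a rough sketch of this step (not EK's actual pipeline), the following uses rdflib with an illustrative namespace and a stubbed classify function standing in for the LLM call:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Illustrative namespace; the organization's real ontology IRIs would differ.
EX = Namespace("http://example.org/health-program/")

def classify(text: str) -> str:
    """Stand-in for the LLM call that tags a statement against the
    custom taxonomy and returns a concept identifier."""
    return "ProductUsabilityFeedback"  # stubbed result for the sketch

def ingest_response(g: Graph, response_id: str, text: str) -> None:
    """Turn one survey response into triples shaped by the ontology."""
    node = EX[response_id]
    g.add((node, RDF.type, EX.SurveyResponse))        # class from the ontology
    g.add((node, EX.hasText, Literal(text)))          # raw response text
    g.add((node, EX.taggedWith, EX[classify(text)]))  # taxonomy concept

g = Graph()
ingest_response(g, "resp-001", "The device was hard to use without training.")
print(g.serialize(format="turtle"))
```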
2. RDF to LPG Transformation:
EK developed a reusable framework to transform RDF graph data into Labeled Property Graph (LPG) data. This transformation provides the final LPG with new semantic context unavailable in out-of-the-box LPGs, unlocking new insights and improved inference for chatbot responses.
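A minimal sketch of the idea, assuming the RDF graph from the previous step: property IRIs are reduced to local names, literal values become node properties, and IRI-to-IRI triples become typed edges (EK's actual framework is more involved).

```python
from rdflib import Graph, Literal, URIRef

def rdf_to_lpg(g: Graph):
    """Flatten RDF triples into LPG-style nodes and relationships:
    literal-valued triples become node properties, while IRI-valued
    triples become typed edges between nodes."""
    nodes, edges = {}, []
    for s, p, o in g:
        nodes.setdefault(str(s), {"id": str(s), "props": {}})
        key = str(p).rsplit("/", 1)[-1].rsplit("#", 1)[-1]  # readable local name
        if isinstance(o, Literal):
            nodes[str(s)]["props"][key] = o.toPython()
        elif isinstance(o, URIRef):
            nodes.setdefault(str(o), {"id": str(o), "props": {}})
            edges.append({"from": str(s), "type": key.upper(), "to": str(o)})
    return list(nodes.values()), edges
```

The resulting node and edge dictionaries could then be written to a property graph store with its bulk import tooling.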
3. Chatbot Development:
EK designed and implemented an agentic AI solution that translates natural language questions into queries using the semantic domain models. The AI agent parses incoming questions against modeled concepts and user intent to optimize retrieval requests. A second agent then assembles the retrieved records into a chatbot response, delivered through a custom front-end interface developed by EK, containing both a summary and links to the relevant data sources.
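In outline, the agent loop might look like the following; graph_store and llm are hypothetical interfaces standing in for the graph database client and the LLM wrapper:

```python
def answer_question(question: str, graph_store, llm) -> dict:
    """Hypothetical agent loop: map a question to modeled concepts,
    retrieve matching records from the graph, then summarize with links."""
    concepts = llm.extract_concepts(question)       # e.g., ["usability", "training"]
    records = graph_store.find_responses(tagged=concepts)
    summary = llm.summarize(question, records)      # second agent's synthesis step
    return {
        "summary": summary,
        "sources": [r["id"] for r in records],      # links back to the source data
    }
```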
Solution Architecture & Roadmap:
To ensure scalability, EK developed a future-state semantic architecture and roadmap to guide how the organization uses semantic tools and unifies multiple AI projects. Leveraging EK's proprietary Semantic Layer Maturity Benchmark, the roadmap sequences the steps for implementing an enterprise-wide Semantic Layer as a Service. The architecture also consolidates the multiple AI pilots and initiatives occurring across the organization, providing a framework for how AI agents reuse components and deliver insights across programs.

EK provided specialized expertise in semantic modeling, graph architecture, and technical strategy to successfully deliver both the semantically backed chatbot and the repeatable system architecture. The program’s technical components were designed for interoperability, allowing the organization to extend and reuse them across future search and conversational AI use cases.
EK provided end-to-end support to the organization throughout the engagement, leading the program through strategy, design, and implementation phases. This approach ensured that both business and technical requirements remained aligned from the project’s inception to its long-term sustainment.

The semantic layer solution provided the health program with new qualitative research insights, repeatable data transformation processes, and a connected data framework for future solutions. The enrichment pipeline automatically transforms unstructured surveys and transcripts into machine-readable data, making fresh data available to health program researchers. The pilot chatbot saves the program researchers time by generating inferred insights on product experiences, a task that previously required high manual effort. The semantic models established an expandable, connected data framework that is used to link data and relevant research beyond the initial program.
Business Outcomes:
As a result of this work, the organization is expanding how it leverages knowledge graphs, semantic models, and AI agents to support researchers and decision-makers. EK laid the groundwork for a new operating model and a semantic solution team that will continue to enable teams and end-users across the enterprise.
What’s Next for Search? Moving Beyond Q&A to Context, Discovery, and Action
Enterprise search is in the middle of a reset. For years, most enterprise search programs were built to return documents and links across repositories, and success was measured by whether employees could locate the right file. That model is no longer enough. The next era of search extends beyond simple question-and-answer queries: it pairs answers with context, evidence, and guided discovery, including definitions, constraints, provenance, and clear paths to supporting sources and next steps, so that users can make decisions with confidence rather than merely retrieve information.
There is a misconception I keep hearing in generative AI conversations: that chat will replace the search box, and the future is simply “ask a question, get an answer.” While AI has made conversational interfaces far more capable, this view overlooks the crucial elements required for search within organizations. Modern enterprise search needs to meet consumer-grade expectations for intent and relevance while also satisfying enterprise requirements like security, traceability, governance, and explainability. Therefore, organizations should move beyond asking, “How do we integrate AI into search?” and instead ask, “How can I enable holistic search, discovery, and context?”
Understanding what’s next for enterprise search requires understanding where organizations currently are. EK’s Semantic Maturity Spectrum provides a useful framework for assessing this progression. Advancing along this maturity spectrum does not mean replacing what came before, as each stage adds capabilities that build on the foundation of the previous stages. Enterprise search remains essential even as recommendations, assistants, and agents are introduced because organizations still need a reliable, permission-aware way to locate, filter, and validate source content.

1. Disconnected and Siloed Knowledge: At the lowest maturity level, teams, systems, and informal networks fragment knowledge. Information exists in email threads, personal drives, and undocumented expertise. Users rely on “who you know” to find what they need, creating friction, duplicate work, and significant knowledge loss when employees leave. This state represents not just a search problem but a foundational knowledge management challenge.
2. Enterprise Search: Unifying access across repositories is the first significant step forward. Enterprise search platforms index content from multiple sources, enabling users to query across systems rather than searching each one individually. However, relevance often plateaus at this stage. Without semantic understanding or richer metadata, search engines return keyword matches that may or may not address user intent. Users learn to work around these limitations by trying multiple queries or reverting to asking colleagues. This is where acronyms, product codenames, policy versions, and regional terminology differences quietly break relevance, even when the right content is technically indexed.
3. Recommendations: A significant shift occurs when organizations move from pull-based searching to proactive delivery of relevant knowledge. Recommendation systems surface related content, similar cases, next-best resources, and relevant experts based on work context. This shift means that reusing knowledge is becoming the norm. EK’s work building a recommendation engine that automatically connects learning content to product data shows that recommendations depend on structured signals: taxonomy tagging, semantic relationships, engagement data, and domain constraints. Practical measures of progress include reduced time-to-answer, increased reuse of validated assets, fewer duplicate artifacts, and fewer escalations to SMEs for routine questions.
4. Virtual Assistants and Chatbots: The interaction model changes fundamentally when users move from scanning result lists to conversational guidance and synthesis. Chat-based interfaces are most valuable when they provide an answer and the supporting context, rather than just a response string. Trust-building experiences prioritize citations, confidence signals, and clarity on when escalation is required.
5. Autonomous Agents: At the highest maturity level, systems move beyond answering questions to completing bounded tasks with clear accountability. Critically, autonomous execution operates within a human-in-the-loop model, where agents handle preparation, synthesis, and execution steps while humans retain oversight for approval, exception handling, and final decision-making. Agentic workflows include drafting briefings with citations, creating structured outputs, routing work items, and updating downstream systems. Agent safety requires permission-aware orchestration, audit trails, and strong constraints on tool use and scope.
As organizations progress along the maturity spectrum, the underlying search architecture must evolve. Hybrid retrieval is becoming the standard pattern because it is the most practical way to capture user intent and balance results coverage with relevance. As I explored in a previous blog on vector search, keywords, semantics, and vectors are complementary approaches, not competing ones. Vector search excels at finding relevant results for natural language and fuzzy intent, but it also brings challenges with explainability, drift, and transparency that must be addressed. Without evaluation and monitoring, relevance quietly degrades over time, and user trust collapses before the issue becomes apparent.
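As one illustration of hybrid retrieval, reciprocal rank fusion (RRF) is a widely used, score-scale-free way to blend a keyword ranking with a vector ranking; the document IDs below are invented for the example:

```python
def reciprocal_rank_fusion(keyword_hits, vector_hits, k=60):
    """Blend two ranked result lists (doc IDs, best first) into one.
    RRF rewards documents that rank well in either list without
    requiring the two scoring scales to be comparable."""
    scores = {}
    for hits in (keyword_hits, vector_hits):
        for rank, doc_id in enumerate(hits):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

fused = reciprocal_rank_fusion(
    ["policy-v2", "policy-v1", "faq"],         # keyword ranking
    ["faq", "policy-v2", "onboarding-guide"],  # vector ranking
)
print(fused)  # ['policy-v2', 'faq', 'policy-v1', 'onboarding-guide']
```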
The following four factors provide a framework for maturing search capabilities in alignment with the knowledge discovery spectrum.

1. Designing for Relevance
Search relevance improves dramatically when organizations model the relevant concepts in their domain, such as products, policies, clients, systems, teams, and topics. Modeling these concepts enables relevance because the system can connect what users mean to how knowledge is organized. This requires understanding how users think about information and what they need to accomplish. Search design best practices emphasize that effective search experiences start with user research and intentional design, not technology selection.
Taxonomies, ontologies, and knowledge graphs increase interpretability and consistency by anchoring searches to business meanings and relationships. EK’s five-step approach to enhancing search with a knowledge graph outlines how organizations can analyze search content, develop an ontology, design the user experience, ingest data, and iterate toward continuous improvement. Knowledge graphs connect entities, enrich results with context, and support exploration paths beyond keyword matches. That same semantic foundation supports both list-based exploration and conversational experiences by keeping meaning consistent across modalities.
Because users express intent differently depending on the task, search needs flexibility across modalities, not a single interface. Faceted navigation and consistent metadata remain foundational to findability, especially in heterogeneous content ecosystems. Traditional results lists remain the most efficient way to compare versions, filter by date or system, and validate sources, while chat adds synthesis and guidance when users need explanations, summaries, or a path forward. The goal is not chat versus lists, but the right experience for the task with traceability and control. Actionable results further reduce time-to-value by enabling users to take the next step directly from the results, such as previewing content, initiating workflows, or connecting with experts, instead of merely opening a file.

2. Providing Proactive Recommendations
Recommendations signal that knowledge is becoming easier to reuse, not just easier to locate. Discovery patterns include related content, similar cases, next-best resources, and relevant experts based on their contributions, affiliations, and demonstrated expertise in the searched topic. This shift from pull-based searching to proactive delivery represents a fundamental change in how organizations surface knowledge.
Recommendation systems depend on structured signals: taxonomy tagging, semantic relationships, engagement data, and domain constraints. EK’s work building a recommendation engine that automatically connects learning content to product data demonstrates how semantic relationships serve as a real-world pattern for bridging silos across systems. Knowledge delivery experiences increasingly blur the line between portals, search, and learning, especially when content must be personalized to user roles and context.
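A toy scoring function makes these structured signals concrete; graph and engagement here are hypothetical interfaces, and the weights are arbitrary illustrations rather than a recommended configuration:

```python
def recommend(item, candidates, graph, engagement):
    """Rank candidate assets for an item using structured signals:
    shared taxonomy tags, an explicit semantic relationship, and
    engagement data, constrained to the same domain."""
    def score(candidate):
        shared_tags = len(set(graph.tags(item)) & set(graph.tags(candidate)))
        related = 1 if graph.related(item, candidate) else 0  # ontology relationship
        popularity = engagement.get(candidate, 0) / 100.0     # normalized engagement
        return 2.0 * shared_tags + 3.0 * related + popularity

    # Domain constraint applied as a hard filter before ranking.
    allowed = [c for c in candidates if graph.domain(c) == graph.domain(item)]
    return sorted(allowed, key=score, reverse=True)
```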

3. Enabling Users with Context
Chat-based interfaces are most valuable when they deliver more than a response string. In enterprise settings, the critical output is an answer paired with the context needed to trust and apply it. Context-rich answers include definitions, scope limitations, rationale, and source evidence, plus guidance on what to look at next.
This is where the “Q&A is the future” misconception breaks down. A system that only answers questions might look impressive at first, but it fails quickly when users need to verify, explain, or operationalize what it provides. Trust-building experiences prioritize citations, confidence signals, and clear escalation behavior. The right pattern is not to answer every question, but to answer when grounded and to surface uncertainty, or suggest alternative actions, when not.
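A minimal sketch of that gating pattern, assuming hypothetical retriever and llm interfaces and arbitrary thresholds:

```python
def grounded_answer(question, retriever, llm, min_sources=2, min_conf=0.7):
    """Answer only when grounded: require enough high-confidence sources;
    otherwise escalate rather than guess."""
    hits = retriever.search(question)
    strong = [h for h in hits if h.score >= min_conf]
    if len(strong) < min_sources:
        return {"answer": None, "action": "escalate",
                "note": "Insufficient grounded evidence; routing to an expert."}
    draft = llm.answer(question, context=[h.text for h in strong])
    return {"answer": draft, "action": "respond",
            "citations": [h.source for h in strong]}
```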
EK’s semantic search work for an online healthcare information provider illustrates why semantics matter in practice. Medical concepts can be referred to in different ways across audiences and regions, and patients rarely use clinical terminology. A semantic approach that blends enrichment and modern retrieval techniques supports both medical professionals and patients, enabling doctors to search with clinical terminology while allowing patients and caregivers to find the same concepts using everyday language. In other words, assistant success is not only a model problem; it also encompasses architecture, governance, and operations.

4. From Search to Agent Autonomy
Autonomous agents move from information delivery to bounded execution: they draft briefings with citations, generate structured outputs, route work items, and update downstream systems. In practice, the goal is not open-ended autonomy but execution of well-defined workflows with explicit controls. The opportunity is meaningful, but the bar is higher than it is for assistants: once a system can act, it needs enforceable constraints, auditability, and clear accountability.
The practical lesson is that “agent-ready” requires more than a good prompt. It requires permission-aware orchestration, consistent identity and access enforcement, and operating models that define when the system can proceed and when it must hand off to a human. Agents are the natural next step for organizations that already have trustworthy retrieval, grounded assistant behavior, and mature governance.
A practical path forward starts with focus. Identify a small set of high-impact use cases and the knowledge assets that actually drive those outcomes, rather than trying to make all content searchable at once. From there, set a starting point for AI content readiness by checking content quality, structure, duplication, recency, and contextual completeness to ensure accurate retrieval and reliable answers.
Next, tie readiness findings to the operating reality required to scale. Readiness is not only about content cleanup; it is about organizational capability, the state of enterprise data and content, and the change threshold needed to move beyond pilots. Governance and operating models should explicitly address coordination, filling gaps for unanswerable questions, and responding systematically to hallucinations so that readiness remains robust over time.
Finally, treat unified entitlements as a first-class dependency for any assistant or agent experience. If permissions are inconsistent across systems, the experience will either leak content or become unusable. Unified entitlements provide a holistic way to apply access rights consistently across asset types and platforms.
With those foundations in place, the maturity sequence stays simple: unify findability, enrich meaning, enable recommendations, introduce assistants with citations and evaluation, and then automate bounded workflows with agents.
The future state is not “Q&A as search,” but decision support that combines answers, context, evidence, and discovery. Search becomes the delivery layer for organizational context, and discovery becomes the mechanism for reuse at scale. Assistants and agents amplify impact only when the knowledge foundation is trustworthy, governed, and measurable.
Enterprise Knowledge helps organizations modernize enterprise search through strategy and roadmap development, semantic modeling and knowledge graph enablement, AI readiness for knowledge assets, and unified entitlements. Whether you are just getting started or moving toward recommendations, assistants, or agents, EK can help you build a foundation that is trustworthy, measurable, secure, and scalable. Contact us to discuss your enterprise search journey and what comes next.
Lulit Tesfaye and Lynn Miller to Speak at the APQC 2026 Process & Knowledge Management Conference
In this presentation, Tesfaye and Miller will examine how organizations can build an integrated operating model that aligns Knowledge Management (KM), data, and AI functions to drive faster decisions and stronger business outcomes. The session will share real-world examples of how organizations are starting to break down silos across the enterprise and facilitate new ways of working, exploring emerging team structures, evolving leadership roles, and lessons from organizations leading this shift.
Find out more about the event and register at the conference website.
Building the Semantic Layer: Scaling Enterprise Intelligence at a Global Investment Firm
A global investment firm with a $330 billion portfolio and 50,000+ employees struggled with fragmented data. Investment professionals were losing critical time hunting for assets across disconnected systems. Detailed deal records were scattered across a mix of structured data and unstructured documents. Finding information required knowing it existed in the first place, then identifying the right person to grant access, a process that often stalled for weeks.
Additionally, as part of their due diligence process, teams frequently commissioned external research reports, often unaware that similar studies already existed elsewhere in the firm. With expertise siloed by division, the firm was failing to leverage its greatest asset: the collective knowledge of its global workforce.

EK launched a global Knowledge Portal that serves as a single lens into the firm’s intellectual capital. The portal is built on a graph data model that provides organizational context, uses AI-driven auto-tagging to enhance the search experience, and delivers dynamically updated pages to users. The result is a single platform that provides a 360-degree view of existing firm resources, such as research, deals and investments data, relationships, and personnel information.
This has unlocked strategic capabilities spanning research discovery, deal and investment intelligence, and expertise location.

What sets EK apart is the ability to deliver immediate business impact while simultaneously keeping in mind long-term scalability. We focused on speed to value, launching the Knowledge Portal early as a high-impact product to provide investment teams with immediate utility.
The true EK difference, however, lies in how we evolved that solution. Leveraging our specialized expertise, we are decoupling the Semantic Layer from the portal itself, transforming a single-use tool into a standalone enterprise platform. Throughout 2026, we will be scaling this infrastructure for firm-wide adoption. Because this semantic backbone was first “stress-tested” and enriched by the real-world experiences of the investment teams, the foundation now being laid across the enterprise is not just a theoretical model—it is a field-tested tool ready to power the entire organization.

The Knowledge Portal has transformed how the firm’s investment professionals access information and expertise.
Ultimately, the Knowledge Portal has shifted the firm from a state of fragmented information to one of collective intelligence, ensuring that every investment decision is backed by the full weight of the organization’s global expertise and data.
Enterprise Knowledge Named Top Knowledge Management Consultancy by KMWorld
EK’s thought leadership reflects its position in the field. It hosts a public knowledge base of over 1,000 articles on KM, Semantic Layers, Knowledge Graphs, and AI, produces the top-rated KM podcast, Knowledge Cast, and has published the definitive books in the field, Making Knowledge Management Clickable and Bridging Knowledge, Data, and AI.
“Our annual list of 100 Companies to Watch in the Knowledge Management space is a testament to their agility to thrive in an environment of rapidly changing technologies, while not losing track of the importance of human expertise. We’re proud to celebrate these organizations that are redefining what it means to lead, add business value, and recognize that human ingenuity and artificial intelligence are increasingly inseparable,” said KMWorld Editor in Chief, Marydee Ojala.
In addition to the Top 100 List, EK is also consistently recognized by KMWorld on their list of AI Trailblazers, and by Inc. Magazine and the Washington Business Journal as a Best Place to Work.
“EK is consistently recognized by the industry as a leading KM, AI, and Semantic Consultancy, and by our own employees as a great place to work. That combination is why we continue to thrive,” stated Zach Wahl, CEO of EK.
Read more about the recognition and EK’s place on the list at the KMWorld site.

GraphRAG in the Enterprise
Retrieval-augmented generation (RAG) grounds LLM responses in retrieved enterprise content, and it has quickly become a default pattern for enterprise AI. However, as RAG is applied to more complex use cases, its limitations start to surface. Because it treats knowledge as flat chunks of text, it struggles to convey relationships, maintain consistent context, or support multi-step and cross-document reasoning. GraphRAG addresses these gaps by incorporating graph-based structure into the retrieval process, where entities and their relationships are explicitly modeled. The result is a more context-aware retrieval methodology that sets the foundation for higher quality and more explainable responses at an enterprise level.
GraphRAG extends traditional RAG by grounding retrieval and reasoning in a knowledge graph built on semantic standards rather than relying solely on vector similarity over text. Instead of treating each retrieved chunk as an isolated unit, GraphRAG uses an explicit graph structure with entities, relationships, and shared vocabularies to represent how information is connected across documents and systems.
In practice, this means retrieval is driven not just by textual similarity, but by meaning and structure. Facts, entities, and relationships are anchored to a common ontology, allowing the system to understand how pieces of information relate to one another. This enables traceable reasoning paths, supports multi-step and cross-document questions, and produces answers that are more transparent and auditable—qualities that are essential in an enterprise environment.
Naive RAG systems return information primarily based on textual or vector similarity (keyword or embedding similarity) and assemble the responses by presenting the most relevant chunks in parallel. This approach works well for straightforward lookups, but it often breaks down when the questions require precision, complex reasoning, or an understanding of how information is related. Because each chunk is treated independently, the model has limited awareness of relationships across documents, and it can be difficult to explain why specific pieces of information were retrieved or how an answer was formed.
GraphRAG changes this behavior by shifting retrieval from isolated text to structured context. Instead of asking “which passages are similar,” the system can ask “which entities, relationships, and facts are relevant, and how do they connect?” This supports more consistent handling of ambiguity, enables cross-document and multi-hop questions, and creates reasoning paths that can be traced back to explicit graph structures. In enterprise settings, this fosters higher precision, more interpretable answers, and results that adapt as knowledge evolves.
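To make the contrast concrete, here is a sketch of a multi-hop question expressed as a SPARQL query over an RDF knowledge graph; the file name and ex: schema are illustrative assumptions, not a real dataset:

```python
from rdflib import Graph

g = Graph().parse("enterprise-kg.ttl")  # assumed RDF export of the knowledge graph

# Multi-hop question: "Which suppliers are linked to products mentioned
# in open risk reports?" -- answered by structure, not text similarity.
query = """
PREFIX ex: <http://example.org/schema/>
SELECT ?supplier ?product ?report WHERE {
    ?report  a ex:RiskReport ;
             ex:status "open" ;
             ex:mentions ?product .
    ?product ex:suppliedBy ?supplier .
}
"""
for row in g.query(query):
    print(row.supplier, row.product, row.report)
```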
A GraphRAG system is composed of several components that work together to introduce structure, context, and control into the retrieval and generation pipeline. Each component plays a specific role, ensuring that retrieved information is not only relevant but meaningfully connected before it is used by a language model.
At a high level, GraphRAG combines an enterprise ontology to define shared meaning, a knowledge graph to store entities and relationships, a retrieval and ranking layer to identify the most relevant graph context, and an LLM and orchestration layer to synthesize grounded responses. Together, these components shift RAG from document-centric retrieval towards context-aware reasoning, while preserving traceability and alignment with enterprise knowledge models.
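How those components might fit together is sketched below; kg, vector_index, and llm are hypothetical interfaces, not a specific product’s API:

```python
def graphrag_answer(question, kg, vector_index, llm):
    """Sketch of the GraphRAG flow: anchor the question to ontology
    entities, expand through explicit relationships for connected
    context, then generate a grounded, traceable answer."""
    seeds = llm.link_entities(question, kg.ontology)   # shared vocabulary
    subgraph = kg.neighborhood(seeds, hops=2)          # entities + relationships
    passages = vector_index.search(question, top_k=5)  # complementary text evidence
    context = subgraph.as_text() + [p.text for p in passages]
    answer = llm.generate(question, context=context)
    return {"answer": answer,
            "paths": subgraph.paths(),                 # traceable reasoning paths
            "sources": [p.source for p in passages]}
```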
A global foundation engaged EK after multiple failed attempts to apply LLMs to strategic investment analysis. The organization needed to synthesize insights across public-domain data, internal investment documents, and proprietary datasets to evaluate how investments aligned with strategic objectives, all while ensuring their results were precise, explainable, and usable by executive stakeholders. EK implemented a GraphRAG-based solution anchored by a purpose-built finance ontology and a unified knowledge graph that integrated structured and unstructured data under a shared semantic model. By combining graph-based retrieval, provenance-aware context assembly, and an agentic RAG workflow, the system consistently outperformed naive LLM and traditional RAG approaches on complex, cross-document questions. The result was a transparent and auditable AI capability that delivered coherent and explainable insights. Stakeholders are now able to trace how each answer was derived, establishing a foundation for organization-wide adoption.
Likewise, an investment agency responsible for managing assets and supporting long-term, high-stakes decisions needed to provide its professionals with faster and more reliable access to relevant research and insights. The agency was building a Knowledge Portal but lacked the foundational semantic frameworks required to implement AI- and graph-driven insights at scale for the enterprise, while also operating across a complex data environment with misaligned metadata and disparate structured and unstructured sources. EK delivered a GraphRAG-aligned solution by applying semantic modeling and a graph database to unify data definitions and integrate research content into a single access point. The resulting capability ensured trustworthy and consistent results along with reduced operational silos, and improved search through enhanced natural language processing and auto-tagging. This established a scalable semantic knowledge ecosystem that enabled a second phase focused on graph-driven tagging and expanded knowledge graph capabilities.
GraphRAG represents an evolution of retrieval-augmented generation that aligns more naturally with how enterprises structure, govern, and reason over knowledge. By grounding retrieval and reasoning in explicit semantics and relationships, GraphRAG moves beyond keyword and vector similarity to deliver responses that are more precise, explainable, and contextually coherent. As organizations push AI systems into use cases with higher stakes that cross multiple domains, this graph-based foundation becomes critical for not only improving answer quality, but for building trust and transparency into scalable enterprise AI solutions.
To discuss how GraphRAG can support your organization’s next phase of AI adoption, contact us and connect with our team!
Knowledge Cast – TJ Hsu of Amgen
Enterprise Knowledge CEO Zach Wahl speaks with TJ Hsu, Director of R&D Knowledge Management at Amgen. With over a decade of experience in artificial intelligence and knowledge services, TJ currently leads a team dedicated to enhancing Amgen’s research, development, and medical capabilities through innovative KM strategies.
In this conversation, Zach and TJ discuss the impact TJ has made at Amgen so far, how moving from a collection of intelligent individuals to a collective intelligence helps organizations learn faster and get smarter over time, and different ways to get your organization applying knowledge rather than just “learning” it. They also chat about how to scale KM for the enterprise through selective persona targeting and thoughtful application of technology.
If you would like to be a guest on Knowledge Cast, contact Enterprise Knowledge for more information.
Graph-Based Security & Entitlements: Transforming Access Control for the Modern Enterprise
Modern organizations face significant challenges in managing entitlements across their increasingly complex technology landscapes. The proliferation of cloud-native architectures and Software-as-a-Service (SaaS) platforms has dramatically expanded the digital footprint of most organizations. Critical assets, including sensitive data, analytics, collaboration tools, content, and AI applications, are now distributed across numerous platforms. Each platform leverages its own unique identity model, authorization patterns, and administrative tools, rendering unified entitlements management nearly impossible based on traditional means.
This fragmentation leads to entitlements becoming scattered, causing security teams to lose visibility into how access is genuinely granted. One system relies on roles, another uses attributes or tags, a third employs direct permissions, and a fourth utilizes nested group inheritance. This inconsistency creates both compliance gaps and operational hazards. For a deeper overview of why this problem is escalating across modern platforms, see Why Your Organization Needs Unified Entitlements.
To effectively address security and entitlement challenges, organizations must consolidate and connect disparate information sources, such as identity providers, application configurations, permission lists, and platform-specific rules. Furthermore, the increasing adoption of AI-assisted search and agentic workflows makes robust entitlements more critical than ever. Access controls now govern not only who can view a document but also what information an AI system is authorized to retrieve, summarize, and act upon.
This article explores why traditional entitlement management fails in cloud-first environments and presents graphs and digital twins as a practical architectural solution for unified entitlements. They provide a single, holistic definition of access rights applicable consistently across all systems and asset types.

Zero Trust treats every access request as untrusted by default and requires explicit, policy-driven verification. As a result, it has become the standard direction for enterprise security, but many organizations still struggle to operationalize it. The reason is straightforward: access control decisions depend on fragmented entitlements spread across platforms, identity sources, and repositories. Policies are defined in business language and then implemented as platform-specific rules that drift over time.
Traditional role-based access control (RBAC) remains useful for clear-cut assignments. The challenge is that modern access is not granted in one step: it is accumulated through relationships such as team membership, project participation, shared workspaces, and delegated ownership. In practice, permissions frequently flow through multi-step chains, for example: a user belongs to a team, the team is assigned to a project, and the project has approved access to a workspace and the resources it contains.
These access paths are often invisible to conventional security monitoring, which tends to focus on direct assignments. As the organization evolves, so do these relationships. Policy drift occurs gradually as platform-specific configurations diverge due to factors like personnel transitioning between teams, project status changes (closing or reopening), evolving client and regulatory policies, contractor rotation, and data product reclassification. This creates a gap between what the organization intends to enforce and what the systems actually enforce. EK explores how these unexpected access paths show up in real enterprise scenarios in Unified Entitlements: The Hidden Vulnerability in Modern Enterprises.
Graphs transform how organizations model and reason about access control by representing the enterprise as it truly exists: a network of interconnected entities. Nodes can represent identities such as people, groups, service accounts, and agents, as well as resources like documents, datasets, dashboards, and APIs; edges represent the relationships between them (e.g., member-of, owns, steward-of, can-view, and can-admin). In other words, graphs model security the way organizations operate: through relationships, not just static assignments.
Graphs turn a common authorization question (“Can User A access Resource B?”) into a relationship evaluation problem. The system traverses the graph to determine whether a valid permission path exists (and whether any conditions apply). The resulting traversal offers advantages in both speed and explainability, and the path itself serves as the justification: access is allowed because User A is part of Team X, Team X is assigned to Project Y, and Project Y has approved access to Workspace Z, which contains Resource B.
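A toy version of this traversal, using networkx with the relationship chain described above (a sketch of the idea, not a production authorization engine, which would also evaluate edge types and policy conditions):

```python
import networkx as nx

G = nx.DiGraph()
# Identities, groupings, and resources as nodes; relationships as typed edges.
G.add_edge("user:A",      "team:X",      rel="member-of")
G.add_edge("team:X",      "project:Y",   rel="assigned-to")
G.add_edge("project:Y",   "workspace:Z", rel="can-view")
G.add_edge("workspace:Z", "doc:B",       rel="contains")

def can_access(graph, identity, resource):
    """Access exists if a relationship path connects identity to resource;
    the path itself is the audit-ready justification."""
    if not nx.has_path(graph, identity, resource):
        return False, []
    return True, nx.shortest_path(graph, identity, resource)

ok, path = can_access(G, "user:A", "doc:B")
print(ok, " -> ".join(path))
# True user:A -> team:X -> project:Y -> workspace:Z -> doc:B
```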
Graphs also unlock practical security analysis that is difficult to do with scattered point permissions, such as tracing every relationship path that leads to a sensitive resource.
Operating a graph within a deliberate entitlement architecture greatly improves its value. EK’s unified entitlements guidance describes a Unified Entitlements Service (UES) pattern that centralizes policy intent and converts it into enforcement across systems, effectively acting as a “universal translator” for security policy. A useful way to think about this is as an entitlements digital twin: a continuously updated model of “effective access” across the enterprise. A practical implementation pairs the UES with this digital twin, ingesting identity and permission data into the graph, evaluating access as relationship paths, and capturing decision evidence end-to-end.
Digital twins are valuable because they let security teams safely ask “What happens if…” questions before changes hit production. In my experience, these what-if simulations are where the most value shows up, for example, checking which access paths would break if a team were unassigned from a project, as sketched below.
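Continuing the networkx sketch above, a what-if check on a copy of the twin might look like this (again illustrative, reusing the same toy graph):

```python
def simulate_change(twin, edge_to_remove, checks):
    """What-if analysis on a copy of the entitlements twin: apply a
    proposed change, then report which access paths would break or
    newly appear before touching production."""
    candidate = twin.copy()
    candidate.remove_edge(*edge_to_remove)
    report = {}
    for identity, resource in checks:
        before = nx.has_path(twin, identity, resource)
        after = nx.has_path(candidate, identity, resource)
        if before != after:
            report[(identity, resource)] = {"before": before, "after": after}
    return report

# e.g., what breaks if Team X is unassigned from Project Y?
print(simulate_change(G, ("team:X", "project:Y"), [("user:A", "doc:B")]))
```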
For a deeper walkthrough of the UES components and interactions, see Inside the Unified Entitlements Architecture.
Entitlements are not purely technical. They depend on business meaning: data classification, ownership, stewardship, domains, and regulatory obligations. A semantic layer connects raw identity and permission data to that shared context so policies can be expressed in the terms the business actually uses to operate.
AI intensifies these challenges by increasing the speed and scope of retrieval, summarization, and recombination. If an application can pull content quickly, then entitlement drift propagates faster and becomes harder to unwind after the fact. That is why entitlements are the safety boundary for AI-enabled discovery and agentic workflows: what an AI experience can retrieve must be constrained by the same effective access as the user behind it.
The practical implication is that “AI-safe access control” is really “business-aligned access control.” When sensitivity, stewardship, and usage obligations are encoded in the semantic layer and connected to identities, resources, and relationships, the organization can safely scale search and AI experiences without relying on scattered point controls or brittle platform-specific rules.
Graph-based security is not just a data model: it is an operating capability. The fastest way to make progress is to treat unified entitlements as both an architecture and a change program, delivered in phases.
How to Get Started
1. Assess and prioritize: Inventory your highest-risk assets and repositories, map where sensitive content and data live, and pick the domain where entitlement failure would have the highest impact.
2. Standardize policy intent: Define a canonical policy model (roles, attributes, and relationships) in business terms before you try to enforce it everywhere.
3. Pilot UES and entitlement graph: Stand up the Unified Entitlements Service and the entitlements graph for one domain and one measurable use case. Prove ingestion, evaluation, and evidence end-to-end.
4. Expand and improve: Onboard additional systems in waves, translate policies consistently into enforcement points, and continuously monitor for drift and new access paths.
Implementation Considerations
1. Data quality and lifecycle hygiene: If identity and resource metadata are stale, the graph will be confidently wrong. Establish ownership, lifecycle expectations, and lightweight quality checks.
2. Identity resolution: Unify people, contractors, service accounts, integrations, and agent identities into coherent profiles so policy is enforced consistently across systems.
3. Exception workflows: Define how exceptions are requested, approved, time-bounded, and reviewed so temporary access does not become permanent drift.
4. Evidence and auditability by design: Capture decision context by default, including what was accessed, what policy was evaluated, and what relationship path enabled the decision, so audits can be automated.
In summary, unified entitlements are about reducing uncertainty. Graphs and digital twins provide the structure to model how access actually works, the tooling to simulate change before it becomes disruption, and the evidence to prove who can access what and why. As AI adoption accelerates, this capability becomes even more critical because entitlements define the safety boundary for what AI systems can retrieve and act on.
If your organization is looking to operationalize unified entitlements, especially to reduce policy drift, strengthen Zero Trust controls, or make AI-enabled discovery safer, Enterprise Knowledge can help. We partner with teams to assess entitlement risk, define a scalable Unified Entitlements Service approach, and build an entitlements digital twin roadmap aligned to your governance model and technical ecosystem. Contact us to start your journey to unified entitlements.
Making Search Less Taxing: Leveraging Semantics and Keywords in Hybrid Search
At KMWorld 2025, Chris Marino of Enterprise Knowledge partnered with Jaime Martin, Senior Product Manager and Business Analyst at Tax Analysts, to discuss how semantic and keyword search strategies can improve search experiences. Users of the search platform had expressed challenges with findability and usability; Martin and Marino’s solution improved the flexibility and precision of search results, enhancing users’ experience with the tool.
Session attendees took away practical lessons on combining semantic and keyword techniques in hybrid search.
Understanding the New Knowledge, Data, and AI Ecosystem: Trends in Enterprise AI Architecture
While leaders see AI as the next major organizational disruptor, that excitement is tempered by very real organizational challenges. Poor data quality, unclear governance models, and the sheer volume of overlapping tools flooding the market have left many organizations overwhelmed rather than empowered, increasing pressure to demonstrate return on investment (ROI) and move beyond pilots and proofs of concept.
By 2025, many organizations experienced record levels of churn driven by decision paralysis, stalled pilots, and slow realization of tangible value. Early adopters moved fast (but not always cohesively), resulting in disconnected platforms, siloed data estates, and AI initiatives that failed to scale. We are now seeing a shift where leaders are recognizing that, in this environment, standing still is not a safe choice but a step back; it widens the gap between leaders who operationalize AI and those who remain stuck in perpetual experimentation.
At Enterprise Knowledge, the question we’re now helping organizations navigate is no longer whether AI should be adopted or how to select the “best” AI tool, but how to architect it cohesively and at scale. It’s about intentionally designing a connected knowledge, data, and AI ecosystem. It’s through this shift in thinking that we explore the core components and key considerations of the technology ecosystem we’re seeing emerge and scale across organizations. This shift focuses on foundational data and governance for modern knowledge platforms and AI-enabled experiences. In this blog, we’ll outline what it takes to move beyond experimentation and build an ecosystem that supports the evolution of technology and is readily adopted by the organization.

For a long time, the standard assumption was simple: there would always be a human at the end of the content and data pipeline. A content manager would create content and approve it through workflows, a full-stack developer would expose data through an API and a web app, and a data analyst would write SQL and turn datasets into dashboards inside a BI tool. Content and data curation, governance, and delivery were all designed with those human intermediaries in mind.
This assumption no longer holds true. Today, a growing share of an organization’s “knowledge and data consumers” aren’t people alone; they’re increasingly AI models, algorithms, and autonomous agents. These systems need to discover, understand, and use data on their own, without a human on hand to clean it up or fill in the gaps. This shift is what is changing the bar for how data and knowledge must be managed. It’s no longer enough for data to be intuitive for human comprehension; it also needs to be recognizable and understandable to machines.
While the requirements between these human and AI consumers overlap, they are not identical. Whereas people can infer meaning from fragmented content or tribal knowledge, AI solutions need something stricter. They need content and data that is granular, self-describing, consistent, and structurally complete. Ambiguity, missing context, or informal conventions that humans can work around quickly become blockers for machines. In short, we’re moving from a world where data was explained to one where it must explain itself. That shift has profound implications for how we design knowledge, data, and AI platforms and the capabilities that sit on top of them.
AI is rapidly becoming a foundational layer in enterprise software, embedded directly into everyday applications rather than added as a standalone capability.
We see this integration everywhere, driving personalization and automation of repetitive tasks directly within enterprise applications, such as Adobe Photoshop suggesting edits, GitHub Copilot assisting with code, and enterprise platforms like ServiceNow and Salesforce putting AI directly into their workflows.
As AI embeds into the background, making applications smarter and more autonomous, user adoption is naturally rising as well. This trend is supported by findings from a 2025 Gallup Workforce study, which indicated that 45% of U.S. employees use AI for work-related tasks at least a few times annually, with much higher usage in knowledge-based roles. Technology-focused employees, in particular, are using AI to summarize information, generate ideas, and learn new skills (mostly through chatbots, virtual assistants, and writing tools). More advanced capabilities, like coding assistants and analytics platforms, tend to be adopted by employees who are already using AI frequently.
Organizations are realizing that basic or “naive” AI is falling short of delivering on its promise (especially in complex enterprise use cases). The problem usually isn’t the models themselves; it’s that organizations lack structured, shared, and repeatable ways to provide meaning and context to their fragmented knowledge and data.
This convergence of the semantic layer and operational context is setting the new foundation for reliable AI performance and becoming the focal point within modern enterprise architectures. It is creating a modular, “glass box” foundation that favors transparency and factual accuracy over black-box models that rely on pattern matching.
A key, vendor-agnostic approach on the rise to help organizations take practical steps (no matter their technical maturity) is the semantic layer framework. Built on proven foundations such as metadata, taxonomies, ontologies, business glossaries, and graph solutions, this layer continues to provide the standards and methods for organizing knowledge and data while enabling companies to separate their core knowledge assets from specific applications. This framework isn’t new, but it is becoming essential as AI moves from experimentation to production.
A metadata-first semantic layer provides a unified, standards-based framework for context, governance, and discovery across enterprise data. It makes data accessible through a shared logical layer (often a knowledge graph) exposed via APIs in machine-readable formats that work equally well for people and AI agents. Organizations that have a shared semantic contract in place are able to connect secure, metadata-rich knowledge assets regardless of where they’re created or consumed, and serve both human and machine needs from the same source of meaning.
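For instance, a single concept in such a semantic contract might be exposed roughly like this; the concept, fields, and endpoint are illustrative assumptions, with SKOS supplying the shared vocabulary:

```python
import json

# Illustrative "semantic contract" for one business concept, serialized
# as JSON-LD so the same definition serves people, apps, and agents.
concept = {
    "@context": {"skos": "http://www.w3.org/2004/02/skos/core#"},
    "@id": "http://example.org/taxonomy/customer-churn",
    "skos:prefLabel": "Customer Churn",
    "skos:altLabel": ["attrition", "logo churn"],
    "skos:definition": "Rate at which customers stop doing business with us.",
    "steward": "growth-analytics-team",
    "sourceSystems": ["crm", "billing-warehouse"],
}

def get_concept(concept_id: str) -> str:
    """Stand-in for the API endpoint that exposes the shared logical
    layer in a machine-readable format."""
    return json.dumps(concept, indent=2)

print(get_concept("customer-churn"))
```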
Finally, the different types of graphs we continue to employ are evolving beyond standard knowledge graphs toward context graphs. While knowledge graphs define and map facts and entities, context graphs add the ‘when, where, and why’ through decision traces (why judgment), temporal awareness (signals like time and location), working memory (user intent and in-progress interactions), and user profile information (role, preferences, and interaction history).
Many enterprise architecture challenges stem from the absence of a clear framework to connect and interpret all forms of organizational knowledge: structured data, unstructured content, human expertise, and business rules, collectively known as organizational knowledge assets.
As a result, today’s data and technology leaders are being stretched in every direction. While data architects and engineers are still wrestling with very real, persistent issues like data silos, misalignment with business goals, and pressure to show immediate ROI, their mandates have significantly expanded. A big driver of this expansion and complexity is Generative AI, which has pulled unstructured data into the spotlight – emails, documents, chat logs, policies, and research reports (over 80% of organizational assets). This information usually lives in different systems, owned by different teams, and has historically sat outside the purview of data and analytics leaders. As a result, many of the challenges organizations face with AI today stem from the lack of a clear framework for organizing and interpreting their knowledge assets within the context of the business.
In this realm, the biggest architectural shift happening now is toward connection and interoperability. Organizations are realizing that AI readiness isn’t just about models. It’s about preparing operating models, systems, and teams so AI can do what it does best without having to navigate organizational silos or politics. As such, these emerging AI ecosystems are investing in solutions that support reliable, structured, and understandable knowledge management as a trusted foundation that can turn fragmented information into true knowledge intelligence for the organization.
The quality of AI output is only as good as the data behind it. That’s why unifying knowledge management, data, and AI governance matters to enforce consistent data quality standards and ensure that AI systems leverage accurate and properly secured data to produce more reliable outcomes. Responsible enterprise AI architecture depends on a unified governance framework built around three essentials: strong monitoring, comprehensive observability, and strict access controls, all essential for managing the inherent risks of autonomous systems.
AI monitoring and transparency starts at design time, where AI solutions should have a clearly defined purpose, explicit constraints, and a known set of tools they’re allowed to use. For agentic solutions, it’s also important for agents to expose reasoning steps to users, while backend systems log context, tool calls, and memory usage for debugging. We have been working with observability platforms (e.g., MLflow, Azure AI Foundry) to help teams trace decision paths, detect anomalies, and continuously refine behavior using feedback from both users and testers. When it is done well, we find that these governance frameworks and guardrails support innovation instead of slowing it down.
When it comes to security, AI models and agents need a very different approach than traditional, static access policies. Instead of rigid, application- or group-level permissions, AI requires dynamic, context-aware access, granted at the domain, concept, and data levels, based on the task at hand. Unified entitlements are especially powerful here, providing a consistent definition of access rights across all knowledge assets (for both humans and AI). In practice, this means restricting an agent’s access to specific attributes on specific nodes within a graph setting, rather than locking down or exposing entire systems or datasets.
Finally, organizations are investing in solutions that give them reliable ways to consistently observe and evaluate AI efficacy. Alongside broader quality, risk, and safety checks, observability includes task-level metrics such as intent resolution, task adherence, tool selection accuracy, and response completeness. Advanced telemetry analytics provide additional insight into performance, latency, cost, and efficiency trade-offs, even where these fall outside strict quality measures.
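A skeletal example of capturing such task-level telemetry around one agent step; the agent_fn result contract (intent, tools, and answer keys) is an assumption for illustration, not a standard interface:

```python
import time
import uuid

def traced_agent_step(agent_fn, task, log):
    """Wrap one agent step with task-level telemetry: a trace ID,
    tool-call record, latency, and simple outcome flags."""
    trace_id = str(uuid.uuid4())
    start = time.perf_counter()
    result = agent_fn(task)
    log.append({
        "trace_id": trace_id,
        "intent_resolved": result.get("intent") is not None,
        "tools_called": result.get("tools", []),  # input for unauthorized-use checks
        "latency_ms": round((time.perf_counter() - start) * 1000, 1),
        "response_complete": result.get("answer") is not None,
    })
    return result
```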
The bottom line is that today’s AI systems, especially agentic and non-deterministic models, require going beyond traditional observability. By implementing end-to-end observability frameworks, organizations are starting to gain deep visibility into internal decision-making, execution lineage from input to output, performance, and detecting unauthorized tool or API use proactively.
The abundance of AI tools and the rise of AI agents are fundamentally changing how we engage with technology. Successfully leveraging AI within the enterprise starts with understanding the full knowledge, data, and AI ecosystem.
The primary effort and value lie first in connecting structured and unstructured data, making human expertise machine-readable, and capturing business rules as a unified, context-rich foundation. As a result, the conventional architectures of knowledge portals, data lakehouses, and data science workbenches, as we have known them, are becoming obsolete. A new, unified KM, data, and AI architecture is now necessary, one that can support distributed intelligence, ensure secure access to knowledge assets regardless of their source, and maintain continuous monitoring for modular, reliable, and consistent performance.
If you are in the process of evaluating your ecosystem and architecture, learn more from our case studies on how other organizations are tackling this, or email us at [email protected] for more information or support.