Enterprise Knowledge https://enterprise-knowledge.com/

Leveraging a Semantic Layer for Research Curation and Conversational Experiences https://enterprise-knowledge.com/leveraging-a-semantic-layer-for-research-curation-and-conversational-experiences/ Wed, 18 Mar 2026 19:09:34 +0000

The post Leveraging a Semantic Layer for Research Curation and Conversational Experiences appeared first on Enterprise Knowledge.

The Challenge

A global philanthropic organization focused on health programs struggled to fully leverage knowledge from semi-structured and unstructured documents. Specifically, within a health-related funding program, researchers lacked access to key qualitative data from end-user surveys and transcripts. Consequently, they were unable to fully incorporate end-user experiences into their decision-making processes. This blind spot cost the organization significant time and resources, as researchers manually generated artifacts that lacked critical context.

At an enterprise level, multiple teams faced difficulties ingesting the complex, unstructured knowledge assets required for research or investment strategy. Without a standardized process, the organization wasted resources developing disconnected projects across programs. In addition, the enterprise sought guidance on how to utilize a newly acquired suite of advanced AI and semantic tooling, as well as how to govern and apply best practices to their knowledge and data management efforts.

The Solution

EK partnered with the organization to develop end-user capabilities for the health program and establish a repeatable pattern to scale these capabilities across enterprise programs and applications. To demonstrate the pattern’s effectiveness, EK first piloted the capabilities within a prioritized program.

Discovery & Use Case Backlog:
EK initiated a series of modeling discovery workshops with the health program department to identify a starting point, key data challenges, and a repeatable use case. The team developed personas to map the data and process challenges facing researchers, translating these requirements into a backlog of potential AI use cases to pilot. After narrowing the focus with the health program team to qualitative product research insights, EK formalized a set of business outcomes and requirements that would guide the overall solution.

Domain Taxonomies & Ontology:
A core part of the solution involved codifying health program knowledge into semantic models (taxonomies and ontologies) for graph ingestion and retrieval. Custom taxonomies categorize end-user responses and enrich survey questions with metadata, enabling efficient retrieval for AI conversational experiences and agentic loops. Developed in collaboration with subject matter experts, these taxonomies include alternative labels and terminology to reflect how researchers search for information.

Custom ontologies support data ingestion, classification, and knowledge domain connectivity. They provide a graph schema and structure for data extracted from surveys and transcripts, facilitating insight retrieval. Built using the Resource Description Framework (RDF) and Semantic Web Standards, these ontologies are extensible to other organizational areas and future datasets.

Solution Build:
1. Data Enrichment & Graph Curation:
EK developed a reusable pipeline that transforms unstructured surveys and transcripts into a populated graph of connected data. The pipeline uses a large language model (LLM) to automatically classify questions and responses against the custom taxonomies. Then, leveraging the ontology as its schema, the LLM converts tagged statements and inferred relationships into the triples required for the RDF knowledge graph.
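As an illustrative sketch (not the program’s actual pipeline), the snippet below shows how one LLM-tagged statement might be mapped onto an ontology schema as subject–predicate–object triples. The namespace, class names, and tagged-statement shape are invented for the example; a production pipeline would typically emit these through an RDF library such as rdflib.

```python
# Illustrative namespace and tagged-statement shape; the real ontology and
# LLM output schema would come from the program's semantic models.
EX = "https://example.org/health/"

# Shape an LLM classification step might emit for one survey response.
tagged = {
    "response_id": "resp-001",
    "text": "The device was hard to calibrate in the field.",
    "topic": "Usability",          # taxonomy concept assigned by the LLM
    "survey": "survey-2025-q1",
}

def to_triples(t):
    """Map one tagged statement onto the ontology's graph schema
    as (subject, predicate, object) triples."""
    resp = EX + t["response_id"]
    return [
        (resp, "rdf:type", EX + "SurveyResponse"),
        (resp, EX + "hasText", t["text"]),
        (resp, EX + "hasTopic", EX + t["topic"]),
        (resp, EX + "fromSurvey", EX + t["survey"]),
    ]

triples = to_triples(tagged)
print(len(triples))  # → 4
```

Each tagged statement thus becomes a small, schema-conformant set of triples that can be loaded into the knowledge graph alongside every other response.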

2. RDF to LPG Transformation:
EK developed a reusable framework to transform RDF graph data into Labeled Property Graph (LPG) data. This transformation provides the final LPG with new semantic context unavailable in out-of-the-box LPGs, unlocking new insights and improved inference for chatbot responses.
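One common way to perform this kind of RDF-to-LPG transformation is to fold triples whose objects are literals into node properties, while triples whose objects are resources become labeled relationships. The sketch below applies that rule with a crude URI heuristic; the URIs and data are illustrative, not the client’s model.

```python
def rdf_to_lpg(triples):
    """Fold RDF triples into an LPG: literal objects become node
    properties, resource (URI) objects become labeled relationships."""
    nodes, edges = {}, []
    for s, p, o in triples:
        nodes.setdefault(s, {"id": s, "props": {}})
        rel = p.rsplit("/", 1)[-1]          # shorten predicate URI to a label
        if isinstance(o, str) and o.startswith("http"):   # crude URI check
            nodes.setdefault(o, {"id": o, "props": {}})
            edges.append((s, rel, o))
        else:
            nodes[s]["props"][rel] = o
    return nodes, edges

triples = [
    ("https://ex.org/resp-001", "https://ex.org/hasText", "Hard to calibrate."),
    ("https://ex.org/resp-001", "https://ex.org/hasTopic", "https://ex.org/Usability"),
]
nodes, edges = rdf_to_lpg(triples)
print(len(nodes), len(edges))  # → 2 1
```

The resulting nodes and edges map directly onto LPG load formats (e.g., Cypher CREATE statements or CSV bulk import), while the property values carry the semantic context captured in the RDF source.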

3. Chatbot Development:
EK designed and implemented an agentic AI solution that translates natural language questions into queries using semantic domain models. The AI agent parses incoming questions against modeled concepts and user intent to optimize retrieval requests. Once records are retrieved, a second agent summarizes the results and formulates a chatbot response, delivered through a custom front-end interface developed by EK, that contains both a summary and links to relevant data sources.
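To illustrate how a question can be parsed against modeled concepts, the sketch below matches question text against a toy taxonomy’s preferred and alternative labels to decide which concepts should parameterize retrieval. The taxonomy, labels, and matching rule are invented for the example.

```python
# Toy taxonomy with preferred and alternative labels; a real deployment
# would load these from the managed taxonomies described above.
TAXONOMY = {
    "Usability": {"usability", "ease of use", "user-friendly"},
    "Pricing": {"pricing", "cost", "price"},
}

def ground_question(question):
    """Resolve a natural-language question to modeled taxonomy concepts,
    which can then parameterize the retrieval query."""
    q = question.lower()
    return [concept for concept, labels in TAXONOMY.items()
            if any(label in q for label in labels)]

print(ground_question("What did users say about cost and ease of use?"))
# → ['Usability', 'Pricing']
```

The matched concepts can then be inserted into a graph query (e.g., filtering responses by `hasTopic`), which is what makes the retrieval step explainable rather than a black-box similarity lookup.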

Solution Architecture & Roadmap:
To ensure scalability, EK developed a future-state semantic architecture and roadmap to guide how the organization uses semantic tools and unifies multiple AI projects. Informed by EK’s proprietary Semantic Layer Maturity Benchmark, the roadmap sequences the steps for implementing an enterprise-wide Semantic Layer as a Service. The architecture also consolidates the multiple AI pilots and initiatives occurring across the organization, providing a framework for how AI agents reuse components and deliver insights across programs.

The EK Difference

EK provided specialized expertise in semantic modeling, graph architecture, and technical strategy to successfully deliver both the semantically backed chatbot and the repeatable system architecture. The program’s technical components were designed for interoperability, allowing the organization to extend and reuse them across future search and conversational AI use cases.

EK provided end-to-end support to the organization throughout the engagement, leading the program through strategy, design, and implementation phases. This approach ensured that both business and technical requirements remained aligned from the project’s inception to its long-term sustainment.

The Results

The semantic layer solution provided the health program with new qualitative research insights, repeatable data transformation processes, and a connected data framework for future solutions. The enrichment pipeline automatically transforms unstructured surveys and transcripts into machine-readable data, making fresh data available to health program researchers. The pilot chatbot saves researchers time by generating inferred insights on product experiences, a task that previously required significant manual effort. The semantic models established an expandable, connected foundation that is already used to link data and relevant research beyond the initial program.

Business Outcomes

  • Designed the technology stack so that 90% of its components can be implemented within existing enterprise systems, saving costs on custom development and reducing technical debt.
  • Provided health program researchers with a comprehensive and accessible dataset of transcript and survey data.
  • Reduced the steps required to gather qualitative data from 5+ to one, decreasing the time spent on manual analysis.
  • Developed a 12+ month plan for productionalization and pilot scaling, preparing the organization for future development and broader AI initiatives.

As a result of this work, the organization is expanding how it leverages knowledge graphs, semantic models, and AI agents to support researchers and decision-makers. EK laid the groundwork for a new operating model and a semantic solution team that will continue to enable teams and end-users across the enterprise.

Download Flyer

Ready to Get Started?

Get in Touch

What’s Next for Search? Moving Beyond Q&A to Context, Discovery, and Action https://enterprise-knowledge.com/whats-next-for-search-moving-beyond-qa-to-context-discovery-and-action/ Tue, 17 Mar 2026 17:54:36 +0000

The post What’s Next for Search? Moving Beyond Q&A to Context, Discovery, and Action appeared first on Enterprise Knowledge.

Beyond the Search Box

Enterprise search is in the middle of a reset. For years, most enterprise search programs were built to return documents and links across repositories, and success was measured by whether employees could locate the right file. That model is no longer enough. The next era of search extends beyond simple question-and-answer queries. It consists of answers paired with context, evidence, and guided discovery, including definitions, constraints, provenance, and clear paths to supporting sources and next steps, enabling users to make decisions with confidence rather than merely retrieving information.

There is a misconception I keep hearing in generative AI conversations: that chat will replace the search box, and the future is simply “ask a question, get an answer.” While AI has enabled a more robust conversational interface, this view overlooks the crucial elements required for search within organizations. Modern enterprise search needs to meet consumer-grade expectations for intent and relevance while also satisfying enterprise requirements like security, traceability, governance, and explainability. Therefore, organizations should move beyond asking, “How do we integrate AI into search?” and instead ask, “How can we enable holistic search, discovery, and context?”

 

Knowledge Discovery Maturity Spectrum: How Search Evolves in the Enterprise

Understanding what’s next for enterprise search requires understanding where organizations currently are. EK’s Semantic Maturity Spectrum provides a useful framework for assessing this progression. Advancing along this maturity spectrum does not mean replacing what came before, as each stage adds capabilities that build on the foundation of the previous stages. Enterprise search remains essential even as recommendations, assistants, and agents are introduced because organizations still need a reliable, permission-aware way to locate, filter, and validate source content.

1. Disconnected and Siloed Knowledge:
At the lowest maturity level, teams, systems, and informal networks fragment knowledge. Information exists in email threads, personal drives, and undocumented expertise. Users rely on “who you know” to find what they need, creating friction, duplicate work, and significant knowledge loss when employees leave. This state represents not just a search problem but a foundational knowledge management challenge.

2. Enterprise Search: Unifying access across repositories is the first significant step forward. Enterprise search platforms index content from multiple sources, enabling users to query across systems rather than searching each one individually. However, relevance often plateaus at this stage. Without semantic understanding or richer metadata, search engines return keyword matches that may or may not address user intent. Users learn to work around these limitations by trying multiple queries or reverting to asking colleagues. This is where acronyms, product codenames, policy versions, and regional terminology differences quietly break relevance, even when the right content is technically indexed.

3. Recommendations: A significant shift occurs when organizations move from pull-based searching to proactive delivery of relevant knowledge. Recommendation systems surface related content, similar cases, next-best resources, and relevant experts based on work context. This shift means that reusing knowledge is becoming the norm. EK’s work building a recommendation engine that automatically connects learning content to product data shows that recommendations depend on structured signals: taxonomy tagging, semantic relationships, engagement data, and domain constraints. Practical measures of progress include reduced time-to-answer, increased reuse of validated assets, fewer duplicate artifacts, and fewer escalations to SMEs for routine questions.

4. Virtual Assistants and Chatbots: The interaction model changes fundamentally when users move from scanning result lists to conversational guidance and synthesis. Chat-based interfaces are most valuable when they provide an answer and the supporting context, rather than just a response string. Trust-building experiences prioritize citations, confidence signals, and clarity on when escalation is required.

5. Autonomous Agents: At the highest maturity level, systems move beyond answering questions to completing bounded tasks with clear accountability. Critically, autonomous execution operates within a human-in-the-loop model, where agents handle preparation, synthesis, and execution steps while humans retain oversight for approval, exception handling, and final decision-making. Agentic workflows include drafting briefings with citations, creating structured outputs, routing work items, and updating downstream systems. Agent safety requires permission-aware orchestration, audit trails, and strong constraints on tool use and scope.

Four Key Factors for the New Search Paradigm

As organizations progress along the maturity spectrum, the underlying search architecture must evolve. Hybrid retrieval is becoming the standard pattern because it is the most practical way to capture user intent and balance result coverage with relevance. As I explored in a previous blog on vector search, keywords, semantics, and vectors are complementary approaches, not competing ones. Vector search excels at finding relevant results for natural language and fuzzy intent, but it also brings challenges with explainability, drift, and transparency that must be addressed. Without evaluation and monitoring, relevance quietly degrades over time, and user trust collapses before the issue becomes apparent.
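As a concrete illustration of hybrid retrieval, reciprocal rank fusion (RRF) is one common way to blend a keyword ranking and a vector ranking without having to calibrate their scores against each other. The document names and rankings below are invented for the sketch.

```python
def rrf(keyword_ranking, vector_ranking, k=60):
    """Reciprocal rank fusion: merge two ranked lists without having
    to calibrate keyword scores against vector similarities."""
    scores = {}
    for ranking in (keyword_ranking, vector_ranking):
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["policy-v2", "policy-v1", "faq"]   # e.g. BM25 order
vector_hits = ["faq", "policy-v2", "onboarding"]   # e.g. embedding order
print(rrf(keyword_hits, vector_hits))
# → ['policy-v2', 'faq', 'policy-v1', 'onboarding']
```

Documents that appear high in both lists rise to the top, while results unique to one retriever are still surfaced, which is exactly the coverage-versus-relevance balance described above.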

The following four factors provide a framework for maturing search capabilities in alignment with the knowledge discovery spectrum.

1. Designing for Relevance

Search relevance improves dramatically when organizations model the relevant concepts in their domain, such as products, policies, clients, systems, teams, and topics. Modeling these concepts enables relevance because the system can connect what users mean to how knowledge is organized. This requires understanding how users think about information and what they need to accomplish. Search design best practices emphasize that effective search experiences start with user research and intentional design, not technology selection.

Taxonomies, ontologies, and knowledge graphs increase interpretability and consistency by anchoring searches to business meanings and relationships. EK’s five-step approach to enhancing search with a knowledge graph outlines how organizations can analyze search content, develop an ontology, design the user experience, ingest data, and iterate toward continuous improvement. Knowledge graphs connect entities, enrich results with context, and support exploration paths beyond keyword matches. That same semantic foundation supports both list-based exploration and conversational experiences by keeping meaning consistent across modalities. 

Because users express intent differently depending on the task, search needs flexibility across modalities, not a single interface. Faceted navigation and consistent metadata remain foundational to findability, especially in heterogeneous content ecosystems. Traditional results lists remain the most efficient way to compare versions, filter by date or system, and validate sources, while chat adds synthesis and guidance when users need explanations, summaries, or a path forward. The goal is not chat versus lists, but the right experience for the task with traceability and control. Actionable results further reduce time-to-value by enabling users to take the next step directly from the results, such as previewing content, initiating workflows, or connecting with experts, instead of merely opening a file.

2. Providing Proactive Recommendations

Recommendations signal that knowledge is becoming easier to reuse, not just easier to locate. Discovery patterns include related content, similar cases, next-best resources, and relevant experts based on their contributions, affiliations, and demonstrated expertise in the searched topic. This shift from pull-based searching to proactive delivery represents a fundamental change in how organizations surface knowledge.

Recommendation systems depend on structured signals: taxonomy tagging, semantic relationships, engagement data, and domain constraints. EK’s work building a recommendation engine that automatically connects learning content to product data demonstrates how semantic relationships serve as a real-world pattern for bridging silos across systems. Knowledge delivery experiences increasingly blur the line between portals, search, and learning, especially when content must be personalized to user roles and context.
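To make the structured-signal idea concrete, the sketch below ranks candidate assets by taxonomy-tag overlap with the user’s current context, using engagement as a tie-breaker. The catalog, tags, and scoring rule are illustrative assumptions, not EK’s production logic.

```python
def recommend(current_tags, catalog, top_n=2):
    """Rank candidate assets by taxonomy-tag overlap with the user's
    current context, tie-broken by engagement."""
    scored = sorted(
        catalog,
        key=lambda item: (len(current_tags & item["tags"]), item["engagement"]),
        reverse=True,
    )
    return [item["id"] for item in scored[:top_n]
            if current_tags & item["tags"]]

catalog = [
    {"id": "course-101", "tags": {"graph", "rdf"}, "engagement": 40},
    {"id": "case-study", "tags": {"graph", "search"}, "engagement": 90},
    {"id": "podcast", "tags": {"culture"}, "engagement": 300},
]
print(recommend({"graph", "search"}, catalog))
# → ['case-study', 'course-101']
```

Even this toy version shows why consistent taxonomy tagging matters: with no shared tags, nothing is recommended, regardless of how popular an asset is.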

3. Enabling Users with Context

Chat-based interfaces are most valuable when they deliver more than a response string. In enterprise settings, the critical output is an answer paired with the context needed to trust and apply it. Context-rich answers include definitions, scope limitations, rationale, and source evidence, plus guidance on what to look at next.

This is where the “Q&A is the future” misconception breaks down. A system that only answers questions might look impressive at first, but it fails quickly when users need to verify, explain, or operationalize what it provides. Trust-building experiences prioritize citations, confidence signals, and clear escalation behavior. The right pattern is not to always answer, but rather to answer when grounded and show uncertainty or even suggest alternative actions when not.

EK’s semantic search work for an online healthcare information provider illustrates why semantics matter in practice. Medical concepts can be referred to in different ways across audiences and regions, and patients rarely use clinical terminology. A semantic approach that blends enrichment and modern retrieval techniques supports both medical professionals and patients, enabling doctors to search with clinical terminology while allowing patients and caregivers to find the same concepts using everyday language. In other words, assistant success is not only a model problem; it also encompasses architecture, governance, and operations.

4. From Search to Agent Autonomy

In practice, the goal is not open-ended autonomy but bounded execution in well-defined workflows with explicit controls. Autonomous agents move beyond information delivery: they draft briefings with citations, generate structured outputs, route work items, and update downstream systems. The opportunity is meaningful, but the bar is higher than it is for assistants: once a system can act, it needs enforceable constraints, auditability, and clear accountability.

The practical lesson is that “agent-ready” requires more than a good prompt. It requires permission-aware orchestration, consistent identity and access enforcement, and operating models that define when the system can proceed and when it must hand off to a human. Agents are the natural next step for organizations that already have trustworthy retrieval, grounded assistant behavior, and mature governance.

Practical Path Forward

A practical path forward starts with focus. Identify a small set of high-impact use cases and the knowledge assets that actually drive those outcomes, rather than trying to make all content searchable at once. From there, establish a baseline for AI content readiness by assessing content quality, structure, duplication, recency, and contextual completeness to ensure accurate retrieval and reliable answers.

Next, tie readiness findings to the operating reality required to scale. Readiness is not only about content cleanup; it is about organizational capability, the state of enterprise data and content, and the change threshold needed to move beyond pilots. Governance and operating models should explicitly address coordination, filling gaps for unanswerable questions, and systematic response to hallucinations so that readiness remains robust over time.

Finally, treat unified entitlements as a first-class dependency for any assistant or agent experience. If permissions are inconsistent across systems, the experience will either leak content or become unusable. Unified entitlements provide a holistic way to apply access rights consistently across asset types and platforms.
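A minimal sketch of the entitlement check described above: retrieved records are filtered against the user’s group memberships before any of them reach the assistant’s context. The record shape and group names are illustrative.

```python
def entitled(results, user_groups):
    """Drop any retrieved record the user's groups cannot see, before
    it ever reaches the assistant's context window."""
    return [r for r in results if r["allowed_groups"] & user_groups]

results = [
    {"doc": "deal-memo", "allowed_groups": {"deals", "legal"}},
    {"doc": "hr-policy", "allowed_groups": {"hr"}},
]
print([r["doc"] for r in entitled(results, {"deals"})])
# → ['deal-memo']
```

Applying the filter at retrieval time, rather than at display time, is what keeps restricted content from leaking into generated answers.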

With those foundations in place, the maturity sequence stays simple: unify findability, enrich meaning, enable recommendations, introduce assistants with citations and evaluation, and then automate bounded workflows with agents.

Conclusion: The Future of Enterprise Search Is Enterprise Decision Support

The future state is not “Q&A as search,” but decision support that combines answers, context, evidence, and discovery. Search becomes the delivery layer for organizational context, and discovery becomes the mechanism for reuse at scale. Assistants and agents amplify impact only when the knowledge foundation is trustworthy, governed, and measurable.

Enterprise Knowledge helps organizations modernize enterprise search through strategy and roadmap development, semantic modeling and knowledge graph enablement, AI readiness for knowledge assets, and unified entitlements. Whether you are just getting started or moving toward recommendations, assistants, or agents, EK can help you build a foundation that is trustworthy, measurable, secure, and scalable. Contact us to discuss your enterprise search journey and what comes next.

Lulit Tesfaye and Lynn Miller to Speak at the APQC 2026 Process & Knowledge Management Conference https://enterprise-knowledge.com/lulit-tesfaye-and-lynn-miller-to-speak-at-the-apqc-2026-process-knowledge-management-conference/ Wed, 11 Mar 2026 17:49:36 +0000

The post Lulit Tesfaye and Lynn Miller to Speak at the APQC 2026 Process & Knowledge Management Conference appeared first on Enterprise Knowledge.

Lulit Tesfaye, Partner and Vice President of Knowledge and Data Services, and Lynn Miller, Knowledge Management Consultant, will present “The Next Evolution of KM: Operating Models for Data and AI Integration” at the APQC 2026 Process & Knowledge Management Conference on April 22nd in Houston, Texas.

In this presentation, Tesfaye and Miller will examine how organizations can build an integrated operating model that aligns Knowledge Management (KM), data, and AI functions to drive faster decisions and stronger business outcomes. The session will share real-world examples of how organizations are starting to break down silos and facilitate new ways of working, exploring emerging team structures, evolving leadership roles, and holistic insights from organizations.

Find out more about the event and register at the conference website.

Image of the APQC Connect 2026 webpage with the training and conference dates and location

Building the Semantic Layer: Scaling Enterprise Intelligence at a Global Investment Firm https://enterprise-knowledge.com/building-the-semantic-layer-scaling-enterprise-intelligence-at-a-global-investment-firm/ Wed, 11 Mar 2026 15:50:24 +0000

The post Building the Semantic Layer: Scaling Enterprise Intelligence at a Global Investment Firm appeared first on Enterprise Knowledge.


The Challenge

A global investment firm with a $330 billion portfolio and 50,000+ employees struggled with fragmented data. Investment professionals were losing critical time hunting for assets across disconnected systems. Detailed deal records were scattered across a mix of structured data and unstructured documents. Finding information required knowing it existed in the first place, then identifying the right person to grant access—a process that often stalled for weeks.

Additionally, as part of their due diligence process, teams frequently commissioned external research reports, often unaware that similar studies already existed elsewhere in the firm. With expertise siloed by division, the firm was failing to leverage its greatest asset: the collective knowledge of its global workforce.

The Solution

EK launched a global Knowledge Portal that serves as a single lens into the firm’s intellectual capital. The portal is built on a graph data model that provides organizational context, uses AI-driven auto-tagging to enhance the search experience, and delivers dynamically updated pages to the user. The result is a single platform that provides a 360-degree view of existing firm resources, such as research, deal and investment data, relationships, and personnel information.

This has unlocked strategic capabilities such as:

  • Surfacing Lessons Learned: EK moved beyond basic search by leveraging AI to “read” unstructured content within PowerPoints and emails. The system automatically tags files, extracts “lessons learned,” and maps them to structured data points, surfacing insights that were previously buried.
  • Reducing Risk and Increasing Security: We replaced manual, multi-step approval chains with an automated permissions model. By applying security at the data-point level through a Unified Entitlements Graph, we ensured that users see only what they are authorized to see, without the burden of 22 separate approvals.
  • Making Connections Seamless: The portal integrates information from 12+ systems into a single view. A semantic layer maintains the relationships between people, deals, and research, allowing users to navigate the enterprise as a connected network rather than a series of folders.

The EK Difference

What sets EK apart is the ability to deliver immediate business impact while simultaneously keeping in mind long-term scalability. We focused on speed to value, launching the Knowledge Portal early as a high-impact product to provide investment teams with immediate utility.

The true EK difference, however, lies in how we evolved that solution. Leveraging our specialized expertise, we are decoupling the Semantic Layer from the portal itself, transforming a single-use tool into a standalone enterprise platform. Throughout 2026, we will be scaling this infrastructure for firm-wide adoption. Because this semantic backbone was first “stress-tested” and enriched by the real-world experiences of the investment teams, the foundation now being laid across the enterprise is not just a theoretical model—it is a field-tested tool ready to power the entire organization.

The Results

The Knowledge Portal has transformed how the firm’s investment professionals access information and expertise.

  • Accelerated Investment Decisions: Critical investment data that previously required 22 approvals and up to two weeks of manual processing is now available instantly. This allows the firm to move faster on time-sensitive opportunities, securing a competitive edge in a high-stakes market.
  • Improved Trust and Reliability: By delivering a 360-degree view of due diligence, financial performance, and partner history in one place, the portal has eliminated the “guesswork” of decision-making. Investment committees now operate with a higher degree of confidence, knowing they are viewing the most reliable, complete, and up-to-date data available across the enterprise.
  • Reduced Reliance on External Research: The firm has seen a significant drop in purchases of duplicate external research and data sets.
  • Rapid Upskilling: New hires and junior staff can now get up to speed faster by accessing the “lessons learned” automatically extracted from historical deal documents as well as quickly finding experts across the firm who can help.
  • Enterprise Readiness: What began as a high-impact MVP is now being scaled as the firm’s universal semantic layer, ready to power the next generation of data-driven decisions.

Ultimately, the Knowledge Portal has shifted the firm from a state of fragmented information to one of collective intelligence, ensuring that every investment decision is backed by the full weight of the organization’s global expertise and data.

Download Flyer

Ready to Get Started?

Get in Touch

Enterprise Knowledge Named Top Knowledge Management Consultancy by KMWorld https://enterprise-knowledge.com/enterprise-knowledge-named-top-knowledge-management-consultancy-by-kmworld/ Tue, 10 Mar 2026 18:34:19 +0000

The post Enterprise Knowledge Named Top Knowledge Management Consultancy by KMWorld appeared first on Enterprise Knowledge.

Enterprise Knowledge (EK) has been named on KMWorld’s list of Top 100 Companies in Knowledge Management for the twelfth consecutive year. EK is the world’s largest dedicated Knowledge Management (KM) consulting firm, consistently recognized for global leadership in KM consulting services, and focused on bridging knowledge, data, and artificial intelligence (AI) for the world’s largest and most complex organizations.

EK’s thought leadership reflects its position in the field. It hosts a public knowledge base of over 1,000 articles on KM, Semantic Layers, Knowledge Graphs, and AI; produces the top-rated KM podcast, Knowledge Cast; and has published the definitive books in the field, Making Knowledge Management Clickable and Bridging Knowledge, Data, and AI.

“Our annual list of 100 Companies to Watch in the Knowledge Management space is a testament to their agility to thrive in an environment of rapidly changing technologies, while not losing track of the importance of human expertise. We’re proud to celebrate these organizations that are redefining what it means to lead, add business value, and recognize that human ingenuity and artificial intelligence are increasingly inseparable,” said KMWorld Editor in Chief, Marydee Ojala.

In addition to the Top 100 List, EK is also consistently recognized by KMWorld on their list of AI Trailblazers, and by Inc. Magazine and the Washington Business Journal as a Best Place to Work.

“EK is consistently recognized by the industry as a leading KM, AI, and Semantic Consultancy, and by our own employees as a great place to work. That combination is why we continue to thrive,” stated Zach Wahl, CEO of EK.

Read more about the recognition and EK’s place on the list at the KMWorld site.

 

The post Enterprise Knowledge Named Top Knowledge Management Consultancy by KMWorld appeared first on Enterprise Knowledge.

]]>
GraphRAG in the Enterprise https://enterprise-knowledge.com/graphrag-in-the-enterprise/ Thu, 05 Mar 2026 16:46:34 +0000 https://enterprise-knowledge.com/?p=26520 Retrieval-Augmented Generation (RAG) is a commonly utilized pattern for grounding large language models in enterprise data. Instead of solely relying on a model’s training, RAG collects relevant information from internal sources, documents, knowledge bases, and other systems; it then uses … Continue reading

The post GraphRAG in the Enterprise appeared first on Enterprise Knowledge.

]]>
Retrieval-Augmented Generation (RAG) is a commonly utilized pattern for grounding large language models in enterprise data. Instead of solely relying on a model’s training, RAG collects relevant information from internal sources, documents, knowledge bases, and other systems; it then uses that context to guide generation. This approach improves accuracy, allows models to work with proprietary or frequently changing data, and has made RAG a natural starting point for many enterprise AI initiatives.

However, as RAG is applied to more complex use cases, its limitations start to surface. Because it treats knowledge as flat chunks of text, it struggles to convey relationships, maintain consistent context, or support multi-step and cross-document reasoning. GraphRAG addresses these gaps by incorporating graph-based structure into the retrieval process, where entities and their relationships are explicitly modeled. The result is a more context-aware retrieval methodology that sets the foundation for higher-quality and more explainable responses at an enterprise level.

 

What is GraphRAG?

GraphRAG extends traditional RAG by grounding retrieval and reasoning in a knowledge graph built on semantic standards rather than relying solely on vector similarity over text. Instead of treating each retrieved chunk as an isolated unit, GraphRAG uses an explicit graph structure with entities, relationships, and shared vocabularies to represent how information is connected across documents and systems.

In practice, this means retrieval is driven not just by textual similarity, but by meaning and structure. Facts, entities, and relationships are anchored to a common ontology, allowing the system to understand how pieces of information relate to one another. This enables traceable reasoning paths, supports multi-step and cross-document questions, and produces answers that are more transparent and auditable—qualities that are essential in an enterprise environment.

 

How Does GraphRAG Differ from Naive RAG?

Naive RAG systems return information primarily based on textual or vector similarity (keyword or embedding similarity) and assemble a response by presenting the most relevant chunks in parallel. This approach works well for straightforward lookups, but it often breaks down when questions require precision, complex reasoning, or an understanding of how information is related. Because each chunk is treated independently, the model has limited awareness of relationships across documents, and it can be difficult to explain why specific pieces of information were retrieved or how an answer was formed.

GraphRAG changes this behavior by shifting retrieval from isolated text to structured context. Instead of asking “which passages are similar,” the system can ask “which entities, relationships, and facts are relevant, and how do they connect?” This supports more consistent handling of ambiguity, enables cross-document and multi-hop questions, and creates reasoning paths that can be traced back to explicit graph structures. In enterprise settings, this fosters higher precision, more interpretable answers, and results that adapt as knowledge evolves.

 

What Are the Components of GraphRAG?

A GraphRAG system is composed of several components that work together to introduce structure, context, and control into the retrieval and generation pipeline. Each component plays a specific role, ensuring that retrieved information is not only relevant but meaningfully connected before it is used by a language model.

 

Components of a GraphRAG system.

 

At a high level, GraphRAG combines an enterprise ontology to define shared meaning, a knowledge graph to store entities and relationships, a retrieval and ranking layer to identify the most relevant graph context, and an LLM and orchestration layer to synthesize grounded responses. Together, these components shift RAG from document-centric retrieval towards context-aware reasoning, while preserving traceability and alignment with enterprise knowledge models.

 

The Purpose of Each Component

 

Layers of a GraphRAG system.

  • Ontology: Establishes a shared semantic foundation for the system by defining entities, relationships, and accepted meanings up front. This removes ambiguity and ensures that retrieval and reasoning align with enterprise definitions rather than surface-level text, allowing the system to interpret questions consistently, even when terminology varies across teams, documents, or data sources.
  • Graph Database: Provides the structural backbone that stores entities and relationships explicitly, supporting traversal-based retrieval, where responses are built from established paths through the graph instead of probabilistic guesses. This makes it possible to understand how an answer was formed and to anchor reasoning in real-world entities and interactions.
  • Retrieval and Ranking: Determines which parts of the graph matter most for a given question by identifying relevant nodes, relationships, and paths, and prioritizing them before passing context to the model. This enables multi-hop and cross-domain reasoning while controlling noise, ensuring the model receives a focused and context-aware view of the knowledge graph rather than an unstructured collection of facts.
  • LLM and Orchestration: Assembles a final response from the curated graph context, combining retrieved relationships, supporting evidence, and provenance into a coherent answer that can be traced back to specific sources and graph paths. This turns structured context into natural language output while preserving explainability, making the system suitable for enterprise use cases where trust and accountability are required.
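To make these roles concrete, here is a minimal, illustrative sketch of how the layers interact. All names here (the ontology, the triples, the helper functions) are invented for this example; a production system would use a graph database, a richer ranking step, and an actual LLM call rather than an in-memory list and a prompt string.

```python
# Hypothetical ontology: relationships defined in the shared semantic model.
ONTOLOGY = {
    "Company": {"supplies": "Company", "headquartered_in": "City"},
    "City": {},
}

# Knowledge graph stored as (subject, predicate, object) triples.
TRIPLES = [
    ("Acme", "supplies", "Globex"),
    ("Globex", "headquartered_in", "Berlin"),
]

def predicate_allowed(predicate):
    """Ontology check: is this relationship defined in the shared model?"""
    return any(predicate in rels for rels in ONTOLOGY.values())

# Only ontology-conformant facts should enter the graph.
assert all(predicate_allowed(p) for _, p, _ in TRIPLES)

def retrieve_subgraph(entity, hops=2):
    """Retrieval layer: traverse outgoing edges up to `hops` steps."""
    frontier, facts = {entity}, []
    for _ in range(hops):
        next_frontier = set()
        for s, p, o in TRIPLES:
            if s in frontier:
                facts.append((s, p, o))
                next_frontier.add(o)
        frontier = next_frontier
    return facts

def build_prompt(question, facts):
    """Orchestration layer: assemble grounded, traceable context for the LLM."""
    context = "\n".join(f"- {s} {p.replace('_', ' ')} {o}" for s, p, o in facts)
    return f"Answer using only these facts:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Where is Acme's customer headquartered?",
                      retrieve_subgraph("Acme"))
```

Because every fact in the prompt is a graph edge, the answer can be traced back to specific triples — the multi-hop path from Acme to Berlin is explicit rather than inferred from chunk similarity.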

 

Success Stories

A global foundation engaged EK after multiple failed attempts to apply LLMs to strategic investment analysis. The organization needed to synthesize insights across public-domain data, internal investment documents, and proprietary datasets to evaluate how investments aligned with strategic objectives, all while ensuring their results were precise, explainable, and usable by executive stakeholders. EK implemented a GraphRAG-based solution anchored by a purpose-built finance ontology and a unified knowledge graph that integrated structured and unstructured data under a shared semantic model. By combining graph-based retrieval, provenance-aware context assembly, and an agentic RAG workflow, the system consistently outperformed naive LLM and traditional RAG approaches on complex, cross-document questions. The result was a transparent and auditable AI capability that delivered coherent and explainable insights. Stakeholders are now able to trace how each answer was derived, establishing a foundation for organization-wide adoption.

Likewise, an investment agency responsible for managing assets and supporting long-term, high-stakes decisions needed to provide its professionals with faster and more reliable access to relevant research and insights. The agency was building a Knowledge Portal but lacked the foundational semantic frameworks required to implement AI- and graph-driven insights at scale for the enterprise, while also operating across a complex data environment with misaligned metadata and disparate structured and unstructured sources. EK delivered a GraphRAG-aligned solution by applying semantic modeling and a graph database to unify data definitions and integrate research content into a single access point. The resulting capability ensured trustworthy and consistent results, reduced operational silos, and improved search through enhanced natural language processing and auto-tagging. This established a scalable semantic knowledge ecosystem that enabled a second phase focused on graph-driven tagging and expanded knowledge graph capabilities.

 

Conclusion

GraphRAG represents an evolution of retrieval-augmented generation that aligns more naturally with how enterprises structure, govern, and reason over knowledge. By grounding retrieval and reasoning in explicit semantics and relationships, GraphRAG moves beyond keyword and vector similarity to deliver responses that are more precise, explainable, and contextually coherent. As organizations push AI systems into use cases with higher stakes that cross multiple domains, this graph-based foundation becomes critical for not only improving answer quality, but for building trust and transparency into scalable enterprise AI solutions.

To discuss how GraphRAG can support your organization’s next phase of AI adoption, contact us and connect with our team!

The post GraphRAG in the Enterprise appeared first on Enterprise Knowledge.

]]>
Knowledge Cast – TJ Hsu of Amgen https://enterprise-knowledge.com/knowledge-cast-tj-hsu-of-amgen/ Tue, 03 Mar 2026 14:36:23 +0000 https://enterprise-knowledge.com/?p=26511 Enterprise Knowledge CEO Zach Wahl speaks with TJ Hsu, Director of R&D Knowledge Management at Amgen. With over a decade of experience in artificial intelligence and knowledge services, TJ currently leads a team dedicated to enhancing Amgen’s research, development, and … Continue reading

The post Knowledge Cast – TJ Hsu of Amgen appeared first on Enterprise Knowledge.

]]>

Enterprise Knowledge CEO Zach Wahl speaks with TJ Hsu, Director of R&D Knowledge Management at Amgen. With over a decade of experience in artificial intelligence and knowledge services, TJ currently leads a team dedicated to enhancing Amgen’s research, development, and medical capabilities through innovative KM strategies.

In this conversation, Zach and TJ discuss the impact TJ has made at Amgen so far, how moving from a collection of intelligent individuals to a collective intelligence helps organizations learn faster and get smarter over time, and different ways to get your organization applying knowledge rather than just “learning” it. They also chat about how to scale KM for the enterprise through selective persona targeting and thoughtful application of technology.

 

 

If you would like to be a guest on Knowledge Cast, contact Enterprise Knowledge for more information.

The post Knowledge Cast – TJ Hsu of Amgen appeared first on Enterprise Knowledge.

]]>
Graph-Based Security & Entitlements: Transforming Access Control for the Modern Enterprise https://enterprise-knowledge.com/graph-based-security-entitlements-transforming-access-control-for-the-modern-enterprise/ Tue, 24 Feb 2026 18:14:58 +0000 https://enterprise-knowledge.com/?p=26497 Introduction Modern organizations face significant challenges in managing entitlements across their increasingly complex technology landscapes. The proliferation of cloud-native architectures and Software-as-a-Service (SaaS) platforms has dramatically expanded the digital footprint of most organizations. Critical assets, including sensitive data, analytics, collaboration … Continue reading

The post Graph-Based Security & Entitlements: Transforming Access Control for the Modern Enterprise appeared first on Enterprise Knowledge.

]]>
Introduction

Modern organizations face significant challenges in managing entitlements across their increasingly complex technology landscapes. The proliferation of cloud-native architectures and Software-as-a-Service (SaaS) platforms has dramatically expanded the digital footprint of most organizations. Critical assets, including sensitive data, analytics, collaboration tools, content, and AI applications, are now distributed across numerous platforms. Each platform leverages its own unique identity model, authorization patterns, and administrative tools, rendering unified entitlements management nearly impossible based on traditional means.

This fragmentation leads to entitlements becoming scattered, causing security teams to lose visibility into how access is genuinely granted. One system relies on roles, another uses attributes or tags, a third employs direct permissions, and a fourth utilizes nested group inheritance. This inconsistency creates both compliance gaps and operational hazards. For a deeper overview of why this problem is escalating across modern platforms, see Why Your Organization Needs Unified Entitlements.

To effectively address security and entitlement challenges, organizations must consolidate and connect disparate information sources, such as identity providers, application configurations, permission lists, and platform-specific rules. Furthermore, the increasing adoption of AI-assisted search and agentic workflows makes robust entitlements more critical than ever. Access controls now govern not only who can view a document but also what information an AI system is authorized to retrieve, summarize, and act upon.

This article explores why traditional entitlement management fails in cloud-first environments and presents graphs and digital twins as a practical architectural solution for unified entitlements. They provide a single, holistic definition of access rights applicable consistently across all systems and asset types.

 

The Hidden Risk: Relationship Blind Spots and Policy Drift

Zero Trust treats every access request as untrusted by default and requires explicit, policy-driven verification. As a result, it has become the standard direction for enterprise security, but many organizations still struggle to operationalize it. The reason is straightforward: access control decisions depend on fragmented entitlements spread across platforms, identity sources, and repositories. Policies are defined in business language and then implemented as platform-specific rules that drift over time.

Traditional role-based access control (RBAC) remains useful for clear-cut assignments. The challenge is that modern access is not granted in one step: it is accumulated through relationships such as team membership, project participation, shared workspaces, and delegated ownership. In practice, permissions frequently flow through multi-step chains, for example:

  1. A user is added to a project team.
  2. The project team has access to a workspace.
  3. The workspace contains folders and dashboards.
  4. Those resources link to datasets and reports.
  5. Service accounts execute jobs on the user’s behalf.

These access paths are often invisible to conventional security monitoring, which tends to focus on direct assignments. As the organization evolves, so do these relationships. Policy drift occurs gradually as platform-specific configurations diverge due to factors like personnel transitioning between teams, project status changes (closing or reopening), evolving client and regulatory policies, contractor rotation, and data product reclassification. This creates a gap between what the organization intends to enforce and what the systems actually enforce. EK explores how these unexpected access paths show up in real enterprise scenarios in Unified Entitlements: The Hidden Vulnerability in Modern Enterprises.

 

Graphs as a Natural Fit for Security and Entitlements

Graphs transform how organizations model and reason about access control by representing the enterprise as it truly exists: a network of interconnected entities. Nodes can represent identities such as people, groups, service accounts, and agents, as well as resources like documents, datasets, dashboards, and APIs; edges represent the relationships between them (e.g., member-of, owns, steward-of, can-view, and can-admin). In other words, graphs model security the way organizations operate: through relationships, not just static assignments.

Graphs turn a common authorization question (“Can User A access Resource B?”) into a relationship evaluation problem. The system traverses the graph to determine whether a valid permission path exists (and whether any conditions apply). The resulting traversal offers advantages in both speed and explainability, and the path itself serves as the justification: access is allowed because User A is part of Team X, Team X is assigned to Project Y, and Project Y has approved access to Workspace Z, which contains Resource B.
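As a simple illustration of this traversal, the sketch below (with invented identities, relationships, and resources) answers "Can User A access Resource B?" via breadth-first search. The returned path is the justification itself; a real entitlements graph would also evaluate conditions and policies along each edge.

```python
from collections import deque

# Hypothetical entitlements graph: typed relationship edges.
EDGES = {
    ("user:alice", "member-of", "team:x"),
    ("team:x", "assigned-to", "project:y"),
    ("project:y", "can-view", "workspace:z"),
    ("workspace:z", "contains", "resource:b"),
}

def access_path(identity, resource):
    """BFS over relationship edges; returns the permission path, or None."""
    queue = deque([(identity, [identity])])
    seen = {identity}
    while queue:
        node, path = queue.popleft()
        if node == resource:
            return path  # the path doubles as the audit-ready explanation
        for s, rel, o in EDGES:
            if s == node and o not in seen:
                seen.add(o)
                queue.append((o, path + [f"-{rel}->", o]))
    return None  # no valid permission path: access denied

path = access_path("user:alice", "resource:b")
```

Here `path` reads as a human-auditable chain (user → team → project → workspace → resource), while a query for an unconnected resource simply returns `None`.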

Graphs also unlock practical security analysis that is difficult to do with scattered point permissions:

  • Access path analysis: Identify direct and indirect routes to sensitive resources.
  • Least-privilege diagnostics: Reveal inherited access that “looks invisible” in traditional reviews.
  • Toxic combination detection: Surface risky permission pairings that enable escalation through composition.
  • Attack-path modeling: Visualize how lateral movement could occur through chained access.

 

The Unified Entitlements Service and the Entitlements Digital Twin

Operating a graph within a deliberate entitlement architecture greatly improves its value. EK’s unified entitlements guidance describes a Unified Entitlements Service (UES) pattern that centralizes policy intent and converts it into enforcement across systems, effectively acting as a “universal translator” for security policy. A useful way to think about this is as an entitlements digital twin: a continuously updated model of “effective access” across the enterprise. A practical UES plus digital twin implementation typically includes:

  • Ingestion and synchronization: Connect to identity sources, group structures, resource inventories, and existing permissions.
  • Identity resolution: Unify fragmented identities into a coherent view so access decisions are consistent everywhere.
  • Graph-based policy evaluation: Evaluate access using roles, attributes, and relationships, and return a decision and an explanation.
  • Federated enforcement and translation: Apply policy where access happens (applications, data access layers, portals, search experiences, and AI retrieval paths).
  • Evidence and provenance: Capture audit-ready traces showing which policy checks ran and which relationship path enabled (or denied) the decision.

Digital twins are valuable because they let security teams safely ask “What happens if…” questions before changes hit production. In my experience, three workflows drive the most value:

  • Change simulation: Before reorganizing teams, migrating a repository, onboarding a contractor workforce, or launching a new AI-enabled experience, you can simulate the impact: Who gains access? Who loses access? Which new relationship paths appear?
  • Policy validation: You can validate that high-risk assets have clean, defensible access paths and that exceptions are scoped, time-bounded, and reviewable.
  • Evidence on demand: Instead of assembling entitlement answers manually across systems, the graph produces a defensible view of effective access with the relationship path that enabled it.
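A change simulation of this kind can be as simple as removing an edge from a copy of the graph and diffing effective access before and after. The entities below are invented for illustration; production twins operate over synchronized identity and resource inventories rather than hard-coded sets.

```python
def reachable(identity, edges):
    """All nodes transitively reachable from an identity (effective access)."""
    seen, stack = {identity}, [identity]
    while stack:
        node = stack.pop()
        for s, _, o in edges:
            if s == node and o not in seen:
                seen.add(o)
                stack.append(o)
    return seen

EDGES = {
    ("user:alice", "member-of", "team:x"),
    ("team:x", "can-view", "workspace:z"),
    ("workspace:z", "contains", "dataset:d"),
}

before = reachable("user:alice", EDGES)
# Simulate the reorg before it hits production: Alice leaves team X.
after = reachable("user:alice", EDGES - {("user:alice", "member-of", "team:x")})
lost = before - after  # access the proposed change would revoke
```

Running the same diff in the other direction (`after - before`) answers the complementary question — who *gains* access — before any change is applied to live systems.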

For a deeper walkthrough of the UES components and interactions, see Inside the Unified Entitlements Architecture.

 

The Semantic Layer and AI: Entitlements as the Safety Boundary

Entitlements are not purely technical. They depend on business meaning: data classification, ownership, stewardship, domains, and regulatory obligations. A semantic layer connects raw identity and permission data to that shared context so policies can be expressed in the terms the business actually uses to operate.

AI intensifies the challenges by enhancing the speed and scope of retrieval, summarization, and recombination. If an application can pull content quickly, then entitlement drift propagates faster and becomes harder to unwind after the fact. That is why entitlements are the safety boundary for AI-enabled discovery and agentic workflows: what an AI experience can retrieve must be constrained by the same effective access as the user behind it.

The practical implication is that “AI-safe access control” is really “business-aligned access control.” When sensitivity, stewardship, and usage obligations are encoded in the semantic layer and connected to identities, resources, and relationships, the organization can safely scale search and AI experiences without relying on scattered point controls or brittle platform-specific rules.  

 

UES Implementation Considerations and Getting Started

Graph-based security is not just a data model: it is an operating capability. The fastest way to make progress is to treat unified entitlements as both an architecture and a change program, delivered in phases. 

How to Get Started

1. Assess and prioritize: Inventory your highest-risk assets and repositories, map where sensitive content and data live, and pick the domain where entitlement failure would have the highest impact.

2. Standardize policy intent: Define a canonical policy model (roles, attributes, and relationships) in business terms before you try to enforce it everywhere.

3. Pilot UES and entitlement graph: Stand up the Unified Entitlements Service and the entitlements graph for one domain and one measurable use case. Prove ingestion, evaluation, and evidence end-to-end.

4. Expand and improve: Onboard additional systems in waves, translate policies consistently into enforcement points, and continuously monitor for drift and new access paths.

Implementation Considerations

1. Data quality and lifecycle hygiene: If identity and resource metadata are stale, the graph will be confidently wrong. Establish ownership, lifecycle expectations, and lightweight quality checks.

2. Identity resolution: Unify people, contractors, service accounts, integrations, and agent identities into coherent profiles so policy is enforced consistently across systems.

3. Exception workflows: Define how exceptions are requested, approved, time-bounded, and reviewed so temporary access does not become permanent drift.

4. Evidence and auditability by design: Capture decision context by default: what was accessed, what policy was evaluated, and what relationship path enabled the decision to automate audits.

 

Conclusion

In summary, unified entitlements is about reducing uncertainty. Graphs and digital twins provide the structure to model how access actually works, the tooling to simulate change before it becomes disruption, and the evidence to defensibly prove who can access what and why. As AI adoption accelerates, this capability becomes even more critical because entitlements define the safety boundary for what AI systems can retrieve and act on.

If your organization is looking to operationalize unified entitlements, especially to reduce policy drift, strengthen Zero Trust controls, or make AI-enabled discovery safer, Enterprise Knowledge can help. We partner with teams to assess entitlement risk, define a scalable Unified Entitlements Service approach, and build an entitlements digital twin roadmap aligned to your governance model and technical ecosystem. Contact us to start your journey to unified entitlements.

The post Graph-Based Security & Entitlements: Transforming Access Control for the Modern Enterprise appeared first on Enterprise Knowledge.

]]>
Making Search Less Taxing: Leveraging Semantics and Keywords in Hybrid Search https://enterprise-knowledge.com/making-search-less-taxing-leveraging-semantics-and-keywords-in-hybrid-search/ Mon, 09 Feb 2026 17:16:08 +0000 https://enterprise-knowledge.com/?p=26486 Explore how Tax Analysts, the nonpartisan nonprofit behind Tax Notes, upgraded its search functionality to help subscribers both easily find information and discover unexpected, relevant content. At KMWorld 2025, Chris Marino of Enterprise Knowledge partnered with Jaime Martin, Senior Product Manager … Continue reading

The post Making Search Less Taxing: Leveraging Semantics and Keywords in Hybrid Search appeared first on Enterprise Knowledge.

]]>
Explore how Tax Analysts, the nonpartisan nonprofit behind Tax Notes, upgraded its search functionality to help subscribers both easily find information and discover unexpected, relevant content.

At KMWorld 2025, Chris Marino of Enterprise Knowledge partnered with Jaime Martin, Senior Product Manager and Business Analyst at Tax Analysts, to discuss how semantic and keyword search strategies can improve search experiences. Users of the platform had reported challenges with findability and usability; Martin and Marino’s solution increased the flexibility and precision of search results, improving users’ overall experience with the tool.

Session attendees learned important takeaways, such as:

  • Keyword search remains relevant in modern search systems, excelling at exact matches and specific terminology.
  • Vector search captures semantic meaning and context, leading to more comprehensive results that satisfy both precise and conceptual queries.
  • By combining keyword and semantic methods, hybrid search allows for improved recall and precision.
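One common way to combine the two methods is weighted score fusion, sketched below with purely illustrative scores and an assumed blend weight (reciprocal rank fusion is a popular alternative). Keyword scores (e.g., BM25) and vector similarities live on different scales, so each list is normalized before blending.

```python
def hybrid_rank(keyword_scores, vector_scores, alpha=0.6):
    """Blend normalized keyword and vector scores; alpha weights keyword match."""
    def normalize(scores):
        top = max(scores.values()) or 1.0
        return {doc: s / top for doc, s in scores.items()}
    kw, vec = normalize(keyword_scores), normalize(vector_scores)
    docs = kw.keys() | vec.keys()
    fused = {d: alpha * kw.get(d, 0.0) + (1 - alpha) * vec.get(d, 0.0)
             for d in docs}
    return sorted(fused, key=fused.get, reverse=True)

# Illustrative scores: doc1 is an exact keyword hit, doc3 a semantic-only match.
ranking = hybrid_rank(
    keyword_scores={"doc1": 12.4, "doc2": 3.1},
    vector_scores={"doc2": 0.71, "doc3": 0.88},
)
```

Note that `doc3`, which no keyword query would surface, still appears in the fused ranking — that recall gain, without abandoning exact-match precision, is the core argument for hybrid search.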

The post Making Search Less Taxing: Leveraging Semantics and Keywords in Hybrid Search appeared first on Enterprise Knowledge.

]]>
Understanding the New Knowledge, Data, and AI Ecosystem: Trends in Enterprise AI Architecture https://enterprise-knowledge.com/understanding-the-new-knowledge-data-and-ai-ecosystem-trends-in-enterprise-ai-architecture/ Thu, 05 Feb 2026 15:47:31 +0000 https://enterprise-knowledge.com/?p=26470 The way we consume, create, and engineer information has changed dramatically, with the last several years demonstrating a marked transformation due to Artificial Intelligence (AI). According to a recent report,  Generative AI adoption has reached a tipping point, with nearly … Continue reading

The post Understanding the New Knowledge, Data, and AI Ecosystem: Trends in Enterprise AI Architecture appeared first on Enterprise Knowledge.

]]>
The way we consume, create, and engineer information has changed dramatically, with the last several years demonstrating a marked transformation due to Artificial Intelligence (AI). According to a recent report, Generative AI adoption has reached a tipping point, with nearly 40% of adults adopting GenAI within just two years. That’s faster than the rise of smartphones and other major technological shifts, sending a clear message that AI integration can no longer be considered “experimental” and is instead becoming core infrastructure for mature knowledge-and-data-driven enterprises.

At the same time, while leaders see it as the next major organizational disruptor, that excitement is tempered by very real organizational challenges. Poor data quality, unclear governance models, and the sheer volume of overlapping tools flooding the market have left many organizations overwhelmed rather than empowered, increasing pressure to demonstrate return on investment (ROI) and move beyond pilots and proofs of concept.

By 2025, many organizations experienced record levels of churn driven by decision paralysis, stalled pilots, and slow realization of tangible value. Early adopters moved fast (but not always cohesively), resulting in disconnected platforms, siloed data estates, and AI initiatives that failed to scale. We are now seeing a shift where leaders are recognizing that, in this environment, standing still is not a safe choice but a step back; it widens the gap between leaders who operationalize AI and those who remain stuck in perpetual experimentation.

At Enterprise Knowledge, the question we’re now helping organizations navigate is no longer whether AI should be adopted or how to select the “best” AI tool, but how to architect it cohesively and at scale. It’s about intentionally designing a connected knowledge, data, and AI ecosystem. It’s through this shift in thinking that we explore the core components and key considerations of the technology ecosystem we’re seeing emerge and scale across organizations. This shift focuses on foundational data and governance for modern knowledge platforms and AI-enabled experiences. In this blog, we’ll outline what it takes to move beyond experimentation and build an ecosystem that supports the evolution of technology and is readily adopted by the organization.

A visual of a technical architecture for the new knowledge, data, and AI ecosystem

Mixed User Ecosystem: Designing for Humans and Machines

For a long time, the standard assumption was simple: there would always be a human at the end of the content and data pipeline and curation. A content manager would create and approve workflows, a full-stack developer would expose data through an API and a web app, and a data analyst would write SQL and turn datasets into dashboards inside a BI tool. Content and data curation, governance, and delivery were all designed with those human intermediaries in mind.

This assumption no longer holds true. Today, a growing share of an organization’s “knowledge and data consumers” aren’t people alone; they’re increasingly AI models, algorithms, and autonomous agents. These systems need to discover, understand, and use data on their own, without a person in the loop to interpret it, clean it up, or fill in the gaps. This shift is raising the bar for how data and knowledge must be managed. It’s no longer enough for data to be intuitive for human comprehension – it also needs to be recognizable and understandable to machines.

While the requirements between these human and AI consumers overlap, they are not identical. Whereas people can infer meaning from fragmented content or tribal knowledge, AI solutions need something stricter. They need content and data that is granular, self-describing, consistent, and structurally complete. Ambiguity, missing context, or informal conventions that humans can work around quickly become blockers for machines. In short, we’re moving from a world where data was explained to one where it must explain itself. That shift has profound implications for how we design knowledge, data, and AI platforms and the capabilities that sit on top of them.

Ubiquitous AI (within Enterprise Applications)

AI is rapidly becoming a foundational layer in enterprise software, embedded directly into everyday applications rather than added as a standalone capability. 

We see this integration everywhere, driving personalization and automation of repetitive tasks directly within enterprise applications, such as Adobe Photoshop suggesting edits, GitHub Copilot assisting with code, and enterprise platforms like ServiceNow and Salesforce putting AI directly into their workflows.

As AI embeds into the background, making applications smarter and more autonomous, user adoption is naturally rising as well. This trend is supported by findings from a 2025 Gallup Workforce study, which indicated that 45% of U.S. employees use AI for work-related tasks at least a few times annually, with much higher usage in knowledge-based roles. Technology-focused employees, in particular, are using AI to summarize information, generate ideas, and learn new skills (mostly through chatbots, virtual assistants, and writing tools). More advanced capabilities, like coding assistants and analytics platforms, tend to be adopted by employees who are already using AI frequently.

The Semantic & Context Layer

Organizations are realizing that basic or “naive” AI is falling short of delivering on its promise (especially in complex enterprise use cases). The problem usually isn’t the models themselves; it’s that organizations lack structured, shared, and repeatable ways to provide meaning and context to their fragmented knowledge and data.

This convergence of the semantic layer and operational context is setting the new foundation for reliable AI performance and becoming the focal point within modern enterprise architectures. It is creating a modular, “glass box” foundation that favors transparency and factual accuracy over black-box models that rely on pattern matching. 

One vendor-agnostic approach gaining traction, which helps organizations take practical steps no matter their technical maturity, is the semantic layer framework. Built on proven foundations such as metadata, taxonomies, ontologies, business glossaries, and graph solutions, this layer provides the standards and methods for organizing knowledge and data while enabling companies to separate their core knowledge assets from specific applications. The framework isn’t new, but it is becoming essential as AI moves from experimentation to production.

A metadata-first semantic layer provides a unified, standards-based framework for context, governance, and discovery across enterprise data. It makes data accessible through a shared logical layer (often a knowledge graph) exposed via APIs in machine-readable formats that work equally well for people and AI agents. Organizations that have a shared semantic contract in place are able to connect secure, metadata-rich knowledge assets regardless of where they’re created or consumed, and serve both human and machine needs from the same source of meaning.
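As an illustrative sketch, this is what “one shared source of meaning for humans and machines” can look like in miniature. The asset records, field names, and topics below are our assumptions, not a specific product’s schema; the point is that a single metadata-driven lookup can serve a search UI and an AI agent alike:

```python
# Minimal sketch of a metadata-first lookup over knowledge assets.
# All identifiers and metadata fields are illustrative assumptions.
ASSETS = [
    {"id": "report-42", "type": "ResearchReport", "topics": ["oncology"],
     "owner": "research-team", "format": "pdf"},
    {"id": "survey-07", "type": "SurveyTranscript", "topics": ["oncology", "access"],
     "owner": "field-team", "format": "docx"},
]

def find_assets(topic: str) -> list[dict]:
    """One 'semantic contract': the same call backs a search UI or an agent's API request."""
    return [a for a in ASSETS if topic in a["topics"]]

for asset in find_assets("oncology"):
    print(asset["id"], "->", asset["type"])
```

In a production semantic layer the `ASSETS` store would be a knowledge graph exposed via APIs, but the contract idea is the same: consumers query shared metadata, not system-specific silos.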
Finally, the graphs we employ are evolving beyond standard knowledge graphs toward context graphs. While knowledge graphs define and map facts and entities, context graphs add the ‘when, where, and why’ through decision traces (the reasoning behind a judgment), temporal awareness (signals like time and location), working memory (user intent and in-progress interactions), and user profile information (role, preferences, and interaction history).
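A rough sketch of the extra dimensions a context graph layers on top of a knowledge graph’s facts; the field names and sample values here are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ContextRecord:
    """Context attached to a knowledge-graph fact: the when, where, and why."""
    fact_id: str               # entity or fact from the underlying knowledge graph
    decision_trace: list[str]  # the "why": reasoning steps behind a judgment
    observed_at: datetime      # the "when": temporal signal
    location: str              # the "where"
    working_memory: dict       # user intent and in-progress interaction state
    user_profile: dict         # role, preferences, interaction history

ctx = ContextRecord(
    fact_id="grant-2024-17",
    decision_trace=["matched funding criteria", "flagged for expert review"],
    observed_at=datetime(2026, 3, 18, tzinfo=timezone.utc),
    location="portfolio-review",
    working_memory={"intent": "compare program outcomes"},
    user_profile={"role": "researcher"},
)
print(ctx.fact_id, "->", ctx.decision_trace[-1])
```

The design choice worth noting: the fact itself stays in the knowledge graph, while the context record is separate and time-stamped, so the same fact can carry different context for different users and tasks.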

Knowledge Asset Layer & AI Readiness

Many enterprise architecture challenges stem from the absence of a clear framework to connect and interpret all forms of organizational knowledge: structured data, unstructured content, human expertise, and business rules, collectively known as organizational knowledge assets.  

As a result, today’s data and technology leaders are being stretched in every direction. While data architects and engineers are still wrestling with very real, persistent issues like data silos, misalignment with business goals, and pressure to show immediate ROI, their mandates have significantly expanded. A big driver of this expansion and complexity is Generative AI, which has pulled unstructured data into the spotlight – emails, documents, chat logs, policies, and research reports (over 80% of organizational assets). This information usually lives in different systems, is owned by different teams, and has historically sat outside the purview of data and analytics leaders. As a result, many of the challenges organizations face with AI today stem from the lack of a clear framework for organizing and interpreting their knowledge assets within the context of the business.

In this realm, the biggest architectural shift happening now is toward connection and interoperability. Organizations are realizing that AI readiness isn’t just about models. It’s about preparing operating models, systems, and teams so AI can do what it does best without having to navigate organizational silos or politics. As such, organizations building these emerging AI ecosystems are investing in solutions that support reliable, structured, and understandable knowledge management as a trusted foundation, one that can turn fragmented information into true knowledge intelligence for the organization.

Monitoring, Observability, and Unified Access & Entitlements

The quality of AI output is only as good as the data behind it. That’s why unifying knowledge management, data, and AI governance matters: it enforces consistent data quality standards and ensures that AI systems draw on accurate, properly secured data to produce more reliable outcomes. Responsible enterprise AI architecture depends on a unified governance framework built around three essentials: strong monitoring, comprehensive observability, and strict access controls, all critical for managing the inherent risks of autonomous systems.

AI monitoring and transparency start at design time: AI solutions should have a clearly defined purpose, explicit constraints, and a known set of tools they are allowed to use. For agentic solutions, agents should also expose their reasoning steps to users, while backend systems log context, tool calls, and memory usage for debugging. We have been working with observability platforms (e.g., MLflow, Azure AI Foundry) to help teams trace decision paths, detect anomalies, and continuously refine behavior using feedback from both users and testers. Done well, these governance frameworks and guardrails support innovation instead of slowing it down.
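The backend logging described above can be sketched as a simple decorator that records every tool call an agent makes. The trace fields and the stand-in tool below are our assumptions; a real deployment would ship these records to an observability platform rather than keep them in memory:

```python
import time
from functools import wraps

TRACE_LOG = []  # stand-in for an observability backend

def traced_tool(fn):
    """Wrap an agent tool so each call is logged with its name, args, and latency."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE_LOG.append({
            "tool": fn.__name__,
            "args": args,
            "latency_ms": round((time.perf_counter() - start) * 1000, 2),
        })
        return result
    return wrapper

@traced_tool
def search_assets(query: str) -> list[str]:
    # Hypothetical stand-in for a real retrieval call
    return ["report-42"]

search_assets("oncology outcomes")
print(TRACE_LOG[0]["tool"], TRACE_LOG[0]["latency_ms"], "ms")
```

The same wrapper pattern is how platforms like MLflow instrument tool calls under the hood: the agent code stays unchanged while every invocation leaves an auditable trace.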

When it comes to security, AI models and agents need a very different approach than traditional, static access policies. Instead of rigid, application- or group-level permissions, AI requires dynamic, context-aware access, granted at the domain, concept, and data levels, based on the task at hand. Unified entitlements are especially powerful here, providing a consistent definition of access rights across all knowledge assets (for both humans and AI). In practice, this means restricting an agent’s access to specific attributes on specific nodes within a graph setting, rather than locking down or exposing entire systems or datasets.
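A minimal sketch of what attribute-level entitlements look like in practice, assuming a hypothetical policy table keyed by principal and node (real systems would evaluate such policies dynamically, per task and context):

```python
# Illustrative entitlement policy: an agent may read only specific
# attributes on specific nodes, never whole systems or datasets.
POLICY = {
    ("agent:summarizer", "patient-node"): {"allowed_attrs": {"region", "age_band"}},
}

def can_read(principal: str, node: str, attr: str) -> bool:
    """Check whether a principal may read one attribute of one node."""
    rule = POLICY.get((principal, node))
    return bool(rule) and attr in rule["allowed_attrs"]

print(can_read("agent:summarizer", "patient-node", "region"))  # permitted attribute
print(can_read("agent:summarizer", "patient-node", "name"))    # sensitive attribute stays hidden
```

Because the same policy table can be consulted for a human user or an AI agent, this is one concrete way to get the “unified entitlements” described above: one definition of access rights, enforced at the finest grain the graph supports.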

Finally, organizations are investing in solutions that give them reliable, consistent ways to observe and evaluate AI efficacy. Alongside broader quality, risk, and safety checks, observability includes task-level metrics such as intent resolution, task adherence, tool selection accuracy, and response completeness. Advanced telemetry analytics provide additional insight into performance, latency, cost, and efficiency trade-offs, even when these aren’t formally tracked as success metrics.
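One of these task-level metrics, tool selection accuracy, can be computed directly from logged agent runs. The run records below are hypothetical; the sketch only shows the shape of the calculation:

```python
# Illustrative logged runs: which tool the agent should have used vs. which it used.
runs = [
    {"expected_tool": "search",    "used_tool": "search"},
    {"expected_tool": "summarize", "used_tool": "search"},   # a mis-selection
    {"expected_tool": "search",    "used_tool": "search"},
]

def tool_selection_accuracy(runs: list[dict]) -> float:
    """Fraction of runs where the agent picked the expected tool."""
    hits = sum(r["used_tool"] == r["expected_tool"] for r in runs)
    return hits / len(runs)

print(f"tool selection accuracy: {tool_selection_accuracy(runs):.2f}")
```

Intent resolution, task adherence, and response completeness follow the same pattern: define the expected outcome per run, log the actual outcome, and aggregate over time to spot regressions.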

The bottom line is that today’s AI systems, especially agentic and non-deterministic models, require going beyond traditional observability. By implementing end-to-end observability frameworks, organizations are starting to gain deep visibility into internal decision-making, trace execution lineage from input to output, monitor performance, and proactively detect unauthorized tool or API use.

Conclusion & Looking Ahead

The abundance of AI tools and the rise of AI agents are fundamentally changing how we engage with technology. Successfully leveraging AI within the enterprise starts with understanding the full knowledge, data, and AI ecosystem. 

The primary effort, and the primary value, lies first in connecting structured and unstructured data, making human expertise machine-readable, and capturing business rules in a unified, context-rich foundation. As a result, the conventional architectures of knowledge portals, data lakehouses, and data science workbenches, as we have known them, are becoming obsolete. A new, unified KM, data, and AI architecture is now necessary: one that can support distributed intelligence, ensure secure access to knowledge assets regardless of their source, and maintain continuous monitoring for modular, reliable, and consistent performance.

If you are in the process of evaluating your ecosystem and architecture, learn more from our case studies on how other organizations are tackling this, or email us at [email protected] for more information or support. 

The post Understanding the New Knowledge, Data, and AI Ecosystem: Trends in Enterprise AI Architecture appeared first on Enterprise Knowledge.