The post The Market and Applications Working Group — Bringing the Next G Alliance 6G Vision to Vertical Markets appeared first on ATIS.
Since its formation in June 2025, the MAWG has focused on ecosystem requirements, market needs, and the economic and technical factors that will drive 6G commercialization in North America. Its membership includes leading communications service providers, network infrastructure vendors, smart device and applications developers, and public safety organizations. The group works closely with industry partners to gather insights, evaluate emerging applications and use cases, and identify target customers and market opportunities.
The mission of the MAWG is to clearly articulate the unique value 6G can deliver and to develop a practical blueprint that addresses ecosystem barriers, market readiness, and pathways to commercial realization for North America. The Group has prioritized verticals based on their economic impact, strategic importance to North American competitiveness, and readiness to benefit from 6G capabilities, including clear use cases that demand enhanced connectivity, measurable market size, and growth potential. The first phase of this work includes Utilities, Public Safety, and Agriculture verticals, with plans to expand into additional sectors, for example, Automotive, Smart Manufacturing, Transportation and Supply Chain, Mining, Oil, and Gas.
For each vertical, the scope includes:
The MAWG is collaborating with academia and industry leaders through external speaker sessions to broaden perspectives, exchange ideas, and deepen its understanding of challenges and opportunities for North America.
As it advances its vertical analyses, the Next G Alliance MAWG remains committed to fostering collaboration, driving innovation, and aligning diverse stakeholders toward a shared 6G vision. By bridging industry insights with academic research and transforming ideas into actionable blueprints, the goal is to ensure that North America not only leads in 6G technology adoption but also reaps the economic and societal benefits it promises. The journey to 6G is a collective endeavor. Together, we are building the foundation for a connected future that empowers every sector and community.
The post Preparing Telecom for the Quantum-Safe Future: Why a Telecom-Specific CBOM Matters appeared first on ATIS.
From the core to the edge, from base stations to hyperscalers, cryptographic operations are embedded in every layer of the network. But how many telecom providers today can confidently answer: Where are we using cryptography? What algorithms are in use? Which are vulnerable?
The first step in any security transformation, especially one as consequential as migrating to quantum-safe cryptography (QSC), is understanding your inventory of cryptographic assets. Without that visibility, it’s impossible to evaluate risk, plan migrations, or ensure compliance with emerging national and international mandates.
That’s where the Cryptographic Bill of Materials (CBOM) comes in.
CBOM is becoming an essential tool in the IT industry, allowing organizations to document and manage cryptographic algorithms, protocols, keys, and certificates embedded in their systems. By providing a machine-readable, standardized inventory of cryptographic usage, CBOM helps teams understand complex environments, identify weak cryptography, and plan for future transitions.
However, telecom security differs from general IT.
Telecom networks rely on a range of domain-specific cryptographic protocols, particularly within 5G mobile networks. Protocols such as 5G-AKA, PRINS, MILENAGE, and EAP-AKA are not captured in general-purpose CBOM schemas. Furthermore, 5G network functions communicate over standardized interfaces (such as N1–N32), each with unique trust and encryption requirements. Finally, many cryptographic operations are offloaded to hardware (HSMs, TPMs, or embedded secure elements), making the cryptographic inventory of telecom networks even more challenging to capture.
To truly support telecom providers in their transition towards implementing QSC, CBOM definitions must be extended to reflect the operational and architectural realities of the telecom domain.
To address this need, ATIS is working closely with leading telecom providers and ecosystem partners to define a Telecom-specific CBOM standard, one that builds on existing CycloneDX foundations while introducing the schema, context, and tooling necessary for 5G and beyond.
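To make the idea concrete, here is a minimal sketch of what a telecom-extended CBOM entry might look like, expressed as Python building a JSON inventory. The `telecomProperties` field and its contents are purely hypothetical illustrations of the kind of domain metadata discussed above; they are not part of CycloneDX or any published ATIS schema, and the `cryptoProperties` shape is only loosely modeled on CycloneDX.

```python
import json

def telecom_cbom_component(name, algorithm, interface, hardware_backed):
    """One cryptographic-asset entry with hypothetical telecom metadata."""
    return {
        "type": "cryptographic-asset",  # CycloneDX-style component type
        "name": name,
        "cryptoProperties": {"assetType": "algorithm", "algorithm": algorithm},
        # Hypothetical telecom extension: which 3GPP interface uses this
        # asset, and whether it is offloaded to an HSM/TPM.
        "telecomProperties": {"interface": interface,
                              "hardwareBacked": hardware_backed},
    }

inventory = [
    telecom_cbom_component("subscriber-auth", "5G-AKA", "N1", True),
    telecom_cbom_component("roaming-protection", "PRINS", "N32", False),
]
print(json.dumps(inventory, indent=2))
```

An inventory in this shape is machine-readable, so weak or quantum-vulnerable algorithms can be found by a simple query rather than a manual audit.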
This Telecom CBOM effort will enable the industry to:
By establishing a Telecom CBOM standard now, we lay the foundation for automated risk management, consistent cryptographic assurance, and confident migration planning before quantum disruption becomes a reality.
This work is open, collaborative, and essential. If you or your organization is interested in contributing to the development of the Telecom CBOM standard or participating in pilot implementations, we invite you to contact ATIS and get involved.
The path to quantum-safe telecom begins with understanding your cryptography. CBOM is how we achieve that goal together.
The post Reimagining Wireless Intelligence: A Foundation Model for the Physical Layer appeared first on ATIS.
The post Beyond Chatbots: Why Telecom Needs a Cognitive Assistant for the 6G Era appeared first on ATIS.
Generative AI excels at complex pattern recognition and intricate sequence generation. However, it suffers from three systemic problems:
Cognition, in contrast, goes much further. It involves understanding not just context but the current situation (e.g., what goals the system is trying to achieve). It learns from experience to improve its operation, reasons through complex and sometimes conflicting trade-offs (e.g., latency vs. security vs. cost vs. energy savings), and adapts in real time to dynamic, changing demands. That is the kind of intelligence future networks will require.
A cognitive assistant for telecom is not just a smarter chatbot; it is a foundational AI system that continuously senses the state of the network, learns from traffic and fault patterns, reasons through complex optimization trade-offs (considering latency, energy and security), and takes autonomous actions – configuring, optimizing, and healing the network – with minimal human intervention.
Cognitive networks represent a paradigm shift from pre-defined rules and automation scripts to intelligent, adaptive autonomy. These systems are architected to continuously sense real-time conditions across heterogeneous domains (e.g., RAN, transport, core, cloud, and services), learn from operational telemetry and fault histories to model complex behaviors, reason and decide based on operator intent and encoded knowledge (e.g., topology, policy constraints, and 3GPP technical specifications), and act and adapt autonomously through self-optimization, self-healing, and resource orchestration. The hallmark of the Cognitive Assistant is its ability to employ various types of logic to reason and to provide explanations. This intelligent autonomous system enables proactive and resilient network performance, even under unforeseen conditions and at the scale and complexity demanded by 5G-Advanced and future 6G systems.
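As a toy illustration of the sense-learn-reason-act loop described above (not the Cognitive Assistant's actual design), consider a minimal closed loop that compares learned latency telemetry against an operator intent and explains its decision. All names, thresholds, and actions are invented for the example.

```python
from collections import deque

class CognitiveLoop:
    def __init__(self, intent_max_latency_ms=20.0):
        self.intent = intent_max_latency_ms   # operator intent (hypothetical)
        self.history = deque(maxlen=100)      # "learned" telemetry window

    def sense(self, latency_ms):
        """Ingest one telemetry sample."""
        self.history.append(latency_ms)

    def reason(self):
        """Compare the learned baseline against operator intent."""
        if not self.history:
            return "no-op"
        avg = sum(self.history) / len(self.history)
        return "scale-out" if avg > self.intent else "no-op"

    def act(self):
        """Return a decision plus a human-readable explanation."""
        action = self.reason()
        avg = sum(self.history) / len(self.history)
        return action, f"avg latency {avg:.1f} ms vs intent {self.intent} ms"

loop = CognitiveLoop()
for sample in (18.0, 25.0, 30.0):
    loop.sense(sample)
action, why = loop.act()
print(action, "-", why)
```

The explanation string is the point: unlike an opaque automation script, every action carries the reasoning that produced it, which is the auditability property the Cognitive Assistant is meant to provide at scale.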
The proliferation of mobile devices and the explosive growth in data traffic are severely exacerbating the challenge of spectrum scarcity. Existing static spectrum management models are inherently inefficient, leading to wasted resources and an inability to adapt to rapidly changing service demands. The Cognitive Assistant enables real-time adjustments, moving beyond static rules to truly opportunistic and efficient spectrum utilization, which is fundamental for achieving the promised performance of 6G.
The energy consumption of 5G networks is a growing concern, as it is estimated to be three to five times higher than that of 4G systems. This increase is due to wider bandwidths, more channels, and more complex equipment architectures. Additionally, the use of higher frequency bands in 5G necessitates a greater number of base stations for equivalent coverage, further increasing deployment costs and energy footprints. Current operational management systems often lack the flexibility and intelligence to dynamically adjust the operational status of base stations in response to real-time changes in user traffic, resulting in significant energy waste during periods of low demand.
The 6G era demands a commitment to “lower-carbon wireless coverage”. The Cognitive Assistant can dynamically manage network components for optimal energy consumption by understanding how the situation is changing and learning experientially from its operation. This capability positions cognitive AI not just as a performance enhancer but as a critical enabler for sustainable and cost-effective network operations, addressing a key strategic imperative for telecom operators.
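A highly simplified sketch of the kind of traffic-aware energy policy described above: cells are powered down when observed load falls below a threshold, provided neighboring cells can absorb coverage. The threshold, load figures, and the `neighbors_ok` flag are invented for illustration; a real system would reason over coverage, mobility, and service constraints.

```python
def sleep_decisions(cell_load, low_load=0.15, neighbors_ok=True):
    """Return the set of cells that can be powered down this interval."""
    return {cell for cell, load in cell_load.items()
            if load < low_load and neighbors_ok}

# Illustrative overnight load snapshot (fraction of cell capacity in use).
overnight = {"cell-A": 0.05, "cell-B": 0.40, "cell-C": 0.10}
print(sorted(sleep_decisions(overnight)))
```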
The inherent complexity of modern networks makes it increasingly difficult to manage and enforce security protocols effectively. Furthermore, the integration of AI into 6G networks introduces new concerns regarding data privacy and algorithmic transparency. The Cognitive Assistant includes the ability to provide an “audit trail” and “explain” decisions to engineers and auditors. In an environment with complex cyber threats, the proliferation of AI for detection and response necessitates a high degree of trust in AI decisions. The Cognitive Assistant inherently supports auditability and explainability, meeting EU AI Act requirements and addressing a gap in existing 5G AI/ML frameworks. This directly addresses the need for algorithmic transparency and trustworthiness, making the Cognitive Assistant uniquely suited for high-stakes security applications where accountability is non-negotiable.
This capability elevates the role of the Cognitive Assistant beyond merely detecting threats to building inherently trustworthy and auditable autonomous security systems, a foundational requirement for critical infrastructure. In addition, it supports dynamic microsegmentation aligned with Zero Trust principles, preventing lateral movement in Mobile Edge Computing (MEC). By continuously assessing device posture and user behavior, the Cognitive Assistant enables adaptive access control, the core principle of Zero Trust security. The Cognitive Assistant can also integrate fragmented data from Security Information and Event Management (SIEM), Endpoint Detection and Response (EDR), and the Network Data Analytics Function (NWDAF), a gap noted in the ATIS 5G Enhanced Zero Trust analysis.
Modern telecom networks are characterized by extreme complexity, encompassing multiple layers of virtualized resources, software-defined components, and a diverse mix of new technologies and legacy systems from various vendors. The Cognitive Assistant can continuously sense the network state across heterogeneous domains (RAN, transport, core, cloud, and services) and learn from these diverse environments.
The inevitable shift towards automated and AI-driven networks creates a significant skills gap within the telecom workforce. The Cognitive Assistant is designed to understand natural language queries and explain decisions to engineers and auditors. These human-centric interaction features directly mitigate the skills gap. Engineers do not need to become AI developers; rather, they need to become proficient operators and interpreters of AI-driven insights. Successful AI adoption in telecom, therefore, is not solely about technological advancement; it is equally about enabling effective human-AI collaboration and ensuring a smooth workforce transformation.
To realize this vision of intelligent autonomy, the industry needs more than traditional AI. As telecom networks grow in complexity, there’s a rising need for AI systems that can go beyond pattern recognition to support contextual, rules-based decision-making. A Neuro-Symbolic Cognitive Assistant (NeSy) represents a next-generation approach—combining machine learning with symbolic reasoning to meet the specific demands of telecom operations. NeSy combines the fluency and learning power of transformers with the structure, reasoning, and verifiability of symbolic AI. This hybrid approach embodies two fundamental aspects of intelligent cognitive behavior: the ability to learn continuously from experience and the capacity to reason based on acquired, structured knowledge. This integration leads to enhanced generalization capabilities, improved interpretability, and greater robustness. NeSy integrates:
Let’s take 5G network slicing as an example. Slices must be dynamically allocated and optimized for various services, including gaming, IoT, and emergency response. NeSy can:
The result? Fewer outages, better resource use, and full transparency.
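One way to picture the neuro-symbolic division of labor in the slicing example: a learned component proposes a bandwidth allocation, and a symbolic rule layer repairs any proposal that violates a hard policy, returning an explanation alongside the result. The slices, demand figures, and the emergency-floor rule are invented for illustration, and a trivial proportional function stands in for the learned model.

```python
def neural_proposal(demand):
    """Stand-in for a learned model: allocate bandwidth proportionally."""
    total = sum(demand.values())
    return {slice_name: d / total for slice_name, d in demand.items()}

def symbolic_repair(alloc, emergency_floor=0.2):
    """Hard symbolic rule: the emergency slice always keeps its reserve."""
    current = alloc.get("emergency", 0.0)
    if current >= emergency_floor:
        return alloc, "no violation"
    deficit = emergency_floor - current
    # Rescale the other slices to free up the reserved share.
    others = {s: v for s, v in alloc.items() if s != "emergency"}
    scale = (1.0 - emergency_floor) / sum(others.values())
    repaired = {s: v * scale for s, v in others.items()}
    repaired["emergency"] = emergency_floor
    return repaired, f"raised emergency slice by {deficit:.2f} (policy floor)"

alloc = neural_proposal({"gaming": 60, "iot": 35, "emergency": 5})
final, explanation = symbolic_repair(alloc)
print(final, "|", explanation)
```

The learned part adapts to demand; the symbolic part guarantees the policy invariant and explains any intervention, which is exactly the combination of learning and verifiable reasoning the NeSy approach targets.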
NeSy interoperates with and enhances the functionality of key 3GPP Network Functions by enabling causal reasoning and refining policies using symbolic logic. The symbolic component, with its explicit reasoning and structured knowledge, is particularly valuable in telecom scenarios where certain critical data might be sparse, sensitive, or require strict adherence to pre-defined rules, thereby enhancing robustness and efficiency where purely data-driven approaches might fall short. For example:
Cognitive autonomy is no longer optional. As networks become more dynamic and service demands escalate, telecom operators must adopt AI systems that can reason, learn, and act with intent. The ATIS AI Network Applications (ANA) group is advancing the requirements for NeSy, a neuro-symbolic cognitive assistant designed to serve as the intelligent fabric of next-generation networks—self-evolving, highly autonomous, and deeply aware of the complexities of wireless environments. This work aligns with and extends industry efforts such as ETSI ENI (Experiential Networked Intelligence), which defines an extension of the Observe-Orient-Decide-Act (OODA) closed-loop AI mechanisms for network cognition and adaptation. In particular, NeSy extends ENI’s OODA implementation to include meta-cognition and advanced reasoning and planning, providing explainable, knowledge-driven reasoning.
In future work, NeSy will be integrated with the Wireless Physical Foundation Model (WPFM). This work will begin by exploring various architectural integration strategies that enable the WPFM to play the role of a foundational “sensory” layer that perpetually enriches NeSy’s cognitive core. This approach is grounded in established principles of cognitive science and advanced AI system design, which advocate for a clear separation between low-level perception and high-level reasoning.
If you’re interested in shaping the future of cognitive autonomy in telecom networks, we invite you to get involved in the ATIS AI Network Applications (ANA) group. Help define the next generation of intelligent, explainable AI for network operations. Contact Rich Moran at [email protected] to learn more about participation opportunities.
The post The Value of Verifiable Credentials in Telecom – Building a Framework for Trust appeared first on ATIS.
In the telecom ecosystem, establishing trusted identities for individuals and organizations is essential both to addressing challenges such as spoofed calls, spoofed SMS messages, and impersonation, and to ensuring regulatory compliance.
However, not all verifiable credentials can be trusted equally. Verifiable credentials are issued across a variety of ecosystems by issuers such as government authorities, industry-specific governance bodies, and independent organizations. Without proper oversight, telecom service providers verifying presented credentials face a challenge: how to know which credentials meet the necessary security and trust requirements.
This is where the concept of Telecom Verifiable Credential (VC) Governance becomes essential. The role of telecom VC governance is to sanction and authorize the use of credentials issued under the control of external governance authorities. Rather than setting standards for external governance itself, the telecom governance framework establishes criteria for identifying credentials backed by strong governance and robust vetting, thus establishing a foundation of trust and reliability.
For organizational identities, this means that the telecom VC governance authority can endorse credentials issued by external governance bodies — such as those managing Legal Entity Identifiers (LEIs) or business certifications — provided their policies meet telecom-specific standards. These external bodies operate independently, but their vetting and verification processes align with the telecom VC framework to ensure interoperability and trust. Similarly, organizational attributes, such as business licenses, industry certifications, and operational details, can also be endorsed. Once properly governed, the telecom VC governance authority can maintain and share a list of endorsed externally issued verifiable credentials with telecom verifiers. This enables telecom verifiers, such as service providers or end-user devices, to confidently identify which credentials can be trusted and verify the presented information as both accurate and reliable within the telecom domain.
What makes this approach particularly powerful is that any entity in the call path — whether a telecom service provider, an intermediary, or even an end-user device — can act as a verifier. When a verifiable credential is presented during a call setup or transaction, it can be authenticated using cryptographic proofs and governance policies. This ensures that all parties involved can trust the presented identity or organizational information. For instance, a telecom provider receiving a VoIP call can validate the business’s verifiable credential indicating their identity, business name, and purpose of the call, ensuring the call is legitimate before connecting it to the recipient. The telecom VC governance framework ensures that this verification process is reliable, secure, and scalable across the entire ecosystem.
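A minimal sketch of the two checks a verifier performs in this flow: the issuer must appear on the governance authority's endorsed list, and the credential's proof must verify cryptographically. To keep the sketch self-contained, an HMAC stands in for a real digital-signature scheme, and the endorsed issuer DID, key, and claim names are all invented.

```python
import hashlib
import hmac
import json

# List maintained by the telecom VC governance authority (hypothetical DID).
ENDORSED_ISSUERS = {"did:example:gleif"}

def sign(claims, issuer_key):
    """Produce a proof over the claims (HMAC as a signature stand-in)."""
    payload = json.dumps(claims, sort_keys=True).encode()
    return hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()

def verify(credential, issuer_key):
    """Governance check first, then cryptographic check."""
    if credential["issuer"] not in ENDORSED_ISSUERS:
        return False, "issuer not endorsed by telecom governance"
    expected = sign(credential["claims"], issuer_key)
    if not hmac.compare_digest(expected, credential["proof"]):
        return False, "proof does not verify"
    return True, "trusted"

key = b"issuer-secret"
cred = {"issuer": "did:example:gleif",
        "claims": {"businessName": "Acme Corp", "callPurpose": "support"}}
cred["proof"] = sign(cred["claims"], key)
ok, reason = verify(cred, key)
print(ok, reason)
```

Either check failing is enough to reject the presentation, which is why the endorsed-issuer list is as important to the trust decision as the cryptography itself.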
By endorsing trusted verifiable credentials that are already in the public domain and aligning with external governance bodies, the telecom industry can create a foundation for trusted identity verification. This not only enhances security by reducing fraud and impersonation but also improves the efficiency of telecom operations by enabling streamlined and automated verification processes. Furthermore, integrating these credentials into the telecom framework offers greater interoperability across different domains, ensuring that trusted identities can be verified seamlessly, whether for regulatory compliance, enterprise communication or consumer services.
In conclusion, verifiable credentials have the potential to transform the telecom industry by providing a trusted, secure, and scalable method for verifying individuals and organizational identities along with their attributes. However, this trust relies on a robust governance framework that authorizes the use of externally issued credentials backed by strong governance, rigorous vetting processes, and robust policy controls, enabling telecom verifiers to authenticate these credentials with confidence. The ATIS Enterprise Identity Working Group is actively examining how a Telecom VC Governance Framework can bring these trusted verifiable credentials — backed by strong governance from governments and industry associations — into the telecom domain. By taking a proactive role in managing credential governance, the telecom industry can address long-standing challenges such as call fraud and identity spoofing, while creating a future where trusted digital identities form the backbone of secure and seamless communication.
The post Navigating the Quantum Leap: PQC Migration and What It Means for the ICT Industry appeared first on ATIS.
The post Navigating Quantum Risks: The Imperative of Crypto Agility KPIs for Risk Managers appeared first on ATIS.
Enter Crypto Agility — not just a buzzword but a strategic move that is imperative for business continuity. It is about swiftly adopting new cryptographic strategies in response to evolving quantum threats. Crypto Agility is not just about the technical deployment of new cryptographic algorithms; it reaches across every aspect of an organization’s business and operations. It is a proactive stance, a commitment to staying ahead of the curve.
At the heart of our strategy are Crypto Agility Key Performance Indicators (KPIs). These are not mere metrics; they are the linchpin for our quantum risk assessment. They provide measurable insights into an organization’s readiness to counter this quantum threat. It is about understanding, measuring, and fortifying crypto agility.
But how do we practically implement this strategy? Here is the breakdown:
By following these steps, organizations can systematically and comprehensively manage the implementation of a crypto agility strategy, from forming a dedicated team to monitoring KPIs for ongoing adaptability.
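As a concrete (and deliberately simplified) example of one such KPI, the share of deployed cryptographic assets that still rely on quantum-vulnerable public-key algorithms could be computed from a cryptographic-asset inventory. The inventory, system names, and classification below are illustrative only.

```python
# Public-key schemes broken by Shor's algorithm on a large quantum computer.
QUANTUM_VULNERABLE = {"RSA-2048", "ECDSA-P256", "DH-2048"}

def vulnerability_kpi(inventory):
    """Fraction of assets still relying on quantum-vulnerable algorithms."""
    flagged = [a for a in inventory if a["algorithm"] in QUANTUM_VULNERABLE]
    return len(flagged) / len(inventory), flagged

inventory = [
    {"system": "vpn-gw", "algorithm": "RSA-2048"},
    {"system": "tls-lb", "algorithm": "ML-KEM-768"},   # post-quantum KEM
    {"system": "code-signing", "algorithm": "ECDSA-P256"},
    {"system": "backups", "algorithm": "AES-256"},     # symmetric, not Shor-vulnerable
]
kpi, flagged = vulnerability_kpi(inventory)
print(f"{kpi:.0%} of assets quantum-vulnerable")
```

Tracking this figure over successive reporting periods turns crypto agility from a slogan into a measurable trend a risk manager can act on.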
Successfully meeting the challenges present in the quantum era demands an approach that is not static. Crypto Agility KPIs are not just metrics for measuring past efforts; they are tools for proactively planning a resilient future. It is about engineering our operations not just for imminent threats but for the enduring quantum era. Amidst this transformative journey, adopting a strategic framework for Crypto Agility and Quantum Risk Assessment is paramount. This framework not only allows organizations to measure and report on Crypto Agility KPIs but does so through a lens of standardized metrics. It introduces common ground, enabling interoperability and facilitating a shared language across the industry.
In the dynamic cybersecurity landscape, achieving quantum resilience is not a solo endeavor. It requires collaborative efforts and shared commitment, especially when interacting with vendors and third-party entities. The integration of Crypto Agility Key Performance Indicators (KPIs) in these collaborations becomes paramount. These shared metrics serve as a common language, providing a unified understanding of the progress made collectively toward achieving cryptographic agility. By fostering a consistent approach to tracking and interpreting these KPIs, organizations and their collaborators can align their strategies, reinforcing their joint dedication to the implementation of quantum-resistant cryptographic solutions. This collaborative stance ensures a robust and unified defense against emerging quantum threats, laying the foundation for a secure digital future.
In summary, the significance of Crypto Agility KPIs for an organization and their risk managers goes beyond bureaucratic measures. It is our essential toolkit for not just surviving but thriving in the face of quantum challenges. Embracing a standards-based framework for Crypto Agility KPIs is our unified commitment to building a future that is both resilient and secure against evolving threats. It is not just a strategy; it is our collective pledge to safeguard the integrity of our digital landscape in the quantum era.

This article only scratches the surface of the critical role Crypto Agility KPIs play in fortifying organizations against the impending quantum threat. For a more in-depth exploration of this imperative strategy and a comprehensive guide to implementing a standards-based framework, we invite you to download ATIS’ Strategic Framework for Crypto Agility and Quantum Risk Assessment, which introduces crypto agility metrics that ICT organizations can use to proactively measure, assess, and enhance their preparedness for the shift to quantum-safe cryptography. Delve into detailed analyses, practical recommendations, and real-world examples that will empower you and your organization to navigate the quantum era securely. The ATIS report will be your resource for building a resilient and quantum-ready future. Download now for comprehensive insights.
The post How Self-Sovereign Identity Can Provide an Effective Defense Against SIM Swap Fraud appeared first on ATIS.
With the aim of establishing a resilient solution to combat SIM Swap fraud, ATIS’ User-Controlled Privacy Using Self-Sovereign Identity (SSI) initiative is exploring ways in which SSI can enhance security and maintain identity verification integrity. An SSI-based solution would provide cryptographic linkage between the proof of identity and the telephone number utilized, offering a formidable measure to combat SIM swap fraud.
SIM swap fraud, a form of identity theft, occurs when cybercriminals use stolen personal data to impersonate a targeted victim and request the transfer of the victim’s mobile telephone number to a new SIM card. Once in control of the victim’s telephone number, the attacker can intercept calls, messages, and even two-factor authentication (2FA) codes, thereby gaining access to a multitude of the victim’s personal accounts.
SIM swap attacks are becoming increasingly sophisticated in how they target consumers, creating an urgent need for more secure identity verification processes at mobile network operators. These processes must be more robust and less prone to manipulation by fraudsters, while remaining user-friendly for both mobile customers and operators.
In the pursuit of a more secure and reliable identity verification solution, SSI emerges as a compelling contender. SSI is an approach to digital identity that empowers individuals with ownership and control over their personal data, dictating when and where they provide it, such as to a website or in person. The user’s digital identity, along with personal data, can be conveniently selected via a mobile wallet application and is cryptographically signed by the user to affirm its origin. This information, when received, can be cryptographically verified against the individual’s digital identity, ensuring that the data comes from the legitimate owner and has not been spoofed by an attacker.
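A toy sketch of how such a wallet-held key could gate a SIM-swap request: the operator issues a fresh challenge, and the swap proceeds only if the response was produced with the key bound to the subscriber's wallet at enrollment. An HMAC stands in for the wallet's digital signature, and all names, numbers, and keys are invented.

```python
import hashlib
import hmac
import secrets

# Key bound to the subscriber's wallet at enrollment (illustrative).
wallet_keys = {"+15551234567": b"subscriber-wallet-key"}

def request_sim_swap(number):
    """Operator side: issue a fresh random challenge (prevents replay)."""
    return secrets.token_hex(16)

def wallet_respond(challenge, key):
    """Wallet side: prove possession of the enrolled key."""
    return hmac.new(key, challenge.encode(), hashlib.sha256).hexdigest()

def approve_swap(number, challenge, response):
    """Operator side: swap only proceeds if the response verifies."""
    key = wallet_keys.get(number)
    if key is None:
        return False
    expected = hmac.new(key, challenge.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = request_sim_swap("+15551234567")
# Legitimate owner: response from the real wallet key succeeds.
assert approve_swap("+15551234567", challenge,
                    wallet_respond(challenge, b"subscriber-wallet-key"))
# A fraudster with only stolen personal data cannot produce the response.
assert not approve_swap("+15551234567", challenge,
                        wallet_respond(challenge, b"guessed-key"))
```

The key property is that stolen personal data alone is useless: approval requires a cryptographic response only the enrolled wallet can produce.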
By applying SSI identity verification for telecommunication customer authentication, we can mitigate risks associated with SIM swap fraud. Here’s how:
In the era of rising digital threats, such as SIM swap fraud, the importance of secure identity cannot be overstated. SSI offers an effective solution, enhancing security and maintaining identity verification integrity. By adopting SSI, mobile network operators can step up their defense against identity theft, secure their operations, and, most importantly, safeguard their users. The future of identity verification lies in empowering individuals with control over their data, and SSI provides the framework to make this possible.
For a more comprehensive understanding of how SSI can enhance security and maintain integrity in identity verification for the telecommunications industry, view ATIS’ Self-Sovereign Identity in Telecommunications Services white paper.
The post In Anticipation of Metaverse Standardization appeared first on ATIS.
Much tougher challenges lie ahead, however. For example, how will industry players agree on interoperability standards for portals, or walled garden metaverses? Will service providers agree on a standard for location information to support teleporting across the digital universe? These issues raise the question of whether participants can agree on scalable and industry-wide standards in “pay-to-play” or restricted-participation standardization bodies.
Consumer and Industrial Metaverses
A common perception of the metaverse is one of consumers operating their avatar presences in digital worlds. Typical examples include multiplayer games in online worlds. A different example, from the education sector, involves a science student interacting with a digitally rendered plant in a game-like setting to see how well they can nurture it. In time, novel interfaces and sensors will add touch and smell sensations to this learning experience.
A second metaverse category applies to industrial users. A simple version might involve immersive training. This is where technicians use an augmented reality application to practice a complex maintenance procedure in a controlled environment before they tackle any real-world repairs.
IoT and the Metaverse
Both consumer and industrial examples make use of digital representations of physical objects. These are commonly referred to as digital twins. In a predictive maintenance situation, a digital twin mimics the behavior of a “healthy” machine. Comparing the dynamic behavior of a device and its digital twin allows operational staff to detect the onset of failure and to schedule preventative maintenance. Consider the example of an automatic door in a subway train or an elevator. If the pattern of a door’s opening and closing movement deviates from the prediction of its digital twin, an automated system would detect speed or jerkiness differences. This might trigger an alert about an incipient motor failure or the need for cleaning to remove an obstruction. Allowing technicians to enter this industrial metaverse and visualize what might be happening can also lower the cost of human validation and prevent unnecessary shutdowns.
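The door example can be sketched as a simple deviation check between observed cycle times and the twin's prediction. The predicted value, tolerance, and measurements below are invented for illustration; a real twin would model the full dynamic behavior, not a single statistic.

```python
import statistics

def twin_deviation(observed_ms, predicted_ms=1200.0, tolerance=0.15):
    """Flag an alert when mean cycle time drifts from the twin's prediction."""
    mean = statistics.mean(observed_ms)
    drift = abs(mean - predicted_ms) / predicted_ms
    return drift > tolerance, round(drift, 3)

# Door open/close cycle times in milliseconds (illustrative telemetry).
healthy = [1180, 1210, 1195, 1205]
sticky = [1450, 1520, 1490, 1475]   # door slowing down: incipient fault
print(twin_deviation(healthy))   # (False, 0.002)
print(twin_deviation(sticky))    # (True, 0.236)
```

An alert raised this way would trigger the maintenance scheduling and human-in-the-metaverse validation described above, before the door fails in service.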
The value of a simulated engine or the digital twin of a smart city’s infrastructure depends on how well physical and virtual worlds are connected. IoT sensors, connected devices and interoperable data models are, therefore, critical components of industrial metaverses.
Interoperability and Standardization
In practice, metaverse scenarios involve multi-stakeholder collaboration, often involving completely new use cases. By way of illustration, consider the case of a private car or a delivery truck passing through a city. There are many opportunities for data interactions between the vehicle owners and operators as well as municipal agencies in charge of traffic management and public safety, for example. This requires some infrastructure for data exchanges as well as capabilities to ensure trustworthy data sharing and mechanisms to share value among participants. The many stages in exchanging data lend themselves to a system of authentication marques and fractional payments.
The same pattern applies to condition monitoring insights in a manufacturing facility. The site manager would want condition monitoring information for pumps and motors supplied by different vendors as well as a consolidated picture for each manufacturing line.
Growth in the number of IoT devices and decentralized networks is taking place against a backdrop of rising concerns about data privacy. This creates a requirement for secure and reliable digital identities linked to personal data management. As a result, there is a need for traditional identity systems to adapt.
In anticipation of emerging requirements, ATIS launched an initiative focusing on User-Controlled Privacy Using Self-Sovereign Identity. Its aim is to examine how Self-Sovereign Identity (SSI) can provide a portable, interoperable identity and authentication solution that functions across both physical and virtual domains. SSI can unlock personal data in a way that fosters greater trust between consumers and businesses, while also helping companies comply with privacy regulations. The metaverse presents new challenges for portable identity and authentication across physical and virtual domains, challenges to which SSI can offer an effective solution. A recent ATIS paper explores how SSI can help communications service providers comply with new data privacy mandates and create value for their customers.
At the same time, broader needs for interoperability standards are driving industry alliances such as the Metaverse Standards Forum and the Open Metaverse Interoperability Group. The close interdependencies between metaverse and IoT sectors motivated oneM2M to explore these issues. Several organizations from Korea, a country that is firmly on the metaverse path, are involved alongside oneM2M members from other countries. Andrew Min-gyu Han of Hansung University and France-based Shane He from Nokia are co-leading the effort. Their aim is to identify and assess the feasibility of key use cases and requirements to enable metaverse services based on IoT.
An important aspect of the work will deal with metaverse devices. Standardizing the definition of information models will create the foundations for easy data interoperability. Following best-practice standardization procedures, findings from the initial research will be published in a technical report. This will feed into the next step to define technical standards. As with other oneM2M publications, these are openly accessible and free to download, which increases the prospect for scalable and economically affordable solutions. To stay updated, follow oneM2M’s regular posts on IoT standardization.
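To make the idea of a standardized device information model concrete, here is a minimal sketch. The field names and serialization choice are invented for illustration and are not drawn from any oneM2M specification; the point is only that agreeing on a shared model is what lets any compliant peer parse another vendor's device description.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical sketch of a shared information model for a metaverse
# device. Field names are illustrative, not from any published standard.
@dataclass
class DeviceInfoModel:
    device_id: str
    device_type: str   # e.g. "hmd", "haptic-glove", "iot-sensor"
    capabilities: list  # interoperable capability identifiers
    data_format: str   # agreed serialization for exchanged telemetry

def to_interchange(model: DeviceInfoModel) -> str:
    """Serialize the model to JSON so any compliant peer can parse it."""
    return json.dumps(asdict(model), sort_keys=True)

hmd = DeviceInfoModel("hmd-001", "hmd", ["6dof-pose", "eye-tracking"], "json")
print(to_interchange(hmd))
```

A real standard would additionally pin down the vocabulary for `device_type` and `capabilities`, which is exactly the kind of definitional work the technical report is expected to feed into.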
The post In Anticipation of Metaverse Standardization appeared first on ATIS.
Delivery drivers can defeat fleet timing and tracking with a $30 device ordered off the internet. For just a few dollars more, criminals can get a device that will shift time and location to lure those drivers into areas where they can be easily hijacked. The government of Mexico says 85% of all cargo thefts involve a GPS disruption device of some kind. These disruptions can also affect air travel. Most folks don’t even know they are vulnerable.
In 2011, Todd Humphreys showed how manipulating time in an exchange could enable someone to reverse the trade sequence, allowing them to sell something before they bought it, potentially reaping millions. Now, twelve years later, exchanges and the core financial industry have multiple resilient time sources and sufficient algorithmic protections to prevent that from happening. Yet 99% of retail financial service customers are outside the New York, Chicago, and San Francisco core financial enclaves. Most of them likely lack authenticated and resilient time. For them, over-dependency, complacency, and false trust are still real issues.
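The mechanics of the attack are simple to illustrate. This is a toy sketch, not Humphreys' actual demonstration: it assumes only that an exchange orders events by their reported timestamps, so a spoofed, backdated clock can make a sell appear to precede the buy that funded it.

```python
# Hypothetical illustration: events ordered by reported timestamp.
trades = [
    {"action": "buy",  "price": 100, "t": 10.0},  # honest clock
    {"action": "sell", "price": 105, "t": 9.5},   # spoofed, backdated clock
]
ledger = sorted(trades, key=lambda tr: tr["t"])
print([tr["action"] for tr in ledger])  # → ['sell', 'buy']
```

Authenticated, resilient time sources close this hole by making the reported timestamps trustworthy in the first place.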
Resiliency and sync tend to be local (or relative) and costly. Synchronization has enabled innumerable applications and technologies over the last 30 years. How could we have cell phones without precise time sync?
Yet, in the absence of a sufficiently accurate, resilient, and widely distributed national time scale, that synchronization has tended to be intra-system, rather than to an external common standard. This adds a layer of complexity and difficulty when systems try to operate nationally and/or with each other.
It also inhibits innovation, makes those without great timing more vulnerable, and limits sales of some equipment and services to the few users and environments that already have authenticated and resilient timing. It also means sync is more costly and difficult for innovators and startup entrepreneurs.
Our technology needs to operate nationwide, operate efficiently, and avoid conflict. We need to synchronize operations across industries and the nation. To do this we need to democratize precise timing. Easily accessed national timing at an acceptable level of precision is needed if America is going to foster innovation and keep finding efficiencies to improve the way we operate. It’s about time for a Resilient National Timing Architecture. We at the Resilient Navigation and Timing Foundation (RNTF) have supported this for some time. We published a white paper on the topic in October 2020, and followed it up in 2021 with another on how government could lead establishment of the architecture easily and inexpensively.
While we urge government leadership, we don’t think the government should build anything. There are more than enough companies that can provide timing services more economically and efficiently than the government ever could.
The government should support the effort with commercial contracts and subscriptions. The RNTF’s proposed architecture provides multiple diverse methods of delivering time that could be accessed by as many Americans as possible. It includes fiber connections; suites of existing atomic clocks at USNO, NIST, national labs, and elsewhere; L-band signals from space; and terrestrial broadcast.
We weren’t the only ones who thought this. Three months after we published our paper, the Department of Transportation released its report on GPS Backup Technologies. They also said the nation needed L-Band from space, fiber, and terrestrial broadcast. Also agreeing with us is a group of CEOs and senior executives from major telecom companies acting as the National Security Telecommunications Advisory Committee (NSTAC).
In their May 2021 report to President Biden, they discussed GPS vulnerabilities and threats and urged establishment and funding of a national timing capability. They recommended a structure, and I quote:
“…similar to that reflected in the Resilient Navigation and Timing Foundation’s paper entitled “A Resilient National Timing Architecture.” Further, to enhance the ability of commercial entities to afford leveraging this architecture, the Administration should appropriate sufficient funds to lay the foundation for creating this timing architecture, with the Federal Government being the first customer for what will ultimately become a resilient, interconnected network for PNT delivery.”
A few in government, notably in the Office of Management and Budget, assert that government involvement and leadership is not needed, claiming a resilient national timing architecture will grow organically as a result of free market forces.
Among the most important reasons why this is wrong is that there are no commercial incentives to create this kind of fundamental tech infrastructure for broad adoption and use. As the NSTAC mentioned in its report, it is not possible to compete with free GPS. Even if it were, the kind of broad adoption needed to ensure innovation and national resilience would be stifled by charging fees for basic, utility-level timing. And even if such an architecture did arise organically as a result of market forces, would it really meet the nation’s needs?
Some form of government policy and financial leadership is needed to make a resilient national timing architecture happen. That was recognized by the capitalist CEOs that make up the NSTAC and has been reinforced by industry groups since then.
It’s about time for us to establish a resilient national timing architecture.
(Adapted from Goward’s presentation at ATIS’ Time and Money Workshop, held at the New York Stock Exchange, 17 January 2023)
The post It’s About Time – For a National Resilient Timing Architecture appeared first on ATIS.