


At SPIE West in January 2026, CESNET and GÉANT presented the results of their research into combining data and non-data services over a single optical fibre.
SPIE is the international society for optics and photonics. It works to strengthen the global optics and photonics community through conferences, publications, and professional development, bringing together engineers, scientists, students, and industry leaders to advance light-based science and technologies. Over the past five years SPIE has contributed more than $25 million to the international optics community.
At the event, Jan Radil talked about the coexistence of different classes of optical signals transmitted simultaneously in one fibre. Optical networks are well known as the crucial backbone for all high-speed data transmission; however, they are now increasingly being used for what may be called non-data services. Metrology applications such as accurate (or precise) time and ultra-stable frequency transfer are no longer as exotic as they were 15 years ago.
Given the geopolitical situation and the consequent national security challenges of the last few years, great interest has grown in using traditional optical data networks for fibre sensing, based on technologies such as distributed acoustic sensing (DAS) or state of polarisation (SoP) monitoring, and also for quantum technologies, including quantum key distribution (QKD). What is really critical here is the reliability and stability of optical fibre links, not the huge amounts of data transmitted.
The concept sounds rather simple – just combining different optical signals in one fibre – but it is a challenging task for all Internet service providers (ISPs), commercial operators and academic operators such as national research and education networks (NRENs) alike. New non-data services use slow ‘legacy’ signals with simple modulation schemes, which may interfere with high-speed data signals that use more advanced modulation schemes. Engineering rules must be adhered to, and careful consideration given to all technical aspects when combining the applications mentioned. It is rather surprising that after 15 years this situation has not been resolved, which is why CESNET and GÉANT have been working in this field for a number of years. The work was previously presented as posters in 2023, 2024 and 2025, and finally in 2026 as an oral presentation.
We have found that new generations of coherent systems are quite resilient to slow data signals, and that new non-data services may be deployed together with standard data transmissions. We are convinced that these results are not just academic exercises. While it is true that slow signals of up to 10 Gb/s are becoming obsolete for data transmission, they are used by new applications such as time transfer and distribution. Fibre sensing also uses slow, amplitude-modulated signals, and both applications (or services) are clearly very important for national security, providing additional localisation capabilities (e.g. identifying and deterring sabotage vessels) and adding redundancy to GNSS-based time services (which is also important for 5G, 6G and other mobile services).
You can find out more about the 2026 presentation here.
The post CESNET and GÉANT combining data and non-data services across single fibre optics. first appeared on GÉANT Network.

18 November 2025 | Sonja Filiposka
Full automation of network services is not a trivial task. Even something routine, such as the provisioning of a Layer 2 circuit, requires putting together a detailed orchestration pipeline. The workflow steps involve reading from the single source of truth, selecting the right resources, sequencing multiple API calls, handling errors, and ensuring compliance with internal service models. In mature research and education networks, where each environment is usually a blend of legacy systems and modern platforms, this process becomes even more demanding.
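The steps above can be sketched as a minimal pipeline. This is a hypothetical illustration only: the function and system names (`read_source_of_truth`, `apply_config`, the inventory layout) are stand-ins, not the actual GÉANT systems or APIs.

```python
# Illustrative sketch of a Layer 2 provisioning pipeline: read from the
# source of truth, select resources, sequence the calls, handle errors.
# All names and data shapes are hypothetical.

def read_source_of_truth(inventory, service_id):
    """Step 1: look up the service definition in the single source of truth."""
    if service_id not in inventory:
        raise KeyError(f"unknown service: {service_id}")
    return inventory[service_id]

def select_resources(definition, free_ports):
    """Step 2: pick endpoints that are actually available."""
    endpoints = [p for p in definition["endpoints"] if p in free_ports]
    if len(endpoints) != len(definition["endpoints"]):
        raise RuntimeError("requested endpoints are not all free")
    return endpoints

def provision_l2_circuit(inventory, free_ports, service_id, apply_config):
    """Steps 3-5: sequence the per-endpoint calls with basic rollback."""
    definition = read_source_of_truth(inventory, service_id)
    endpoints = select_resources(definition, free_ports)
    applied = []
    try:
        for ep in endpoints:
            apply_config(ep, definition["vlan"])  # one API call per endpoint
            applied.append(ep)
    except Exception:
        for ep in applied:                        # roll back partial changes
            apply_config(ep, None)
        raise
    return {"service": service_id, "endpoints": endpoints, "status": "active"}
```

Even this toy version shows why the pipeline is non-trivial: every step can fail, and a partial failure must undo the configuration already applied.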
Traditionally, service automation has meant that engineers develop orchestration workflows by hand, relying on deep system and network knowledge and hours of scripting, coding and testing. Nowadays, we are on the verge of a shift in approach, with a new kind of coding opportunity. Vibe coding is a software development approach that uses AI to generate functional code from natural-language prompts. In essence, it means you can use an AI assistant to build and maintain automation pipelines. Given the increasing service demands managers face, vibe coding might be the game-changer that helps all NRENs, especially the small ones.
We recently tested the idea of using vibe coding for service design and workflow development from scratch in the GP4L testbed. The task was straightforward on the surface: provision a Layer 2 service across the network using existing systems (Maat as the single source of truth, LSO and Ansible for network configuration changes) and their corresponding APIs. But instead of writing the workflow in Airflow from scratch, we asked an AI assistant, ChatGPT-4o, to help us build the orchestration logic. The only input we gave was the high-level service description, information about the available systems and their APIs, and finally a request to make the solution TM Forum ODA compliant.
From there, the process became a dialogue. We described what the workflow should do, and the AI responded by generating building blocks of code. The blocks in this case were an Airflow workflow (i.e. a DAG) and its tasks, which implement the orchestration sequence. At first, the AI handled the structure: setting up task dependencies, determining where to fetch topology data, where to apply constraints, and where to insert decision points. Then, with a few more prompts, it began to fill in the rest of the details, suggesting API calls, parameter formats and error-handling logic. The results were not perfect on the first try, but they were quite close. After a number of corrections and iterations, we obtained a working automation flow.
One of the most impressive outcomes was the AI assistant’s performance at the high-level service modelling stage. The AI proved especially adept at producing TM Forum–compliant service definitions that align with standardised APIs and resource structures. This allowed us to establish a consistent, standards-aligned design framework from the beginning. The assistant not only understood the TM Forum design patterns but used them to guide the structure of the workflow and the relationships between services and resources. For teams already working with TM Forum Open APIs, this capability adds enormous value.
The experiment is especially valuable because it showcases the speed and adaptability of the approach. Traditionally, creating such workflows from scratch takes days of effort, especially if the orchestration logic needs to comply with TM Forum service models and Open APIs. In our test, we completed the initial scaffolding in under a day and had a functional prototype several times faster than with manual development. Furthermore, the AI assistant’s suggestions were on track for making the solution production-ready, prompting us to add parameter validation, fallback and error handling. Over the course of the experiments, we found that the AI was also particularly good at handling the “boring but necessary” parts of the job that developers often overlook and don’t want to spend too much time on, such as setting up the correct structure, repeating standard validation steps, and remembering the exact format of parameters. This helps engineers focus on the tricky parts: how to translate service intent into something the infrastructure can actually execute.
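A “boring but necessary” validation step of the kind the assistant kept suggesting might look like the sketch below. The parameter names and limits are illustrative assumptions, not the real service model.

```python
# Hypothetical request validation for a Layer 2 service: the kind of
# repetitive checking an AI assistant is good at generating.
VLAN_MIN, VLAN_MAX = 2, 4094  # assumed valid VLAN ID range

def validate_l2_request(params):
    """Return a list of problems; an empty list means the request is valid."""
    problems = []
    vlan = params.get("vlan")
    if not isinstance(vlan, int) or not (VLAN_MIN <= vlan <= VLAN_MAX):
        problems.append(f"vlan must be an integer in [{VLAN_MIN}, {VLAN_MAX}]")
    endpoints = params.get("endpoints", [])
    if len(endpoints) < 2:
        problems.append("at least two endpoints are required")
    if len(set(endpoints)) != len(endpoints):
        problems.append("endpoints must be distinct")
    return problems
```

Returning a list of problems rather than raising on the first one gives the operator (or the calling workflow) the full picture in a single pass.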
The AI’s initial outputs were over 80% correct when generating the high-level scaffolding and around 50% accurate when producing detailed implementation logic. We also found that it quickly improved with feedback. Because the assistant works interactively, we didn’t have to start from scratch when something was off. Instead, we simply pointed out the issue and asked for a fix. In many cases, the AI was able to correct itself immediately. This kind of interaction felt like collaborating with a junior engineer who is always happy to oblige and improve, and who never forgets what you told them five minutes ago.
Over the full span of the test, the AI-assisted approach cut development time by more than half compared to our traditional process. The even more important benefit is that the produced workflows are easier to review and maintain due to the generated standardised and readable code.
Hence, vibe coding isn’t science fiction; it is an (imperfect) reality. AI is now capable of contributing significantly to the design process by translating structured intent into executable orchestration steps. The key is to treat the assistant as a collaborator. It won’t get everything right immediately, but it can generate results fast, and those results improve with every prompt. For teams working in environments where service onboarding speed is critical and automation needs are growing fast, this approach offers a real advantage.
Of course, using AI in this way also means adapting ourselves. Engineers need to shift their thinking from scripts to prompts. Managers need to ensure that intent models and data sources are clearly defined, with clear documentation readily available, so that the AI gets standard input that is easy to parse. In this way, teams can reuse and quickly adapt AI-suggested templates.
This website presents more information on the topic: https://geant-netdev.gitlab-pages.pcss.pl/gp4ldocs/guides/playground/ai_workflows/idea/
Discover more innovation stories >
The post From prompt to provisioning: AI as your new network orchestration assistant first appeared on GÉANT Network.

29 May 2025
Hands-on practical exercises are a very important aspect of today’s learning. They enable students to get valuable real-world experience with the topic at hand, allowing them to grasp relevant concepts more easily. While the experience of taking part in such practical exercises is undoubtedly rewarding, organising them can be a challenging task for educators, especially when a large number of participants is involved. Exercise preparation usually entails hardware provisioning, configuration, application deployment, user management, per-tenant isolation, and integration with existing platforms such as learning management systems (LMS) and grading systems. Whatever the subject area and the technical proficiency of the educators and supporting staff, this manual approach does not scale to moderate or large groups of students.
nmaas [https://docs.nmaas.eu/] is an orchestration system that can be used to simplify the organisation of such hands-on exercises. nmaas has evolved from a system oriented towards the management and monitoring of network infrastructures into a full-fledged cloud platform that can be used for a variety of purposes, wherever a cloud environment with efficient management of application instances is required. This versatility is also reflected in the name change from NMaaS (Network Management as a Service) to simply nmaas.
The goal of nmaas vLAB [https://vlab.dev.nmaas.eu/] is to be applicable to as many educational scenarios as possible, allowing educators to more easily organise quality hands-on exercises, while also providing lab participants with an easy-to-use environment in which to complete them. To this end, the nmaas team has already introduced reference applications to the application portfolio to support multiple learning topics and related exercises.
nmaas vLAB brings new and exciting features to the core nmaas platform to better organise hands-on exercises. It is now possible to deploy applications in bulk, which is particularly useful for exercises where each participant needs to work with their own instance of a given application. As the nmaas portfolio supports many applications by default, virtual lab managers can create any number of custom-tailored application catalogues, restricting access to only those applications that are required for the course at hand. Recognising that in many cases the available compute resources cannot satisfy the requirements of a given hands-on exercise in which all enrolled students take part, nmaas also introduces a time-sharing concept that relies on pausing idle application instances and reactivating them upon the next user access attempt. This ensures that the same nmaas instance can be used to organise virtual lab exercises for multiple courses at once, while safeguarding the integrity of the platform, limiting potential resource usage, and providing a simpler experience for virtual lab participants.
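The time-sharing policy can be illustrated with a minimal sketch: instances idle beyond a threshold are paused, and the next access reactivates them. This is only a conceptual stand-in; the real nmaas orchestrator manages containers, and the names and threshold below are assumptions.

```python
# Conceptual sketch of the pause-on-idle / resume-on-access policy.
import time

IDLE_LIMIT = 30 * 60  # assumed: seconds of inactivity before pausing

class Instance:
    def __init__(self, name):
        self.name = name
        self.state = "running"
        self.last_access = time.time()

    def touch(self):
        """Called on user access: reactivate if paused, reset the idle clock."""
        if self.state == "paused":
            self.state = "running"  # the real platform would resume the container
        self.last_access = time.time()

def pause_idle(instances, now=None):
    """Periodic sweep: pause any running instance idle for too long."""
    now = now if now is not None else time.time()
    for inst in instances:
        if inst.state == "running" and now - inst.last_access > IDLE_LIMIT:
            inst.state = "paused"
```

The effect is that paused instances stop consuming compute resources, so one platform can host far more lab instances than it could run concurrently.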
To better illustrate a potential vLAB usage scenario, imagine two educators, Alice and Bob, who would like to use nmaas to organise virtual lab exercises for their courses – Introduction to Network Management and Advanced Web Development, respectively. Alice and Bob choose to use the managed nmaas vLAB instance offered as part of the GN5-2 project so that they do not have to deploy or maintain any infrastructure on-premises. Both export the list of course participants from their learning management system (LMS) and import it into nmaas using a dedicated form. They are then responsible for managing an nmaas domain group that represents their respective course and comprises all individual student domains. In these dedicated domains, lab participants will deploy the necessary applications as part of the virtual lab exercise.
As this is the first time that Alice’s and Bob’s students are using nmaas, both educators opt to restrict the catalogue of available applications in their nmaas domain group to only those required to complete the respective labs. Alice decides to let her students deploy the application instances themselves, while Bob uses the bulk application deployment functionality to deploy personalised instances for each student ahead of time. Once the virtual lab exercise begins, students log in to the nmaas Portal. As Alice’s students have only local accounts, since their institution is not part of the eduGAIN federation, they first reset their passwords upon initial login. Bob’s students use the single sign-on functionality, as their institution is onboarded to eduGAIN. Alice’s students use the application deployment wizard to interactively deploy and configure their application instances; Bob’s students simply access the already deployed instances and begin their work. Even though Alice and Bob do not know each other (and most likely never will), the multi-tenant nmaas architecture allows them to use the same nmaas instance to organise virtual lab exercises without any friction.
All of the improvements made to the core nmaas orchestrator for the vLAB use case are available under the same open-source licence as the rest of the software. Interested users can deploy nmaas on their own infrastructure today, or get in touch with the nmaas team if they are interested in conducting a small-scale vLAB pilot on the managed nmaas instance provided within the GN5-2 project. For any questions, the nmaas team is also available on the newly created nmaas Discord server [https://discord.com/invite/CZzvZH2TAe].
Discover more innovation stories >
The post Supporting modern education using virtual labs with nmaas vLAB first appeared on GÉANT Network.

nmaas is an open-source, multi-tenant orchestration platform for automated deployment and management of containerised applications in the cloud or on private infrastructure. It enables NRENs, institutions, research and development teams, and network and IT service providers to quickly deploy and operate a wide range of applications through a unified, self-service portal.
The post nmaas first appeared on GÉANT Network.


The Special Interest Group – Network Operations Centres (SIG-NOC) is a community effort initiated by the National Research and Education Networks (NRENs) gathered under the GÉANT Association in Europe. The SIG-NOC creates an open forum where experts from the GÉANT Community and beyond exchange information, knowledge, ideas and best practices. These cover specific technical aspects or other areas of business, relevant to the research and education networking community.
The latest survey, undertaken at the end of 2023, shows significant changes in the roles of NOCs and in the tools used to support them. These changes reflect the evolving demands made on NOC teams as the use of networks changes and new services are supported.
The full details of this survey can be downloaded here:
The post SIG-NOC Survey first appeared on GÉANT Network.
Supporting High Volume High Performance Data Transfer Requirements

Research projects, like those in high-energy physics, genomics or astronomy, need to transfer large amounts of data to complete calculations and get results in a relatively short period of time. In the past, the physical shipping of hard disks full of data was frequently the fastest option. With the high bandwidths offered by research and education networks, the transfer can be done easily using the appropriate tools. However, the “normal” tools available, such as commercial file storage services, are unable to cope with the extreme data volumes used by these projects.
To improve data transfer performance between sites, dedicated computer systems and architectures are used: Data Transfer Nodes (DTNs). DTNs are dedicated (usually Linux-based) servers with specific high-end hardware components and dedicated transfer tools, configured specifically for wide-area data transfer.
The GÉANT DTN Testing Facility is a set of three powerful servers (located in London, Prague and Hamburg) that allows users to try, evaluate, test and verify the performance of specialised software and protocols for transferring research data, both within Europe and across global distances, so that projects can investigate and develop a DTN setup that supports their needs.
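The core trick DTN transfer tools share can be sketched in a few lines: move data in large chunks over several parallel streams, rather than one small-buffered stream. This is a conceptual stand-in only; real tools add tuned TCP buffers, pipelining and integrity checks, and the `send_chunk` callback here is a hypothetical placeholder for a network stream.

```python
# Conceptual sketch: chunked, parallel data transfer as used by DTN tools.
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4 * 1024 * 1024  # large chunks amortise per-request overhead

def transfer(data, send_chunk, streams=4):
    """Split `data` into chunks, push them over `streams` parallel workers,
    and return the total number of bytes sent."""
    chunks = [(i, data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]
    sent = {}
    def worker(item):
        offset, chunk = item
        send_chunk(offset, chunk)  # stand-in for one wide-area network stream
        sent[offset] = len(chunk)
    with ThreadPoolExecutor(max_workers=streams) as pool:
        list(pool.map(worker, chunks))
    return sum(sent.values())
```

On long fat networks, a single stream rarely fills the available bandwidth; running several streams in parallel is one of the main reasons DTNs outperform ordinary file copies.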
The post GÉANT DTN Testing Facility first appeared on GÉANT Network.

TimeMap is an open-source, weather-map-like platform that provides per-segment latency/jitter measurements on a network. TimeMap is especially relevant for low-latency applications, which require bounded latency and jitter; it is therefore extremely important for network engineers to be able to monitor and identify any changes in these parameters across the network.
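To make the two metrics concrete: given a series of per-segment latency samples, jitter is commonly estimated (assumed here, following the RFC 3550 idea) as the mean absolute difference between consecutive samples. This sketch is illustrative, not TimeMap’s actual implementation.

```python
# Illustrative latency/jitter computation over a series of samples (ms).
def mean_latency_and_jitter(samples_ms):
    """Return (mean latency, jitter) for a list of latency samples in ms."""
    if len(samples_ms) < 2:
        raise ValueError("need at least two samples")
    mean = sum(samples_ms) / len(samples_ms)
    diffs = [abs(b - a) for a, b in zip(samples_ms, samples_ms[1:])]
    jitter = sum(diffs) / len(diffs)
    return mean, jitter
```

A segment can have a perfectly acceptable mean latency yet high jitter, which is exactly why low-latency applications need both values monitored per segment.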
The post TimeMap – Open-source Latency/Jitter Measurement Service first appeared on GÉANT Network.


Argus is a tool for NOCs and service centers to aggregate incidents from all their monitoring applications into a single, unified dashboard and notification system. Most NOCs will, out of necessity, use a myriad of applications to monitor their infrastructure and services. In turn, they need to contend with manually managing notification profiles and monitoring dashboards in each individual application. Argus mitigates these scenarios by providing the NOC with a singular overview of actionable incidents, and by providing a single point of notification configuration.
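The aggregation idea can be sketched as follows. The field names and merge rule here are illustrative assumptions, not the actual Argus data model.

```python
# Conceptual sketch of alarm aggregation: normalise incidents from several
# monitoring sources into one actionable list, collapsing duplicates that
# describe the same object. Field names are hypothetical.
def aggregate(incidents):
    """Merge incidents by (object, type); keep the highest severity of each."""
    merged = {}
    for inc in incidents:
        key = (inc["object"], inc["type"])
        if key not in merged or inc["severity"] > merged[key]["severity"]:
            merged[key] = inc
    # most severe first, so the NOC sees actionable incidents on top
    return sorted(merged.values(), key=lambda i: -i["severity"])
```

When two tools report the same link failure, the NOC should see one incident, not two; deduplicating on a stable key and ranking by severity is the essence of a unified dashboard.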
The post Argus – Alarm Aggregation and Correlation Tool first appeared on GÉANT Network.
Description of the Performance Measurement Platform (PMP) service
The Performance Measurement Platform (PMP) consists of low-cost hardware nodes with pre-installed perfSONAR software. The nodes perform regular measurements towards GÉANT measurement points located in the core of the network. The central components that manage the platform elements and gather, store and present the performance data are operated and maintained by the GÉANT Project. Users of the small nodes can adapt the predefined setup, configure additional measurements to their needs, and become more familiar with the platform.
We are committed to privacy and security, and we are proud that the PMP service is designed in line with data protection principles, in particular the data minimisation principle.
To view the general Privacy Notice for GÉANT, please visit the GÉANT website.
Why We Process Personal Data
We use your personal data:
Who Do We Share Data With?
In order to provide the PMP service we may commission other organisations. We require all these organisations to keep information safe and comply with current regulations.
Personal data gathered for website operations and statistics is only shared within the GÉANT Association and the PMP Operational Team for analysis and reporting.
We don’t forward your personal data to other recipients.
Personal Data Retention
The personal data processed for the PMP service are stored in the European Union.
If you have visited any of our PMP websites as a visitor or administrator, we hold your personal data (IP address, timestamp) for 6 months after your status as a visitor or administrator of the service has been terminated, unless we need it to resolve a particular issue.
Web server log data for operations and statistics are retained for one month on the small node you visited and on the central server (located in Germany).
Security
We support the following processes to ensure the security of your data:
Your Rights
You have the following rights:
Contact Information
| Data Controller and Contact | Data Protection Officer, GÉANT Association, Hoekenrode 3, 1102 BR Amsterdam-Zuidoost, Netherlands. Telephone: +31 20 530 4488. Email: [email protected] |
| Jurisdiction | Netherlands. Dutch Data Protection Authority (Autoriteit Persoonsgegevens), Postbus 93374, 2509 AJ Den Haag. Telephone: (+31) (0)70 888 85 00 |
Last revision: January 2019
The post PerfSONAR PMP Privacy Policy first appeared on GÉANT Network.

Seamless Wi-Fi access for research and education around the world. eduroam (education roaming) is the secure, world-wide roaming access service developed for the international research and education community. eduroam allows students, researchers and staff to seamlessly access internet connectivity when within range of a hotspot, whether they’re moving across campus or visiting other participating institutions. With benefits for users and for their campus IT departments, eduroam saves time and facilitates active and enduring collaboration.
The post eduroam first appeared on GÉANT Network.