<?xml version='1.0' encoding='UTF-8'?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
  <id>https://hpc.vub.be/</id>
  <title>VUB-HPC Posts</title>
  <updated>2026-04-19T02:01:06.345738+00:00</updated>
  <link href="https://hpc.vub.be/"/>
  <link href="https://hpc.vub.be/posts/atom.xml" rel="self"/>
  <generator uri="https://ablog.readthedocs.io/" version="0.11.11">ABlog</generator>
  <entry>
    <id>https://hpc.vub.be/news/2026/vsc-ap-invalid-accounts/</id>
    <title>VSC Accountpage problems</title>
    <updated>2026-04-17T00:00:00+00:00</updated>
    <author>
      <name>HPC Team</name>
    </author>
    <content type="html">&lt;section id="vsc-accountpage-problems"&gt;

&lt;div class="note update admonition"&gt;
&lt;p class="admonition-title"&gt;Updated on 17/04/2026&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;10:30&lt;/strong&gt; The account page issue has been fixed. All users
should be able to access our services normally again.&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;Since yesterday afternoon (April 16th 2026) the VSC account page has been
returning an ‘Invalid account’ or ‘Error 500’ error for some VUB users. We are
aware of the issue and working on a solution. You can still log in through SSH.&lt;/p&gt;
&lt;p&gt;We appreciate your patience during this time and apologize for the
inconvenience. Please contact &lt;a class="falink reference external" href="mailto:hpc&amp;#37;&amp;#52;&amp;#48;vub&amp;#46;be"&gt;&lt;span class="fa-solid fa-envelope"&gt;&lt;/span&gt;VUB-HPC Support&lt;/a&gt; if you need any comments or need further
help with this situation.&lt;/p&gt;
&lt;/section&gt;
</content>
    <link href="https://hpc.vub.be/news/2026/vsc-ap-invalid-accounts/"/>
    <summary>10:30 The account page issue has been fixed. All users
should be able to access our services normally again.</summary>
    <category term="hydra" label="hydra"/>
    <category term="motd" label="motd"/>
    <category term="outage" label="outage"/>
    <category term="portal" label="portal"/>
    <published>2026-04-17T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://hpc.vub.be/news/2026/hydra-litellm-security/</id>
    <title>LiteLLM Python package compromised</title>
    <updated>2026-03-27T00:00:00+00:00</updated>
    <author>
      <name>HPC Team</name>
    </author>
    <content type="html">&lt;section id="litellm-python-package-compromised"&gt;

&lt;p&gt;On March 24, 2026, the popular &lt;strong&gt;LiteLLM&lt;/strong&gt; Python package was compromised in a
supply chain attack. If you are using LiteLLM in any form, please take
immediate action and check if any of the compromised versions (&lt;strong&gt;v1.82.7&lt;/strong&gt; or
&lt;strong&gt;v1.82.8&lt;/strong&gt;) are installed. See the LiteLLM blog post on the incident for all
the details on how to check if you are affected and what to do if that’s the
case: &lt;a class="reference external" href="https://docs.litellm.ai/blog/security-update-march-2026"&gt;https://docs.litellm.ai/blog/security-update-march-2026&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Beware that you are affected if a compromised LiteLLM version was installed on
any system that you have access to, whether that’s your laptop, a local server,
a pipeline that builds or deploys software, or an HPC cluster. The attack does
&lt;em&gt;not&lt;/em&gt; affect any of the centrally installed software modules or containers on
the VUB-HPC clusters, as LiteLLM has not been centrally installed yet.&lt;/p&gt;
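&lt;p&gt;As a quick first check in a pip-based Python environment (adapt accordingly
for conda or other installation methods), you can query the installed LiteLLM
version and compare it against the compromised releases:&lt;/p&gt;
&lt;div class="highlight-default notranslate"&gt;&lt;div class="highlight"&gt;&lt;pre&gt;$ pip show litellm | grep ^Version
&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;If this reports version 1.82.7 or 1.82.8, follow the instructions in the
LiteLLM blog post linked above. Remember to repeat the check in every
environment where LiteLLM may be installed, such as virtual environments,
containers, and CI pipelines.&lt;/p&gt;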
&lt;p&gt;If you have any further questions, please contact us at &lt;a class="falink reference external" href="mailto:hpc&amp;#37;&amp;#52;&amp;#48;vub&amp;#46;be"&gt;&lt;span class="fa-solid fa-envelope"&gt;&lt;/span&gt;VUB-HPC Support&lt;/a&gt;.&lt;/p&gt;
&lt;/section&gt;
</content>
    <link href="https://hpc.vub.be/news/2026/hydra-litellm-security/"/>
    <summary>On March 24, 2026, the popular LiteLLM Python package was compromised in a
supply chain attack. If you are using LiteLLM in any form, please take
immediate action and check if any of the compromised versions (v1.82.7 or
v1.82.8) are installed. See the LiteLLM blog post on the incident for all
the details on how to check if you are affected and what to do if that’s the
case: https://docs.litellm.ai/blog/security-update-march-2026.</summary>
    <category term="hydra" label="hydra"/>
    <category term="motd" label="motd"/>
    <published>2026-03-27T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://hpc.vub.be/news/2026/hydra-interactive-salloc/</id>
    <title>Improved way to launch interactive jobs</title>
    <updated>2026-03-24T00:00:00+00:00</updated>
    <author>
      <name>HPC Team</name>
    </author>
    <content type="html">&lt;section id="improved-way-to-launch-interactive-jobs"&gt;

&lt;p&gt;We’ve changed the recommended way to launch an interactive job on a compute
node from the terminal interface. Previously, this was done using &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;srun&lt;/span&gt; &lt;span class="pre"&gt;--pty&lt;/span&gt;
&lt;span class="pre"&gt;bash&lt;/span&gt; &lt;span class="pre"&gt;-l&lt;/span&gt;&lt;/code&gt;. We now recommend using &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;salloc&lt;/span&gt;&lt;/code&gt;, which makes it possible to launch
parallel MPI jobs with &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;srun&lt;/span&gt;&lt;/code&gt; or &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;mpirun&lt;/span&gt;&lt;/code&gt; from within the interactive
session. All related documentation has been updated accordingly, see the
section on &lt;a class="reference internal" href="../docs/job-submission/main-job-types/#interactive-jobs"&gt;&lt;span class="std std-ref"&gt;Interactive jobs&lt;/span&gt;&lt;/a&gt; for details.&lt;/p&gt;
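&lt;p&gt;As an illustration, an interactive session could be started like this (the
resource values and program name are only placeholders, adjust them to your
needs):&lt;/p&gt;
&lt;div class="highlight-default notranslate"&gt;&lt;div class="highlight"&gt;&lt;pre&gt;$ salloc --ntasks=4 --time=1:00:00
$ srun ./my_mpi_program
$ exit
&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Once the allocation is granted, commands such as &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;srun&lt;/span&gt;&lt;/code&gt; run inside it, and &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;exit&lt;/span&gt;&lt;/code&gt; releases the allocation.&lt;/p&gt;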
&lt;p&gt;If you have any further questions, please contact us at &lt;a class="falink reference external" href="mailto:hpc&amp;#37;&amp;#52;&amp;#48;vub&amp;#46;be"&gt;&lt;span class="fa-solid fa-envelope"&gt;&lt;/span&gt;VUB-HPC Support&lt;/a&gt;.&lt;/p&gt;
&lt;/section&gt;
</content>
    <link href="https://hpc.vub.be/news/2026/hydra-interactive-salloc/"/>
    <summary>We’ve changed the recommended way to launch an interactive job on a compute
node from the terminal interface. Previously, this was done using srun --pty
bash -l. We now recommend using salloc, which makes it possible to launch
parallel MPI jobs with srun or mpirun from within the interactive
session. All related documentation has been updated accordingly, see the
section on Interactive jobs for details.</summary>
    <category term="hydra" label="hydra"/>
    <category term="motd" label="motd"/>
    <published>2026-03-24T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://hpc.vub.be/news/2026/vsc-authentication-outage/</id>
    <title>VSC Web Authentication Outage</title>
    <updated>2026-03-18T00:00:00+00:00</updated>
    <author>
      <name>HPC Team</name>
    </author>
    <content type="html">&lt;section id="vsc-web-authentication-outage"&gt;

&lt;div class="note update admonition"&gt;
&lt;p class="admonition-title"&gt;Updated on 18/03/2026&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;10:00&lt;/strong&gt; The authentication to VSC web services has been restored. All
users from VUB and UZB can access our services normally.&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;Since around 08:30 CET (March 18th 2026) an outage has been affecting the
authentication to VSC web services for users of VUB and UZB. This means that
access to the following services is currently unavailable:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;&lt;p&gt;VSC account page: &lt;a class="reference external" href="https://account.vscentrum.be/"&gt;https://account.vscentrum.be/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;VSC HPC Firewall: &lt;a class="reference external" href="https://firewall.vscentrum.be/"&gt;https://firewall.vscentrum.be/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HPC OnDemand Portal: &lt;a class="reference external" href="https://portal.hpc.vub.be/"&gt;https://portal.hpc.vub.be/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Globus: &lt;a class="reference external" href="https://app.globus.org/"&gt;https://app.globus.org/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Our team is actively investigating the issue, and we will provide updates on
this page as soon as they are available.&lt;/p&gt;
&lt;p&gt;We appreciate your patience during this time and apologize for the
inconvenience. Please contact &lt;a class="falink reference external" href="mailto:hpc&amp;#37;&amp;#52;&amp;#48;vub&amp;#46;be"&gt;&lt;span class="fa-solid fa-envelope"&gt;&lt;/span&gt;VUB-HPC Support&lt;/a&gt; if you need any comments or need further
help with this situation.&lt;/p&gt;
&lt;/section&gt;
</content>
    <link href="https://hpc.vub.be/news/2026/vsc-authentication-outage/"/>
    <summary>10:00 The authentication to VSC web services has been restored. All
users from VUB and UZB can access our services normally.</summary>
    <category term="hydra" label="hydra"/>
    <category term="outage" label="outage"/>
    <category term="portal" label="portal"/>
    <published>2026-03-18T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://hpc.vub.be/news/2026/hydra-shibboleth-upgrade/</id>
    <title>VUB VSC single sign-on server upgrade</title>
    <updated>2026-03-17T00:00:00+00:00</updated>
    <author>
      <name>HPC Team</name>
    </author>
    <content type="html">&lt;section id="vub-vsc-single-sign-on-server-upgrade"&gt;

&lt;div class="note update admonition"&gt;
&lt;p class="admonition-title"&gt;Updated on 17/03/2026&lt;/p&gt;
&lt;p&gt;The upgrade is postponed to &lt;strong&gt;23 March&lt;/strong&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;The VUB VSC single sign-on server will be upgraded on &lt;strong&gt;18 March&lt;/strong&gt; at
approximately &lt;strong&gt;9:00&lt;/strong&gt;.  The maintenance is expected to take only a few minutes.
During this time, users may experience a brief interruption when accessing the
VSC account page, the VSC OnDemand portals, or the VSC Firewall page.  Running
jobs will be unaffected, and SSH connections to the clusters will continue to
function as normal.&lt;/p&gt;
&lt;p&gt;If you have any further questions, please contact us at &lt;a class="falink reference external" href="mailto:hpc&amp;#37;&amp;#52;&amp;#48;vub&amp;#46;be"&gt;&lt;span class="fa-solid fa-envelope"&gt;&lt;/span&gt;VUB-HPC Support&lt;/a&gt;.&lt;/p&gt;
&lt;/section&gt;
</content>
    <link href="https://hpc.vub.be/news/2026/hydra-shibboleth-upgrade/"/>
    <summary>The upgrade is postponed to 23 March.</summary>
    <category term="hydra" label="hydra"/>
    <category term="motd" label="motd"/>
    <category term="outage" label="outage"/>
    <published>2026-03-17T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://hpc.vub.be/news/2026/vub-ondemand-update/</id>
    <title>Update of the HPC portal</title>
    <updated>2026-02-27T00:00:00+00:00</updated>
    <author>
      <name>HPC Team</name>
    </author>
    <content type="html">&lt;section id="update-of-the-hpc-portal"&gt;

&lt;p&gt;We have updated the VUB
&lt;a class="reference external" href="https://portal.hpc.vub.be"&gt;OnDemand portal&lt;/a&gt;
by installing several new apps, upgrading to Open OnDemand 4.1 and making a slight change to the job submission forms.&lt;/p&gt;
&lt;section id="upgrade-to-open-ondemand-4-1"&gt;
&lt;h2&gt;Upgrade to Open OnDemand 4.1&lt;/h2&gt;
&lt;p&gt;The portal runs a new Open OnDemand release with general improvements and bug fixes.&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;&lt;p&gt;New Module Browser&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the Clusters tab you can now find the Module Browser.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Module Browser allows you to inspect all the software modules installed on Hydra and Anansi and see which versions are present.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;New Project Manager&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the Jobs tab you can now find the Project Manager.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Project Manager provides a suite of tools to leverage the Open OnDemand features. You can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create batch connect apps from your own personal scripts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Track and organize your job history.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Assemble individual jobs into large-scale workflows.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;See the &lt;a class="reference external" href="https://osc.github.io/ood-documentation/latest/tutorials/tutorials-project-manager.html"&gt;OnDemand documentation&lt;/a&gt; for more information.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/section&gt;
&lt;section id="new-interactive-apps"&gt;
&lt;h2&gt;New interactive apps&lt;/h2&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;&lt;p&gt;Open WebUI app&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;This app launches Ollama and lets you interact with it through Open WebUI, making it easy to chat with a local LLM.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;By default, several models are available, such as ChocoLlama, gpt-oss, deepseek-r1, …&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bioimage ANalysis Desktop&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Bioimage ANalysis Desktop is a desktop environment for bioimage analysis, similar to the EMBL Bioimage ANalysis Desktop (BAND) platform.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This desktop offers easy access to several GUI apps. Currently these are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Fiji 2.14.0&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;QuPath 0.6.0&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CellProfiler 4.2.8&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;napari 0.6.6&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/section&gt;
&lt;section id="simplified-application-form"&gt;
&lt;h2&gt;Simplified application form&lt;/h2&gt;
&lt;p&gt;The partition selection option has been moved into the advanced section of the job submission form to simplify the default view.&lt;/p&gt;
&lt;p&gt;In the partition selection field you can now select the &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;automatic&lt;/span&gt;&lt;/code&gt; option, which means your job will be queued for every eligible partition. This should lead to shorter queueing times.&lt;/p&gt;
&lt;/section&gt;
&lt;/section&gt;
</content>
    <link href="https://hpc.vub.be/news/2026/vub-ondemand-update/"/>
    <summary>We have updated the VUB
OnDemand portal
by installing several new apps, upgrading to Open OnDemand 4.1 and making a slight change to the job submission forms.</summary>
    <category term="motd" label="motd"/>
    <category term="portal" label="portal"/>
    <published>2026-02-27T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://hpc.vub.be/news/2026/overview-of-2025/</id>
    <title>VUB HPC Overview 2025</title>
    <updated>2026-02-23T00:00:00+00:00</updated>
    <author>
      <name>HPC Team</name>
    </author>
    <content type="html">&lt;section id="vub-hpc-overview-2025"&gt;

&lt;figure class="align-default"&gt;
&lt;img alt="https://hpc.vub.be/_images/overview-2025.png" src="https://hpc.vub.be/_images/overview-2025.png" style="width: 70%;" /&gt;
&lt;/figure&gt;
&lt;p&gt;2025 marked a year of strong growth and strategic preparation for the SDC team. While
significant effort was invested in preparing for the next VSC Tier-1 system,
&lt;a class="reference external" href="https://www.vscentrum.be/post/vsc-inaugurates-sofia-the-new-flemish-tier-1-supercomputer"&gt;Sofia&lt;/a&gt;,
the Tier-2 infrastructure (Hydra and Anansi) continued to expand in usage, users, and services.&lt;/p&gt;
&lt;p&gt;GPU demand remained high, driven by increasing AI/ML workloads. Although average load was
stable, peak periods led to noticeable queue times, reinforcing the need for continued
GPU expansion and Tier-1 migration of heavy workloads.&lt;/p&gt;
&lt;p&gt;The launch of the Open OnDemand portal significantly lowered the barrier to entry,
with 44% of users leveraging interactive services. Outreach and targeted trainings
contributed to broadening adoption across faculties and departments. We visited many
research groups to present our services and explain what HPC &amp;amp; Pixiu can offer
researchers.&lt;/p&gt;
&lt;p&gt;In parallel, Pixiu storage grew to 1.6 PB in active use and underwent major
infrastructure upgrades to improve redundancy and future scalability.&lt;/p&gt;
&lt;p&gt;Overall, 2025 positioned VUB strongly for the operational start of
Tier-1 Sofia in 2026, with a growing and increasingly mature HPC user
community ready to scale to larger systems.&lt;/p&gt;
&lt;p&gt;Now that 2025 is behind us, it is time to review
statistics on infrastructure usage for the year.&lt;/p&gt;
&lt;p&gt;&lt;span class="fas fa-trophy"&gt;&lt;/span&gt; Hydra Highlights of 2025:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;&lt;p&gt;3.2 centuries of single-core CPU compute time used&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;21.4 years of GPU compute time used&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A total of 963,894 jobs were run, with the vast majority lasting less than 1 hour&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;About 30% of the jobs were responsible for 99% of the used CPU compute time
and 97% of the GPU compute time&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There were 562 unique users active on Hydra in 2025&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;369 new VUB VSC accounts were created in 2025&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;figure class="align-default" id="id1"&gt;
&lt;img alt="https://hpc.vub.be/_images/Hydra_Usage_per_month.png" src="https://hpc.vub.be/_images/Hydra_Usage_per_month.png" style="width: 95%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;Usage of Hydra per month&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;&lt;span class="fas fa-star"&gt;&lt;/span&gt; Most important changes in 2025:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;&lt;p&gt;&lt;a class="reference internal" href="../news/2025/vub-ondemand-release/#vub-ondemand-release"&gt;&lt;span class="std std-ref"&gt;New OnDemand web portal for HPC&lt;/span&gt;&lt;/a&gt;. This new portal has drastically lowered the barrier to entry
to the cluster and allows us to offer new (GUI) programs in a
straightforward way.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a class="reference internal" href="../news/2025/anansi-new-ada-gpu/#anansi-new-ada-gpu"&gt;&lt;span class="std std-ref"&gt;New GPU nodes for interactive use on Anansi&lt;/span&gt;&lt;/a&gt;. These nodes have 4x NVIDIA L40S cards which are
ideal for interactive usage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a class="reference internal" href="../news/2025/hydra-new-turin-nodes/#hydra-new-turin-nodes"&gt;&lt;span class="std std-ref"&gt;New compute nodes added to Hydra&lt;/span&gt;&lt;/a&gt;. 24 new compute nodes with 4 nodes having 1.5 TB
of RAM memory to replace the old high memory nodes. All these nodes feature a
Turin CPU (&lt;cite&gt;zen5&lt;/cite&gt;) with NDR InfiniBand (200 Gbps) network connection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a class="reference internal" href="../news/2025/update-rocky-9/#update-rocky-9"&gt;&lt;span class="std std-ref"&gt;The cluster was upgraded to a new major operating system release:
Rocky Linux 9&lt;/span&gt;&lt;/a&gt;. The operation was done without downtime.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a class="reference internal" href="../news/2025/bye-bye-skylakes/#bye-bye-skylakes"&gt;&lt;span class="std std-ref"&gt;Last goodbye to the Skylake partition&lt;/span&gt;&lt;/a&gt;. All Skylake worker nodes were decommissioned. They
served the cluster well since 2018.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a class="reference internal" href="../news/2025/hydra-login-rocky-9/#hydra-login-rocky-9"&gt;&lt;span class="std std-ref"&gt;The login nodes were upgraded to Rocky Linux 9&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a class="reference internal" href="../news/2025/hpc-data-recovery/#hpc-data-recovery"&gt;&lt;span class="std std-ref"&gt;New command tool to recover lost data&lt;/span&gt;&lt;/a&gt;. Now it is easier than ever to recover accidentally
deleted data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a class="reference internal" href="../news/2025/vub-ondemand-update-2024a/#vub-ondemand-update-2024a"&gt;&lt;span class="std std-ref"&gt;The notebook platform was shut down&lt;/span&gt;&lt;/a&gt; to be
replaced by our Open OnDemand portal as it offers a superior experience.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We launched a closed pilot phase for the newly purchased GPU nodes with NVIDIA
H200 cards (&lt;a class="reference internal" href="../news/2026/hydra-new-hopper-nodes/#hydra-new-hopper-nodes"&gt;&lt;span class="std std-ref"&gt;which are in production in the mean time&lt;/span&gt;&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Outreach effort: we gave HPC and Pixiu introduction courses to many different
research groups during the year.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;section id="users"&gt;
&lt;span id="overview-users"&gt;&lt;/span&gt;&lt;h2&gt;Users&lt;/h2&gt;
&lt;p&gt;There were 599 unique users of the VUB HPC clusters in 2025 (&lt;em&gt;i.e.&lt;/em&gt; users who submitted at
least one job), compared to 383 in &lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-users"&gt;&lt;span class="std std-ref"&gt;2024&lt;/span&gt;&lt;/a&gt;:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;&lt;p&gt;513 unique users of the CPU nodes (&lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-users"&gt;&lt;span class="std std-ref"&gt;2024&lt;/span&gt;&lt;/a&gt;: 363)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;208 unique users of the GPU nodes (&lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-users"&gt;&lt;span class="std std-ref"&gt;2024&lt;/span&gt;&lt;/a&gt;: 130)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;37 unique users who only used the interactive cluster Anansi&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table table-left" id="id2"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;Distribution of all users by employment type&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;p&gt;Employment Type&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Users&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Relative&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;UZB&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;2&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.4%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Guest professor&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;7&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1.2%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Administrative/technical staff&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;12&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;2.1%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Professor&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;14&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;2.5%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Non-VUB&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;46&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;8.2%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Students&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;172&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;30.6%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Researcher&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;309&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;55.0%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;As expected, the majority of users on our clusters are researchers, but this
year the share of students increased slightly, by 5%. Note that the
&lt;em&gt;Students&lt;/em&gt; category only includes students up to the Master level.&lt;/p&gt;
&lt;p&gt;The students are a diverse group, yet they all come from the Science or
Engineering faculties. The majority of students on VUB HPC are Master students,
which is the education level where most training courses take place.&lt;/p&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table table-left" id="id3"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;Type of education program by students on VUB HPC&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Users&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Relative&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Bachelor&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;5&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;2.9%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Guest&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;35&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;20.3%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Master&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;132&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;76.7%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;A &lt;em&gt;Guest&lt;/em&gt; here refers to a student following a course at VUB that is not
their main education program (for example a joint program with another
university).&lt;/p&gt;
&lt;figure class="align-default" id="id4"&gt;
&lt;img alt="https://hpc.vub.be/_images/active_users_per_month_2025.png" src="https://hpc.vub.be/_images/active_users_per_month_2025.png" style="width: 95%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;Distribution per month showing the busiest periods of the year&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;The monthly number of active users is almost consistently higher than last
year’s, to the point where the 2024 maximum is close to the 2025 minimum.
Especially the last two months of 2025 saw many active users. This increase
was mainly driven by two factors: courses taught on the HPC during that period,
and many new users who, we suspect, joined the HPC after the multiple
outreach campaigns we carried out in the preceding weeks.&lt;/p&gt;
&lt;p&gt;We saw a steep increase in new VSC accounts for VUB in 2025: &lt;strong&gt;369 new accounts&lt;/strong&gt;
were created (compared to 237 in &lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-users"&gt;&lt;span class="std std-ref"&gt;2024&lt;/span&gt;&lt;/a&gt;). This is the
second year in a row with a steep increase in the number of created accounts.&lt;/p&gt;
&lt;figure class="align-default" id="id5"&gt;
&lt;img alt="https://hpc.vub.be/_images/vsc_accounts_per_year_20251.png" src="https://hpc.vub.be/_images/vsc_accounts_per_year_20251.png" style="width: 95%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;New VSC accounts created per year for VUB&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Our efforts to attract more people to the cluster are paying off. In 2025, we
invested significant effort in this through targeted introductions and
trainings at the request of specific research groups. If you are also
interested in such introductions, or would like to receive custom trainings
for your research group, please contact us at &lt;a class="falink reference external" href="mailto:hpc&amp;#37;&amp;#52;&amp;#48;vub&amp;#46;be"&gt;&lt;span class="fa-solid fa-envelope"&gt;&lt;/span&gt;VUB-HPC Support&lt;/a&gt;.&lt;/p&gt;
&lt;/section&gt;
&lt;section id="cpu-usage"&gt;
&lt;span id="overview-cpu"&gt;&lt;/span&gt;&lt;h2&gt;CPU Usage&lt;/h2&gt;
&lt;p&gt;On average, CPU usage on the cluster was 63%, slightly higher than in 2024.
The usage pattern matches that of &lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-cpu"&gt;&lt;span class="std std-ref"&gt;2024&lt;/span&gt;&lt;/a&gt;, &lt;cite&gt;i.e.&lt;/cite&gt; large
weekly usage fluctuations with higher usage on weekdays and lower usage on
the weekend.&lt;/p&gt;
&lt;figure class="align-default" id="id6"&gt;
&lt;img alt="https://hpc.vub.be/_images/CPU_Usage_of_Hydra.png" src="https://hpc.vub.be/_images/CPU_Usage_of_Hydra.png" style="width: 95%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;CPU compute time on all partitions of Hydra for 2025. It is shown as a percentage
of the theoretical maximum capacity of the cluster.&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;As in previous years, the months April and May continue to be busy periods on
the cluster but August also stands out, despite being a holiday month.&lt;/p&gt;
&lt;div class="admonition tip"&gt;
&lt;p class="admonition-title"&gt;Tip&lt;/p&gt;
&lt;p&gt;Usage during weekends is systematically low, so if you are in a rush to get your jobs
started quickly, the weekend is the best time to submit them.&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;Usage of the VUB HPC clusters is dominated by VUB users. On one hand this is
expected, but on the other hand usage by the other VSC sites is almost
negligible, which points to a potential area for improvement in the future.&lt;/p&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table table-left" id="id7"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;Distribution of used CPU compute time per VSC institute&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;p&gt;Institute&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Usage&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;VUB&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;99.1%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;UAntwerpen&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.0%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;KULeuven&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.4%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;UGent&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.4%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;&lt;span class="fas fa-lightbulb"&gt;&lt;/span&gt; About 80% of CPU compute time is used by just 8% of the users, indicating that a small group dominates the workload on our clusters.
This situation is partially explained by the size of Hydra itself. As
it is a relatively small cluster with many different smaller partitions, it is
not difficult for a few users to use most of its capacity.
We also see many users who use the system very infrequently,
submitting only a few jobs with long intervals in between.
If you are one of those users and difficulties in running your computational
jobs are holding you back, please contact us at &lt;a class="falink reference external" href="mailto:hpc&amp;#37;&amp;#52;&amp;#48;vub&amp;#46;be"&gt;&lt;span class="fa-solid fa-envelope"&gt;&lt;/span&gt;VUB-HPC Support&lt;/a&gt; and we will help you.&lt;/p&gt;
&lt;p&gt;For the remainder of the analysis we only consider jobs which ran for at least
30 minutes as shorter jobs are assumed to be tests or failed jobs. These jobs
represent 99% of the used compute time.&lt;/p&gt;
&lt;div class="admonition note"&gt;
&lt;p class="admonition-title"&gt;Note&lt;/p&gt;
&lt;p&gt;A typical CPU job (90% of all jobs) runs on a single node, uses 16 or fewer
cores and ends in less than 20 hours.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table" id="id8"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;The percentiles of CPU jobs by number of cores, nodes and walltime.&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;p&gt;Percentile&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Cores&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Nodes&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Walltime&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;0.500&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 04:41:35&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;0.750&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;4&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 12:37:48&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;0.800&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;5&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 15:04:51&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;0.900&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;16&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 20:00:21&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;0.950&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;20&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1 days 08:47:06&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;0.990&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;64&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;3&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;3 days 14:04:11&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;0.999&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;210&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;10&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;5 days 00:00:20&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;We see that jobs in 2025 used slightly more cores than in the previous year. For
instance, the number of cores at the 95th percentile is 20 vs 16 in &lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-cpu"&gt;&lt;span class="std std-ref"&gt;2024&lt;/span&gt;&lt;/a&gt;, and at the 99.9th percentile it increased to 210. In general, the
duration of jobs is the same as in the previous year across all percentiles. The vast
majority of jobs complete in less than a day.&lt;/p&gt;
&lt;p&gt;If we look at the total used CPU time, split by the number of cores used in the job,
we see that this matches the percentiles based on the number of
jobs. Small jobs of 1-16 cores used about 32% of the compute time, while jobs
of up to 64 cores used about 83%. Compared to &lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-cpu"&gt;&lt;span class="std std-ref"&gt;2024&lt;/span&gt;&lt;/a&gt;
we see a shift to larger jobs, which follows the increasing number of cores on a single node.&lt;/p&gt;
&lt;figure class="align-default" id="id9"&gt;
&lt;img alt="https://hpc.vub.be/_images/Used_CPU_Time_split_by_the_number_of_used_CPUs2.png" src="https://hpc.vub.be/_images/Used_CPU_Time_split_by_the_number_of_used_CPUs2.png" style="width: 80%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;Total CPU time used split by number of CPU cores of the job&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;The same analysis on number of nodes shows that the majority of jobs in our
clusters are single node jobs. This matches with usage patterns of previous
years.&lt;/p&gt;
&lt;figure class="align-default" id="id10"&gt;
&lt;img alt="https://hpc.vub.be/_images/Used_CPU_Time_split_by_the_number_of_used_nodes2.png" src="https://hpc.vub.be/_images/Used_CPU_Time_split_by_the_number_of_used_nodes2.png" style="width: 80%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;Total CPU time used split by number of nodes used in the job&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;In conclusion:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;&lt;p&gt;There were many &lt;em&gt;small&lt;/em&gt; jobs: 70% of the jobs ran on a single node and almost
60% on 40 or fewer cores.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There were many &lt;em&gt;short&lt;/em&gt; jobs: 90% ran less than 20 hours, 50% even less than
5 hours.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As the load on the cluster was not that high, the queuing time was
short for the large majority of jobs (&amp;gt;90%):&lt;/p&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table" id="id11"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;Queueing time of jobs by CPU resources requested: single core (1 CPU
core in 1 node), single node (any number of cores in 1 node) and
multi node (any number of cores in 2 or more nodes)&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;p&gt;Percentile&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;All jobs&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Single core&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Single Node&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Multi Node&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;0.50&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 00:00:33&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 00:00:33&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 00:00:33&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 00:00:24&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;0.75&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 01:33:46&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 00:17:24&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 01:31:14&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 03:54:02&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;0.80&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 03:27:59&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 00:58:08&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 03:23:51&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 06:33:10&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;0.90&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 10:58:48&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 05:31:54&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 10:49:53&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1 days 04:08:07&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;0.95&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 23:56:04&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 13:19:33&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 23:29:48&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;2 days 08:21:43&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;0.99&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;3 days 07:25:19&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;2 days 01:09:16&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;3 days 05:26:05&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;4 days 14:12:43&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;Queuing time depends on the overall load of the cluster and the
resources requested by the job: the more resources requested, the longer the
queuing time. However, we see that 50% of all jobs start
immediately, regardless of the resources requested, and 75% within 1.5 hours.
Even for multi node jobs, 80% start within 6.5 hours.&lt;/p&gt;
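The three job categories in the queueing-time table follow directly from each job's core and node counts. A minimal sketch, with a hypothetical record layout assumed for illustration:

```python
from datetime import timedelta
from statistics import median

# Hypothetical records: (cores, nodes, queue_time). Real data would come
# from the scheduler's accounting logs.
jobs = [
    (1, 1, timedelta(seconds=33)),
    (20, 1, timedelta(hours=1, minutes=31)),
    (80, 4, timedelta(hours=3, minutes=54)),
]

def category(cores, nodes):
    # Single core: 1 CPU core in 1 node; single node: any number of cores
    # in 1 node; multi node: any number of cores in 2 or more nodes.
    if nodes >= 2:
        return "multi node"
    return "single core" if cores == 1 else "single node"

# Group queue times per category and take the median (50th percentile).
by_cat = {}
for cores, nodes, qt in jobs:
    by_cat.setdefault(category(cores, nodes), []).append(qt)

medians = {cat: median(times) for cat, times in by_cat.items()}
```

Replacing `median` with higher percentiles yields the remaining rows of the table.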
&lt;p&gt;The previous queuing times are averages for the entire year, which flattens
out very busy moments when users can experience much longer queuing times. The
following graph breaks down queue times by month:&lt;/p&gt;
&lt;figure class="align-default" id="id12"&gt;
&lt;img alt="https://hpc.vub.be/_images/Percentiles_for_queue_time_per_month_for_CPUs2.png" src="https://hpc.vub.be/_images/Percentiles_for_queue_time_per_month_for_CPUs2.png" style="width: 95%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;Average queuing time on CPU nodes per month in 2025&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;The months of July and October stand out with their higher queuing times. For
October this is expected, as it is the beginning of the academic year with many
new users and trainings. The peak in July, however, is not, and we have no
explanation for it beyond a guess that it might be the rush before the
summer break. The 90th percentile shows very high peaks because it captures the
largest jobs submitted to the cluster.&lt;/p&gt;
&lt;p&gt;Due to the large quantity of short jobs submitted to the cluster, we see that
the queuing time on Saturday and Sunday is significantly shorter than during
the rest of the week. Conversely, Mondays and Tuesdays have the highest queuing
times.&lt;/p&gt;
&lt;figure class="align-default" id="id13"&gt;
&lt;img alt="https://hpc.vub.be/_images/Active_user_per_month_and_type_for_CPU_jobs.png" src="https://hpc.vub.be/_images/Active_user_per_month_and_type_for_CPU_jobs.png" style="width: 95%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;Active users on CPU nodes per month and grouped by type&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;If we look at the active users per month, we see no surprises: April and
November are the months with the most people on the cluster.
This year, November was particularly busy because Hydra was used for a training
course with a relatively large number of students. If you would also like to
use our HPC clusters for teaching, contact us at &lt;a class="falink reference external" href="mailto:hpc&amp;#37;&amp;#52;&amp;#48;vub&amp;#46;be"&gt;&lt;span class="fa-solid fa-envelope"&gt;&lt;/span&gt;VUB-HPC Support&lt;/a&gt;.&lt;/p&gt;
&lt;div class="admonition note"&gt;
&lt;p class="admonition-title"&gt;Note&lt;/p&gt;
&lt;p&gt;In the following tables, which distribute usage across different entities and
user types, you will find two special categories:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Non-VUB&lt;/em&gt; are external users from other VSC Sites&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Students&lt;/em&gt; are students up to the Master level that are not affiliated to
any department or research group&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table table-left" id="id14"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;CPU compute time used by faculty&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;p&gt;Faculty&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Usage&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Faculty of Social Sciences and Solvay Business School&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.4%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;&lt;em&gt;Non-VUB&lt;/em&gt;&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.9%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Faculty of Medicine and Pharmacy&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;2.1%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Department ICT&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;2.3%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;&lt;em&gt;Students&lt;/em&gt;&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;13.6%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Faculty of Engineering&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;36.9%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Faculty of Sciences and Bioengineering Sciences&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;43.8%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;We only show faculties that used at least 0.1% of the total compute time of
the year. As expected, the Faculties of Sciences and (Bio-)Engineering use the
largest share of the CPU compute time. However, compared to &lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-cpu"&gt;&lt;span class="std std-ref"&gt;2024&lt;/span&gt;&lt;/a&gt; we see that the relative share of the Faculty of
Engineering has more than doubled, while that of the Faculty of Sciences has
decreased by 20%.&lt;/p&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table table-left" id="id15"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;CPU compute time used by department&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;p&gt;Department&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Usage&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Clinical sciences&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.2%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Geography&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.2%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Business technology and Operations&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.4%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Department of Water and Climate&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.5%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Electronics and Informatics&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.7%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Department of Bio-engineering Sciences&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.7%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;&lt;em&gt;Non-VUB&lt;/em&gt;&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.9%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Basic (bio-) Medical Sciences&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.9%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Pharmaceutical and Pharmacological Sciences&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1.0%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Engineering Technology&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1.7%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Physics&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1.8%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Applied Mechanics&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1.9%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Administrative Information Processing&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;2.3%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Biology&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;3.1%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Applied Physics and Photonics&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;3.1%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Electrical Engineering and Power Electronics&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;9.8%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Informatics and Applied Informatics&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;10.1%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;&lt;em&gt;Students&lt;/em&gt;&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;13.6%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Materials and Chemistry&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;19.0%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Chemistry&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;27.9%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;The overview of used compute time per department reveals the actual use of the
cluster per scientific domain. Compared to &lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-cpu"&gt;&lt;span class="std std-ref"&gt;2024&lt;/span&gt;&lt;/a&gt;,
the number of departments using a significant portion of the cluster has
grown.&lt;/p&gt;
&lt;p&gt;We welcome two departments new to the cluster: &lt;em&gt;‘Basic (bio-) Medical
Sciences’&lt;/em&gt; and &lt;em&gt;‘Business technology and Operations’&lt;/em&gt;. The &lt;em&gt;‘Chemistry’&lt;/em&gt;
department is still the biggest user, but its share dropped by 12% compared
to 2024. The departments of &lt;em&gt;‘Materials and Chemistry’&lt;/em&gt; and &lt;em&gt;‘Electrical
Engineering and Power Electronics’&lt;/em&gt; are pushing up and have increased their
usage of Hydra the most.&lt;/p&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table table-left" id="id16"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;CPU compute time usage distributed by employment type&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;p&gt;Employment Type&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Usage&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Professor&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.3%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;&lt;em&gt;Non-VUB&lt;/em&gt;&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.9%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Administrative/technical staff&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;2.3%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Guest professor&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;2.9%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;&lt;em&gt;Students&lt;/em&gt;&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;13.6%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Researcher&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;80.1%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;/section&gt;
&lt;section id="gpu-usage"&gt;
&lt;span id="overview-gpu"&gt;&lt;/span&gt;&lt;h2&gt;GPU Usage&lt;/h2&gt;
&lt;p&gt;The load on the cluster’s GPUs averaged 74%, with some periods
of low usage and others where all GPUs were constantly at 100%.&lt;/p&gt;
&lt;figure class="align-default" id="id17"&gt;
&lt;img alt="https://hpc.vub.be/_images/GPU_Usage_of_Hydra.png" src="https://hpc.vub.be/_images/GPU_Usage_of_Hydra.png" style="width: 95%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;GPU compute time on all partition of Hydra for 2025, shown as a percentage
of the theorical maximum capacity of the cluster.&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;&lt;span class="fas fa-lightbulb"&gt;&lt;/span&gt; About 85% of the used GPU time comes from just 10% of the
users, similar to what we see on CPUs. On GPUs it is even more exacerbated as
those resources are much more scarce and not all users can use a software
application with support for GPUs. So it is more probable that a
number of power-users can dominate the workload on them.&lt;/p&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table table-left" id="id18"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;Distribution of used GPU time per VSC institute on VUB HPC&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;p&gt;Institute&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Usage&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;VUB&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;98.5%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;UAntwerpen&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.0%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;KULeuven&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1.5%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;UGent&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.0%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;For the remainder of the analysis we only consider jobs that ran for at least
30 minutes, as shorter jobs are assumed to be tests or failed jobs. The jobs
analysed represent 96.9% of the used GPU time.&lt;/p&gt;
&lt;div class="admonition note"&gt;
&lt;p class="admonition-title"&gt;Note&lt;/p&gt;
&lt;p&gt;A typical GPU job (90% of all jobs) uses a single GPU and runs for less than
16 hours.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table" id="id19"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;The percentiles of GPU jobs by number of GPUs, nodes and walltime.&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;p&gt;Percentile&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;GPUs&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Nodes&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Walltime&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;0.500&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 01:46:42&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;0.750&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 06:14:59&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;0.800&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 07:57:56&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;0.900&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 15:47:53&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;0.950&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1 days 04:26:29&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;0.990&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;4&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;2&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;3 days 22:14:01&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;0.999&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;6&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;3&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;5 days 00:00:21&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;Looking at the used GPU time split by the number of GPUs per job, we see
that this matches the previous percentiles based on the number of jobs.&lt;/p&gt;
&lt;figure class="align-default" id="id20"&gt;
&lt;img alt="https://hpc.vub.be/_images/Used_GPU_Time_split_by_the_number_of_used_GPUs2.png" src="https://hpc.vub.be/_images/Used_GPU_Time_split_by_the_number_of_used_GPUs2.png" style="width: 80%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;Total GPU compute time used split by number of GPUs in the job&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;In conclusion:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;&lt;p&gt;The majority of GPU jobs (78%) uses a single GPU&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There are many &lt;em&gt;short&lt;/em&gt; GPU jobs: 80% of jobs run for less than 8 hours, 50%
for less than 2 hours&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Compared to &lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-gpu"&gt;&lt;span class="std std-ref"&gt;2024&lt;/span&gt;&lt;/a&gt;, single GPU jobs
are slightly less dominant, while at the same time job durations have decreased.&lt;/p&gt;
&lt;p&gt;The amount of &lt;a class="vsclink reference external" href="https://docs.vscentrum.be/brussel/tier2_hardware/hydra.html#gpu-nodes"&gt;&lt;span class="sd-badge"&gt;&lt;span&gt;VSCdoc&lt;/span&gt;&lt;/span&gt;GPU
resources&lt;/a&gt; in Hydra is a lot
smaller than its &lt;a class="vsclink reference external" href="https://docs.vscentrum.be/brussel/tier2_hardware/hydra.html#cpu-only-nodes"&gt;&lt;span class="sd-badge"&gt;&lt;span&gt;VSCdoc&lt;/span&gt;&lt;/span&gt;CPU
resources&lt;/a&gt;,
which is reflected in the longer queuing times observed for GPU jobs. Moreover, we
see a general increase in the use of AI/ML techniques and thus a growing
demand for GPU resources. The latest extension of Hydra is a set of 5 new
GPU nodes, each equipped with two NVIDIA H200 GPUs. It is expected that these
nodes will become &lt;a class="reference internal" href="../news/2026/hydra-new-hopper-nodes/#hydra-new-hopper-nodes"&gt;&lt;span class="std std-ref"&gt;generally available in early 2026&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table" id="id21"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;Queueing time of jobs by GPU resources requested: Single GPU (1
GPU), Multi GPU (2 or more GPUs in any number of nodes)&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;p&gt;Percentile&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;All jobs&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Single GPU&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Multi GPU&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;0.500&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 02:35:47&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 02:37:05&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 01:57:54&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;0.750&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 10:52:42&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 10:42:30&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 20:41:50&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;0.800&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 14:31:33&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0 days 14:18:03&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;2 days 00:19:04&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;0.900&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1 days 03:24:32&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1 days 02:08:11&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;5 days 22:06:49&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;0.950&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1 days 19:42:51&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1 days 16:45:14&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;9 days 06:02:25&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;0.990&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;5 days 13:16:56&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;4 days 07:01:52&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;11 days 19:20:49&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;0.999&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;10 days 10:32:20&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;8 days 17:57:01&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;12 days 10:57:28&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;The load on the GPU nodes was slightly lower compared to &lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-gpu"&gt;&lt;span class="std std-ref"&gt;2024&lt;/span&gt;&lt;/a&gt;
and the queuing time was also slightly lower (ignoring the 99.9th percentile). Shorter
walltimes for GPU jobs give the scheduler higher throughput and thus shorter
queuing times. Not unexpectedly, the queuing time for multi GPU jobs was quite a bit higher.
All in all, 50% of the GPU jobs started within 2.5 hours and 90% within a day.&lt;/p&gt;
&lt;p&gt;The previous queuing times are averages for the entire year, which flattens
out very busy moments when users can experience much longer queuing times. The
following graph breaks down queue times by month:&lt;/p&gt;
&lt;figure class="align-default" id="id22"&gt;
&lt;img alt="https://hpc.vub.be/_images/Percentiles_for_queue_time_per_month_for_GPUs2.png" src="https://hpc.vub.be/_images/Percentiles_for_queue_time_per_month_for_GPUs2.png" style="width: 95%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;Average queuing time on GPU nodes by month in 2025&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;As expected, queuing times were higher during busy periods on the cluster,
but the 50th percentile queue time remained low on average. The spike
in October has no clear explanation, as the load was not significantly different.&lt;/p&gt;
&lt;p&gt;The weekend effect is also observed for GPU jobs, although it is less pronounced.
Oddly enough, jobs submitted on a Thursday also had a distinctly lower queuing time.&lt;/p&gt;
&lt;figure class="align-default" id="id23"&gt;
&lt;img alt="https://hpc.vub.be/_images/Active_user_per_month_and_type_for_GPU_jobs.png" src="https://hpc.vub.be/_images/Active_user_per_month_and_type_for_GPU_jobs.png" style="width: 95%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;Monthly active users on GPU nodes, grouped by type&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Looking at the active users per month, we see that the number of GPU users was
quite stable over the year. However, the sharp increases in March and November
pushed the yearly growth rate into positive territory, surpassing the
&lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-gpu"&gt;&lt;span class="std std-ref"&gt;2024 level&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;div class="admonition note"&gt;
&lt;p class="admonition-title"&gt;Note&lt;/p&gt;
&lt;p&gt;In the following tables, which distribute usage across different entities and
user types, you will find two special categories:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Non-VUB&lt;/em&gt; are external users from other VSC Sites&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Students&lt;/em&gt; are students up to the Master level that are not affiliated to
any department or research group&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table table-left" id="id24"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;GPU compute time used by faculty&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;p&gt;Faculty&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Usage&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Department Research&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.2%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Department ICT&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1.4%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;&lt;em&gt;non-VUB&lt;/em&gt;&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1.5%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Faculty of Medicine and Pharmacy&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;4.4%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;&lt;em&gt;Students&lt;/em&gt;&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;12.3%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Faculty of Social Sciences and Solvay Business School&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;20.4%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Faculty of Engineering&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;24.5%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Faculty of Sciences and Bioengineering Sciences&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;35.3%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;We only show the faculties that use at least 0.1% of total compute time.
&lt;em&gt;‘Students’&lt;/em&gt; usage dropped by 10% compared to &lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-gpu"&gt;&lt;span class="std std-ref"&gt;2024&lt;/span&gt;&lt;/a&gt;, while
the top 3 faculties increased their usage. The &lt;em&gt;‘Faculty of Social Sciences and
Solvay Business School’&lt;/em&gt; stands out as a major GPU user, as it is a
comparatively small user of Hydra’s CPU-only compute nodes.&lt;/p&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table table-left" id="id25"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;GPU compute time used by department&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;p&gt;Department&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Usage&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Research coordination&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.2%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Materials and Chemistry&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.2%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Electricity&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.2%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Applied Mechanics&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.4%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Basic (bio-) Medical Sciences&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1.1%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Pharmaceutical and Pharmacological Sciences&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1.2%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Administrative Information Processing&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1.4%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;&lt;em&gt;Non-VUB&lt;/em&gt;&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1.5%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Department of Water and Climate&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1.6%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Chemistry&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;2.0%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Clinical sciences&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;2.1%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Department of Bio-engineering Sciences&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;2.5%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Engineering Technology&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;5.3%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Electrical Engineering and Power Electronics&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;6.2%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;WIDSWE&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;7.7%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Electronics and Informatics&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;10.5%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;&lt;em&gt;Students&lt;/em&gt;&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;12.3%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Business technology and Operations&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;20.3%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Informatics and Applied Informatics&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;23.0%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;The overview of compute time used per department reveals the actual use of the
GPUs per scientific domain.
The top 3 departments have not changed, but the list has grown longer
compared to &lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-gpu"&gt;&lt;span class="std std-ref"&gt;2024&lt;/span&gt;&lt;/a&gt;. Some departments
(like &lt;em&gt;‘WIDSWE’&lt;/em&gt;) have grown their GPU usage considerably.
A major newcomer is &lt;em&gt;‘Clinical Sciences’&lt;/em&gt;, which now uses more than 2% of the
GPU time. We also welcome the departments of &lt;em&gt;‘Applied Mechanics’&lt;/em&gt;,
&lt;em&gt;‘Electricity’&lt;/em&gt; and &lt;em&gt;‘Materials and Chemistry’&lt;/em&gt; as new users of our GPU nodes.&lt;/p&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table table-left" id="id26"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;GPU compute time usage distributed by employment type&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;p&gt;Employment Type&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;relative&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Professor&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.1%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Administrative/technical staff&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1.4%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;&lt;em&gt;Non-VUB&lt;/em&gt;&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1.5%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Guest professor&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;6.1%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;&lt;em&gt;Students&lt;/em&gt;&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;12.3%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Researcher&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;78.6%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;/section&gt;
&lt;section id="software-usage"&gt;
&lt;span id="overview-software"&gt;&lt;/span&gt;&lt;h2&gt;Software Usage&lt;/h2&gt;
&lt;p&gt;In 2025, there were 3,155 modules available across all partitions of Hydra, representing 1,317 unique software installations and 3,844 unique extensions.
The following chart shows the most used software, counting the number of users
who loaded the module directly in the terminal or in their jobs.&lt;/p&gt;
&lt;figure class="align-default" id="id27"&gt;
&lt;img alt="https://hpc.vub.be/_images/top10_module_loads1.png" src="https://hpc.vub.be/_images/top10_module_loads1.png" style="width: 80%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;Top software modules loaded during 2025 in VUB HPC&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Following last year’s trend, Python remains the dominant language
environment in the cluster, with the top 6 software modules all being Python packages.&lt;/p&gt;
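For reference, each count in the chart corresponds to a user running a command like the one below, either in the terminal or in a job script (the module version shown is purely illustrative):

```shell
# load a Python module with the module system; available versions
# can be listed with 'module av Python'
module load Python/3.12.3-GCCcore-13.3.0
```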
&lt;/section&gt;
&lt;section id="open-ondemand-usage"&gt;
&lt;span id="overview-ood"&gt;&lt;/span&gt;&lt;h2&gt;Open OnDemand Usage&lt;/h2&gt;
&lt;p&gt;In 2025 we launched a &lt;a class="reference internal" href="../news/2025/vub-ondemand-release/#vub-ondemand-release"&gt;&lt;span class="std std-ref"&gt;web portal&lt;/span&gt;&lt;/a&gt; at
&lt;a class="reference external" href="https://portal.hpc.vub.be"&gt;portal.hpc.vub.be&lt;/a&gt; based on the &lt;a class="reference external" href="https://openondemand.org"&gt;Open OnDemand&lt;/a&gt; platform.
It has been a game changer for our user community, greatly lowering the barrier
to entry to the cluster. Functionally replacing the HPC notebook platform,
it offers more flexibility and extensibility, which led to &lt;a class="reference internal" href="../news/2025/vub-ondemand-update-2024a/#vub-ondemand-update-2024a"&gt;&lt;span class="std std-ref"&gt;the decommissioning of the notebook platform&lt;/span&gt;&lt;/a&gt; in September.&lt;/p&gt;
&lt;p&gt;The Anansi cluster was launched last year with the goal of offering an interactive
service to test, develop and debug on the cluster. The CPU cores are
&lt;cite&gt;oversubscribed&lt;/cite&gt;, which means they can be used by multiple jobs at once. Of the CPU jobs,
90% started immediately and 99% within 2 hours. For GPU jobs the resources are more limited
and the queueing time reflects this: 90% of the jobs started within 3 hours.
To preserve the interactive experience on the cluster, we limit the amount of
resources a single user can use.
This year, we extended the Anansi cluster with &lt;a class="reference internal" href="../news/2025/anansi-new-ada-gpu/#anansi-new-ada-gpu"&gt;&lt;span class="std std-ref"&gt;2 new nodes&lt;/span&gt;&lt;/a&gt;,
each offering 4 NVIDIA L40S GPUs for interactive use.&lt;/p&gt;
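As a sketch of how such oversubscription can be expressed in Slurm (the partition name, node list and factor below are illustrative assumptions, not the actual Anansi configuration):

```shell
# slurm.conf sketch: let up to 4 jobs share each CPU core on an
# interactive partition (all values shown here are made up)
PartitionName=interactive Nodes=node[401-402] OverSubscribe=FORCE:4 State=UP
```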
&lt;p&gt;In total, &lt;strong&gt;10,513&lt;/strong&gt; interactive sessions were started in 2025 on
&lt;a class="reference external" href="https://portal.hpc.vub.be"&gt;portal.hpc.vub.be&lt;/a&gt;. Those sessions were launched by 313 unique users on both
Hydra and Anansi. This means that 52% of the total user base has used the portal
at least once.&lt;/p&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table table-left" id="id28"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;Top applications launched in &lt;a class="reference external" href="https://portal.hpc.vub.be"&gt;portal.hpc.vub.be&lt;/a&gt;&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;p&gt;Name&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Sessions&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Users&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Slicer&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;50&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;11&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;GaussView&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;91&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;18&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;ParaView&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;99&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;9&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;MATLAB&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;123&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;23&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Tensorboard&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;137&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;9&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;VNC Desktop v2&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;210&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;21&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Rstudio&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;657&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;33&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Interactive Shell&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;953&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;128&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;VS Code Tunnel&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1154&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;62&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;VS Code Server&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1552&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;76&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Jupyter lab&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;2159&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;136&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;VNC Desktop&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;3098&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;134&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;The desktop environment is clearly the most popular application, which was a
surprise to the SDC team.&lt;/p&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table table-left" id="id29"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;&lt;a class="reference external" href="https://portal.hpc.vub.be"&gt;portal.hpc.vub.be&lt;/a&gt; usage by user type&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;p&gt;Employment Type&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Users&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Of total&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Guest professor&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;14.3%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;UZB&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;50.0%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Professor&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;9&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;64.3%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Administrative/technical staff&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;12&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;100.0%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;&lt;em&gt;Non-VUB&lt;/em&gt;&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;26&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;56.5%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;&lt;em&gt;Students&lt;/em&gt;&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;97&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;56.4%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Researcher&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;167&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;54.0%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;/section&gt;
&lt;section id="user-support"&gt;
&lt;span id="overview-support"&gt;&lt;/span&gt;&lt;h2&gt;User support&lt;/h2&gt;
&lt;p&gt;We received 999 support requests (&lt;em&gt;incidents&lt;/em&gt;) from users, which is again a
substantial increase compared to &lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-support"&gt;&lt;span class="std std-ref"&gt;2024&lt;/span&gt;&lt;/a&gt;. The
monthly distribution follows the academic year and the system load.&lt;/p&gt;
&lt;figure class="align-default" id="id30"&gt;
&lt;img alt="https://hpc.vub.be/_images/Incidents_per_month.png" src="https://hpc.vub.be/_images/Incidents_per_month.png" style="width: 95%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;Support tickets handled by the Scientific Data &amp;amp; Compute team&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table table-left" id="id31"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;Distribution of HPC support tickets per service provided by the
Scientific Data &amp;amp; Compute team&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;p&gt;Business service&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Incident Count&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;Percentage of Incidents&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Pixiu Scientific Research Data&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;336&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;33.6%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;HPC Scientific Software Installation&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;176&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;17.6%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;HPC Jobs Troubleshooting&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;144&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;14.4%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;HPC Consultancy&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;129&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;12.9%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;HPC Data&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;111&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;11.1%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;HPC VSC Accounts &amp;amp; Access&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;70&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;7.0%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;HPC Tier-0 &amp;amp; Tier-1 Projects Advice&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;19&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1.9%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Account and Access Management&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;10&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1.0%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Softweb Software &amp;amp; Licenses&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;3&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.3%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;HPC Workflow Building &amp;amp; Porting&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;1&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;0.1%&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;On the HPC side, the largest share of incidents (~32%) relates to problems
with jobs or to requests for new software installations. But one third of the
incidents relate to Pixiu, which is a huge increase compared to &lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-support"&gt;&lt;span class="std std-ref"&gt;2024&lt;/span&gt;&lt;/a&gt;. It must be noted that not all of those tickets
concern problems with the service; the majority are requests for new accounts
on Pixiu.&lt;/p&gt;
&lt;p&gt;We managed to resolve the large majority of the incidents within 5 working days.&lt;/p&gt;
&lt;figure class="align-default" id="id32"&gt;
&lt;img alt="https://hpc.vub.be/_images/Time_to_solution_-_2025.png" src="https://hpc.vub.be/_images/Time_to_solution_-_2025.png" style="width: 95%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;Time to resolution of support tickets handled by the Scientific Data &amp;amp;
Compute team&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Compared to &lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-support"&gt;&lt;span class="std std-ref"&gt;2024&lt;/span&gt;&lt;/a&gt;, we see that on average it took
more time to resolve incidents. This is probably related to the
increase in the number of incidents for the Pixiu service, where it is often impossible to provide an immediate solution.&lt;/p&gt;
&lt;/section&gt;
&lt;section id="tier-1"&gt;
&lt;span id="overview-tier1"&gt;&lt;/span&gt;&lt;h2&gt;Tier-1&lt;/h2&gt;
&lt;p&gt;There were 3 calls for projects in 2025 for VSC Tier-1 (&lt;em&gt;Hortense&lt;/em&gt;). In total,
18 starting grants were requested by VUB researchers, and 7 full project proposals were
submitted across these 3 calls. All 7 submissions were accepted, resulting in a 100% success rate.&lt;/p&gt;
&lt;figure class="align-default" id="id33"&gt;
&lt;img alt="https://hpc.vub.be/_images/xdmod_Hortense__CPU_Usage1.png" src="https://hpc.vub.be/_images/xdmod_Hortense__CPU_Usage1.png" style="width: 95%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;Usage of CPU Hortense in 2025&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;VUB researchers used around 6.1% of the academic CPU compute time on Tier-1.&lt;/p&gt;
&lt;figure class="align-default" id="id34"&gt;
&lt;img alt="https://hpc.vub.be/_images/xdmod_Hortense__GPU_Usage1.png" src="https://hpc.vub.be/_images/xdmod_Hortense__GPU_Usage1.png" style="width: 95%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;Usage of GPU Hortense in 2025&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;VUB researchers used around 7.3% of the academic GPU compute time on Tier-1.&lt;/p&gt;
&lt;p&gt;Compared to &lt;a class="reference internal" href="../news/2025/overview-of-2024/#overview-tier1"&gt;&lt;span class="std std-ref"&gt;2024&lt;/span&gt;&lt;/a&gt;, we see an increase in the number of starting grants
requested by VUB researchers, while the number of full projects submitted dropped (11 in 2024).
Not unexpectedly, the share of CPU usage by VUB researchers decreased, while the GPU share increased slightly.&lt;/p&gt;
&lt;p&gt;A lot of time and effort was spent in 2025 promoting HPC within the VUB, and the results are
starting to show in the number of active users, but not yet in Tier-1 usage. We hope this will lead to more projects for VUB’s Tier-1 system, Sofia, in 2026.&lt;/p&gt;
&lt;/section&gt;
&lt;section id="vsc-user-survey"&gt;
&lt;span id="vsc-user-survey-results"&gt;&lt;/span&gt;&lt;h2&gt;VSC User survey&lt;/h2&gt;
&lt;p&gt;At the end of 2025, the second edition of the VSC-wide user survey was conducted.
This survey encompasses all VSC services (all Tier-1 components and all
Tier-2 systems).&lt;/p&gt;
&lt;div class="admonition seealso"&gt;
&lt;p class="admonition-title"&gt;See also&lt;/p&gt;
&lt;p&gt;The results of the 2024 survey can be found on
&lt;a class="reference internal" href="../news/2025/overview-of-2024/#vsc-user-survey-results"&gt;&lt;span class="std std-ref"&gt;Overview of 2024&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;The invitation to take part in the survey was sent out on 8 December 2025, and the survey
closed on 15 January 2026. In total, 562 people responded, and 430 of those
completed the survey in full, which is at the same level as in 2024.
There were 76 people affiliated with the VUB among the respondents, an increase
of 38% compared to 2024. Thank you for the feedback! We really appreciate it.&lt;/p&gt;
&lt;figure class="align-default" id="id35"&gt;
&lt;img alt="https://hpc.vub.be/_images/VUB-survey2.png" src="https://hpc.vub.be/_images/VUB-survey2.png" style="width: 95%;" /&gt;
&lt;figcaption&gt;
&lt;p&gt;&lt;span class="caption-text"&gt;Responses from VUB users to the VSC user survey conducted at the end of 2025&lt;/span&gt;&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Looking at the responses from those respondents who indicated that they use VUB’s
Tier-2 systems (Hydra, Anansi), we obtain the following percentages of users who rated
the service as &lt;cite&gt;Excellent&lt;/cite&gt; or &lt;cite&gt;Good&lt;/cite&gt; (excluding responses marked &lt;cite&gt;No
Experience&lt;/cite&gt;):&lt;/p&gt;
&lt;div class="pst-scrollable-table-container"&gt;&lt;table class="table table-left" id="id36"&gt;
&lt;caption&gt;&lt;span class="caption-text"&gt;Results of the VSC user survey for VUB&lt;/span&gt;&lt;/caption&gt;
&lt;thead&gt;
&lt;tr class="row-odd"&gt;&lt;th class="head"&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;2025&lt;/p&gt;&lt;/th&gt;
&lt;th class="head"&gt;&lt;p&gt;2024&lt;/p&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Quality&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;93.5% (58/62)&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;98.1% (51/52)&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Response time&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;88.7% (55/62)&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;98.0% (49/50)&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Correspondence to expectations&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;94.1% (64/68)&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;88.7% (47/53)&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Software installations&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;84.9% (45/53)&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;92.9% (39/42)&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Availability&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;86.4% (57/66)&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;92.6% (50/54)&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Maintenance communication&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;92.8% (64/69)&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;94.0% (47/50)&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-even"&gt;&lt;td&gt;&lt;p&gt;Documentation&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;91.5% (65/71)&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;89.1% (49/55)&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr class="row-odd"&gt;&lt;td&gt;&lt;p&gt;Getting started&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;79.5% (58/73)&lt;/p&gt;&lt;/td&gt;
&lt;td&gt;&lt;p&gt;83.6% (46/55)&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;p&gt;In general, we are quite happy with these results, as they show that our
services are highly valued by our users. Compared to 2024, we observe a small
decrease in most categories, while the score for &lt;cite&gt;Correspondence to
expectations&lt;/cite&gt; improved. In particular, we are pleased to see that the score
for &lt;cite&gt;Documentation&lt;/cite&gt; has evolved in a positive direction.&lt;/p&gt;
&lt;p&gt;We note that 42% of respondents selected &lt;cite&gt;No Experience&lt;/cite&gt; for the &lt;cite&gt;Software
installations&lt;/cite&gt; category. This suggests that users are either satisfied with the
software available on the VUB clusters or prefer to install their own software. The training
on how to install software might have helped here: &lt;a class="github reference external" href="https://github.com/vscentrum/gssi-training"&gt;vscentrum/gssi-training&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Getting started on Hydra still receives the lowest score, which is expected
given the significant learning curve. In response, we have provided additional
training sessions for specific user groups and launched the &lt;a class="vsclink reference external" href="https://docs.vscentrum.be/compute/portal/ondemand/index.html?vsc-sites=vub"&gt;&lt;span class="sd-badge"&gt;&lt;span&gt;VSCdoc&lt;/span&gt;&lt;/span&gt;VUB OnDemand web
portal&lt;/a&gt;, which
significantly lowers the barrier to entry. However, these efforts have not yet
resulted in an improved &lt;cite&gt;Getting started&lt;/cite&gt; score. We will therefore continue to
invest in training opportunities and seek further ways to improve the user
experience. Any suggestions are welcome!&lt;/p&gt;
&lt;p&gt;The score for &lt;cite&gt;Availability&lt;/cite&gt; has dropped by 6%. There was only one unplanned
downtime of Hydra in 2025 to address a &lt;a class="reference internal" href="../news/2025/hydra_security_shutdown/#hydra_security_shutdown"&gt;&lt;span class="std std-ref"&gt;potential security issue&lt;/span&gt;&lt;/a&gt;.
All other updates were carried out without a full downtime. The lower score on
this item might be more related to queue times and resource availability.&lt;/p&gt;
&lt;p&gt;Satisfaction with &lt;cite&gt;Response time&lt;/cite&gt; has dropped by 9%. The growing number of users was felt at the helpdesk on several occasions, creating a backlog of support tickets. Although we managed to resolve this backlog,
it did lead to increased response times.&lt;/p&gt;
&lt;p&gt;From the free-text comments, three main suggestions and requests emerged:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;&lt;p&gt;More GPUs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Perceived long queue times&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Documentation&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As in 2024, GPUs remain in high demand, and queue times can occasionally exceed
acceptable limits. We continue to actively encourage heavy GPU users to migrate
to Tier-1 resources, and we have &lt;a class="reference internal" href="../news/2026/hydra-new-hopper-nodes/#hydra-new-hopper-nodes"&gt;&lt;span class="std std-ref"&gt;recently announced&lt;/span&gt;&lt;/a&gt; the
new &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;hopper_gpu&lt;/span&gt;&lt;/code&gt; partition
on Hydra.  For interactive use, users are encouraged to select GPU shards in
the &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;ada_gpu&lt;/span&gt;&lt;/code&gt; partition of the &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;anansi&lt;/span&gt;&lt;/code&gt; cluster.&lt;/p&gt;
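As an illustration, an interactive session on a GPU shard could be requested along these lines (a sketch only; the exact flags are an assumption and should be checked against our documentation):

```shell
# request 1 GPU shard on the ada_gpu partition for an interactive shell
srun --partition=ada_gpu --gres=shard:1 --pty bash -l
```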
&lt;p&gt;Documentation will always remain a work in progress. We regularly update and
improve our documentation and continue to integrate VUB-specific material into
the &lt;a class="vsclink reference external" href="https://docs.vscentrum.be/"&gt;&lt;span class="sd-badge"&gt;&lt;span&gt;VSCdoc&lt;/span&gt;&lt;/span&gt;VSC documentation&lt;/a&gt;. Users are encouraged to notify us whenever
documentation is unclear, incomplete, or contains errors.&lt;/p&gt;
&lt;p&gt;Some respondents provided contact details along with specific comments. We will
follow up with these users individually to help resolve their issues.&lt;/p&gt;
&lt;p&gt;Finally, we conclude this report with several very positive comments that we
received spontaneously through the survey:&lt;/p&gt;
&lt;blockquote&gt;
&lt;div&gt;&lt;p&gt;&lt;em&gt;“The ict helpdesk people are very responsive.”&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;“Amazing work and support”&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;“I have only had excellent interactions with the HPC staff at the VUB!
They have helped set up my software on the HPC, and even helped me
debugging some specific problems.”&lt;/em&gt;&lt;/p&gt;
&lt;/div&gt;&lt;/blockquote&gt;
&lt;/section&gt;
&lt;section id="pixiu"&gt;
&lt;span id="overview-pixiu"&gt;&lt;/span&gt;&lt;h2&gt;Pixiu&lt;/h2&gt;
&lt;p&gt;Pixiu is the storage platform of VUB dedicated to hosting research data.
It offers high capacity, with more than 2 petabytes of storage, and flexibility through its &lt;a class="reference external" href="https://en.wikipedia.org/wiki/Software-defined_storage"&gt;software-defined&lt;/a&gt;
storage architecture.&lt;/p&gt;
&lt;p&gt;Pixiu is owned and hosted by VUB, ensuring the security and privacy of
the data stored in it. All parts of the system are located in our own data
centres on VUB premises and administered by VUB IT staff.&lt;/p&gt;
&lt;p&gt;It stores the home and data directories of VUB VSC users, along with general research data
stored via S3 &lt;a class="reference external" href="https://en.wikipedia.org/wiki/Object_storage"&gt;object storage&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;At the end of 2025, it was storing a total of 1.6 PB of data. The largest portion of the storage is dedicated to HPC (hosting &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;VSC_HOME&lt;/span&gt;&lt;/code&gt; and &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;VSC_DATA&lt;/span&gt;&lt;/code&gt;) and to backups of VUB production data.&lt;/p&gt;
&lt;p&gt;For the object storage, there are 106 groups with 288 buckets. In 2025, the number of buckets grew by 188, distributed across 73 groups. We created 908 new credentials for using Pixiu; not all of these correspond to individual users, as some are for machines or applications. This brings the total number of credentials in use on Pixiu to 1,663.&lt;/p&gt;
&lt;p&gt;&lt;span class="fas fa-star"&gt;&lt;/span&gt; Most important changes on Pixiu in 2025:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Pixiu has two identical components in two datacentres. In 2025, we moved the part that was
located in the ULB datacentre on the Solbosch campus to the Nexus datacentre in the Researchpark Zellik.&lt;/p&gt;
&lt;p&gt;This was a complex operation that required extensive planning and coordination. Thanks to the thorough
preparation, the actual move went smoothly and without any significant downtime.&lt;/p&gt;
&lt;p&gt;To prepare, an additional 2 PB of storage was purchased and installed in the Nexus datacentre.
This system contained a full backup of all the data, ensuring that there were always two copies of each bit of data on Pixiu.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pixiu was extended with 76 TB of NVMe flash storage, usable both as a cache for object
storage and as high-performance storage for HPC, storing &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;apps&lt;/span&gt;&lt;/code&gt; and &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;VSC_HOME&lt;/span&gt;&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The goal was to switch to synchronous replication between the two datacentres. Currently, in the event of a disaster at the primary datacentre, we could lose the last few minutes of data.
With the new setup, this should be reduced to (micro)seconds. An extra benefit is that both datacentres can be used simultaneously. However, this change turned out to be more complex than
expected. The switch is planned for early 2026. Once completed, the additional 2 PB of storage will
be integrated into Pixiu, bringing the net capacity to 3 PB.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In 2025 we set up a new &lt;a class="reference internal" href="../docs/data/management/#data-globus"&gt;&lt;span class="std std-ref"&gt;Globus endpoint&lt;/span&gt;&lt;/a&gt; to connect Pixiu to the Globus data
transfer service. This allows users to easily transfer data between Pixiu and other Globus-connected storage systems worldwide. The endpoint is available as &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;VUB&lt;/span&gt; &lt;span class="pre"&gt;Pixiu&lt;/span&gt;&lt;/code&gt;.
All new accounts created since September have direct access, while older accounts still need to be migrated. Through Globus, it is also easy to &lt;a class="vsclink reference external" href="https://docs.vscentrum.be/globus/sharing.html"&gt;&lt;span class="sd-badge"&gt;&lt;span&gt;VSCdoc&lt;/span&gt;&lt;/span&gt;share data&lt;/a&gt; with anybody.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/section&gt;
&lt;/section&gt;
</content>
    <link href="https://hpc.vub.be/news/2026/overview-of-2025/"/>
    <summary>2025 marked a year of strong growth and strategic preparation for the SDC team. While
significant effort was invested in preparing for the next VSC Tier-1
Sofia
, the Tier-2 infrastructure (Hydra and Anansi) continued to expand in usage, users, and services.</summary>
    <category term="hydra" label="hydra"/>
    <category term="survey" label="survey"/>
    <published>2026-02-23T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://hpc.vub.be/news/2026/hydra-firewall-migration/</id>
    <title>VUB-HPC briefly offline on 23 February</title>
    <updated>2026-02-18T00:00:00+00:00</updated>
    <author>
      <name>HPC Team</name>
    </author>
    <content type="html">&lt;section id="vub-hpc-briefly-offline-on-23-february"&gt;

&lt;p&gt;Due to a scheduled firewall migration, the VUB-HPC clusters will be temporarily
inaccessible on &lt;strong&gt;23 February&lt;/strong&gt;. This should not take longer than a few
minutes. Users will be unable to access the clusters during the migration.
Running jobs will continue to run, provided they do not require external
internet access.&lt;/p&gt;
&lt;p&gt;If you have any further questions, please contact us at &lt;a class="falink reference external" href="mailto:hpc&amp;#37;&amp;#52;&amp;#48;vub&amp;#46;be"&gt;&lt;span class="fa-solid fa-envelope"&gt;&lt;/span&gt;VUB-HPC Support&lt;/a&gt;.&lt;/p&gt;
&lt;/section&gt;
</content>
    <link href="https://hpc.vub.be/news/2026/hydra-firewall-migration/"/>
    <summary>Due to a scheduled firewall migration, the VUB-HPC clusters will be temporarily
inaccessible on 23 February. This should not take longer than a few
minutes. Users will be unable to access the clusters during the migration.
Running jobs will continue to run, provided they do not require external
internet access.</summary>
    <category term="hydra" label="hydra"/>
    <category term="motd" label="motd"/>
    <category term="outage" label="outage"/>
    <published>2026-02-18T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://hpc.vub.be/news/2026/ndr-ib-to-rhea/</id>
    <title>Network upgrade on scratch</title>
    <updated>2026-02-17T00:00:00+00:00</updated>
    <author>
      <name>HPC Team</name>
    </author>
    <content type="html">&lt;section id="network-upgrade-on-scratch"&gt;

&lt;div class="note update admonition"&gt;
&lt;p class="admonition-title"&gt;Updated on 17/02/2026&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;14:00&lt;/strong&gt; The upgrade operation has been completed ahead of schedule. All
new components were successfully installed. The job queue in Hydra is again
accepting jobs.&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;The scratch storage is receiving an upgrade on February 17. This will not
impact any running jobs, but as a safety measure, no new jobs will start during
this intervention on Hydra. For interactive work, the cluster Anansi remains
fully operational. The expected completion time of the upgrade is 17:00 CET.&lt;/p&gt;
&lt;p&gt;During the upgrade, each head node of the storage will be extended with an
extra CPU, its memory will be doubled, and it will receive an InfiniBand
NDR card. This will allow us to directly connect the &lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;zen5_mpi&lt;/span&gt;&lt;/code&gt; and
&lt;code class="docutils literal notranslate"&gt;&lt;span class="pre"&gt;hopper_gpu&lt;/span&gt;&lt;/code&gt; partitions to the scratch storage and increase the performance on
those partitions.&lt;/p&gt;
&lt;p&gt;Contact us at &lt;a class="falink reference external" href="mailto:hpc&amp;#37;&amp;#52;&amp;#48;vub&amp;#46;be"&gt;&lt;span class="fa-solid fa-envelope"&gt;&lt;/span&gt;VUB-HPC Support&lt;/a&gt; if you have any comments or questions about these
changes.&lt;/p&gt;
&lt;/section&gt;
</content>
    <link href="https://hpc.vub.be/news/2026/ndr-ib-to-rhea/"/>
    <summary>14:00 The upgrade operation has been completed ahead of schedule. All
new components were successfully installed. The job queue in Hydra is again
accepting jobs.</summary>
    <category term="hydra" label="hydra"/>
    <category term="maintenance" label="maintenance"/>
    <category term="motd" label="motd"/>
    <published>2026-02-17T00:00:00+00:00</published>
  </entry>
  <entry>
    <id>https://hpc.vub.be/news/2026/2026-march-linux-hpc-training/</id>
    <title>18,20/3/2026: Linux and HPC Introduction Trainings</title>
    <updated>2026-02-10T00:00:00+00:00</updated>
    <author>
      <name>HPC Team</name>
    </author>
    <content type="html">&lt;section id="linux-and-hpc-introduction-trainings"&gt;

&lt;p&gt;We are pleased to announce on-site Linux and HPC introductions on the VUB
Health campus in Jette, in cooperation with the Vlaams Supercomputer Centrum
(VSC).&lt;/p&gt;
&lt;div class="sd-card sd-sphinx-override sd-w-75 sd-mt-2 sd-mb-4 sd-ml-auto sd-mr-auto sd-shadow-sm docutils"&gt;
&lt;div class="sd-card-header docutils"&gt;
&lt;p class="sd-card-text"&gt;Linux Introduction&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sd-card-body docutils"&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;&lt;p class="sd-card-text"&gt;&lt;em&gt;date&lt;/em&gt;: &lt;strong&gt;Wednesday 18 March 2026&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p class="sd-card-text"&gt;&lt;em&gt;time&lt;/em&gt;: 9:00-15:00&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p class="sd-card-text"&gt;&lt;em&gt;topic&lt;/em&gt;: hands-on introductory course with practical sessions on using Linux command line&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p class="sd-card-text"&gt;&lt;em&gt;organizer&lt;/em&gt;: VUB - HPC Team (DICT)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p class="sd-card-text"&gt;&lt;em&gt;location&lt;/em&gt;: VUB Health Campus Jette, Building R - Atrium&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p class="sd-card-text"&gt;&lt;span class="sd-d-grid"&gt;&lt;a class="sd-sphinx-override sd-btn sd-text-wrap sd-btn-primary reference external" href="https://vub.sharepoint.com/sites/PUB_PhD/SitePages/Introduction-to-HPC-and-Linux-by-HPC.aspx"&gt;&lt;span&gt;Register here&lt;/span&gt;&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="sd-card sd-sphinx-override sd-w-75 sd-mt-2 sd-mb-4 sd-ml-auto sd-mr-auto sd-shadow-sm docutils"&gt;
&lt;div class="sd-card-header docutils"&gt;
&lt;p class="sd-card-text"&gt;HPC Introduction&lt;/p&gt;
&lt;/div&gt;
&lt;div class="sd-card-body docutils"&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;&lt;p class="sd-card-text"&gt;&lt;em&gt;date&lt;/em&gt;: &lt;strong&gt;Friday 20 March 2026&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p class="sd-card-text"&gt;&lt;em&gt;time&lt;/em&gt;: 9:00-15:00&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p class="sd-card-text"&gt;&lt;em&gt;topic&lt;/em&gt;: hands-on introductory course with practical sessions on supercomputing or high performance computing (HPC)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p class="sd-card-text"&gt;&lt;em&gt;organizer&lt;/em&gt;: VUB - HPC Team (DICT)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p class="sd-card-text"&gt;&lt;em&gt;location&lt;/em&gt;: VUB Health Campus Jette, Building R - Atrium&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p class="sd-card-text"&gt;&lt;span class="sd-d-grid"&gt;&lt;a class="sd-sphinx-override sd-btn sd-text-wrap sd-btn-primary reference external" href="https://vub.sharepoint.com/sites/PUB_PhD/SitePages/Introduction-to-HPC-and-Linux-by-HPC.aspx"&gt;&lt;span&gt;Register here&lt;/span&gt;&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Please check the registration page for more details about the program.&lt;/p&gt;
&lt;p&gt;&lt;a class="fabtn sd-btn sd-btn-outline-info reference external" href="mailto:hpc&amp;#37;&amp;#52;&amp;#48;vub&amp;#46;be"&gt;&lt;span class="fa-solid fa-life-ring"&gt;&lt;/span&gt;Helpdesk&lt;/a&gt; In case of problems or questions, please contact the HPC team.&lt;/p&gt;
&lt;div class="admonition seealso"&gt;
&lt;p class="admonition-title"&gt;See also&lt;/p&gt;
&lt;p&gt;Additional and more advanced HPC courses can be found in &lt;a class="reference internal" href="../docs/training-material/#training-courses"&gt;&lt;span class="std std-ref"&gt;Training courses&lt;/span&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;/section&gt;
</content>
    <link href="https://hpc.vub.be/news/2026/2026-march-linux-hpc-training/"/>
    <summary>We are pleased to announce on-site Linux and HPC introductions on the VUB
Health campus in Jette, in cooperation with the Vlaams Supercomputer Centrum
(VSC).</summary>
    <category term="event" label="event"/>
    <category term="training" label="training"/>
    <published>2026-02-10T00:00:00+00:00</published>
  </entry>
</feed>
