Kumorion https://kumorion.com/ – Nordic technology provider for sovereign cloud

How Kumorion powers a private cloud at scale for Nokia’s global R&D teams
https://kumorion.com/how-kumorion-powers-a-private-cloud-at-scale-for-nokias-global-rd-teams/
Tue, 25 Nov 2025


Nokia’s R&D engineers needed a robust cloud platform for their work. Partnering with Kumorion, the team built what has become one of the world’s largest private clouds: a one-million-CPU environment that sets industry benchmarks in cost efficiency, control and automation.

Back in 2011, networks giant Nokia began exploring how to modernize its digital R&D environments. The company’s engineers use heavy computing resources to work on software builds and perform integration testing. 

At the time, these resources were running on thousands of servers and scattered across some 50 sites worldwide. The setup was difficult and costly to manage, so moving to the public cloud seemed the likely way forward.

But as the project team looked into available options, setting up a private cloud environment began to look more appealing. With open-source software for the private cloud just beginning to mature, the engineers set about creating a cloud of their own.

“Our R&D builds had grown to as large as 200 gigabytes. Just moving that amount of data to and from the public cloud can take an hour or more. Speeding up development cycles was one of our major goals, but the public cloud would have just slowed us down,” explains Janne Heino, Head of Nokia Services Cloud Architecture.

“We promised that we would fail fast if the private cloud did not work. But on the contrary, we very quickly realized it was the better path forward. This is how the private Nokia Enterprise and Services Cloud was born,” says Heino, who has managed the project from day zero.

Heino then began looking for a partner to help develop the private environment. This is how Kumorion came into the picture.

Promises kept, scale delivered

To win over internal stakeholders, Heino’s team had framed the private cloud project around three simple promises. 

The first was that Nokia would get three times more R&D capacity for the same investment. To be this cost effective, the environment would need to run extremely efficiently. The second promise was that the engineers would have full admin access to their cloud resources – allowing them to make changes and fix issues within the same day. Finally, Heino’s team promised to automate whatever they could.

“Nokia was essentially looking for partners that could enable this promised environment at scale,” recalls Kumorion CTO and Founder, Timo Ahokas.

“This was about more than being a contractor providing infrastructure support. The company needed a team that understood the nuances of open-source software development and operations at the system level. We excel in these areas,” he says.

More than 13 years later, that early technical fit has grown into a deep engineering partnership. While Nokia retains responsibility for datacenter and other hardware-level processes, Kumorion has designed and implemented many of the automated managed services – including the Nokia Kubernetes Service. This platform handles everything from provisioning and upgrades to logging, metrics and self-healing of some 400 clusters.
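
The article doesn’t detail how the Nokia Kubernetes Service is built, but self-healing of this kind is typically structured as a reconciliation loop: compare each cluster’s actual state against its desired state and compute the remediation. The sketch below is purely illustrative – the cluster names, node counts and scaling actions are assumptions, not Nokia’s or Kumorion’s actual code.

```python
# Illustrative reconciliation-loop sketch (not the actual Nokia Kubernetes
# Service): detect clusters that have drifted from their desired state and
# record the remediation each one needs.
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    ready_nodes: int
    desired_nodes: int

def reconcile(clusters):
    """Return the remediation actions needed to bring each cluster to its desired state."""
    actions = []
    for c in clusters:
        if c.ready_nodes < c.desired_nodes:
            # A real service would call a provisioning API here;
            # this sketch only records the intended action.
            actions.append((c.name, "scale_up", c.desired_nodes - c.ready_nodes))
        elif c.ready_nodes > c.desired_nodes:
            actions.append((c.name, "scale_down", c.ready_nodes - c.desired_nodes))
    return actions

fleet = [
    Cluster("build-eu-1", ready_nodes=48, desired_nodes=50),
    Cluster("test-us-2", ready_nodes=30, desired_nodes=30),
]
print(reconcile(fleet))  # → [('build-eu-1', 'scale_up', 2)]
```

In a production platform this loop would run continuously per cluster, which is what makes operating hundreds of clusters tractable without manual intervention.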

“Kumorion’s combination of expertise is quite hard to find,” says Heino. “We need engineers who know infrastructure deeply, but who also want to code. And we need hardware specialists who are able to automate everything. These are not typical profiles, but Kumorion has been able to attract developers who are inspired by this kind of work.” 

The private cloud pays off

Heino estimates that Nokia runs its private cloud at less than 40% of the cost of the public equivalent. The savings come from eliminating licensing fees, optimizing hardware and automating day-to-day operations. This control also brings strong security: sensitive R&D workloads stay under Nokia’s governance.

“We handle everything – including the data centers, electricity, support, security, etc. – so we have a very good understanding of the total cost of ownership. We would not be using the private cloud if it wasn’t cheaper,” says Heino.

What began as an experiment has now become a core enabler of Nokia’s global R&D. Managing the environment requires frequent deployment cycles and a growing level of automation, with the team striking the right balance between current and future needs.

“There’s an ongoing conversation about what we need now and what we’ll need next. We do not want to do manual work if it can be avoided. Kumorion brings that same mindset. If something can be automated, they’ll automate it,” says Heino.

The Kumorion team has essentially built automation applications that replace the roles of system admins and operators. It’s a managed service approach that enables any private cloud environment to scale as needed – without adding complexity. 

“We’re not saying the private cloud is right for every use case. An increasing number of companies in fact have a hybrid approach, using both the public and private clouds,” explains Ahokas. “If there are workloads or data you want closer to home and under your own control, Kumorion can make that possible without it becoming an operational burden. With the right skills and tools, the private cloud absolutely makes sense at scale.”

———-

For a deeper dive into this subject, download our whitepaper: Five reasons your company should consider the private cloud. 

We explore how the private cloud delivers cost efficiency, security and compliance, while opening the door to hybrid cloud strategies and tailored automation.

Writer // KUMORION BLOG //
Team Kumorion

Meet the Kumorion team: Q&A with Shankar Lal, Cloud Architect
https://kumorion.com/meet-the-kumorion-team-qa-with-shankar-lal-cloud-architect/
Tue, 13 May 2025


With experience from both large companies and startups, Shankar joined Kumorion to step into a leadership role. Our important work with HashiCorp Vault is part of his team’s responsibility.

What sparked your interest in cloud architecture?

I originally came to Finland to study physics, but soon realized it was not my cup of tea. Then I saw that Aalto University was offering a master’s degree in communication engineering – focusing on network technologies – so I applied and got into that program.

I already had some IT experience from back home in Pakistan, where I had been working in a support function. This gave me a good grasp of how systems work, both from the software and hardware perspectives. I built on this knowledge with Aalto’s courses in cloud engineering, which really got me interested in the field. 

In 2014, I joined Nokia Networks to do my master’s thesis. For two years I worked as part of an R&D team for cloud security solutions. This was where I started to gain experience with large-scale cloud systems in a real-world environment.

 

How did you come to join Kumorion?

After Nokia, I moved to a 20-person software startup where I was responsible for managing various IT systems and tools. Then I joined a software company working on solutions that help retailers and warehouses reduce food waste through accurate forecasting and replenishment mechanisms.

At some point, Kumorion reached out to me through a recruiter. It immediately became clear that the company was looking to fill a senior position with someone who could work independently and take ownership of projects. The technology stack also stood out, with Kubernetes central to the role.

"In addition, as I’ve been at Kumorion for a while, I support other teams when they need help with something I have experience in."

You’ve now been with Kumorion since 2022 – what does your job entail today?

I’m the team lead for our HashiCorp Vault and HashiCorp Consul services. This means my role is a mix of technical work, people management and customer collaboration. In addition to making sure everything is running as expected, I need to keep track of new requirements to ensure we’re implementing the right solutions. It’s about seeing the bigger picture.

I also help to manage our development roadmap. This means reviewing our backlog and making sure we’re staying on track with what we need to deliver. In addition, as I’ve been at Kumorion for a while, I support other teams when they need help with something I have experience in.

 

What technologies do you work with most?

I work extensively with HashiCorp Vault. Knowledge of this toolset was actually one of the requirements for my role, along with other key tools such as Terraform and ArgoCD, which we use to code and deploy our infrastructure resources.

Earlier in my time at Kumorion, I worked on improving AWX (Ansible Tower). My job was to make it more robust by moving it to a Kubernetes-based deployment. I built a managed service for AWX that is now in use with one of our customers. I also worked with GitLab CI/CD pipelines for building the automation workflows.

"The company strongly supports employees in taking training courses for the tools and platforms we use in our daily work."

Kumorion encourages continuous learning. How has that shaped your experience?

The company strongly supports employees in taking training courses for the tools and platforms we use in our daily work. Kumorion will pay for you to take the exam and reward you when you pass it. This is really a great company benefit.

I’ve taken multiple certification exams since joining, including the HashiCorp Vault Professional certification. This is not just a multiple-choice test; it’s a four-hour hands-on exam where you have to solve issues in a real-world environment. Passing this is one of the achievements I’m most proud of.

"Here employees can make an impact without bureaucracy slowing us down. Decisions happen much faster than in a big company environment."

How would you describe Kumorion’s culture?

It’s very open and transparent – you always know what’s happening in the company. The management team does a very good job of sharing information, including at our monthly breakfast meetings.

New ideas are strongly encouraged. If you think of a tool or an approach that could improve the way we work, then you are given the opportunity to build a prototype. If it turns out as expected, then it would likely become part of our stack.

Another thing I appreciate is the trust placed in employees. You do not need to manage layers of hierarchy to get things done. Here employees can make an impact without bureaucracy slowing us down. Decisions happen much faster than in a big company environment.

Kumorion is a great place for people who want to take ownership of their work and be free to solve problems their own way. You’re also constantly learning from talented colleagues. I really appreciate how much people here know about different technologies and are willing to share their knowledge.

Writer // KUMORION BLOG //
Team Kumorion

Meet the Kumorion team: Q&A with Deepak Panta, Software Developer
https://kumorion.com/meet-the-kumorion-team-qa-with-deepak-panta-software-developer/
Tue, 15 Apr 2025


Drawn by the opportunity to learn about DevOps and different cloud technologies, Deepak joined us in March 2023. He now works on several key projects, including serving as team lead in a customer-facing role.

Please share a bit about your background and how you got into software development.

I’m originally from Nepal and I’ve been living in Finland for almost 14 years now. I came here to study network engineering, completing my degree in the city of Turku.

During my studies I started working as a full-stack developer, mainly using Python. One of the companies I was with had an ERP system for warehouse management. I also worked on public-sector projects for different cities in Finland, developing reservation systems that citizens could use for various purposes.

 

What brought you to Kumorion?

A recruiter contacted me to ask if I’d like to join a company working with DevOps and hybrid cloud technologies. I hadn’t even marked myself as looking for new opportunities, so it was a bit of a surprise when I got the message!

I had been doing software development for quite some time, but I wanted to learn more.

Kumorion was looking for Python developers with an interest in DevOps, so it seemed like the right fit. I saw it as an opportunity to gain hands-on experience with Kubernetes and other DevOps tools.

Geography was another factor in my decision. I had been in Turku for a long time, so the idea of moving to Helsinki was appealing. We came to an agreement and I joined in March 2023.

"The working culture here is really flexible and open. No one micromanages you and there’s a lot of trust in the team."

Has the role lived up to your expectations?

Definitely! Now I have the opportunity to work both in software development and DevOps. The working culture here is really flexible and open. No one micromanages you and there’s a lot of trust in the team. It’s a small company, so communication is easy. You don’t need to worry about office politics – everyone is very supportive.

There’s also a strong culture of learning. If you’re interested in a particular technology or want to work on something new, there’s usually a way to make it happen. You are not expected to know everything about that technology before you start on a project. We often learn on the job, as I’ve been doing with the cloud technologies and DevOps.

Kumorion encourages us to spend some time each week on professional development. If you want to get a certain certification, the company provides the resources and pays for the course. If you pass, then you even get a cash bonus. It’s a big motivator for keeping up with new technologies and growing your skill set.

"It’s a good challenge and I’m learning a lot."

What does your team setup look like?

I actually work with several different project teams. One is focused on cluster resource allocation, where I’m building an interface that monitors available cloud resources and allows them to be dynamically rented out.

Another project concerns a system that manages distributed storage accounts. I’m working with one of the company’s senior developers to maintain and improve the infrastructure for this large-scale system.

I’ve also started working as a team lead for a customer-facing project. This involves a lot of communication and occasionally supporting with system design. It’s a good challenge and I’m learning a lot.

 

What technologies and tools do you work with most frequently?

On the development side, I mainly use Python and JavaScript, along with frameworks and libraries like FastAPI, Flask and React. For DevOps and cloud work, we use Kubernetes, Prometheus, Grafana, Alertmanager and lots of other open-source tools.

I’ve also started learning Go, as a lot of DevOps tools are written in it. Learning Go properly has been a goal of mine for some time. I’m now working on a project that will use it, so I have a good reason to dive in.

"This is a great place to develop your skills, as there’s a lot of flexibility to switch between projects and take on different challenges."

What advice would you give to someone considering joining Kumorion?

I would definitely say come and work with us. This is a great place to develop your skills, as there’s a lot of flexibility to switch between projects and take on different challenges. You don’t always get this kind of opportunity in a bigger company.

We have a diverse team, with colleagues from a number of different countries. This creates a great atmosphere in the office. Beyond work, the company also invests in team activities. For example, this spring we’re all going on a trip to Lapland for a few days. We have social events in the summer too.

If someone wants to learn about technologies and have a good time doing so, then Kumorion is a great place to be.

Writer // KUMORION BLOG //
Team Kumorion

Meet the Kumorion team: Q&A with Milan Verešpej, Software Developer & Cloud Engineer
https://kumorion.com/meet-the-kumorion-team-qa-with-milan-verespej-software-developer-cloud-engineer/
Mon, 13 Jan 2025


Milan joined us in 2022, attracted by the prospects of challenging work and continuous learning. He shares his experience of moving from a big-company IT-consultant role to the growing Kumorion team.

Let’s start at the beginning of your career – how did you get into IT?

It’s a bit of a roundabout story actually. In 2011, I started studying economics at Masaryk University in Brno in the Czech Republic. But it was kind of boring and I didn’t really enjoy it, so after three years I began studying IT at Brno University of Technology. I quickly realized that this field is a lot more fun for me.

While I was studying I got my first job as a software developer, basically focused on DevOps and continuous integration projects. Then I just started to work more and my studies eventually took a back seat.

 

What attracted you to Kumorion?

I was actively looking for a new role, as I felt a bit stuck in my previous job. It was a big company and I was mainly being sold as a consultant with expertise in DevOps and automation. But I wanted to focus more on the software development side.

Then one day in early 2022, a headhunter contacted me out of the blue about a role at Kumorion. I did not know anything about the company, but I was curious as it was a senior role. Back then I did not see myself as a senior! It sounded good, so I agreed to an interview to see what the opportunity was about. Now here I am.

"I really like the personal touch here. The company is very people-oriented and takes care of us."

How did you feel about joining a small company you knew nothing about?

I was intrigued, as until then I’d only worked at a large corporation. I wanted to see if a smaller company works better for me – and it does.

In my previous role I felt as though I was simply sold to client projects as a billing resource. But now I feel like I’m first and foremost a Kumorion employee. I really like the personal touch here. The company is very people-oriented and takes care of us.

In larger companies you can sometimes feel like just another cog in the machine, but at Kumorion it’s different. Everyone’s voice is heard and the leadership is very approachable. I can go straight to our CEO or CTO with any questions or concerns. If I feel that I’m not suited to a particular task or too busy to handle something, then our leadership really tries to accommodate that.

"We have a relaxed atmosphere. I’ve never felt any pressure to put in more hours than required."

How would you characterize the spirit at the company?

Well, Kumorion is growing! I only joined a few years ago, but since then quite a few more people have started working here. It’s exciting to be part of this growth. A lot of us work remotely – including on-site with the customer – but Thursday is the day everyone tries to come into the Kumorion office in Espoo. 

Of course everybody is responsible for their own work, but we have a relaxed atmosphere. I’ve never felt any pressure to put in more hours than required. I’m not saying I’ve never had to rush or work a bit later than normal, but this has always been my choice.

 

What does a typical workday look like for you?

It’s basically a mix of programming, cloud engineering and problem-solving. One thing I like about my role is the balance between working independently and collaborating with others. 

I’m the tech lead for a couple of projects, which means I’m involved in everything from design to customer communication. There’s a lot of discussion to decide the best way to implement something. I think customers value that we’re proactive in bringing ideas to the table. It’s important to me that they see us as more than just a resource delivering work – we really try to add value.

On the technical side, Go is my primary programming language. I also occasionally use Python for scripting, but that’s becoming less frequent. In terms of platforms and tools, I work with OpenStack, Kubernetes and HashiCorp Vault. 

"This is a place where I can really grow over the longer term."

Is there anything you find particularly exciting at the moment?

I like that I’m learning all the time. For me it’s fun to identify issues and find solutions; to get lost in figuring out a challenging task. The work I do at Kumorion allows me to dig much deeper into domains that I’m interested in. The company really encourages this kind of learning.

For example, right now I’m implementing authentication for a couple of services, so I’m really diving into the details behind some standards – how the flows work, what is possible, etc. Even though I worked with authentication before, I was not involved in the actual implementation. Now I’m really learning stuff I previously only knew a bit about. 

 

How do you see the future of your career?

When I first joined Kumorion, I thought I might stay for a few years to build my skills and then move on. But pretty soon after joining I realized this is a place where I can really grow over the longer term.

It’s exciting to see what’s going to happen to the company, as there are a lot of interesting development ideas in the pipeline.

Writer // KUMORION BLOG //
Team Kumorion

Meet the Kumorion team: Q&A with Anna Kozlova, Data Scientist
https://kumorion.com/meet-the-kumorion-team-qa-with-anna-kozlova-data-scientist/
Mon, 16 Dec 2024


An expert in natural language processing, Anna joined Kumorion in summer 2023 as generative AI was sweeping the world. She explains how the company’s culture of trust gives her the freedom to do great work.

Please tell us how you were drawn to data science – specifically natural language processing.

I have always liked languages and mathematics. This field combines both, so it’s perfect for me. 

When I went to Novosibirsk State University – in my home city in Russia – I did a bachelor’s degree in computational linguistics and a master’s in computer science. This was before generative AI became so widespread and accessible. At that time, large language models did not even exist and AI technologies were only just beginning to be adopted by industries. 

After my studies I worked at a data-science consultancy focused on chatbot development, which is closely related to my expertise in natural language processing. Then in 2019 a startup specializing in speech recognition recruited me to Finland for a role here. After that I moved to a fairly large AI solutions and consulting company before joining Kumorion in the summer of 2023.

“My role at Kumorion has given me the chance to dive deeper into the technical aspects of data science.”

What was it about the opportunity at Kumorion that appealed to you?

I was not actively looking for a new job when a recruiter contacted me on LinkedIn about the role here. But the timing was good, as I had reached a plateau in my previous role. I was doing a lot of research and AI model training.

My role at Kumorion has given me the chance to dive deeper into the technical aspects of data science. Here it’s more about developing workflow pipelines, as well as delivering and deploying solutions. It’s been a great opportunity to expand my skills and I’m very happy that I joined the company.

 

Please tell us more about the work of a data scientist and your tasks at Kumorion.

The role of a data scientist can be very broad – with responsibilities varying from company to company – but at the core you develop machine-learning models or statistics-based solutions. You also need to know how to aggregate, process and query data.

Developing models is not a major part of my role at Kumorion. I’m more focused on end-to-end data pipelines, looking at how we process data, apply machine learning models and output data to different interfaces. 

I’ve also had to do some UI development, as I wanted my data visualizations to look good for when I present my findings to others in the company. I really like that there are many things I can try here.

“There’s a genuine culture of trust at Kumorion – we are not micromanaged.”

How would you characterize working here versus working in a larger company?

I think the biggest difference is in the level of support and trust. It’s a small team, but I feel that I’m heard and my input is valued. This is something about Kumorion that I truly appreciate. 

We get a lot of support directly from the leadership team. Our CEO and CTO are very approachable, so employees can really influence how things are done. There’s also a genuine culture of trust, where we are not micromanaged. I’m recognized as a professional who knows how to do the job. 

So it’s the best of both worlds really: a culture of support and a culture of trust. This really motivates me to take ownership of my work and learn even more.

 

What do you think customers appreciate about working with Kumorion?

I think they value our flexibility; how we can quickly adapt and respond to changing customer requirements. We do not just follow a textbook approach – we take the time to understand what works best for the customer. In my opinion, Kumorion succeeds very well in this.

From other team members, I see a real dedication to delivering results. There’s a strong sense of shared ownership, where people are not just responsible for their individual tasks – they also take pride in our ability to deliver as a team.

“Exposure to so many different technologies has really helped me grow as a professional.”

What software tools and domains do you work with daily?

Python is the main language we use for programming in the AI team, but I work with a variety of other tools too. Recently I’ve been using Apache Airflow for orchestrating workflows, as well as MLflow for tracking the performance of machine learning models. 

We work both with traditional SQL databases and data lakes. Thanks to the support of my colleagues, I’ve gained a much better understanding of the technical details across different cloud infrastructures.

This exposure to so many different technologies has really helped me grow as a professional. We work with a bit of everything here.

 

Is there part of your job that particularly excites you or is developing rapidly?

Lately I’ve been working on a project involving anomaly detection, which is something I’m really enjoying. We’re using machine learning models to analyze time series data and predict potential incidents in cloud services. 
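
The interview doesn’t specify which models the team uses, so as a stand-in, the idea of flagging anomalies in time series can be illustrated with a simple statistical baseline – a z-score check – rather than a real machine-learning model. The metric name and values below are invented for the example.

```python
# Toy anomaly detector for time series: flag points that deviate from the
# mean by more than a threshold number of standard deviations. This is a
# simple statistical baseline, not the models described in the interview.
import statistics

def detect_anomalies(values, threshold=3.0):
    """Return the indices of values whose z-score exceeds the threshold."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # a perfectly flat series has no outliers
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hypothetical latency samples from a cloud service, with one spike.
# With only ten samples, the attainable z-score is capped, so we use a
# lower threshold than the classic 3.0 here.
latency_ms = [12, 13, 11, 12, 14, 13, 12, 95, 13, 12]
print(detect_anomalies(latency_ms, threshold=2.5))  # → [7]
```

A production detector would use rolling windows and a learned model, but the contract is the same: time series in, suspicious indices out.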

What’s exciting is how this project has sparked interest in other areas. After presenting our solution to a customer’s security team, they wanted to adapt it for their own processes too. I’ll soon be customizing it for those use cases. 

It’s always rewarding to see your work expand and find new applications.

Writer // KUMORION BLOG //
Team Kumorion

Is your company doing enough to protect passwords and other credentials?
https://kumorion.com/is-your-company-doing-enough-to-protect-passwords-and-other-credentials/
Wed, 06 Nov 2024


Every login credential is a potential risk to your company’s security. We explain why HashiCorp Vault is the right way to keep your secrets safe.

Passwords and other credentials are the keys to our digital kingdoms. Behind the doors they unlock lie infinite treasures of data on the things that people and companies hold valuable.

Yet these ‘secrets’ – as we call them in the IT world – are often scattered and poorly guarded. From an admin password scribbled on a sticky note to an API key left exposed in some code, lack of control over secrets has led to some staggering breaches.

According to Verizon’s 2024 Data Breach Investigations Report, some 77% of web application attacks involve the use of stolen credentials. When a single overlooked secret falls into the wrong hands, you may be giving away the keys to your entire kingdom.

“As a company grows, you have a lot more applications and associated secrets. The connections between apps create many access points in the environment, which in turn create more attack vectors that can be exploited. The more complex your infrastructure is, the more important it is to keep your secrets under virtual lock and key,” says Kumorion Founder and Chief Technology Officer, Timo Ahokas.

Kumorion is an advocate for implementing the Vault secret-management solution from US cloud-infrastructure leader HashiCorp. The solution is designed for storing passwords, certificates, encryption keys and more, with special tools that give you complete control over these critical secrets.

While Vault is suitable for any environment, it’s widely used when security is paramount. This includes DevOps workflows, cloud-infrastructure projects and industries that handle sensitive information.

Tools to rotate, revoke and audit secrets

A common way in which secrets are compromised is through failure to rotate credentials (i.e. the periodic changing of passwords, keys or other authentication information). One reason passwords and keys go unrotated is that it’s often challenging to track down all the places they’re used. This challenge can be overcome by making HashiCorp Vault a single source of truth for maintaining and managing secrets.

“Changing some but not all credentials can risk breaking critical systems, which is why passwords are sometimes left unchanged. HashiCorp Vault solves this issue with tools to rotate credentials and automatically synchronize them across all systems,” explains Kumorion Cloud Architect, Shankar Lal.

Failure to revoke credentials is another way in which companies are exposed to breaches. For example, an employee may leave a company yet still be able to access a specific domain. Or an external developer may have a temporary username and password to work on some code, while the person who granted the credentials may not keep track of having done so.

None of this necessarily implies malice from any party involved. But the more secrets that are in circulation, the greater the risk of exposure. We are all vulnerable to phishing attacks.

“Once you create a secret for accessing a system, it often lingers longer than is necessary. This leads to security risks if it’s not revoked. HashiCorp has addressed this with a feature for creating dynamic secrets. These are credentials that exist for only a short, predefined period – even just 10 minutes,” says Lal.

“This dynamic secrets feature is not only useful for granting access to externals, but also for controlling role-based internal access. For example, an analyst may need read-only access to a database just long enough to pull certain numbers for a report,” he explains.
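As a concrete illustration of how such a short-lived credential is set up, the steps below use Vault’s database secrets engine with a 10-minute TTL. The engine path, connection and role names are hypothetical, and the connection details are placeholders for your own environment:

```shell
# Enable the database secrets engine and register a Postgres connection
# (connection URL and admin credentials are placeholders)
vault secrets enable database
vault write database/config/reporting-db \
    plugin_name=postgresql-database-plugin \
    connection_url="postgresql://{{username}}:{{password}}@db.example.com:5432/reports" \
    allowed_roles="readonly" \
    username="vault-admin" \
    password="$ADMIN_PASSWORD"

# Define a read-only role whose credentials expire after 10 minutes
vault write database/roles/readonly \
    db_name=reporting-db \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
    default_ttl=10m \
    max_ttl=1h

# Each read returns a fresh username/password pair that Vault revokes at expiry
vault read database/creds/readonly
```

Vault creates the database user on demand and deletes it when the lease expires, so nothing long-lived is left behind for the analyst to mishandle.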

HashiCorp Vault also provides robust auditing features, allowing administrators to track who has accessed a system and with which credentials. Audit logs show unauthorized attempts to retrieve sensitive information and can be used to trigger alerts when unexpected logins take place.
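Audit devices emit one JSON object per request/response, which makes automated alerting straightforward. The sketch below is a minimal, simplified parser (real audit entries contain many more fields, and sensitive values are hashed) that picks out entries recording an error:

```python
import json

def failed_requests(audit_lines):
    """Return (path, error) tuples for audit entries that recorded an error."""
    failures = []
    for line in audit_lines:
        entry = json.loads(line)
        # Response entries carry the outcome of a request; a non-empty
        # "error" field indicates the request was denied or failed.
        if entry.get("type") == "response" and entry.get("error"):
            failures.append((entry["request"]["path"], entry["error"]))
    return failures

sample = [
    '{"type": "response", "error": "", "request": {"path": "secret/data/app"}}',
    '{"type": "response", "error": "permission denied", "request": {"path": "secret/data/prod"}}',
]
print(failed_requests(sample))
# → [('secret/data/prod', 'permission denied')]
```

A watcher like this, fed from the audit log, is one simple way to turn unauthorized attempts into alerts.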

Kumorion manages HashiCorp Vault for your company

While it’s possible for companies to take HashiCorp Vault into use on their own, effective management of the solution requires deep technical knowledge and careful configuration. This is where Kumorion brings significant expertise, simplifying the implementation through a managed services approach that covers all aspects of deploying and maintaining Vault.

“Kumorion has more than five years of experience running Vault for the Nokia corporate private cloud, which is one of the largest in the world. We have delivered Vault-as-a-Service to hundreds of teams and users, and also used Vault internally in multiple large scale services we have built. Now we’re bringing this knowledge and experience to other customers too,” explains Ahokas.

Kumorion hosts Vault in the customer’s cloud or on on-premises servers. Managed services include automated backups, high-availability configurations, and clustering to prevent any single point of failure. In case of issues with the Vault application, Kumorion’s managed service performs automated health checks and triggers the recovery process.
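One way to implement such a health check is Vault’s standard sys/health endpoint, which encodes cluster state in the HTTP status code (200 for an initialized, unsealed, active node; 503 for a sealed one). A simplified sketch, where trigger_recovery stands in for whatever recovery hook your environment uses:

```shell
# Returns non-zero (and triggers recovery) unless the node is healthy
curl -fsS "$VAULT_ADDR/v1/sys/health" > /dev/null || trigger_recovery
```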

For companies wanting to take the next step in this domain, Kumorion recommends evaluating the status of your secret-protection framework through the following best-practice model:

  • Level 1 – Unmanaged Secrets: Secrets exist but are not stored in a secure vault. There is a high risk of exposure
  • Level 2 – Vaulted Secrets: Static secrets are stored in a vault. Without regular updates, these secrets remain vulnerable to exposure – even many years later
  • Level 3 – Rotated Secrets: Automatic rotation changes secrets at monthly, weekly or even hourly intervals
  • Level 4 – Dynamic Access: Secrets are created dynamically when needed, even for a very short duration. Integration with Kubernetes, databases and public clouds – including AWS and Azure – allows for seamless management of these short-lived credentials

 

“HashiCorp Vault enables levels 2, 3 and 4 – depending on the approach you choose. This flexibility means it fulfills the needs of many businesses. You can start small and build up to level 4. Kumorion can evaluate your security setup and suggest the right approach,” says Ahokas.

“Every company is of course the master of its own secrets, but we recommend being at least on level 2 in this day and age.”

Writer // KUMORION BLOG //
Team Kumorion

The post Is your company doing enough to protect passwords and other credentials? appeared first on Kumorion.

]]>
Push for OpenStack https://kumorion.com/push-for-openstack/ Thu, 04 Jul 2024 12:23:54 +0000 https://kumorion.com/?p=544 Broadcom completed the VMware acquisition in November. Read more here. Soon after, they announced a streamlined product portfolio and a move to subscription-based licensing. Read more here. The situation with partner programs and resellers seems turbulent at the moment, as the future appears very unclear. Regarding alternative virtualization platforms, it looks like OpenStack is making […]

The post Push for OpenStack appeared first on Kumorion.

]]>

The post Push for OpenStack appeared first on Kumorion.

]]>
HashiCorp Vault: Comparison of OSS, Enterprise and HCP editions https://kumorion.com/hashicorp-vault-comparison-of-oss-enterprise-and-hcp-editions/ Tue, 21 May 2024 08:03:00 +0000 https://kumorion.com/?p=520 HashiCorp Vault is a tool for secrets management, data encryption, and identity-based access. It is designed to help organisations securely store and manage sensitive information such as tokens, passwords, certificates, encryption keys, etc. In this article, the main feature differences and offerings in three different Vault editions are discussed.   HashiCorp Vault Offerings Currently, Hashicorp […]

The post HashiCorp Vault: Comparison of OSS, Enterprise and HCP editions appeared first on Kumorion.

]]>

HashiCorp Vault is a tool for secrets management, data encryption, and identity-based access. It is designed to help organisations securely store and manage sensitive information such as tokens, passwords, certificates, encryption keys, etc.

In this article, the main feature differences and offerings in three different Vault editions are discussed.

 

HashiCorp Vault Offerings

Currently, HashiCorp offers Vault in three different editions. Below is an explanation of each edition along with its supported features.

 

Vault OpenSource (OSS)

The open-source edition is self-managed, so it can be hosted anywhere on the desired platform. The Vault application can be installed on various supported operating systems, and the pre-compiled Vault binary can be downloaded from the HashiCorp website.
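For example, a pinned binary can be fetched directly from HashiCorp’s release server (the version below is only illustrative; check releases.hashicorp.com for current versions):

```shell
VAULT_VERSION=1.16.2   # illustrative version pin
curl -fsSLo vault.zip \
  "https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip"
unzip vault.zip
sudo install vault /usr/local/bin/vault
vault version
```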

HashiCorp Vault OpenSource edition supports the most common use cases such as:

  • Storing secrets in KV Engines
  • Configuring widely used authentication methods such as AWS, Azure, LDAP, OIDC, etc.
  • Setting up Vault in highly available cluster mode
  • Data encryption and decryption

The open-source version of Vault is usually sufficient for organisations that have a small number of Vault users and only utilise the basic functionalities. The main features of the open-source edition are:

  • Storing static secrets in key value engine
  • Performing data encryption and decryption via transit engine
  • Dynamic secrets engines such as AWS, database, SSH key engines
  • Vault plugins support
  • Vault clients statistics in UI dashboard

 

Vault Enterprise

Like Vault open-source, the Enterprise edition is also self-managed. Vault Enterprise contains everything included in the open-source edition and offers dozens of extra features that can add value to Vault usage in organisations. It’s important to understand and evaluate those features to decide whether your organisation could benefit from them.

The most notable enterprise features include:

  • Vault Namespaces

    A namespace in Vault Enterprise is like a Vault within a Vault and is used for logical separation of the Vault instances per team or division in the organisation. With namespaces, organisations can create logical partitions within Vault to separate policies, authentication methods, and secrets across different teams or departments. This feature helps in organising and securing access to sensitive data.

  • Vault Disaster Recovery (DR) and Performance Replication

    Vault DR capability ensures that in the event of a disaster (such as data centre outages, hardware failures, or other catastrophic events), your Vault data remains safe and accessible, minimising downtime and data loss. For this to work, Vault is configured in multiple locations as primary and secondary clusters. Real-time replication is set up so that changes made in the primary cluster (such as adding or updating secrets, policies and configurations) are replicated to the secondary clusters. This ensures that the secondary clusters are always up to date and can take over immediately if the primary cluster becomes unavailable.

    Vault’s Performance Replication feature is designed to enhance the scalability and performance of HashiCorp Vault across geographically distributed data centres or within large, dispersed organisations. It aims to optimise the response time and load-handling capabilities of Vault by replicating secrets and configurations across multiple clusters and ensures that users and applications can access Vault services with low latency, regardless of their geographical location.

  • Sentinel Policy as Code

    Vault Enterprise includes HashiCorp Sentinel, a policy-as-code framework that allows organisations to define fine-grained access control policies using a domain-specific language. This gives administrators greater control over how policies are defined and enforced. Sentinel policies can be enforced at different levels: advisory (warnings), soft mandatory (overridable), and hard mandatory (non-overridable). This flexibility allows organisations to enforce policies according to their risk management strategy and operational practices.

  • Integrated Storage Snapshots

    The automated storage snapshot feature simplifies the process of taking periodic snapshots of the data stored within Vault’s Integrated Storage backend. This is crucial for disaster recovery and operational durability, ensuring that data can be restored to a known good state in case of data corruption, loss, or a disaster scenario.

  • Enterprise support

    Vault Enterprise also comes with support from HashiCorp in case anything goes wrong. Depending on the support plan, HashiCorp support responds within the given time frame as agreed in the SLAs.

  • Control Groups

    Vault Control Groups are an advanced security feature within HashiCorp Vault, designed to implement an additional layer of authorisation through a manual approval process for accessing secrets or performing sensitive operations.

  • Lease Count Quotas

    Vault Lease Count Quotas is a feature designed to improve the operational safety and resource management of Vault by limiting the number of leases that can be generated within a specified time frame or scope.

 

 

Vault HCP

HCP Vault, a SaaS solution, is a fully managed implementation of Vault operated by HashiCorp, allowing organisations to get started with Vault in no time. HCP Vault clusters run the Vault Enterprise edition, so they benefit from the additional enterprise features.

HCP Vault clusters are hosted only on the AWS and Azure clouds, with multiple supported regions across the globe. It is possible to expose Vault clusters in HCP to the public network or restrict access to a private network via VPN peering. The image below depicts the peering connection between the HCP virtual network and a private VPC network for a public cloud provider.

The benefits of using Vault HCP include:

  • Easier Vault deployments with a few clicks
  • Managed Vault upgrades
  • Ability to take Vault snapshots and restore from the HCP console
  • Infrastructure reliability provided by HashiCorp
  • Secure access via private network
  • Pre-configured Auto-unseal
  • Dynamic Vault cluster scaling
  • Multiple tiers for sizing and pricing, i.e. Development, Standard and Plus

 

 

Comparison Table

To make it easier to decide which Vault edition would best suit your organisation’s needs, the table below compares the major Vault features across the three editions.

 

Vault at Kumorion

We have been managing and supporting HashiCorp Vault (both the open-source and Enterprise editions) for many years. Our experience is extensive, covering automated deployment of Vault in HA cluster mode and dynamic management of Vault configurations such as auto-unseal, automated recovery from a lost quorum, and performance and DR replication (Enterprise features).

Our way of provisioning the various Vault components is via code. We use automated pipelines to configure Vault secrets engines, set up authentication methods such as LDAP, OIDC, JWT and AppRole, and manage Vault policies and group configurations. Vault monitoring and log shipping to a log aggregator are other important aspects we have covered in our automation.

We see Vault as a key component in modern distributed environments, where being able to self-host and/or use a managed solution like HCP is crucial. We work in various multi-cloud, hybrid, on-prem and edge environments, and Vault is suitable for all of them.

Writer // KUMORION BLOG //
Shankar Lal

Enthusiastic DevOps learner who sometimes likes to write about his experiences for community awareness.

The post HashiCorp Vault: Comparison of OSS, Enterprise and HCP editions appeared first on Kumorion.

]]>
AWX: Create AWX Execution Environment using GitLab CICD https://kumorion.com/awx-create-awx-execution-environment-using-gitlab-cicd/ Fri, 30 Sep 2022 10:24:00 +0000 https://kumorion.com/?p=402 GitLab is nowadays a widely used CICD tool. Ansible Tower (AWX) uses an execution environment image for environment consistency and security to run the ansible jobs in AWX. This article presents an example GitLab pipeline setup for building and pushing the AWX EE image to the image registry.   EE image manifest Docker file: FROM […]

The post AWX: Create AWX Execution Environment using GitLab CICD appeared first on Kumorion.

]]>

GitLab is nowadays a widely used CI/CD tool. Ansible AWX uses an execution environment (EE) image to run Ansible jobs with environment consistency and security. This article presents an example GitLab pipeline setup for building the AWX EE image and pushing it to an image registry.

 

EE image manifest

Dockerfile:
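The original Dockerfile is not reproduced here. A minimal sketch consistent with the requirements files below might look like the following (the base image and paths are assumptions, not the original manifest):

```dockerfile
FROM python:3.8-slim

WORKDIR /build
COPY requirements.txt requirements.yml ./

# Install the pinned Ansible version and helper packages
RUN pip3 install --no-cache-dir -r requirements.txt

# Install the Ansible collections used by AWX jobs
RUN ansible-galaxy collection install -r requirements.yml
```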

requirements.txt file:

				
					ansible==2.9.27
ansible-runner
				
			

requirements.yml file:

				
					---
collections:
  - awx.awx
  - cloud.common
				
			

Configuring GitLab pipeline for automated EE image

Below is an example of a .gitlab-ci.yml file for building an EE image and pushing it to the Docker image registry. The pipeline consists of two stages: build-image and push-image.

The build-image stage builds the Docker image from the Dockerfile and outputs the Ansible and Python versions installed in the image. This stage will only run if changes are made to any of the Dockerfile, requirements, or .gitlab-ci.yml files.

The push-image stage rebuilds the image, applies the latest tag, authenticates to the Docker registry and pushes the image. The authentication step requires that DOCKER_USER and DOCKER_PASSWORD are defined in the GitLab CI/CD variables. This stage will only run on the main branch of the GitLab repository.

				
					stages:
  - build-image
  - push-image

variables:
  REGISTRY: your_image_registry_url
  AWX_REG_IMAGE_CUSTOM: $REGISTRY/custom-ee

build-Custom-EE-image:
  stage: build-image
  script:
    - docker build -t "$AWX_REG_IMAGE_CUSTOM:$CI_COMMIT_REF_SLUG" . -f Dockerfile
    - docker run --rm $AWX_REG_IMAGE_CUSTOM:$CI_COMMIT_REF_SLUG ansible --version
    - docker run --rm $AWX_REG_IMAGE_CUSTOM:$CI_COMMIT_REF_SLUG python3 --version
  rules:
    - changes:
        - Dockerfile
        - requirements.txt
        - requirements.yml
        - .gitlab-ci.yml

push-CUSTOM-EE-image:
  stage: push-image
  script:
    - docker build -t "$AWX_REG_IMAGE_CUSTOM:$CI_COMMIT_REF_SLUG" . -f Dockerfile
    - docker tag "$AWX_REG_IMAGE_CUSTOM:$CI_COMMIT_REF_SLUG" "$AWX_REG_IMAGE_CUSTOM:latest"
    - echo $DOCKER_PASSWORD | docker login -u $DOCKER_USER --password-stdin $REGISTRY
    - docker push "$AWX_REG_IMAGE_CUSTOM:latest"
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
				
			

Summary

This article presented an example of an automated GitLab pipeline for building an AWX EE image and pushing it to the Docker image registry. The requirements files make it possible to install Python pip packages and Ansible collections, and to control the Ansible and Python versions used in the image. The build-image stage runs only when the Dockerfile, requirements files or .gitlab-ci.yml change, while the push-image stage tags and pushes the image and runs only on the main branch.

Writer // KUMORION BLOG //
Shankar Lal

Enthusiastic DevOps learner who sometimes likes to write about his experiences for community awareness.

The post AWX: Create AWX Execution Environment using GitLab CICD appeared first on Kumorion.

]]>
AWX: Storage optimization for Postgres Database https://kumorion.com/awx-storage-optimization-for-postgres-database/ Fri, 30 Sep 2022 06:05:39 +0000 https://kumorion.com/?p=315 Problem Statement I have been managing one production-grade AWX instance, hosted on Kubernetes, which is executing a couple of hundred jobs every day. After a few months of usage, it was found that the disk where the persistent data of our Ansible AWX instance is located was running out of space. The reason for this […]

The post AWX: Storage optimization for Postgres Database appeared first on Kumorion.

]]>

Problem Statement

I have been managing a production-grade AWX instance, hosted on Kubernetes, which executes a couple of hundred jobs every day. After a few months of usage, the disk holding the persistent data of our Ansible AWX instance was running out of space.

The reason for this was that no clean-up had been configured for the AWX job history: around 35,000 AWX jobs had accumulated since the instance was first deployed. After some research online, I found the following solutions to this issue.

 

Solution #1: Enable the AWX management job for cleaning the job history

The AWX UI comes with pre-configured management jobs, including one for cleaning up AWX job details. This job can be configured to keep the job history for a specific time period and clean up the rest. The appropriate retention period depends on the use case and varies by environment; in ours, keeping one month of job history is sufficient and fits within the available storage capacity.

Sometimes the cleanup job is unable to free up the disk space. In order to get the space back, the following operation is needed on the Postgres database.

 

Solution #2: Postgres DB vacuuming to reclaim disk space

After inspecting the Postgres database, it was found that one specific table, main_jobevent, was around 16 GB in size and still growing. This table stores the Ansible logs for each job run on AWX. If there is no need to keep the AWX job history at all, Postgres provides a vacuum operation that can be run on the main_jobevent table to reclaim the space.
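The offending table can be identified with a query along these lines, which lists the largest relations by total size (table, indexes and TOAST data combined):

```sql
-- Top 5 largest tables in the current database
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 5;
```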

				
					SELECT pg_size_pretty( pg_total_relation_size('main_jobevent') );

# pg_size_pretty
#----------------
# 16 GB
# (1 row)
				
			

Note: a plain VACUUM only marks space for reuse within the table; to return space to the OS, use VACUUM FULL. After vacuuming:

				
					VACUUM FULL main_jobevent;
SELECT pg_size_pretty( pg_total_relation_size('main_jobevent') );

# pg_size_pretty
#----------------
# 4915 MB
# (1 row)
				
			

Bonus tip: Back up by excluding the jobevent table

A Postgres database backup takes a snapshot of the running database, which also includes the main_jobevent table. If there is no need to back up the AWX job history, this table’s data can be excluded from the Postgres dump. The pg_dump tool has an --exclude-table-data option which excludes the data of any table matching a given pattern (i.e. it allows wildcards):

 

				
					pg_dump --clean --create -h $host -U $username -d $database -p $port \
  -F custom --exclude-table-data='main_jobevent_*' | gzip > $BACKUP.gz

				
			

This exclude option reduced the backup size of our AWX database by 90%.

Writer // KUMORION BLOG //
Shankar Lal

Enthusiastic DevOps learner who sometimes likes to write about his experiences for community awareness.

The post AWX: Storage optimization for Postgres Database appeared first on Kumorion.

]]>