CloudRay ArticlesList of articles, guides, and resources to help you get started with CloudRayhttps://cloudray.ioTop Cron Job Alternatives for Modern Engineering Teams in 2026https://cloudray.io/articles/cron-job-alternativehttps://cloudray.io/articles/cron-job-alternativeExplore the best cron job alternatives in 2026, including modern task schedulers and automation tools for multi-server infrastructure teamsSun, 01 Mar 2026 00:00:00 GMT<p>Cron has long been the default Linux task scheduler, but many teams now look for reliable cron alternatives as infrastructure grows. Cron is easy to use, dependable for simple automation, and effective for handling <a href="https://cloudray.io/docs/schedules">scheduled tasks</a> on a single server. However, cron jobs frequently become more difficult to monitor, troubleshoot, and oversee as infrastructure expands, particularly when dealing with numerous machines or production settings. As a result, many teams start looking into cron job alternatives or newer automation scheduling tools that offer better observability and coordination across multiple servers.</p> <p>In this article, we’ll explore the best cron alternatives based on operational needs in 2026, as well as when cron still makes sense and why teams move past it.</p> <h2>Contents</h2> <ul> <li><a href="#when-cron-still-works-perfectly">When Cron Still Works Perfectly</a></li> <li><a href="#built-in-cron-unix-like-alternative-tools">Built-in Cron Unix-like Alternative Tools</a></li> <li><a href="#top-modern-cron-job-alternatives-in-2026">Top Modern Cron Job Alternatives in 2026</a> <ul> <li><a href="#1-activebatch">1. ActiveBatch</a></li> <li><a href="#2-cloudray">2. CloudRay</a></li> <li><a href="#3-apache-airflow">3. Apache Airflow</a></li> <li><a href="#4-rundeck">4. Rundeck</a></li> <li><a href="#5-stonebranch">5. Stonebranch</a></li> <li><a href="#6-visualcron">6. VisualCron</a></li> <li><a href="#7-celery">7. Celery</a></li> <li><a href="#8-sidekiq">8.
Sidekiq</a></li> <li><a href="#9-kubernetes-cronjobs">9. Kubernetes CronJobs</a></li> <li><a href="#10-jenkins">10. Jenkins</a></li> <li><a href="#11-github-actions">11. GitHub Actions</a></li> <li><a href="#12-hashicorp-nomad">12. HashiCorp Nomad</a></li> </ul> </li> <li><a href="#how-to-choose-the-right-cron-alternative">How to Choose the Right Cron Alternative</a></li> <li><a href="#conclusion">Conclusion</a></li> </ul> <h2>When Cron Still Works Perfectly</h2> <p>Cron is still a great tool for many automation tasks, even though it has some limitations. When used in the right way, it’s still one of the easiest and most reliable schedulers. For single-server environments or light automation, cron does exactly what it was built for — without extra complexity.</p> <p>Cron works well when:</p> <ul> <li>Tasks run on one machine</li> <li>Jobs are simple and predictable</li> <li>Failures are easy to detect manually</li> <li>There aren’t many logging requirements</li> <li>No central coordination is needed</li> <li>Scripts don’t rely on external environments or shared infrastructure</li> </ul> <p>Common examples include simple backups, log cleanups, temporary file removal, and other basic automation tasks.</p> <p>However, when automation extends beyond one server or when reliability and visibility become critical, the problems usually start. At that point, teams start looking for tools that improve visibility, enable coordination, and support operational control.</p> <p><strong>See also:</strong> <a href="https://cloudray.io/articles/why-cron-job-fails-silently-in-production">Why Cron Jobs Fail Silently in Production and How to Fix It</a></p> <h2>Built-in Cron Unix-like Alternative Tools</h2> <p>Before we look at modern cron alternative platforms, it’s important to note that some Unix-like systems already have built-in alternatives to cron. 
These tools were designed to address specific scheduling limitations, such as missed jobs, system downtime, or limited control over task execution.</p> <table><thead><tr><th>Tool</th><th>Description</th><th>Persistent Jobs</th><th>Missed Jobs Recovery</th></tr></thead><tbody><tr><td><strong>systemd timers</strong></td><td>A modern scheduling mechanism built into systemd-based Linux systems, offering dependency control, logging integration, and better service management than cron.</td><td><span>Yes</span></td><td><span>Yes</span></td></tr><tr><td><strong>Anacron</strong></td><td>Designed for systems that are not always running. Executes scheduled jobs once the system becomes available after downtime.</td><td><span>No</span></td><td><span>Yes</span></td></tr><tr><td><strong>Cronie</strong></td><td>An enhanced cron implementation commonly used in modern Linux distributions, adding security and reliability improvements.</td><td><span>Yes</span></td><td><span>No</span></td></tr><tr><td><strong>fcron</strong></td><td>Combines features of cron and anacron, allowing flexible scheduling even when systems are intermittently offline.</td><td><span>Yes</span></td><td><span>Yes</span></td></tr><tr><td><strong>bcron</strong></td><td>A lightweight cron replacement focused on simplicity, security, and predictable execution behavior.</td><td><span>Yes</span></td><td><span>No</span></td></tr></tbody></table> <p>These tools fix some of cron’s problems, but they are mostly used as local task schedulers instead of full cron replacements for distributed infrastructure. As infrastructure grows, teams often need more visibility, coordination, and automation than local schedulers can give them.</p> <h2>Top Modern Cron Job Alternatives in 2026</h2> <p>When infrastructure grows beyond single servers, teams often need more than just scheduling based on time. 
Modern cron job alternatives extend traditional cron by adding centralized execution, distributed scheduling, observability, and operational control across environments.</p> <p>Here is a quick side-by-side comparison of the most widely used modern cron job alternatives across four key factors: self-hosting capability, SaaS availability, multi-server support, and observability.</p> <table><thead><tr><th>Tool</th><th>Type</th><th>Self-Hosted</th><th>SaaS</th><th>Multi-Server</th><th>Observability</th></tr></thead><tbody><tr><td><strong>ActiveBatch</strong></td><td>Enterprise scheduler</td><td><span>No</span></td><td><span>Yes</span></td><td><span>Yes</span></td><td><span>Yes</span></td></tr><tr><td><strong>CloudRay</strong></td><td><a href="https://cloudray.io/articles/script-automation-guide">Script automation platform</a></td><td><span>No</span></td><td><span>Yes</span></td><td><span>Yes</span></td><td><span>Yes</span></td></tr><tr><td><strong>Airflow</strong></td><td>Workflow orchestration</td><td><span>Yes</span></td><td><span>Partial</span></td><td><span>Yes</span></td><td><span>Yes</span></td></tr><tr><td><strong>Rundeck</strong></td><td>Job automation</td><td><span>Yes</span></td><td><span>Yes</span></td><td><span>Yes</span></td><td><span>Yes</span></td></tr><tr><td><strong>Stonebranch</strong></td><td>Enterprise automation</td><td><span>No</span></td><td><span>Yes</span></td><td><span>Yes</span></td><td><span>Yes</span></td></tr><tr><td><strong>VisualCron</strong></td><td>Windows automation</td><td><span>Yes</span></td><td><span>No</span></td><td><span>Limited</span></td><td><span>Yes</span></td></tr><tr><td><strong>Celery</strong></td><td>Distributed task queue</td><td><span>Yes</span></td><td><span>No</span></td><td><span>Yes</span></td><td><span>Partial</span></td></tr><tr><td><strong>Sidekiq</strong></td><td>Background job 
processor</td><td><span>Yes</span></td><td><span>No</span></td><td><span>Yes</span></td><td><span>Partial</span></td></tr><tr><td><strong>Kubernetes CronJobs</strong></td><td>Container scheduler</td><td><span>Yes</span></td><td><span>No</span></td><td><span>Yes</span></td><td><span>Partial</span></td></tr><tr><td><strong>Jenkins</strong></td><td>CI/CD automation</td><td><span>Yes</span></td><td><span>Partial</span></td><td><span>Yes</span></td><td><span>Yes</span></td></tr><tr><td><strong>GitHub Actions</strong></td><td>CI/CD scheduler</td><td><span>Partial</span></td><td><span>Yes</span></td><td><span>Limited</span></td><td><span>Yes</span></td></tr><tr><td><strong>HashiCorp Nomad</strong></td><td>Workload orchestration</td><td><span>Yes</span></td><td><span>Partial</span></td><td><span>Yes</span></td><td><span>Yes</span></td></tr></tbody></table> <p>The following is a useful guide to when each tool is best for a given situation, based on the size of the team, the complexity of the infrastructure, and the automation need.</p> <h3>1. ActiveBatch</h3> <p><a href="https://www.advsyscon.com/activebatch/">ActiveBatch</a> is a full-featured enterprise scheduler that provides a centralized platform for managing and automating tasks across multiple servers and environments. It offers a wide range of features, including job scheduling, task automation, and workflow orchestration. 
ActiveBatch suits teams that need to coordinate complex workflows spanning multiple systems and departments.</p> <p><strong>Best for:</strong> Large enterprises with heavy compliance requirements, batch processing, and cross-department automation needs.</p> <p><strong>Strengths:</strong></p> <ul> <li>Advanced workflow orchestration</li> <li>Deep enterprise integrations</li> <li>Strong SLA and dependency management</li> <li>Centralized control of automation</li> </ul> <p><strong>Limitations:</strong></p> <ul> <li>Enterprise licensing costs can be significant</li> <li>Requires ongoing operational maintenance</li> <li>Overkill for small DevOps teams</li> </ul> <p><strong>When to choose it:</strong></p> <p>Use ActiveBatch when you need to automate across several business systems under strict rules, approvals, and audit compliance.</p> <h3>2. CloudRay</h3> <p><a href="https://cloudray.io">CloudRay</a> is a script automation platform that centralizes how teams run, schedule, and monitor operational scripts across servers without managing cron jobs individually.</p> <p><strong>Best for:</strong> DevOps teams, developers, and system administrators who automate tasks across many servers or environments.</p> <p><strong>Strengths:</strong></p> <ul> <li>Script-first execution (no workflow DSL required)</li> <li>Centralized scheduling for all machines</li> <li>Real-time execution history and run logs</li> <li>Works across cloud and on-prem servers</li> <li>Lightweight compared to full orchestration platforms</li> </ul> <p><strong>Limitations:</strong></p> <ul> <li>Not designed for complex DAG workflows</li> <li>Not a <a href="https://cloudray.io/articles/configuration-management-versus-infrastructure-as-code#what-is-configuration-management">configuration management system</a></li> </ul> <p><strong>When to choose it:</strong></p> <p>If your team already uses Bash scripts but needs more visibility, reliability, and
centralized scheduling than cron can provide, choose CloudRay.</p> <h3>3. Apache Airflow</h3> <p><a href="https://airflow.apache.org/">Apache Airflow</a> is a free and open-source platform that lets you write, schedule, and monitor workflows programmatically. It is widely used for data engineering pipelines but also supports general-purpose job scheduling.</p> <p><strong>Best for:</strong> Data engineering teams that run ETL pipelines, ML workflows, or automation with complex dependencies.</p> <p><strong>Strengths:</strong></p> <ul> <li>Strong dependency management</li> <li>Python-based workflow definitions</li> <li>Distributed execution across multiple workers</li> <li>Rich ecosystem integrations</li> </ul> <p><strong>Limitations:</strong></p> <ul> <li>Operationally complex</li> <li>Requires ongoing infrastructure maintenance</li> <li>Not great for automating simple systems</li> </ul> <p><strong>When to choose it:</strong></p> <p>Use Airflow when automation requires more than one pipeline to work together, rather than just scheduled scripts that run on their own.</p> <h3>4. Rundeck</h3> <p><a href="https://www.rundeck.com/">Rundeck</a> is an open-source automation platform that lets teams automate tasks across many servers and environments. It provides a single place to schedule jobs, coordinate workflows, and control access across servers.</p> <p><strong>Best for:</strong> Operations teams that need to control the execution of operational tasks across servers.</p> <p><strong>Strengths:</strong></p> <ul> <li>Role-based access control</li> <li>Running jobs on multiple nodes</li> <li>Automation driven by APIs</li> <li>Mature ecosystem</li> </ul> <p><strong>Limitations:</strong></p> <ul> <li>Requires setup and maintenance</li> <li>Job definition abstractions add operational complexity</li> </ul> <p><strong>When to choose it:</strong></p> <p>Choose Rundeck when teams need structured operational runbooks and limited access to infrastructure automation.</p> <h3>5. 
Stonebranch</h3> <p><a href="https://www.stonebranch.com/">Stonebranch</a> is an enterprise-grade automation platform that focuses on managing workloads in mixed IT environments.</p> <p><strong>Best for:</strong></p> <p>Companies automating their work across mainframes, cloud platforms, and enterprise apps.</p> <p><strong>Strengths:</strong></p> <ul> <li>Hybrid infrastructure automation</li> <li>Enterprise scalability</li> <li>Advanced monitoring and reporting</li> </ul> <p><strong>Limitations:</strong></p> <ul> <li>Enterprise pricing model</li> <li>Complex onboarding</li> </ul> <p><strong>When to choose it:</strong></p> <p>Best for businesses that need to replace legacy workload schedulers or coordinate automation across different types of environments.</p> <h3>6. VisualCron</h3> <p><a href="https://www.visualcron.com/">VisualCron</a> is a Windows-based platform for automating and scheduling tasks that uses a graphical user interface (GUI).</p> <p><strong>Best for:</strong></p> <p>Windows-heavy environments that need automation without requiring scripting expertise</p> <p><strong>Strengths:</strong></p> <ul> <li>Visual workflow interface</li> <li>Strong Windows integration</li> <li>Built-in task monitoring</li> </ul> <p><strong>Limitations:</strong></p> <ul> <li>Windows-focused and not designed for Linux/Unix environments</li> <li>Limited large-scale distributed automation</li> </ul> <p><strong>When to choose it:</strong></p> <p>If most of your automation runs in Windows Server environments with little Linux infrastructure, choose VisualCron.</p> <h3>7. Celery</h3> <p><a href="https://docs.celeryq.dev/en/stable/">Celery</a> is an open-source distributed task queue for running background tasks and asynchronous jobs in Python applications. 
It is commonly used with message brokers like Redis or RabbitMQ, enabling teams to distribute, schedule, and monitor tasks across multiple workers.</p> <p><strong>Best for:</strong></p> <p>Python-based applications that need background processing, distributed task execution, or application-level scheduling.</p> <p><strong>Strengths:</strong></p> <ul> <li>Distributed task execution across multiple workers</li> <li>Integrates well with Python frameworks like Django and Flask</li> <li>Supports message brokers like Redis and RabbitMQ</li> <li>Built-in retry logic and task monitoring</li> <li>Scales horizontally with worker nodes</li> </ul> <p><strong>Limitations:</strong></p> <ul> <li>Requires message broker infrastructure</li> <li>Operational complexity grows with scale</li> <li>Primarily tied to Python ecosystems</li> </ul> <p><strong>When to choose it:</strong></p> <p>If your application is written in Python and you need to run background jobs, scheduled tasks, or distributed workloads right inside the application architecture, choose Celery.</p> <h3>8. Sidekiq</h3> <p><a href="https://sidekiq.org/">Sidekiq</a> is a background job processor for Ruby applications. It uses Redis as a message broker and lets developers offload jobs outside the main application process. 
Ruby on Rails applications often use Sidekiq to send emails, run scheduled jobs, and process data in the background.</p> <p><strong>Best for:</strong></p> <p>Ruby and Ruby on Rails applications that require background processing and scheduled job execution.</p> <p><strong>Strengths:</strong></p> <ul> <li>High-performance job processing using Redis</li> <li>Seamless integration with Ruby on Rails</li> <li>Simple setup for background jobs and scheduled tasks</li> <li>Supports retries, job prioritization, and concurrency</li> <li>Well-established ecosystem in the Ruby community</li> </ul> <p><strong>Limitations:</strong></p> <ul> <li>Primarily tied to the Ruby ecosystem</li> <li>Requires Redis infrastructure</li> <li>Scheduling capabilities are basic compared to dedicated automation platforms</li> </ul> <p><strong>When to choose it:</strong></p> <p>If your application is built with Ruby or Rails and you need a reliable way to run background tasks or scheduled jobs, choose Sidekiq.</p> <h3>9. Kubernetes CronJobs</h3> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/cronjob/">Kubernetes CronJobs</a> allow scheduled jobs to run inside Kubernetes clusters using cron-style scheduling syntax. 
They create Kubernetes Jobs on a defined schedule and run them inside containers, making them suitable for cloud-native environments.</p> <p><strong>Best for:</strong></p> <p>Teams running containerized workloads in Kubernetes that need to schedule batch or recurring jobs.</p> <p><strong>Strengths:</strong></p> <ul> <li>Native scheduling inside Kubernetes clusters</li> <li>Uses familiar cron syntax</li> <li>Runs jobs as containerized workloads</li> <li>Integrates with Kubernetes observability and logging</li> <li>Scales with cluster infrastructure</li> </ul> <p><strong>Limitations:</strong></p> <ul> <li>Only works inside Kubernetes environments</li> <li>Requires Kubernetes operational knowledge</li> <li>Monitoring and failure visibility may require additional tooling</li> </ul> <p><strong>When to choose it:</strong></p> <p>If your workloads are already running in Kubernetes and you need to run scheduled tasks as containerized jobs in the cluster, use Kubernetes CronJobs.</p> <h3>10. Jenkins</h3> <p><a href="https://www.jenkins.io/">Jenkins</a> is an open-source automation server widely used for continuous integration and continuous delivery (CI/CD). 
While Jenkins is not a dedicated job scheduler, many teams use it to automate recurring tasks such as running infrastructure scripts, generating reports, and performing maintenance.</p> <p><strong>Best for:</strong></p> <p>Engineering teams that already use Jenkins for CI/CD and want to schedule automated tasks on the same platform.</p> <p><strong>Strengths:</strong></p> <ul> <li>Mature ecosystem with thousands of plugins</li> <li>Built-in job scheduling using cron-style triggers</li> <li>Strong integration with CI/CD pipelines</li> <li>Extensive automation capabilities across systems</li> </ul> <p><strong>Limitations:</strong></p> <ul> <li>Designed primarily for CI/CD workflows rather than task scheduling</li> <li>Requires ongoing maintenance and plugin management</li> <li>Can become operationally heavy for simple scheduled tasks</li> </ul> <p><strong>When to choose it:</strong></p> <p>Choose Jenkins when scheduled tasks are closely tied to CI/CD pipelines or when your organization already relies on Jenkins as a central automation platform.</p> <h3>11. GitHub Actions</h3> <p><a href="https://docs.github.com/en/actions">GitHub Actions</a> lets developers set up automated workflows directly inside GitHub repositories. 
Teams can use scheduled workflows with cron syntax to run maintenance tasks, dependency updates, or automation scripts at defined intervals.</p> <p><strong>Best for:</strong></p> <p>Teams that want to set up automation tasks right in their GitHub-based development process.</p> <p><strong>Strengths:</strong></p> <ul> <li>Built directly into the GitHub platform</li> <li>Uses cron-style scheduling syntax</li> <li>Easy integration with repositories and CI/CD pipelines</li> <li>No infrastructure management required</li> </ul> <p><strong>Limitations:</strong></p> <ul> <li>Primarily tied to repository-based workflows</li> <li>Not designed for running large operational workloads</li> <li>Limited visibility for complex operational automation</li> </ul> <p><strong>When to choose it:</strong></p> <p>Use GitHub Actions when scheduled jobs are closely tied to repository operations such as CI tasks, maintenance scripts, or automated checks.</p> <h3>12. HashiCorp Nomad</h3> <p><a href="https://www.nomadproject.io/">HashiCorp Nomad</a> is a platform for managing and scheduling workloads, both containerized and non-containerized, across clusters. Nomad supports periodic jobs, which let tasks run on a schedule. 
This makes it a more flexible option than cron in distributed environments.</p> <p><strong>Best for:</strong></p> <p>Organizations using HashiCorp tooling to manage workloads distributed across clusters.</p> <p><strong>Strengths:</strong></p> <ul> <li>Can schedule both containerized and non-containerized workloads</li> <li>Supports cron-style scheduling for batch jobs</li> <li>Integrates with other HashiCorp tools like Consul and Vault</li> <li>Scalable and fault-tolerant cluster architecture</li> <li>Flexible job specification format</li> </ul> <p><strong>Limitations:</strong></p> <ul> <li>More complex than traditional cron for simple scheduling needs</li> <li>Requires cluster infrastructure and management</li> <li>Not specifically designed for cron-style task scheduling</li> <li>Learning curve for Nomad’s job specification format</li> </ul> <p><strong>When to choose it:</strong></p> <p>Choose Nomad when you need to schedule batch jobs or recurring tasks as part of a larger container orchestration or workload management strategy, especially if you’re already using other HashiCorp tools.</p> <h2>How to Choose the Right Cron Alternative</h2> <p>Not all cron alternatives solve the same problem. Some tools are built for application-level background jobs, while others are designed for multi-server workload automation or infrastructure scheduling. 
The best option for you will depend on your team size, environment, and operational needs.</p> <p>Here are a few practical guidelines:</p> <p><strong>Use Celery or Sidekiq if:</strong></p> <ul> <li>Your application needs background job processing</li> <li>Tasks are tightly integrated with your application code</li> <li>You already run Python or Ruby services</li> </ul> <p><strong>Use Kubernetes CronJobs if:</strong></p> <ul> <li>Your workloads run in Kubernetes</li> <li>Jobs are containerized</li> <li>Scheduling needs to happen inside the cluster</li> </ul> <p><strong>Use Jenkins or GitHub Actions if:</strong></p> <ul> <li>Tasks are related to CI/CD workflows</li> <li>Automation is tied to repository activity</li> <li>Infrastructure automation is already handled by CI pipelines</li> </ul> <p><strong>Use Airflow if:</strong></p> <ul> <li>Workflows have complex dependencies</li> <li>Tasks must run in a defined sequence</li> <li>You are building data pipelines or ETL processes</li> </ul> <p><strong>Use infrastructure automation platforms (CloudRay, Rundeck, ActiveBatch, Stonebranch) if:</strong></p> <ul> <li>Scripts run across multiple servers</li> <li>Teams need centralized scheduling and execution visibility</li> <li>Operational automation must be audited and monitored</li> </ul> <p>Choosing the right tool is less about replacing cron and more about understanding what kind of automation problem you are trying to solve.</p> <h2>Conclusion</h2> <p>Cron is still a reliable way to schedule simple tasks on a single server. But as infrastructure becomes more distributed and operational reliability becomes more important, many teams need more visibility and centralized control than traditional cron can provide. There are many modern cron alternatives, from Unix-native schedulers to enterprise automation platforms — each best suited to a different level of complexity. 
The right solution will depend on whether you need lightweight scheduling, workflow orchestration, or centralized automation across multiple servers.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>Why Cron Jobs Fail Silently in Production and How to Fix Ithttps://cloudray.io/articles/why-cron-job-fails-silently-in-productionhttps://cloudray.io/articles/why-cron-job-fails-silently-in-productionLearn the most common reasons why cron jobs fail silently in production and how to debug, fix, and prevent themThu, 12 Feb 2026 00:00:00 GMT<p>Cron jobs are one of the easiest tools we use in production, but they can also be one of the most annoying when something goes wrong. You set up a <a href="https://cloudray.io/articles/script-automation-guide">script</a>, schedule it, and then forget about it, thinking it will work. Then, after a few days or weeks, you find out that it didn’t run at all. No warnings. No records. No errors. Nothing but silence.</p> <p>This guide will show the most common reasons why cron jobs fail silently in production. It will give you real-life examples of what went wrong, and show you exactly how to fix each problem.</p> <h2>Contents</h2> <ul> <li><a href="#common-causes-of-silent-cron-failures-and-how-to-fix-them">Common Causes of Silent Cron Failures and How to Fix Them</a> <ul> <li><a href="#1-cron-doesnt-use-your-shell-environment">1. Cron Doesn’t Use Your Shell Environment</a></li> <li><a href="#2-cron-uses-binsh-not-bash">2. Cron Uses <code>/bin/sh</code> Not Bash</a></li> <li><a href="#3-cron-output-is-being-discarded">3. Cron Output Is Being Discarded</a></li> <li><a href="#4-permissions-and-ownership-issues">4. Permissions and Ownership Issues</a></li> <li><a href="#5-relative-paths-break-under-cron">5. Relative Paths Break Under Cron</a></li> <li><a href="#6-missing-environment-variables">6. 
Missing Environment Variables</a></li> <li><a href="#7-youre-editing-the-wrong-crontab">7. You’re Editing the Wrong Crontab</a></li> <li><a href="#8-the-cron-daemon-isnt-running">8. The Cron Daemon Isn’t Running</a></li> <li><a href="#9-long-running-jobs-overlap-and-kill-each-other">9. Long-Running Jobs Overlap and Kill Each Other</a></li> <li><a href="#10-cron-jobs-inside-docker-arent-running-at-all">10. Cron Jobs Inside Docker Aren’t Running at All</a></li> <li><a href="#11-failure-to-add-error-logging">11. Failure to Add Error Logging</a></li> </ul> </li> <li><a href="#how-cloudray-helps-you-avoid-silent-cron-failures">How CloudRay Helps You Avoid Silent Cron Failures</a></li> </ul> <h2>Common Causes of Silent Cron Failures and How to Fix Them</h2> <p>Here are some of the reasons why your cron jobs might be failing silently in production and how to fix them:</p> <h3>1. Cron Doesn’t Use Your Shell Environment</h3> <p>This is the most common reason cron jobs fail silently. When you run a script manually on your terminal, it uses the following:</p> <ul> <li>Your shell environment (this could be Bash, Zsh, etc. depending on your Linux system)</li> <li>Your path (that is $PATH)</li> <li>Your environment variables, aliases, and functions.</li> </ul> <p>However, cron jobs do not use any of these. 
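<p>A quick way to see the difference for yourself is to add a temporary job that dumps the environment cron actually provides, then compare it with your interactive shell. (This is a diagnostic sketch; <code>/tmp/cron_env.log</code> is just an example path.)</p>

```shell
# Temporary crontab entry: record the environment cron gives your jobs
* * * * * env > /tmp/cron_env.log

# After a minute, compare it against your interactive shell's environment
diff <(sort /tmp/cron_env.log) <(env | sort)
```

<p>The diff is usually striking: cron typically provides only a handful of variables such as <code>HOME</code>, <code>LOGNAME</code>, <code>SHELL</code>, and a minimal <code>PATH</code>.</p>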
Instead, they run with a limited environment, which means a command that works well in your terminal might not work when cron runs it.</p> <p>For example, this command works when run manually on the terminal:</p> <pre><code>aws s3 sync /data s3://my-bucket</code></pre> <p>But when run as a cron job, it fails because <code>aws</code> is not in cron’s PATH.</p> <pre><code>/bin/sh: 1: aws: not found</code></pre> <p><strong>Fix:</strong> Always use the full path to the command in your cron entry:</p> <pre><code>* * * * * /usr/local/bin/aws s3 sync /data s3://my-bucket</code></pre> <p>If you’re not sure of the location of the command, check the path with the <code>which</code> command:</p> <pre><code>which aws</code></pre> <p>You can also log the PATH cron is using to confirm it’s correct:</p> <pre><code>echo $PATH &gt;&gt; /tmp/cron_path.log</code></pre> <h3>2. Cron Uses <code>/bin/sh</code> Not Bash</h3> <p>Cron runs jobs with <code>/bin/sh</code> by default, not Bash.
This means that if you use Bash-specific syntax in a cron job, it will fail.</p> <p>Here are common syntax elements that will break when used in cron jobs:</p> <ul> <li><code>[[...]]</code></li> <li><code>source</code></li> <li>Bash arrays</li> <li><code>set -o pipefail</code></li> </ul> <p>For example, this command will work interactively on your terminal but fail when run as a cron job:</p> <pre><code>[[ -f /tmp/file ]] &amp;&amp; echo "Exists"</code></pre> <p><strong>Fix:</strong> You need to tell cron explicitly which shell to use. One way is to add a shebang (<code>#!</code>) at the top of your script:</p> <pre><code>#!/bin/bash

[[ -f /tmp/file ]] &amp;&amp; echo "Exists"</code></pre> <p>You can also invoke Bash explicitly in your crontab entry:</p> <pre><code>* * * * * /bin/bash /opt/scripts/cleanup.sh</code></pre> <h3>3. Cron Output Is Being Discarded</h3> <p>Cron sends job output to the local mail system. Most modern servers do not have email configured, so the output from cron jobs is simply discarded.
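<p>If your servers do route local mail, cron's <code>MAILTO</code> variable controls where that output is delivered; setting it at the top of the crontab is a lightweight complement to log files. (The address below is a placeholder.)</p>

```shell
# At the top of the crontab: deliver job output to a monitored mailbox
MAILTO="ops@example.com"

# All entries below inherit the setting
0 2 * * * /opt/scripts/backup.sh
```

<p>Setting <code>MAILTO=""</code> explicitly silences mail delivery, which is a deliberate choice rather than an accidental one.</p>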
As a result, jobs can fail repeatedly without you ever seeing an error.</p> <p><strong>Fix:</strong> The most effective fix is to redirect stdout and stderr to a log file:</p> <pre><code>* * * * * /opt/scripts/cleanup.sh &gt;&gt; /var/log/cleanup.log 2&gt;&amp;1</code></pre> <p>Rotate the log with <code>logrotate</code> to keep it from growing endlessly, and treat explicit logging as a baseline requirement for every production cron job.</p> <h3>4. Permissions and Ownership Issues</h3> <p>Cron jobs often fail because the user running the job doesn’t have permission to execute the script or access the files it needs.</p> <p>This usually happens when:</p> <ul> <li>The script is not executable</li> <li>The user can’t write to a directory</li> <li>The job relies on Docker or system-level commands</li> <li>The script was created by another user</li> </ul> <p><strong>Fix:</strong> Make sure your script is executable and owned correctly:</p> <pre><code>chmod +x /opt/scripts/cleanup.sh
chown cronuser:cronuser /opt/scripts/cleanup.sh</code></pre> <p>Always do a quick test as the cron user to confirm permissions are correct:</p> <pre><code>su - cronuser -c "/opt/scripts/cleanup.sh"</code></pre> <p>If it fails there, it will fail in cron too.</p> <h3>5. Relative Paths Break Under Cron</h3> <p>You have to tell cron exactly where your script is.
It doesn’t run from your Git repo, project folder, or home directory.</p> <p>For example, this command will fail silently if run as a cron job:</p> <pre><code><span><span>*</span><span> *</span><span> *</span><span> *</span><span> *</span><span> cd scripts &amp;&amp; </span><span>./cleanup.sh</span></span></code><span></span><span></span></pre> <p>This is because cron has no idea where the <code>scripts</code> directory is.</p> <p><strong>Fix:</strong> You should always use absolute paths in your cron jobs:</p> <pre><code><span><span>*</span><span> *</span><span> *</span><span> *</span><span> *</span><span> /opt/scripts/cleanup.sh</span></span></code><span></span><span></span></pre> <p>You can also explicitly set the working directory:</p> <pre><code><span><span>*</span><span> *</span><span> *</span><span> *</span><span> *</span><span> cd /opt/scripts &amp;&amp; </span><span>./cleanup.sh</span></span></code><span></span><span></span></pre> <p>Absolute paths are safer and far more predictable in production.</p> <h3>6. Missing Environment Variables</h3> <p>Cron runs jobs with a minimal environment, so variables your script relies on are not set by default. It does not load shell profile files like <code>.bashrc</code>, <code>.zshrc</code>, or <code>.profile</code>. 
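To see exactly what environment cron does provide, temporarily add a job that dumps it to a file (the output path is illustrative) and compare the result against <code>env</code> in your interactive shell:

```shell
# Temporary crontab entry: capture the environment cron actually
# runs with, for comparison against your login shell.
* * * * * env > /tmp/cron-env.txt
```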
This is one major reason why backup jobs, cloud uploads, or API calls fail silently in production.</p> <p>For example, this sequence works when run manually in a terminal:</p> <pre><code><span><span>export</span><span> AWS_ACCESS_KEY_ID</span><span>=</span><span>xxx</span></span> <span><span>export</span><span> AWS_SECRET_ACCESS_KEY</span><span>=</span><span>yyy</span></span> <span><span>./backup.sh</span></span></code><span></span><span></span></pre> <p>But it will fail under cron because those variables are not defined in cron’s environment.</p> <p><strong>Fix:</strong> There are three options to fix this.</p> <p>Option 1: Source a known environment file</p> <pre><code><span><span>*</span><span> *</span><span> *</span><span> *</span><span> *</span><span> .</span><span> /etc/profile &amp;&amp; </span><span>/opt/scripts/backup.sh</span></span></code><span></span><span></span></pre> <p>Option 2: Load variables inside the script</p> <pre><code><span><span>export</span><span> AWS_ACCESS_KEY_ID</span><span>=</span><span>"xxx"</span></span> <span><span>export</span><span> AWS_SECRET_ACCESS_KEY</span><span>=</span><span>"yyy"</span></span></code><span></span><span></span></pre> <p>Option 3: Use a <code>.env</code> file with strict permissions and source it explicitly</p> <pre><code><span><span>*</span><span> *</span><span> *</span><span> *</span><span> *</span><span> .</span><span> /opt/scripts/.env &amp;&amp; </span><span>/opt/scripts/backup.sh</span></span></code><span></span><span></span></pre> <h3>7. You’re Editing the Wrong Crontab</h3> <p>Even experienced engineers fall for this. The root user’s crontab is completely separate from each user’s crontab. 
So the job exists, but it’s not running because it’s in the wrong crontab.</p> <p><strong>Fix:</strong> When setting up cron jobs, always verify which crontab you’re editing:</p> <pre><code><span><span>crontab</span><span> -l</span></span> <span><span>sudo</span><span> crontab</span><span> -l</span></span> <span><span>sudo</span><span> crontab</span><span> -u</span><span> devops</span><span> -l</span></span></code><span></span><span></span></pre> <p>If the job isn’t listed there, it’s not running there.</p> <h3>8. The Cron Daemon Isn’t Running</h3> <p>Sometimes the cron daemon itself isn’t running: the service may have crashed or been stopped during maintenance. This stops all of your cron jobs from running, and you might not even know about it.</p> <p><strong>Fix:</strong> You can check if the cron daemon is running:</p> <pre><code><span><span>systemctl</span><span> status</span><span> cron</span></span> <span><span># or</span></span> <span><span>systemctl</span><span> status</span><span> crond</span></span></code><span></span><span></span></pre> <p>If it isn’t running, start and enable it:</p> <pre><code><span><span>sudo</span><span> systemctl</span><span> start</span><span> cron</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> cron</span></span></code><span></span><span></span></pre> <h3>9. 
Long-Running Jobs Overlap and Kill Each Other</h3> <p>Cron doesn’t care if your last job is still running.</p> <p>If a job takes longer than its scheduled interval, it can overlap with itself and fail in unpredictable ways.</p> <p>For example, take a job that runs every minute:</p> <pre><code><span><span>*</span><span> *</span><span> *</span><span> *</span><span> *</span><span> /opt/scripts/report.sh</span></span></code><span></span><span></span></pre> <p>If <code>report.sh</code> takes 90 seconds, multiple instances stack up.</p> <p><strong>Fix:</strong> Schedule a wrapper script that takes a lock, so only one instance can run at a time:</p> <pre><code><span><span>#!/usr/bin/env bash</span></span> <span></span> <span><span>LOCKDIR</span><span>=</span><span>"/tmp/report.lock"</span></span> <span></span> <span><span>if</span><span> !</span><span> mkdir</span><span> "</span><span>$LOCKDIR</span><span>"</span><span> 2&gt;</span><span>/dev/null</span><span>; </span><span>then</span></span> <span><span> echo</span><span> "Report is already running"</span></span> <span><span> exit</span><span> 1</span></span> <span><span>fi</span></span> <span></span> <span><span>trap</span><span> "rmdir '</span><span>$LOCKDIR</span><span>'"</span><span> EXIT</span></span> <span></span> <span><span># Run the actual report</span></span> <span><span>/opt/scripts/report.sh</span></span></code><span></span><span></span></pre> <p>This ensures only one instance runs at a time. On Linux, <code>flock</code> from util-linux gives the same guarantee in a single crontab line: <code>* * * * * flock -n /tmp/report.lock /opt/scripts/report.sh</code>.</p> <h3>10. Cron Jobs Inside Docker Aren’t Running at All</h3> <p>Cron does not run inside a Docker container by default. 
You need to explicitly install cron, start the daemon, and run it in the foreground so the container stays alive.</p> <p><strong>Fix:</strong> Make sure cron runs in the foreground in your Dockerfile:</p> <pre><code><span><span># Install cron</span></span> <span><span>RUN</span><span> apt-get</span><span> update</span><span> &amp;&amp; </span><span>apt-get</span><span> install</span><span> -y</span><span> cron</span></span> <span></span> <span><span># Start cron in foreground</span></span> <span><span>CMD</span><span> [</span><span>"cron"</span><span>, </span><span>"-f"]</span></span></code><span></span><span></span></pre> <p>Also verify a crontab is present in the container:</p> <pre><code><span><span>cat</span><span> /etc/crontab</span></span></code><span></span><span></span></pre> <h3>11. Failure to Add Error Logging</h3> <p>Some scripts fail, exit, and don’t leave a log of what happened. When you do not implement proper error handling, you will never have visibility into what went wrong.</p> <p><strong>Fix:</strong> It’s important to add strict mode to every cron script:</p> <pre><code><span><span>#!/usr/bin/env bash</span></span> <span><span>set</span><span> -euo</span><span> pipefail</span></span> <span></span> <span><span># Your script logic here</span></span></code><span></span><span></span></pre> <p>Then log key details about each execution:</p> <pre><code><span><span>LOGFILE</span><span>=</span><span>"/var/log/cron-debug.log"</span></span> <span></span> <span><span>{</span></span> <span><span> echo</span><span> "----- $(</span><span>date</span><span>) -----"</span></span> <span><span> echo</span><span> "User: $(</span><span>whoami</span><span>)"</span></span> <span><span> echo</span><span> "Running: </span><span>$0</span><span>"</span></span> <span><span> echo</span><span> "PATH: </span><span>$PATH</span><span>"</span></span> <span><span>} </span><span>&gt;&gt;</span><span> "</span><span>$LOGFILE</span><span>"</span><span> 
2&gt;&amp;1</span></span></code><span></span><span></span></pre> <p>This one change fixes most silent failures.</p> <h2>How CloudRay Helps You Avoid Silent Cron Failures</h2> <p>Cron jobs fail without anyone knowing because they were never meant to be seen, audited, or debugged. Once a job is scheduled, you have no way of knowing if it ran, failed halfway through, or never ran at all.</p> <p>CloudRay takes a different approach to scheduled automation.</p> <p>CloudRay lets you <a href="https://cloudray.io/docs/schedules">schedule scripts</a> from one place instead of having to deal with crontabs on different servers. It also keeps a full history of when each script was run. You can see everything that happened when a script ran: the start time, the end time, the exit status, and the output in real time.</p> <p><a href="https://cloudray.io/docs/runlogs">CloudRay’s Run Logs</a> keep track of every scheduled run, making it easy to check on past runs, fix problems, and make sure a job really ran. You don’t have to manually send output to log files or SSH into servers to find out what went wrong.</p> <p>CloudRay also fixes a lot of the common problems with cron that we talked about earlier:</p> <ul> <li> <p>Scripts run in a controlled execution environment</p> </li> <li> <p>Logs are automatically saved</p> </li> <li> <p>Failures are easy to see right away</p> </li> <li> <p>Execution history is kept across all servers</p> </li> </ul> <p>You still write Bash scripts the same way. Instead of just hoping that a cron job worked, you can check to see if it did and know exactly what went wrong when it didn’t.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>What is Configuration Drift and How to Eliminate Ithttps://cloudray.io/articles/configuration-drifthttps://cloudray.io/articles/configuration-driftConfiguration drift creates security gaps and wastes engineering time. 
Learn proven strategies to detect, prevent, and eliminate it.Thu, 25 Sep 2025 00:00:00 GMT<p>Infrastructure teams manage thousands of servers and applications, but keeping configurations consistent across environments becomes increasingly difficult as systems scale. Configuration drift occurs when systems gradually diverge from their intended state, creating security vulnerabilities, operational inefficiencies, and troubleshooting nightmares that consume valuable engineering time.</p> <p>In this article, you’ll learn what configuration drift is, how it impacts your infrastructure operations, and proven strategies to detect, prevent, and eliminate configuration inconsistencies across your server environments.</p> <h2>Contents</h2> <ul> <li><a href="#what-is-configuration-drift">What is configuration drift?</a></li> <li><a href="#common-configuration-drift-scenarios">Common configuration drift scenarios</a></li> <li><a href="#causes-of-configuration-drift">Causes of configuration drift</a></li> <li><a href="#strategies-for-eliminating-configuration-drift">Strategies for eliminating configuration drift</a></li> <li><a href="#wrapping-up">Wrapping up</a></li> </ul> <h2>What is configuration drift?</h2> <img src="/_astro/configuration-drift-illustration._lv7KKrh.svg" alt="Illustration of configuration drift" /> <p>Configuration drift is the gradual divergence of system configurations from their original, intended state over time. When servers, applications, or infrastructure components are initially deployed, they start with identical configurations designed to ensure consistent behaviour across environments. However, through manual changes, emergency fixes, software updates, and routine maintenance, these systems slowly develop unique configurations that no longer match the original specifications.</p> <p>Understanding different approaches to managing infrastructure consistency becomes crucial for organisations experiencing drift across their server environments. 
<a href="https://cloudray.io/articles/configuration-management-versus-infrastructure-as-code">The comparison between configuration management and infrastructure as code</a> reveals various strategies for preventing and detecting configuration inconsistencies before they impact operations. Configuration drift creates operational challenges including difficult troubleshooting when identical systems behave differently, security vulnerabilities when patches are inconsistently applied, and complex compliance audits when security controls aren’t uniformly implemented.</p> <h2>Common configuration drift scenarios</h2> <p>Configuration drift shows up in predictable ways across most infrastructure environments. Knowing these common scenarios helps teams catch drift early and prevent it.</p> <ol> <li><strong>Emergency Security Patches</strong></li> </ol> <p>During a security incident, admins patch production servers individually rather than through standard processes. A database server gets a MySQL security update at 2am, while identical servers in other environments are unpatched. Six months later, vulnerability scans show inconsistent patch levels across systems that should be the same, creating compliance gaps and attack vectors.</p> <ol start="2"> <li><strong>Performance Tuning Under Pressure</strong></li> </ol> <p>Production performance issues drive quick fixes that bypass change management. An engineer tweaks Apache config on a web server to handle traffic spikes, changing <code>MaxRequestWorkers</code> and <code>KeepAliveTimeout</code>. The fix works, but similar servers are still running default config. When the load balancer sends traffic to these servers, response times are unpredictable because each server handles requests differently.</p> <ol start="3"> <li><strong>Developer Testing and Debugging</strong></li> </ol> <p>Dev teams modify production-like environments during troubleshooting sessions. 
A dev enables verbose logging on an app server to debug integration issues, changes database connection pool sizes, or modifies environment variables for testing. These temporary changes often become permanent when devs forget to revert them, and systems behave differently than documented.</p> <ol start="4"> <li><strong>Automated Updates with Timing Differences</strong></li> </ol> <p>Differences in package managers and auto-update systems can cause drift when systems update at different times or encounter different dependency conflicts. <a href="https://cloudray.io/articles/script-automation-guide">Automation strategies</a> vary across environments, and systems configured for auto patching might install different package versions based on when they check for updates. A server updated on Tuesday might install a different library version than one updated on Friday, and behave slightly differently.</p> <ol start="5"> <li><strong>Manual Configuration Workarounds</strong></li> </ol> <p>Operations teams create quick fixes for recurring issues without updating standard operating procedures. When backup scripts fail due to permission issues, an admin might manually adjust file permissions or modify crontab entries. These undocumented changes solve immediate problems but create unique config that doesn’t match other systems running the same workload.</p> <p>These scenarios show how drift accumulates through seemingly reasonable operational decisions. Knowing the root causes behind these patterns helps teams prevent drift instead of reacting to it.</p> <p>See also: <a href="https://cloudray.io/articles/bash-vs-python">Best language for DevOps Automation</a></p> <h2>Causes of configuration drift</h2> <p>The above scenarios show how drift happens, but understanding the underlying causes helps teams fix the root problems, not just the symptoms.</p> <ol> <li><strong>No Standardized Change Management</strong></li> </ol> <p>Most organisations don’t have processes for implementing changes across environments. 
Teams skip formal change approval for “quick fixes” or emergency situations and make undocumented changes. Without standardized procedures, each admin handles changes differently and variations accumulate over time.</p> <ol start="2"> <li><strong>Insufficient Automation Coverage</strong></li> </ol> <p>Manual configuration processes inherently introduce inconsistencies. When admins configure services, install packages, or modify system settings manually, human error and personal preference create variations. Organisations often automate initial deployment but rely on manual processes for maintenance, updates, and troubleshooting.</p> <ol start="3"> <li><strong>Time Pressure and Operational Urgency</strong></li> </ol> <p>Production incidents create pressure to fix things now without following process. During outages or performance issues, teams prioritize getting service back up over configuration consistency. These emergency changes often bypass documentation and change management workflows and create permanent drift.</p> <ol start="4"> <li><strong>Knowledge Gaps and Tool Limitations</strong></li> </ol> <p>Team members with different skill levels and tool preferences handle configurations differently. Some admins prefer command-line tools while others use graphical interfaces, creating subtle configuration differences. Many teams lack complete knowledge of system dependencies, so changes produce unintended side effects.</p> <ol start="5"> <li><strong>No Monitoring and Detection</strong></li> </ol> <p>Most environments don’t have configuration monitoring, so drift accumulates undetected. Without automated tools to compare actual config to desired state, teams only discover inconsistencies during troubleshooting or audits. 
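Even a lightweight homegrown check can provide that comparison. As a minimal sketch (the file names and paths are illustrative, not a specific tool's layout), a script can record a checksum baseline for critical config files and verify the live files against it later:

```shell
#!/usr/bin/env bash
# Minimal drift-check sketch: record a checksum baseline for a config
# file, then compare the live file against it later. In practice the
# baseline would cover real configs (e.g. /etc/ssh/sshd_config) and
# live somewhere tamper-resistant; everything here is illustrative.
set -euo pipefail

WORKDIR=$(mktemp -d)
CONF="$WORKDIR/app.conf"
BASELINE="$WORKDIR/baseline.sha256"

echo "max_connections=100" > "$CONF"

# 1. Record the approved baseline once.
sha256sum "$CONF" > "$BASELINE"

# 2. Later, verify the live file still matches (--status: exit code only).
if sha256sum --check --status "$BASELINE"; then
  echo "No drift detected"
fi

# 3. Simulate an undocumented "quick fix", then re-check.
echo "max_connections=500" > "$CONF"
if ! sha256sum --check --status "$BASELINE"; then
  echo "Configuration drift detected"
fi
```

Running such a check from a scheduler and alerting on a non-zero exit is the same pattern the commercial tools apply at scale.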
This reactive approach means drift goes undetected for extended periods.</p> <p>Now that we know the causes of drift, we can implement systematic ways to eliminate existing drift and prevent future inconsistencies.</p> <h2>Strategies for eliminating configuration drift</h2> <p>Eliminating configuration drift requires systematic approaches to address existing inconsistencies and prevent future ones. These strategies work best together, not separately.</p> <ol> <li><strong>Use Configuration Management Tools</strong></li> </ol> <p>Configuration management tools like Ansible, Puppet or Chef enforce desired system states across your infrastructure. They monitor server configurations and correct deviations from defined standards automatically. When a server’s SSH config drifts from security policies, configuration management tools detect the change and restore the approved settings without human intervention.</p> <ol start="2"> <li><strong>Adopt Infrastructure as Code</strong></li> </ol> <p>Infrastructure as Code treats server configurations as code in a repository. Teams define system configurations in declarative files that can be reviewed, tested and deployed across environments. This way all infrastructure changes go through the same review process as application code, with no manual modifications.</p> <ol start="3"> <li><strong>Automate Configuration Monitoring</strong></li> </ol> <p>Deploy monitoring that compares actual server configurations to approved baselines. Tools like AWS Config, Azure Policy or open-source solutions will alert the team when configurations drift from standards. This way you catch drift within hours, not months, and can remediate quickly before problems compound.</p> <ol start="4"> <li><strong>Standardize Change Management</strong></li> </ol> <p>Create mandatory workflows for all config changes, including emergency ones. Require documentation, approval and testing for every change, no matter the urgency. 
Emergency changes should include automatic tickets for post-incident review and standardization. This way even crisis-driven fixes will eventually align with your organization’s standards.</p> <ol start="5"> <li><strong>Automate Routine Tasks</strong></li> </ol> <p>Replace manual config tasks with automated scripts and tools. <a href="https://cloudray.io/articles/bash-vs-python">Choosing the right scripting approach</a> depends on your team and infrastructure, but automation eliminates human error and ensures consistency across environments. Automated deployments, updates and maintenance tasks reduce the opportunities for drift-causing manual interventions.</p> <p>For organisations managing distributed infrastructure, centralized script management platforms like <a href="https://cloudray.io">CloudRay</a> provide unified control over configuration automation across multiple servers and environments. This approach enables teams to deploy consistent configuration scripts, monitor execution results, and maintain audit trails without manually accessing individual systems.</p> <p>These strategies create a systematic defense against configuration drift and frameworks for long-term infrastructure consistency. The key is to combine several of these strategies rather than rely on just one.</p> <h2>Wrapping up</h2> <p>Configuration drift isn’t just a technical problem — it’s a business risk that adds up over time. Every inconsistent configuration increases your attack surface, makes troubleshooting harder and wastes engineering hours that could be spent innovating instead of firefighting.</p> <p>The solution isn’t perfect discipline or heroic manual effort. It’s building systems that make consistency automatic and drift impossible. 
Start with configuration management tools for your most critical systems, automate one task per sprint, and build change workflows that hold up even during emergencies.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>How to Deploy Your VitePress Sitehttps://cloudray.io/articles/how-to-deploy-vitepress-sitehttps://cloudray.io/articles/how-to-deploy-vitepress-siteLearn how to deploy your VitePress site to an Ubuntu server using a reusable Bash script and CloudRay’s centralised automation platform.Wed, 17 Sep 2025 00:00:00 GMT<p><a href="https://vitepress.dev/">VitePress</a> is a fast, lightweight static site generator powered by <a href="https://vite.dev/">Vite</a>. It’s often used for documentation websites but can be customized for many other use cases.</p> <p>In this guide, we’ll walk through deploying a VitePress site to an Ubuntu server using <a href="https://cloudray.io">CloudRay</a>, a platform for managing infrastructure and running automation Bash scripts at scale. This guide assumes you already have a VitePress project created. 
If not, check out the <a href="https://vitepress.dev/guide/getting-started">VitePress Getting Started Guide</a> to create one.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-a-deployment-script">Create a Deployment Script</a></li> <li><a href="#define-the-variable-group">Define the Variable Group</a></li> <li><a href="#run-the-deployment-script">Run the Deployment Script</a></li> <li><a href="#next-step">Next Step</a></li> <li><a href="#summary">Summary</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your <a href="/docs/machines">machine</a> is added to a CloudRay project and connected using the <a href="/docs/agent">CloudRay Agent</a>.</p> <h2>Create a Deployment Script</h2> <p>Now that your machine is connected to CloudRay, let’s create a reusable Bash script to automate the deployment process. You need to follow these steps to create the script in CloudRay.</p> <img src="/_astro/deployment-script.BBp3815x.png" alt="Screenshot for deployment script of VitePress site" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Deploy VitePress Site</code>. 
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span>echo</span><span> "📦 Updating system and installing dependencies..."</span></span> <span><span>sudo</span><span> apt</span><span> update</span><span> &amp;&amp; </span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> \</span></span> <span><span> curl</span><span> git</span><span> nginx</span></span> <span></span> <span><span>echo</span><span> "🧰 Installing Node.js LTS..."</span></span> <span><span>curl</span><span> -fsSL</span><span> https://deb.nodesource.com/setup_lts.x</span><span> |</span><span> sudo</span><span> -E</span><span> bash</span><span> -</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> nodejs</span></span> <span></span> <span><span>echo</span><span> "🚀 Installing global packages..."</span></span> <span><span>sudo</span><span> npm</span><span> install</span><span> -g</span><span> pm2</span><span> serve</span></span> <span></span> <span><span>echo</span><span> "📁 Cloning or updating VitePress repo..."</span></span> <span><span>sudo</span><span> mkdir</span><span> -p</span><span> /var/www</span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> "</span><span>$USER</span><span>":"</span><span>$USER</span><span>"</span><span> /var/www</span></span> <span><span>cd</span><span> /var/www</span></span> <span><span>if</span><span> [ </span><span>-d</span><span> "{{app_name}}/.git"</span><span> ]; </span><span>then</span></span> <span><span> echo</span><span> "🔁 Existing repo detected — fetching latest changes..."</span></span> <span><span> cd</span><span> "</span><span>{{app_name}}</span><span>"</span></span> <span><span> git</span><span> fetch</span><span> --all</span><span> --prune</span></span> <span><span> 
git</span><span> reset</span><span> --hard</span><span> origin/main</span></span> <span><span>else</span></span> <span><span> git</span><span> clone</span><span> -b</span><span> main</span><span> "{{repo_url}}"</span><span> "{{app_name}}"</span></span> <span><span> cd</span><span> "{{app_name}}"</span></span> <span><span>fi</span></span> <span></span> <span><span>echo</span><span> "📦 Installing project dependencies..."</span></span> <span><span>npm</span><span> install</span></span> <span></span> <span><span>echo</span><span> "🏗️ Building the VitePress site..."</span></span> <span><span>npm</span><span> run</span><span> build</span></span> <span></span> <span><span>echo</span><span> "📡 Serving with PM2 on port </span><span>{{port}}</span><span>..."</span></span> <span><span>cd</span><span> .vitepress</span></span> <span><span>pm2</span><span> delete</span><span> "{{app_name}}"</span><span> &gt;</span><span>/dev/null</span><span> 2&gt;&amp;1</span><span> ||</span><span> true</span></span> <span><span>pm2</span><span> start</span><span> "serve -s dist -l {{port}}"</span><span> --name</span><span> "{{app_name}}"</span></span> <span></span> <span><span># Configure PM2 to start on boot for the current user and persist the process list</span></span> <span><span>sudo</span><span> env</span><span> PATH=</span><span>$PATH </span><span>pm2</span><span> startup</span><span> systemd</span><span> -u</span><span> "</span><span>$USER</span><span>"</span><span> --hp</span><span> "</span><span>$HOME</span><span>"</span><span> &gt;</span><span>/dev/null</span></span> <span><span>pm2</span><span> save</span></span> <span></span> <span><span>echo</span><span> "🌐 Configuring Nginx reverse proxy..."</span></span> <span><span>NGINX_CONF</span><span>=</span><span>"/etc/nginx/sites-available/{{app_name}}"</span></span> <span><span>sudo</span><span> tee</span><span> "</span><span>$NGINX_CONF</span><span>"</span><span> &gt;</span><span> /dev/null</span><span> 
&lt;&lt;</span><span>EOF</span></span> <span><span>server {</span></span> <span><span> listen 80;</span></span> <span><span> server_name {{domain}};</span></span> <span></span> <span><span> location / {</span></span> <span><span> proxy_pass http://localhost:{{port}};</span></span> <span><span> proxy_http_version 1.1;</span></span> <span><span> proxy_set_header Upgrade </span><span>\$</span><span>http_upgrade;</span></span> <span><span> proxy_set_header Connection 'upgrade';</span></span> <span><span> proxy_set_header Host </span><span>\$</span><span>host;</span></span> <span><span> proxy_cache_bypass </span><span>\$</span><span>http_upgrade;</span></span> <span><span> }</span></span> <span><span>}</span></span> <span><span>EOF</span></span> <span></span> <span><span>echo</span><span> "🔗 Enabling Nginx config and restarting..."</span></span> <span><span>sudo</span><span> ln</span><span> -sf</span><span> "</span><span>$NGINX_CONF</span><span>"</span><span> /etc/nginx/sites-enabled/</span></span> <span><span>sudo</span><span> rm</span><span> -f</span><span> /etc/nginx/sites-enabled/default</span></span> <span><span>sudo</span><span> nginx</span><span> -t</span><span> &amp;&amp; </span><span>sudo</span><span> systemctl</span><span> reload</span><span> nginx</span></span> <span></span> <span><span>echo</span><span> "✅ Deployment complete! 
Access your site at http://{{domain}}"</span></span></code><span></span><span></span></pre> <p>Here is what the <code>Deploy VitePress Site</code> script does:</p> <ul> <li>Install Node.js and dependencies</li> <li>Clone your VitePress repo (or update it if it exists)</li> <li>Install dependencies and build the VitePress project</li> <li>Serve the built files using PM2 and serve</li> <li>Configure Nginx as a reverse proxy</li> </ul> <h2>Define the Variable Group</h2> <p>Now, before running the deployment script, you need to define values for the placeholders <code>{{repo_url}}</code>, <code>{{port}}</code>, <code>{{app_name}}</code> and <code>{{domain}}</code> used in the script. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.1logBFzC.png" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>repo_url</code>:</strong> The URL of your GitHub repository</li> <li><strong><code>app_name</code>:</strong> Name of your VitePress app (used as directory + PM2 process name)</li> <li><strong><code>port</code>:</strong> Internal port to serve your app (e.g., 3000)</li> <li><strong><code>domain</code>:</strong> The domain name for your site (used in the Nginx <code>server_name</code> directive)</li> </ul> <p>With the variables set up, proceed to run the script with CloudRay.</p> <h2>Run the Deployment Script</h2> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>Once the script is saved, 
you can deploy your VitePress site by creating a Runlog:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier.</li> <li>Script: Choose the “Deploy VitePress Site” script.</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/create-runlog.DS3vUgra.png" alt="Screenshot of creating a new runlog" /> <ol start="4"> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/output-runlog.Ccq19_9F.png" alt="Screenshot of the output of the VitePress deployment script" /> <p>CloudRay will automatically connect to your server, run the <code>Deploy VitePress Site</code> script, and provide live logs to track the process. If any errors occur, you can review the logs to troubleshoot the issue.</p> <p>Once the deployment is complete, you can visit <code>http://&lt;your-server-ip&gt;</code>. You should see your VitePress site up and running.</p> <img src="/_astro/vitepress-site-running.feHAmhOx.png" alt="Screenshot showing VitePress Site running" /> <h2>Next Step</h2> <p>To improve your deployment and security:</p> <ul> <li>You can use Let’s Encrypt to add HTTPS support</li> <li>Set up automatic redeployment with webhooks or CI/CD</li> </ul> <h2>Summary</h2> <p>With CloudRay, deploying your VitePress site to Ubuntu becomes a structured and repeatable process. 
Scripts remain organised, server access is centralised, and you can easily tweak configurations with variable groups.</p> <p>For more deployment guides and use cases, check out our <a href="/articles">CloudRay Guides</a> or explore the <a href="/docs/getting-started">CloudRay Docs</a>.</p>Configuration Management vs Infrastructure as Code Explainedhttps://cloudray.io/articles/configuration-management-versus-infrastructure-as-codehttps://cloudray.io/articles/configuration-management-versus-infrastructure-as-codeLearn the differences between Configuration Management and Infrastructure as Code, their benefits, and how they work together in modern IT.Sat, 06 Sep 2025 00:00:00 GMT<p>Managing IT infrastructure and applications is hard, especially as things scale. Two practices that help make it easier are Configuration Management (CM) and Infrastructure as Code (IaC). While they’re similar, they serve different purposes in IT ops and DevOps.</p> <p>Configuration Management enforces system states automatically, while Infrastructure as Code treats infrastructure like software development. 
Knowing when to use each will save your team a ton of time and reduce operational complexity.</p> <p>In this post, we’ll look at both approaches, their practical differences, and how to choose the right one for your infrastructure.</p> <h2>Contents</h2> <ul> <li><a href="#what-is-configuration-management">What is Configuration Management?</a> <ul> <li><a href="#benefits-of-configuration-management">Benefits of Configuration Management</a></li> </ul> </li> <li><a href="#what-is-infrastructure-as-code-iac">What is Infrastructure as Code (IaC)?</a> <ul> <li><a href="#benefits-of-infrastructure-as-code">Benefits of Infrastructure as Code</a></li> </ul> </li> <li><a href="#differences-between-iac-and-configuration-management">Differences Between IaC and Configuration Management</a></li> <li><a href="#when-to-use-each-approach">When to Use Each Approach</a></li> <li><a href="#the-middle-ground-centralised-script-management">The Middle Ground: Centralised Script Management</a></li> </ul> <h2>What is Configuration Management?</h2> <p>Configuration Management is the practice of managing changes to a system so it stays consistent and doesn’t drift over time. It ensures servers, applications and infrastructure components stay in the desired state, automatically correcting any deviations that occur from manual changes, software updates or system failures.</p> <p>Think of Configuration Management as a blueprint enforcement system. You define how a web server should be configured: which packages are installed, what services are running, and how security is applied. CM tools then monitor and enforce those specifications across all your servers.</p> <p>For example, imagine you have 50 web servers that should all run the same version of Apache with the same security configuration. Without Configuration Management, updating all servers when you need to change the SSL certificate or apply a security patch is a time-consuming and error-prone process.
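</p> <p>To make the contrast concrete, the core of what CM tools do is a compare-and-correct loop. The following bash sketch illustrates that principle on a single file; it is not any particular tool’s implementation, and the file paths and config values are invented:</p>

```shell
#!/usr/bin/env bash
# Drift enforcement in miniature: compare a file against its desired
# version and restore it when it differs. Paths are illustrative.
desired="/tmp/desired.conf"
actual="/tmp/actual.conf"

echo "ServerTokens Prod" > "$desired"   # the state we want
echo "ServerTokens Full" > "$actual"    # simulated manual drift

if ! cmp -s "$desired" "$actual"; then
  cp "$desired" "$actual"               # enforce the desired state
  echo "drift corrected"
fi
```

<p>Real CM tools run this kind of check continuously, for every managed resource, across every server.</p> <p>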
With Configuration Management, you update the configuration once and the system applies the changes across all servers.</p> <p>Popular Configuration Management tools are Puppet, Chef, Ansible and SaltStack. These platforms take different approaches: Puppet uses a declarative language, Chef uses Ruby-based “recipes”, while Ansible uses simple YAML playbooks that don’t require agents on target machines. Some teams prefer simpler approaches using standard scripting languages, and <a href="https://cloudray.io/articles/bash-vs-python">choosing the scripting language for automation tasks</a> often depends on your team’s expertise and infrastructure requirements.</p> <p>The core principle remains the same across all tools: define the desired state once and let the system maintain that state across your entire infrastructure.</p> <h3>Benefits of Configuration Management</h3> <p>Configuration Management offers several key benefits:</p> <ol> <li><strong>Consistency Across Environments</strong></li> </ol> <p>Configuration Management gets rid of the “works on my machine” problem by ensuring the same configuration across dev, staging and production environments. When configurations are defined as code and enforced automatically, you can be sure your production environment matches your testing environment exactly.</p> <ol start="2"> <li><strong>Reduced Configuration Drift</strong></li> </ol> <p>Systems drift from their intended state over time due to manual changes, security patches or application updates. Configuration Management monitors and corrects these deviations automatically, with no human intervention required.</p> <ol start="3"> <li><strong>Faster Recovery and Scaling</strong></li> </ol> <p>When servers fail or you need to provision new infrastructure, Configuration Management allows you to deploy properly configured systems in minutes.
Instead of setting up each server manually, you can spin up new instances that configure themselves according to your specs.</p> <ol start="4"> <li><strong>Audit Trail and Compliance</strong></li> </ol> <p>Every configuration change is tracked and versioned, so you can see what changed, when and who changed it. This audit capability is key for compliance and troubleshooting config issues.</p> <ol start="5"> <li><strong>Reduced Human Error</strong></li> </ol> <p>Manual config processes are error-prone: typos, missed steps or inconsistent implementations. Automated config enforcement eliminates these human errors, so you get more reliable and predictable infrastructure.</p> <h2>What is Infrastructure as Code (IaC)?</h2> <p>Infrastructure as Code (IaC) is the practice of managing and provisioning your computing infrastructure through code instead of physical hardware configuration or interactive configuration tools. Instead of setting up servers, networks and storage through web consoles or command line interfaces, you write code that describes your entire infrastructure architecture.</p> <p>IaC treats infrastructure like software development. You write code, version control it, test it and deploy it using automated processes. The infrastructure definitions are stored as text files that can be shared, reviewed and modified just like application code.</p> <p>Consider this example: instead of logging into the AWS console to manually create 10 virtual machines, configure load balancers, set up databases and establish networking rules, you write a single configuration file that describes all these resources. When you run this file through an IaC tool, it creates your entire infrastructure stack in minutes.</p> <p>Leading IaC tools are Terraform, AWS CloudFormation, Azure Resource Manager and Google Cloud Deployment Manager.
Terraform uses HashiCorp Configuration Language (HCL) to define infrastructure across multiple cloud providers, while cloud-specific tools like CloudFormation use JSON or YAML templates to provision resources within their respective platforms.</p> <p>The key difference from Configuration Management is the scope. While Configuration Management is about configuring existing systems, IaC is about creating, modifying and destroying the infrastructure itself.</p> <h3>Benefits of Infrastructure as Code</h3> <p>IaC gives you operational benefits that go beyond automation. These are especially valuable for companies managing complex infrastructure across multiple environments at scale.</p> <ol> <li><strong>Version Control and Collaboration</strong></li> </ol> <p>Infrastructure code can be stored in version control systems like Git, so teams can track changes, collaborate on infrastructure modifications and roll back to previous versions when things go wrong. It brings software development best practices, such as code reviews and change approval, to infrastructure management.</p> <ol start="2"> <li><strong>Reproducible Environments</strong></li> </ol> <p>IaC lets you spin up identical environments on demand. Whether you need to replicate production for testing, create disaster recovery environments or spin up temporary development instances, the same code produces the same results every time, eliminating environment-specific bugs and configuration drift.</p> <ol start="3"> <li><strong>Cost Management and Resource Optimisation</strong></li> </ol> <p>With IaC you can tear down and recreate environments when they’re not needed, reducing cloud costs.
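</p> <p>For example, a team might schedule teardown of a development workspace outside working hours. The crontab fragment below is purely illustrative: the directory, times and log path are assumptions, and it presumes an environment already managed by Terraform:</p>

```shell
# Illustrative crontab: destroy a dev environment each evening and
# recreate it each morning. Paths and schedule are assumptions.
0 20 * * 1-5  cd /opt/envs/dev && terraform destroy -auto-approve >> /var/log/env-cycle.log 2>&1
0 7  * * 1-5  cd /opt/envs/dev && terraform apply   -auto-approve >> /var/log/env-cycle.log 2>&1
```

<p>Production environments, by contrast, stay up and simply benefit from the consistent resource definitions.</p> <p>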
Development and testing environments can be destroyed at night and recreated in the morning, while production environments get consistent resource allocation and optimisation.</p> <ol start="4"> <li><strong>Faster Deployment and Scaling</strong></li> </ol> <p>Infrastructure that took days or weeks to provision can be done in minutes through automated deployment pipelines. As business requirements change, scaling infrastructure up or down is as simple as modifying a few parameters in your infrastructure code and redeploying.</p> <ol start="5"> <li><strong>Documentation and Knowledge Sharing</strong></li> </ol> <p>Your infrastructure code is living documentation of your system architecture. New team members can understand the infrastructure design by reading the code, and institutional knowledge is preserved when team members leave the company.</p> <h2>Differences Between IaC and Configuration Management</h2> <p>While Configuration Management and Infrastructure as Code both automate infrastructure operations, they are very different in scope, timing and implementation. Understanding the differences helps you choose the right tool for the job.</p> <ol> <li><strong>Scope and Purpose</strong></li> </ol> <p>Configuration Management is about maintaining and configuring existing infrastructure. It keeps servers that already exist in their desired state, managing software installations, service configurations and system settings. IaC is about creating and destroying infrastructure resources themselves: virtual machines, networks, storage and cloud services.</p> <ol start="2"> <li><strong>Implementation Timing</strong></li> </ol> <p>Configuration Management runs continuously after infrastructure is provisioned. It monitors systems and applies corrections when configurations drift from desired states.
IaC runs during specific deployment events: when you need to create new environments, scale resources or modify infrastructure architecture.</p> <ol start="3"> <li><strong>State Management</strong></li> </ol> <p>Configuration Management uses agents or agentless connections to check current system states against desired configurations. It remediates differences automatically and continuously. IaC tools maintain state files that track which resources exist and their current configuration, and only make changes when you explicitly run deployment commands.</p> <ol start="4"> <li><strong>Learning Curve and Complexity</strong></li> </ol> <p>Configuration Management often requires learning domain-specific languages or complex frameworks. Teams need to understand concepts like manifests, playbooks or recipes. IaC tools use more familiar formats like JSON, YAML or simple declarative languages but require understanding of infrastructure architecture and cloud service relationships.</p> <ol start="5"> <li><strong>Use Case Overlap</strong></li> </ol> <p>Some scenarios blur the lines between these approaches. Tools like Terraform can provision infrastructure and do basic configuration, while Ansible can manage both infrastructure provisioning and ongoing configuration management. Modern tools are increasingly combining both capabilities.</p> <h2>When to Use Each Approach</h2> <p>Choosing between Configuration Management and Infrastructure as Code depends on your infrastructure challenges, team capabilities and operational requirements. Each approach suits different scenarios and organisational contexts.</p> <p><strong>Use Configuration Management When:</strong></p> <p>You have existing infrastructure that needs to be maintained and configured. Configuration Management is good at managing long-running servers, applying security patches consistently and maintaining compliance across distributed systems.
It’s particularly useful for teams managing traditional server environments, hybrid cloud setups or infrastructure where manual configuration changes happen frequently.</p> <p>Teams with limited cloud expertise but strong sysadmin skills often find Configuration Management more approachable. If your infrastructure doesn’t change often but needs ongoing maintenance, Configuration Management provides continuous oversight without the complexity of full infrastructure provisioning workflows.</p> <p><strong>Use Infrastructure as Code When:</strong></p> <p>Your infrastructure changes frequently or you need to create and destroy environments regularly. IaC excels in cloud-native environments where resources are treated as disposable and environments are created on demand. It’s essential for teams doing continuous delivery, managing multiple staging environments or operating in rapidly scaling organisations.</p> <p>Organisations prioritising disaster recovery, cost optimisation through environment automation or teams that need to replicate infrastructure across different cloud providers benefit from IaC approaches. The ability to version control infrastructure changes and apply software development practices to infrastructure management makes IaC particularly useful for DevOps-mature organisations.</p> <p><strong>The Hybrid Reality:</strong></p> <p>Many teams use both approaches: IaC for infrastructure provisioning and Configuration Management for system maintenance. Others find success with simpler approaches, especially when starting their automation journey.
Simple <a href="https://cloudray.io/articles/script-automation-guide">script automation</a> can provide immediate value for teams not ready for complex frameworks, allowing gradual progression to more advanced infrastructure management practices.</p> <h2>The Middle Ground: Centralised Script Management</h2> <p>Between full Infrastructure as Code and traditional Configuration Management lies a middle ground: centralised script management. This approach combines the simplicity of shell scripts with central control and monitoring.</p> <p>Many teams start with scattered bash scripts across different servers, managed individually through cron jobs or manual execution. While these scripts solve immediate problems, they become hard to manage and monitor as infrastructure grows. Centralised script management solves these problems without requiring teams to learn complex frameworks or rewrite existing automation logic.</p> <img src="/_astro/homepage-of-cloudray.D1mm6u9N.png" alt="CloudRay homepage" loading="lazy" /> <p><a href="https://cloudray.io">CloudRay</a> provides a centralised platform for bash script automation across cloud and server infrastructure. Through the <a href="https://cloudray.io/docs/agent">CloudRay Agent</a>, teams can connect their instances and servers securely and execute scripts in real-time from a single control panel.
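</p> <p>The scripts teams centralise this way are usually simple maintenance jobs. Here is a typical example in bash; the log directory and retention period are illustrative assumptions, not defaults of any platform:</p>

```shell
#!/usr/bin/env bash
# Routine maintenance job of the kind teams move into a central platform:
# delete logs older than a retention window. Values are illustrative.
set -euo pipefail

LOG_DIR="${LOG_DIR:-/tmp/demo-logs}"
RETENTION_DAYS="${RETENTION_DAYS:-14}"

mkdir -p "$LOG_DIR"
find "$LOG_DIR" -type f -name '*.log' -mtime +"$RETENTION_DAYS" -delete
echo "cleanup complete: $LOG_DIR (retention: ${RETENTION_DAYS} days)"
```

<p>Run from a central platform rather than a per-server crontab, the same script gains scheduling, logging and failure visibility for free.</p> <p>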
The <a href="https://cloudray.io/docs/schedules">Schedules feature</a> automates script execution at custom intervals—hourly, daily or event-triggered—so you can have reliable DevOps workflows without manual intervention.</p> <p>This is a quick win for teams not ready for complex frameworks and a clear path to more advanced infrastructure management as you grow.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>How to Deploy Your Jekyll Sitehttps://cloudray.io/articles/how-to-deploy-your-jekyll-sitehttps://cloudray.io/articles/how-to-deploy-your-jekyll-siteLearn how to deploy your Jekyll site to an Ubuntu server using a reusable Bash script and CloudRay’s centralised automation platform.Thu, 28 Aug 2025 00:00:00 GMT<p>Jekyll is a classic static site generator built on Ruby. In this guide, you’ll deploy a Jekyll project to an Ubuntu server using <a href="https://cloudray.io">CloudRay</a>, a platform for managing servers and running automation Bash scripts at scale.</p> <p>This guide assumes you already have a Jekyll project.
However, if you’ve not created a Jekyll project, check out the <a href="https://jekyllrb.com/docs/">Jekyll getting started guide</a> to learn more and create your own.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-a-deployment-script">Create a Deployment Script</a></li> <li><a href="#define-the-variable-group">Define the Variable Group</a></li> <li><a href="#run-the-deployment-script">Run the Deployment Script</a></li> <li><a href="#next-step">Next Step</a></li> <li><a href="#summary">Summary</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your <a href="/docs/machines">machine</a> is added to a CloudRay project and connected using the <a href="/docs/agent">CloudRay Agent</a>.</p> <h2>Create a Deployment Script</h2> <p>Now that your machine is connected to CloudRay, let’s create a reusable Bash script to automate the deployment process. You need to follow these steps to create the script in CloudRay.</p> <img src="/_astro/deployment-script.CxPVvoUR.png" alt="Screenshot for deployment script of Jekyll site" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Deploy Jekyll Site</code>. 
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/usr/bin/env bash</span></span> <span><span>set</span><span> -euo</span><span> pipefail</span></span> <span></span> <span><span>echo</span><span> "🔧 Updating system packages..."</span></span> <span><span>apt-get</span><span> update</span><span> -y</span></span> <span></span> <span><span>echo</span><span> "🌱 Installing dependencies (Ruby, build tools, Nginx, Git)..."</span></span> <span><span>apt-get</span><span> install</span><span> -y</span><span> ruby-full</span><span> build-essential</span><span> zlib1g-dev</span><span> git</span><span> nginx</span></span> <span></span> <span><span># Optional: Remove default Nginx site</span></span> <span><span>rm</span><span> -f</span><span> /etc/nginx/sites-enabled/default</span><span> ||</span><span> true</span></span> <span></span> <span><span># Use local gem path to avoid global installs (keeps things tidy)</span></span> <span><span>export</span><span> GEM_HOME</span><span>=</span><span>"/opt/jekyll/gems"</span></span> <span><span>export</span><span> PATH</span><span>=</span><span>"</span><span>$GEM_HOME</span><span>/bin:</span><span>$PATH</span><span>"</span></span> <span><span>mkdir</span><span> -p</span><span> "</span><span>$GEM_HOME</span><span>"</span></span> <span></span> <span><span>echo</span><span> "💎 Installing Jekyll and Bundler..."</span></span> <span><span>gem</span><span> install</span><span> bundler</span><span> jekyll</span></span> <span></span> <span><span># Prepare workspace</span></span> <span><span>WORKDIR</span><span>=</span><span>"/opt/jekyll/src"</span></span> <span><span>rm</span><span> -rf</span><span> "</span><span>$WORKDIR</span><span>"</span></span> <span><span>mkdir</span><span> -p</span><span> "</span><span>$WORKDIR</span><span>"</span></span> <span><span>cd</span><span> "</span><span>$WORKDIR</span><span>"</span></span> <span></span> <span><span>echo</span><span> "⬇️ Cloning 
repository..."</span></span> <span><span>git</span><span> clone</span><span> "{{repo_url}}"</span><span> "{{app_name}}"</span></span> <span><span>cd</span><span> "{{app_name}}"</span></span> <span></span> <span><span>echo</span><span> "📦 Installing project gems..."</span></span> <span><span># Install gems into vendor/bundle so the build is self-contained</span></span> <span><span>bundle</span><span> config</span><span> set</span><span> path</span><span> 'vendor/bundle'</span></span> <span><span>bundle</span><span> install</span></span> <span></span> <span><span>echo</span><span> "🏗️ Building the Jekyll site..."</span></span> <span><span># JEKYLL_ENV=production ensures production build (e.g., minification if configured)</span></span> <span><span>JEKYLL_ENV</span><span>=</span><span>production</span><span> bundle</span><span> exec</span><span> jekyll</span><span> build</span></span> <span></span> <span><span>echo</span><span> "📂 Preparing deploy directory..."</span></span> <span><span>mkdir</span><span> -p</span><span> "{{deploy_dir}}"</span></span> <span><span># Copy the generated static site</span></span> <span><span>rsync</span><span> -a</span><span> --delete</span><span> "_site/"</span><span> "{{deploy_dir}}/"</span></span> <span></span> <span><span>echo</span><span> "🧭 Writing Nginx server config..."</span></span> <span><span>cat</span><span> &gt;</span><span>/etc/nginx/sites-available/jekyll-site</span><span> &lt;&lt;</span><span>'NGINX_CONF'</span></span> <span><span>server {</span></span> <span><span> listen 80;</span></span> <span><span> server_name {{domain}};</span></span> <span></span> <span><span> root {{deploy_dir}};</span></span> <span><span> index index.html;</span></span> <span></span> <span><span> location / {</span></span> <span><span> try_files $uri $uri/ =404;</span></span> <span><span> }</span></span> <span><span>}</span></span> <span><span>NGINX_CONF</span></span> <span></span> <span><span>ln</span><span> -sfn</span><span> 
/etc/nginx/sites-available/jekyll-site</span><span> /etc/nginx/sites-enabled/jekyll-site</span></span> <span></span> <span><span>echo</span><span> "🔁 Testing and reloading Nginx..."</span></span> <span><span>nginx</span><span> -t</span></span> <span><span>systemctl</span><span> enable</span><span> --now</span><span> nginx</span></span> <span><span>systemctl</span><span> reload</span><span> nginx</span></span> <span></span> <span><span>echo</span><span> "✅ Deployment complete. Visit: http://{{domain}}"</span></span></code><span></span><span></span></pre> <p>Here is what the <code>Deploy Jekyll Site</code> does:</p> <ul> <li>Installs Ruby, Bundler, Jekyll, Git, and Nginx</li> <li>Clones your Jekyll repo and installs its gems</li> <li>Builds the site to <code>_site/</code> and syncs it to <code>{{deploy_dir}}</code></li> <li>Creates an Nginx server block and reloads Nginx</li> </ul> <h2>Define the Variable Group</h2> <p>To make this script reusable across multiple projects or servers, you’ll define a <a href="https://cloudray.io/docs/variable-groups">variable group</a> inside CloudRay. These variables dynamically fill in the placeholders like <code>{{repo_url}}</code>, <code>{{app_name}}</code>, <code>{{deploy_dir}}</code> and <code>{{domain}}</code>. 
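</p> <p>To see what that substitution looks like, the sketch below imitates it locally with <code>sed</code>. The repository URL and app name are made-up example values, and this is only an approximation of the rendering CloudRay performs:</p>

```shell
# Imitate the template substitution locally with sed.
# The values below are hypothetical examples, not defaults.
template='git clone "{{repo_url}}" "{{app_name}}"'
rendered=$(printf '%s' "$template" \
  | sed -e 's|{{repo_url}}|https://github.com/user/my-blog.git|' \
        -e 's|{{app_name}}|my-blog|')
echo "$rendered"
# prints: git clone "https://github.com/user/my-blog.git" "my-blog"
```

<p>Every placeholder in the script is resolved this way before a single command runs on the server.</p> <p>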
CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>.</p> <img src="/_astro/variables.B6ODEMGW.png" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a Variable Group:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>repo_url</code>:</strong> The URL of your GitHub repository</li> <li><strong><code>app_name</code>:</strong> Name of your Jekyll app (used as the clone directory name)</li> <li><strong><code>domain</code>:</strong> This is the name of your domain, e.g., <code>myapp.example.com</code></li> <li><strong><code>deploy_dir</code>:</strong> Final directory Nginx will serve from (e.g., <code>/var/www/jekyll-site</code>)</li> </ul> <p>With the variables set up, proceed to run the script with CloudRay.</p> <h2>Run the Deployment Script</h2> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>Once the script is saved, you can deploy your Jekyll site by creating a Runlog:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier.</li> <li>Script: Choose the “Deploy Jekyll Site” script</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/create-runlog.DRUmHRWd.png" alt="Screenshot of creating a new runlog" /> <ol start="4"> <li><strong>Execute the Script</strong>: Click on
<strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/output-runlog.Dg6lLr-K.png" alt="Screenshot of the output of the Jekyll deployment script" /> <p>CloudRay will automatically connect to your server, run the <code>Deploy Jekyll Site</code> script, and provide live logs to track the process. If any errors occur, you can review the logs to troubleshoot the issue.</p> <p>Once the deployment is complete, you can visit <code>http://&lt;your_domain&gt;</code>. You should see your Jekyll site up and running.</p> <img src="/_astro/jekyll-site-running.C-aoB2z_.png" alt="Screenshot showing Jekyll Site running" /> <h2>Next Step</h2> <p>To improve your deployment and security:</p> <ul> <li>You can use Let’s Encrypt to add HTTPS support</li> <li>Set up automatic redeployment with webhooks or CI/CD</li> </ul> <h2>Summary</h2> <p>With CloudRay, deploying your Jekyll site to Ubuntu becomes a structured and repeatable process. Scripts remain organised, server access is centralised, and you can easily tweak configurations with variable groups.</p> <p>For more deployment guides and use cases, check out our <a href="/articles">CloudRay Guides</a> or explore the <a href="/docs/getting-started">CloudRay Docs</a>.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>Bash vs Python for DevOps - Which is Better for Automationhttps://cloudray.io/articles/bash-vs-pythonhttps://cloudray.io/articles/bash-vs-pythonDiscover whether Bash or Python is better for DevOps automation by comparing their strengths, weaknesses, and best use cases.Mon, 25 Aug 2025 00:00:00 GMT<p>Automation is the backbone of modern DevOps and IT operations. Choosing the right scripting language can significantly improve efficiency, scalability, and maintainability in your workflows.
Bash and Python are two of the most popular choices for <a href="https://cloudray.io/articles/script-automation-guide">script automation in DevOps</a>.</p> <p>In this article, you’ll learn what each brings to the table, their strengths and weaknesses, and which to choose for your DevOps automation tasks.</p> <h2>Contents</h2> <ul> <li><a href="#what-is-bash-in-devops">What is Bash in DevOps</a></li> <li><a href="#what-is-python-in-devops">What is Python in DevOps</a></li> <li><a href="#head-to-head-comparison-bash-vs-python">Head-to-head comparison: Bash vs Python</a></li> <li><a href="#when-to-use-bash-in-devops-automation">When to use Bash in DevOps automation</a></li> <li><a href="#when-to-use-python-in-devops-automation">When to use Python in DevOps automation</a></li> <li><a href="#conclusion">Conclusion</a></li> </ul> <h2>What is Bash in DevOps</h2> <p>Bash (Bourne Again SHell) is a command line scripting language widely used in Unix and Linux based systems. In DevOps, Bash is often the first choice for automating administrative tasks, deployment processes, and system management. It lets engineers quickly automate Unix/Linux tasks using simple shell commands.</p> <p>What makes Bash unique is its lightweight nature and the ability to interact directly with the operating system. Since Bash ships with most Unix/Linux systems, you can start writing automation scripts on the fly without extra setup. This makes it ideal for tasks like <a href="https://cloudray.io/docs/schedules">managing schedules</a>, <a href="https://cloudray.io/docs/script-playlists">chaining multiple commands</a>, or automating deployments.</p> <p>However, Bash can become harder to maintain as scripts grow. It also lacks advanced programming features like object-oriented programming, making it less suitable for complex automation.</p> <h2>What is Python in DevOps</h2> <p>Python is a high‑level programming language that has become widely used in DevOps for automation tasks.
Unlike Bash, which is tailored to system‑level scripting, Python is a general‑purpose language with a rich ecosystem of frameworks and libraries, making it a good choice for larger and more complex automation across multiple operating systems.</p> <p>In DevOps, Python is used for configuration management, cloud automation, CI/CD pipelines, and many other tasks. With libraries such as Ansible, Fabric, and SaltStack, Python can automate server provisioning, network operations, and application deployment across environments.</p> <p>What makes Python stand out is its readability, maintainability, and extensive community support. You can write scripts that are easy to understand, debug, and maintain. Additionally, Python supports advanced concepts like object‑oriented programming (OOP), error handling, and modular code.</p> <p>However, Python is not as lightweight as Bash and often requires additional setup and environments to run tasks, which can be overkill for quick system‑administrative work.</p> <h2>Head-to-head comparison: Bash vs Python</h2> <table><thead><tr><th>Feature</th><th>Bash</th><th>Python</th></tr></thead><tbody><tr><td>Primary use case</td><td>Best suited for script automation, task scheduling, and managing Unix/Linux operations</td><td>Best suited for complex automation, cross‑platform DevOps workflows, and integrations</td></tr><tr><td>Learning curve</td><td>Lightweight and easy to learn for sysadmins with Unix/Linux experience</td><td>Beginner‑friendly with readable syntax; widely adopted by developers and DevOps engineers</td></tr><tr><td>Platform support</td><td>Widely used in Unix/Linux systems and available by default</td><td>Cross‑platform; works well in containerised and cloud environments</td></tr><tr><td>Script complexity</td><td>Excellent for short, quick commands and chaining system utilities; harder to maintain for larger scripts</td><td>Handles both small and large scripts for automation projects</td></tr><tr><td>Error handling &amp; 
debugging</td><td>Limited error handling; debugging Bash scripts can be time‑consuming</td><td>Good error handling and built‑in debugging libraries</td></tr><tr><td>Integration with tools</td><td>Strong integration with system utilities like <code>grep</code>, <code>awk</code>, <code>sed</code>, and package managers</td><td>Strong ecosystem of libraries such as <code>boto3</code> (AWS), <code>paramiko</code> (SSH), and <code>requests</code> (HTTP), enabling seamless integration with cloud, APIs, and automation pipelines</td></tr><tr><td>Performance</td><td>Very fast for small, shell‑level tasks such as process automation</td><td>Slightly slower than Bash but optimised for larger automation projects</td></tr><tr><td>Syntax</td><td>Shell commands; less readable</td><td>Readable, high‑level syntax</td></tr><tr><td>Community &amp; support</td><td>Large community support and extensive resources available</td><td>Massive, active community; rich DevOps‑focused libraries; continuous ecosystem growth</td></tr><tr><td>Best fit</td><td>When speed and simplicity for system‑level tasks are priorities</td><td>When scalability and integration with modern DevOps pipelines are required</td></tr></tbody></table> <p>Both Bash and Python have their strengths and weaknesses, and the choice between them depends on the specific requirements of your DevOps automation tasks.</p> <h2>When to use Bash in DevOps automation</h2> <p>Bash is a great choice when simplicity and speed are required to automate DevOps workflows, especially in Unix/Linux environments. Here are scenarios where Bash shines:</p> <ul> <li> <p><strong>Direct system‑level tasks:</strong> When you need to automate system‑level tasks like file clean‑ups, restarting services, or running immediate system checks, Bash is a good choice. You can write quick scripts without dependencies or additional setup. 
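</p> <p>A sketch of such a quick task, checking root-filesystem usage against a threshold (the threshold value is an arbitrary example):</p>

```shell
#!/usr/bin/env bash
# Quick system check of the kind Bash handles well: warn when the
# root filesystem crosses a usage threshold. Threshold is illustrative.
THRESHOLD=90
usage=$(df -P / | awk 'NR==2 {gsub("%", "", $5); print $5}')

if [ "$usage" -ge "$THRESHOLD" ]; then
  echo "WARNING: / is ${usage}% full"
else
  echo "OK: / is ${usage}% full"
fi
```

<p>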
Since Bash is pre‑installed on most Unix/Linux systems, there are no setup delays.</p> </li> <li> <p><strong>CI/CD pipeline orchestration:</strong> Bash is often used in CI/CD workflows to orchestrate tasks like running tests, building applications, and deploying code. It can chain multiple commands together, making it easy to automate the pipeline. Bash integrates with tools like <a href="https://www.jenkins.io/">Jenkins</a>, GitLab CI, and <a href="https://www.travis-ci.com/">Travis CI</a>.</p> </li> <li> <p><strong>Infrastructure bootstrapping:</strong> Bash can be invoked by tools like Terraform, Ansible, or CloudFormation to bootstrap and provision infrastructure. You can use Bash scripts to install dependencies, configure servers, and set up environments before deploying applications. For example, you can use Bash to <a href="https://cloudray.io/articles/setting-up-postgres-database">automate deployment of a PostgreSQL database</a> or <a href="https://cloudray.io/articles/deploy-sonarqube">deploy a SonarQube server</a>.</p> </li> <li> <p><strong>Monitoring and maintenance automation:</strong> Bash excels at scheduling system checks, backups, log rotation, and other routine maintenance tasks. For example, you can use Bash to <a href="https://cloudray.io/articles/automate-wordpress-multi-site-backups">back up a WordPress site</a> or monitor CPU and memory usage.</p> </li> </ul> <p>Generally, Bash shines when you need speed, direct system access, and minimal setup for quick system‑automation tasks.</p> <h2>When to use Python in DevOps automation</h2> <p>Python is a general‑purpose programming language, which makes it useful for more complex DevOps workflow tasks. Here are some cases where Python is a better choice for DevOps automation:</p> <ul> <li> <p><strong>Cloud infrastructure automation:</strong> Major cloud providers such as AWS, Azure, and Google Cloud provide official Python SDKs.
For example, you can use the <code>boto3</code> library to automate AWS resources such as EC2 instances, S3 buckets, or even Lambda functions. Python has extensive libraries and frameworks that allow you to interact with cloud APIs, manage resources, and automate cloud‑based workflows.</p> </li> <li> <p><strong>Configuration management and orchestration:</strong> The most widely used configuration management tools, like Ansible and SaltStack, are built with Python. These tools allow you to define and enforce the desired state of your infrastructure, automate deployments, and bring consistency across environments. Python also integrates seamlessly with orchestration tools like Kubernetes.</p> </li> <li> <p><strong>Advanced scheduling:</strong> Python has libraries such as <code>schedule</code> and <code>APScheduler</code> that help with complex schedules. It also powers tools like Apache Airflow and RQ (Redis Queue) for orchestrating complex jobs, making it a common choice for complex, robust DevOps workflows.</p> </li> <li> <p><strong>Cross‑platform automation:</strong> Python is cross‑platform, so you can run scripts on Windows, macOS, and Linux. This portability makes Python a good choice for automating tasks that need to run on different platforms. For example, you can use Python to automate deployment of applications across environments.</p> </li> </ul> <p>Python is a better choice for more advanced, cross‑environment automation tasks.</p> <h2>Conclusion</h2> <p>Both Bash and Python have their place in your DevOps toolkit, but you don’t need to choose just one. The most successful DevOps teams leverage both: Bash for quick system tasks and lightweight automation, and Python for complex workflows and cross-platform solutions.</p> <p>The real challenge isn’t choosing between Bash or Python.
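In practice the two languages often cooperate: a thin Bash wrapper drives the workflow and hands structured-data work to a few lines of Python. Here is a sketch of that pattern (the report file and its fields are invented for illustration):

```shell
#!/bin/bash
# Illustrative hybrid: Bash sequences the steps, Python parses structured data.
set -euo pipefail

REPORT="/tmp/deploy-report.json"   # illustrative path

# Bash side: a cheap system-level step producing a machine-readable artifact
printf '{"service": "web", "status": "deployed"}\n' > "$REPORT"

# Python side: JSON parsing, which is brittle to do in pure Bash
python3 - "$REPORT" <<'EOF'
import json
import sys

with open(sys.argv[1]) as f:
    report = json.load(f)
print(f"{report['service']}: {report['status']}")
EOF
```

The shell stays in charge of sequencing and exit codes, while Python handles the data parsing.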
It’s managing, executing, and monitoring your automation scripts across multiple servers and cloud environments efficiently.</p> <p><strong>Ready to streamline your script automation?</strong> <a href="https://cloudray.io">CloudRay</a> transforms how you manage automation across your entire infrastructure. With our <a href="https://cloudray.io/docs/agent">CloudRay Agent</a>, you can securely execute Bash scripts from a centralised dashboard, schedule them to run automatically with <a href="https://cloudray.io/docs/schedules">CloudRay Schedules</a>, and monitor their execution in real time, whether you’re managing 5 servers or 500.</p> <p>Stop juggling multiple terminals and SSH connections. Start automating smarter with CloudRay’s unified platform, which supports both your Bash and Python workflows.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>How to Deploy Your Eleventy Sitehttps://cloudray.io/articles/how-to-deploy-your-eleventy-websitehttps://cloudray.io/articles/how-to-deploy-your-eleventy-websiteLearn how to deploy your Eleventy site to an Ubuntu server using a reusable Bash script and CloudRay’s centralised automation platform.Thu, 14 Aug 2025 00:00:00 GMT<p><a href="https://www.11ty.dev/">Eleventy</a> is a fast, flexible static site generator written in JavaScript. It allows you to create modern, highly performant static websites using templates, Markdown, and data files. In this guide, we’ll walk you through deploying an Eleventy project to an Ubuntu server using <a href="https://cloudray.io">CloudRay</a>, a platform that helps you manage infrastructure and automate deployments with reusable Bash scripts.</p> <p>This guide assumes you already have an Eleventy project set up.
If not, you can clone the <a href="https://github.com/11ty/eleventy-base-blog">Eleventy Base Blog starter</a> to get started.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-a-deployment-script">Create a Deployment Script</a></li> <li><a href="#define-the-variable-group">Define the Variable Group</a></li> <li><a href="#run-the-deployment-script">Run the Deployment Script</a></li> <li><a href="#next-step">Next Step</a></li> <li><a href="#summary">Summary</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your <a href="/docs/machines">machine</a> is added to a CloudRay project and connected using the <a href="/docs/agent">CloudRay Agent</a>.</p> <h2>Create a Deployment Script</h2> <p>Now that your machine is connected to CloudRay, let’s create a reusable Bash script to automate the deployment process. You need to follow these steps to create the script in CloudRay.</p> <img src="/_astro/deployment-script.B4QdnORq.png" alt="Screenshot for deployment script of Eleventy site" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Deploy Eleventy Site</code>. 
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span>echo</span><span> "📦 Updating system and installing dependencies..."</span></span> <span><span>sudo</span><span> apt</span><span> update</span><span> &amp;&amp; </span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> \</span></span> <span><span> curl</span><span> git</span><span> nginx</span></span> <span></span> <span><span>echo</span><span> "🧰 Installing Node.js LTS..."</span></span> <span><span>curl</span><span> -fsSL</span><span> https://deb.nodesource.com/setup_lts.x</span><span> |</span><span> sudo</span><span> -E</span><span> bash</span><span> -</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> nodejs</span></span> <span></span> <span><span>echo</span><span> "🚀 Installing global packages..."</span></span> <span><span>sudo</span><span> npm</span><span> install</span><span> -g</span><span> pm2</span><span> serve</span></span> <span></span> <span></span> <span><span>echo</span><span> "📁 Cloning or updating Eleventy repo..."</span></span> <span><span>sudo</span><span> mkdir</span><span> -p</span><span> /var/www</span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> "</span><span>$USER</span><span>":"</span><span>$USER</span><span>"</span><span> /var/www</span></span> <span><span>cd</span><span> /var/www</span></span> <span><span>if</span><span> [ </span><span>-d</span><span> "{{app_name}}/.git"</span><span> ]; </span><span>then</span></span> <span><span> echo</span><span> "🔁 Existing repo detected — fetching latest changes..."</span></span> <span><span> cd</span><span> "{{app_name}}"</span></span> <span><span> git</span><span> fetch</span><span> --all</span><span> --prune</span></span> <span><span> # Adjust branch 
if your default branch is not 'main'</span></span> <span><span> git</span><span> reset</span><span> --hard</span><span> origin/main</span></span> <span><span>else</span></span> <span><span> git</span><span> clone</span><span> "{{repo_url}}"</span><span> "{{app_name}}"</span></span> <span><span> cd</span><span> "{{app_name}}"</span></span> <span><span>fi</span></span> <span></span> <span><span>echo</span><span> "📦 Installing project dependencies..."</span></span> <span><span>npm</span><span> install</span></span> <span></span> <span><span>echo</span><span> "🏗️ Building the Eleventy site..."</span></span> <span><span>npx</span><span> @11ty/eleventy</span></span> <span></span> <span><span>echo</span><span> "📡 Serving with PM2 on port {{port}}..."</span></span> <span><span>pm2</span><span> start</span><span> "serve _site -l {{port}}"</span><span> --name</span><span> "{{app_name}}"</span></span> <span></span> <span><span># Configure PM2 to start on boot for the current user and persist the process list</span></span> <span><span>sudo</span><span> env</span><span> PATH=</span><span>$PATH </span><span>pm2</span><span> startup</span><span> systemd</span><span> -u</span><span> "</span><span>$USER</span><span>"</span><span> --hp</span><span> "</span><span>$HOME</span><span>"</span><span> &gt;</span><span>/dev/null</span></span> <span><span>pm2</span><span> save</span></span> <span></span> <span><span>echo</span><span> "🌐 Configuring Nginx reverse proxy..."</span></span> <span><span>NGINX_CONF</span><span>=</span><span>"/etc/nginx/sites-available/{{app_name}}"</span></span> <span><span>sudo</span><span> tee</span><span> "</span><span>$NGINX_CONF</span><span>"</span><span> &gt;</span><span> /dev/null</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>server {</span></span> <span><span> listen 80;</span></span> <span><span> server_name {{domain}};</span></span> <span></span> <span><span> location / {</span></span> <span><span> proxy_pass 
http://localhost:{{port}};</span></span> <span><span> proxy_http_version 1.1;</span></span> <span><span> proxy_set_header Upgrade </span><span>\$</span><span>http_upgrade;</span></span> <span><span> proxy_set_header Connection 'upgrade';</span></span> <span><span> proxy_set_header Host </span><span>\$</span><span>host;</span></span> <span><span> proxy_cache_bypass </span><span>\$</span><span>http_upgrade;</span></span> <span><span> }</span></span> <span><span>}</span></span> <span><span>EOF</span></span> <span></span> <span><span>echo</span><span> "🔗 Enabling Nginx config and restarting..."</span></span> <span><span>sudo</span><span> ln</span><span> -sf</span><span> "</span><span>$NGINX_CONF</span><span>"</span><span> /etc/nginx/sites-enabled/</span></span> <span><span>sudo</span><span> rm</span><span> -f</span><span> /etc/nginx/sites-enabled/default</span></span> <span><span>sudo</span><span> nginx</span><span> -t</span><span> &amp;&amp; </span><span>sudo</span><span> systemctl</span><span> reload</span><span> nginx</span></span> <span></span> <span><span>echo</span><span> "✅ Deployment complete! Access your site at http://{{domain}}"</span></span></code><span></span><span></span></pre> <p>Here is what the <code>Deploy Eleventy Site</code> does:</p> <ul> <li>Installs system dependencies and Node.js</li> <li>Clone your Eleventy project from a Git repository</li> <li>Build the project</li> <li>Serves the site with PM2</li> <li>Sets up Nginx as a reverse proxy</li> </ul> <h2>Define the Variable Group</h2> <p>To make this script reusable across multiple projects or servers, you’ll define a variable group inside CloudRay. These variables dynamically fill in the placeholders like <code>{{repo_url}}</code>, <code>{{app_name}}</code>, <code>{{port}}</code> and <code>{{domain}}</code>. 
CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>.</p> <img src="/_astro/variables.CHssZhQe.png" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>repo_url</code>:</strong> The URL of your GitHub repository</li> <li><strong><code>app_name</code>:</strong> Your Eleventy deployment directory</li> <li><strong><code>domain</code>:</strong> Your domain name, e.g., <code>myapp.example.com</code></li> <li><strong><code>port</code>:</strong> The port your Eleventy application listens on (e.g., <code>3000</code>)</li> </ul> <p>With the variables set up, you can proceed to run the script with CloudRay.</p> <h2>Run the Deployment Script</h2> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>Once the script is saved, you can deploy your Eleventy site by creating a Runlog:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier.</li> <li>Script: Choose the “Deploy Eleventy Site” script.</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/create-runlog.Db4IulsN.png" alt="Screenshot of creating a new runlog" /> <ol>
<li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/output-runlog.BWgtCfIf.png" alt="Screenshot of the output of the Eleventy deployment script" /> <p>CloudRay will automatically connect to your server, run the <code>Deploy Eleventy Site</code>, and provide live logs to track the process. If any errors occur, you can review the logs to troubleshoot the issue.</p> <p>Once the deployment is complete, you can visit <code>http://&lt;your_domain&gt;</code>. You should see your Eleventy site up and running</p> <img src="/_astro/eleventy-site-running.D8_ce5Xa.png" alt="Screenshot showing Eleventy Site running" /> <h2>Next Step</h2> <p>To improve your deployment and security:</p> <ul> <li>You can use Let’s Encrypt to add HTTPS support</li> <li>Set up automatic redeployment with webhooks or CI/CD</li> </ul> <h2>Summary</h2> <p>With CloudRay, deploying your Eleventy site to Ubuntu becomes a structured and repeatable process. 
Scripts remain organised, server access is centralised, and you can easily tweak configurations with variable groups.</p> <p>For more deployment guides and use cases, check out our <a href="/articles">CloudRay Guides</a> or explore the <a href="/docs/getting-started">CloudRay Docs</a>.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>Script automation explained – what it is, tools, benefits, and real exampleshttps://cloudray.io/articles/script-automation-guidehttps://cloudray.io/articles/script-automation-guideLearn what script automation is, its benefits, the best tools, and practical examples using Bash scripting to boost your DevOps and IT workflows.Tue, 12 Aug 2025 00:00:00 GMT<p>Script automation is the use of code, written in languages such as <a href="https://cloudray.io/articles/bash-vs-python">Bash or Python</a> to automate repetitive or time‑consuming tasks in IT operations, system administration, and software development. Instead of performing tasks manually, teams can <a href="https://cloudray.io/docs/scripts">run scripts</a> to trigger processes such as application deployment, <a href="https://cloudray.io/articles/automate-wordpress-multi-site-backups">data backups</a>, file transfers, or system monitoring.</p> <p>Many businesses adopt script automation to reduce human error, save time, improve efficiency, and accelerate DevOps workflows. 
As the demand for faster software delivery and continuous integration/continuous deployment (CI/CD) grows, script automation has become an essential part of modern DevOps strategies.</p> <p>In this article, we explore the key benefits of script automation, popular scripting languages, top tools, and real‑world examples to help you apply it effectively in your DevOps and IT operations.</p> <h2>Contents</h2> <ul> <li><a href="#best-scripting-languages-for-automation">Best Scripting Languages for Automation</a></li> <li><a href="#benefits-of-script-automation">Benefits of Script Automation</a></li> <li><a href="#top-5-script-automation-tools">Top 5 Script Automation Tools</a> <ul> <li><a href="#cloudray">CloudRay</a></li> <li><a href="#scriptrunner">ScriptRunner</a></li> <li><a href="#ansible">Ansible</a></li> <li><a href="#attuneops">AttuneOps</a></li> <li><a href="#jenkins">Jenkins</a></li> </ul> </li> <li><a href="#examples-of-script-automation-in-bash">Examples of Script Automation in Bash</a></li> <li><a href="#wrapping-up">Wrapping up</a></li> </ul> <h2>Best Scripting Languages for Automation</h2> <p>There are several programming languages for script automation with unique strengths and characteristics. However, Bash scripting and Python remain the most widely used for system and DevOps automation.</p> <ul> <li> <p><strong>Bash scripting:</strong> This is a shell command language known for its integration with Unix-based systems. It’s ideal for automating administrative tasks such as package installation, server bootstrapping, or even deployments.</p> </li> <li> <p><strong>Python:</strong> This is a high-level and general purpose language widely used for infrastructure automation, API scripting, and test pipelines in modern DevOps workflows.</p> </li> </ul> <p>Both Bash and Python have strengths and weaknesses depending on the use case. 
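To make one of those differences concrete before the comparison: Bash error handling is built on exit codes, typically hardened with `set -e`, an `ERR` trap, and explicit checks around steps that are allowed to fail. A generic sketch (the file paths are illustrative):

```shell
#!/bin/bash
# Sketch of typical defensive Bash: strict mode plus an ERR trap.
set -euo pipefail
trap 'echo "error on line $LINENO" >&2' ERR

# Explicit exit-code check for a step that is allowed to fail:
# conditions inside `if` do not abort the script under set -e.
if ! grep -q "nonexistent-pattern" /etc/hostname 2>/dev/null; then
    echo "pattern not found, continuing with a fallback"
fi

echo "script finished" > /tmp/error-demo.status
```

Python, by contrast, raises structured exceptions that can be caught, inspected, and re-raised, which is what the comparison below summarises.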
Below is a brief comparison:</p> <table><thead><tr><th>Feature</th><th>Bash</th><th>Python</th></tr></thead><tbody><tr><td>Best for</td><td>Shell scripting, Linux/Unix system tasks</td><td>Cross-platform automation, APIs, DevOps workflows</td></tr><tr><td>Ease of Use</td><td>Simple for basic tasks; can get complex for logic-heavy work</td><td>Readable and maintainable, especially for large scripts</td></tr><tr><td>Tooling &amp; Ecosystem</td><td>Native to Unix/Linux; tightly integrated with CLI tools</td><td>Rich library ecosystem for HTTP, automation, DevOps, etc.</td></tr><tr><td>Performance</td><td>Fast for command chaining and shell operations</td><td>Slightly slower but better for complex logic and data parsing</td></tr><tr><td>Error Handling</td><td>Primitive error handling (exit codes)</td><td>Built‑in exception handling</td></tr><tr><td>Learning Curve</td><td>Easier for those familiar with Linux shell</td><td>Easier for general-purpose programming and logic-heavy tasks</td></tr></tbody></table> <p>Aside from Bash and Python, several other scripting languages are used for automation, each with its own use cases.
Here are other scripting languages widely used for IT operations and DevOps automation:</p> <ul> <li> <p><strong>Go (Golang):</strong> Well suited to concurrent task execution</p> </li> <li> <p><strong>PowerShell:</strong> Designed for Windows automation; great for managing system configuration and registry tasks</p> </li> <li> <p><strong>Java:</strong> Often used for DevOps pipeline automation and integrations</p> </li> <li> <p><strong>Ruby:</strong> Popular for automating configuration management</p> </li> <li> <p><strong>Perl:</strong> Powerful for file processing and text manipulation</p> </li> <li> <p><strong>JavaScript:</strong> Useful for automating web APIs or build processes</p> </li> </ul> <p>Each of these languages has its own strengths and limitations and is best known for specific kinds of automation.</p> <h2>Benefits of Script Automation</h2> <p>The benefits of script automation are significant. As businesses scale their infrastructure and software operations, manual processes become time‑consuming, tedious, and error‑prone. Script automation boosts productivity, reduces human error, and accelerates software delivery and deployments.</p> <p>Here are some benefits of using script automation in DevOps workflows and IT environments:</p> <ul> <li> <p><strong>Cost Optimisation:</strong> Script automation reduces the need for manual input, cutting work hours and minimising costly mistakes. Teams save both time and money by automating routine tasks.</p> </li> <li> <p><strong>Faster Task Execution:</strong> Automation can complete in minutes, or even seconds, tasks that take hours or days manually. This leads to faster incident recovery, quicker deployments, and better overall performance.</p> </li> <li> <p><strong>Improved Accuracy and Consistency:</strong> Manual operations are error-prone.
However, with script automation, operations are executed consistently across environments.</p> </li> <li> <p><strong>Enhanced Productivity:</strong> Script automation frees teams from repetitive tasks. This allows teams to focus on higher-priority work such as innovation, workflow optimisation, and security hardening.</p> </li> <li> <p><strong>Seamless Integration with DevOps Tools:</strong> Scripts integrate easily with CI/CD and configuration management tools, working with them to trigger deployments, automate test runs, and more.</p> </li> </ul> <h2>Top 5 Script Automation Tools</h2> <p>Script automation tools have become important for teams that want to scale faster, increase productivity, and optimise costs. These tools empower DevOps engineers, system admins, and cloud engineers to streamline operations.</p> <p>Here are five of the best script automation tools used by modern engineering teams:</p> <h3>CloudRay</h3> <p>CloudRay is a centralised Bash script automation platform that helps teams manage cloud and hybrid infrastructure. It lets teams run Bash scripts securely across hybrid or cloud infrastructure with the help of an <a href="https://cloudray.io/docs/agent">Agent</a>, which makes it well suited to automating repetitive infrastructure tasks such as installations, deployments, server maintenance, and backups.</p> <p>Additionally, <a href="https://cloudray.io/docs/schedules">CloudRay’s schedules</a> allow teams to schedule scripts across multiple environments. It also supports <a href="https://cloudray.io/docs/incoming-webhooks">webhook triggers</a>, making automation repeatable and event-driven. CloudRay stands out by combining the flexibility of scripting with the governance enterprise teams need to scale securely.</p> <h3>ScriptRunner</h3> <p><a href="https://www.scriptrunner.com/">ScriptRunner</a> is an automation platform specifically designed for PowerShell.
It’s used by Windows admins to automate routine admin tasks with full auditability and governance. It provides a centralised environment for storing, managing, and executing PowerShell scripts with control and traceability. This tool also supports approvals, Active Directory integration, delegated execution, and logging.</p> <h3>Ansible</h3> <p><a href="https://docs.ansible.com/">Ansible</a> is an open-source configuration management tool developed by Red Hat. It uses YAML-based playbooks for infrastructure-wide automation and is popular for managing complex infrastructure at scale. Ansible’s defining characteristic is its agentless architecture: it operates over SSH, which makes adoption easier.</p> <h3>AttuneOps</h3> <p><a href="https://attuneops.io/">AttuneOps</a> is a script automation tool that provides advanced orchestration, scheduling, and workflow management. It supports multiple scripting languages such as PowerShell, Bash, and Python.</p> <p>It provides a centralised engine that lets teams automate consistently across different OS environments. It is used heavily in IT operations to manage routine, repetitive tasks.</p> <h3>Jenkins</h3> <p><a href="https://www.jenkins.io/">Jenkins</a> is a widely used automation platform for CI/CD in DevOps workflows. It excels at executing automation scripts and scheduled jobs. Jenkins supports multiple script types, such as shell and Python, integrates with source control tools, and offers robust scheduling.</p> <h2>Examples of Script Automation in Bash</h2> <p>Bash scripts can be used to automate routine system administration tasks, including package installation, database backups, server provisioning, and deployments.</p> <p>Let’s look at some practical examples of Bash script automation that improve IT operations and DevOps workflows:</p> <p><strong>1. Automating LAMP Stack Installations</strong></p> <p>Bash scripts can be used to set up web servers quickly.
You can <a href="https://cloudray.io/articles/automate-installation-of-lamp-stack-on-ubuntu-using-bash-script">automate the installation of the LAMP stack (Linux, Apache, MySQL, PHP)</a>.</p> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update package list and install LAMP stack</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> apache2</span><span> mysql-server</span><span> php</span><span> libapache2-mod-php</span><span> -y</span></span> <span></span> <span><span># Start services</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> apache2</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> mysql</span></span></code><span></span><span></span></pre> <p>This script installs all the required components and ensures Apache and MySQL start automatically after a reboot.</p> <p><strong>2. Automating MySQL Backups to S3</strong></p> <p>Bash scripts can be used to automate and schedule routine database backups.
For example, you can <a href="https://cloudray.io/articles/automate-mysql-backup-to-amazon-s3">automate MySQL backups to Amazon S3</a> to ensure your data is consistently available offsite.</p> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Variables</span></span> <span><span>DB_NAME</span><span>=</span><span>"mydb"</span></span> <span><span>USER</span><span>=</span><span>"root"</span></span> <span><span>PASSWORD</span><span>=</span><span>"yourpassword"</span></span> <span><span>BACKUP_PATH</span><span>=</span><span>"/tmp/mysql-backup.sql"</span></span> <span><span>DATE</span><span>=</span><span>$(</span><span>date</span><span> +%F</span><span>)</span></span> <span></span> <span><span>mysqldump</span><span> -u</span><span> $USER </span><span>-p</span><span>$PASSWORD</span><span> $DB_NAME </span><span>&gt;</span><span> $BACKUP_PATH</span></span> <span><span>aws</span><span> s3</span><span> cp</span><span> $BACKUP_PATH </span><span>s3://your-s3-bucket/</span><span>$DB_NAME</span><span>-</span><span>$DATE</span><span>.sql</span></span></code><span></span><span></span></pre> <p>You can also use it to <a href="https://cloudray.io/articles/automate-postgres-backup-to-amazon-s3">automate backup of PostgreSQL to S3</a>.</p> <p><strong>3. Automating Installation of WordPress</strong></p> <p>You can use Bash scripts to streamline and automate the deployment of CMS platforms like WordPress.
For example, you can <a href="https://cloudray.io/articles/deploy-multi-wordpress-sites-on-one-server">automate the deployment of multiple WordPress sites on a single server</a> using a Bash script.</p> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update package list and install LAMP stack</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> apache2</span><span> mysql-server</span><span> php</span><span> php-mysql</span><span> -y</span></span> <span></span> <span><span># Create directories for multiple sites</span></span> <span><span>sudo</span><span> mkdir</span><span> -p</span><span> /var/www/site1.com</span><span> /var/www/site2.com</span></span> <span></span> <span><span># Set permissions</span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> $USER</span><span>:</span><span>$USER </span><span>/var/www/site1.com</span><span> /var/www/site2.com</span></span> <span></span> <span><span># Download and extract WordPress</span></span> <span><span>wget</span><span> https://wordpress.org/latest.tar.gz</span></span> <span><span>tar</span><span> -xvzf</span><span> latest.tar.gz</span></span> <span><span>cp</span><span> -r</span><span> wordpress/</span><span>*</span><span> /var/www/site1.com</span></span></code><span></span><span></span></pre> <p>Additionally, you can use a Bash script to <a href="https://cloudray.io/articles/automate-wordpress-multi-site-backups">automate the backup process of your WordPress site</a>.</p> <p><strong>4. Deploy a Database Server</strong></p> <p>Infrastructure engineers use Bash scripts to save time during infrastructure setup.
You can use a Bash script to <a href="https://cloudray.io/articles/deploy-mysql-server">automate the deployment of MySQL server</a> on a Linux host.</p> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update package list and install MySQL server</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> mysql-server</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> mysql</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> mysql</span></span></code><span></span><span></span></pre> <h2>Wrapping up</h2> <p>Script automation is an efficient way for DevOps and IT professionals to streamline DevOps tasks and IT operations while reducing the likelihood of human error. Whether you’re automating deployments, backups, or security checks, a well-structured approach enhances productivity and reliability. As infrastructure grows, so does the complexity of managing it, and script automation becomes not just a convenience but a necessity.</p> <p><a href="https://cloudray.io">CloudRay</a> is a leading platform for centralised Bash script automation across your cloud and server infrastructure. With <a href="https://cloudray.io/docs/agent">CloudRay Agent</a>, you can securely connect your cloud instances and on-premise servers, enabling real-time execution and monitoring of scripts from a single control panel. Our powerful <a href="https://cloudray.io/docs/schedules">Schedules</a> feature allows you to automate scripts at custom intervals, whether hourly, daily, or triggered by specific events, ensuring your DevOps workflows run reliably without manual intervention.
CloudRay simplifies script management, increases operational efficiency, and gives teams full control over infrastructure automation, all from one unified interface.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>How to Deploy Your Gridsome Sitehttps://cloudray.io/articles/how-to-deploy-your-gridsome-sitehttps://cloudray.io/articles/how-to-deploy-your-gridsome-siteLearn how to deploy your Gridsome site to an Ubuntu server using a reusable Bash script and CloudRay’s centralised automation platform.Thu, 31 Jul 2025 00:00:00 GMT<p><a href="https://gridsome.org/">Gridsome</a> is a modern Vue-powered static site generator that helps you build fast, SEO-friendly websites using GraphQL and Vue components. In this guide, we’ll walk you through deploying a Gridsome project to an Ubuntu server using <a href="https://cloudray.io">CloudRay</a>, a platform that helps you manage your infrastructure and automate deployments using reusable Bash scripts.</p> <p>This guide assumes you already have a Gridsome project set up. 
If not, check out the <a href="https://gridsome.org/docs/">Getting Started Guide</a> to scaffold your project.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-a-deployment-script">Create a Deployment Script</a></li> <li><a href="#define-the-variable-group">Define the Variable Group</a></li> <li><a href="#run-the-deployment-script">Run the Deployment Script</a></li> <li><a href="#next-step">Next Step</a></li> <li><a href="#summary">Summary</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your <a href="/docs/machines">machine</a> is added to a CloudRay project and connected using the <a href="/docs/agent">CloudRay Agent</a>.</p> <h2>Create a Deployment Script</h2> <p>Now that your machine is connected to CloudRay, let’s create a reusable Bash script to automate the deployment process. You need to follow these steps to create the script in CloudRay.</p> <img src="/_astro/deployment-script.C6M5Nqlb.png" alt="Screenshot for deployment script of Gridsome site" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Deploy Gridsome Site</code>. 
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span>echo</span><span> "📦 Updating system and installing dependencies..."</span></span> <span><span>sudo</span><span> apt</span><span> update</span><span> &amp;&amp; </span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> \</span></span> <span><span> curl</span><span> git</span><span> nginx</span><span> build-essential</span><span> make</span><span> gcc</span><span> g++</span><span> python3</span><span> libvips-dev</span></span> <span></span> <span><span>echo</span><span> "🧰 Installing Node.js LTS..."</span></span> <span><span>curl</span><span> -fsSL</span><span> https://deb.nodesource.com/setup_lts.x</span><span> |</span><span> sudo</span><span> -E</span><span> bash</span><span> -</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> nodejs</span></span> <span></span> <span><span>echo</span><span> "🚀 Installing global packages..."</span></span> <span><span>sudo</span><span> npm</span><span> install</span><span> -g</span><span> pm2</span></span> <span></span> <span><span>echo</span><span> "📁 Cloning Gridsome repo..."</span></span> <span><span>git</span><span> clone</span><span> "{{repo_url}}"</span><span> "{{app_name}}"</span></span> <span><span>cd</span><span> "{{app_name}}"</span></span> <span></span> <span><span>echo</span><span> "📦 Installing project dependencies..."</span></span> <span><span>npm</span><span> install</span></span> <span></span> <span><span>echo</span><span> "🏗️ Building the Gridsome app..."</span></span> <span><span>npm</span><span> run</span><span> build</span></span> <span></span> <span><span>echo</span><span> "📡 Serving with PM2 on port </span><span>{{port}}</span><span>..."</span></span> <span><span>pm2</span><span>
start</span><span> "npx serve dist -l {{port}}"</span><span> --name</span><span> "{{app_name}}"</span></span> <span><span>pm2</span><span> save</span></span> <span><span>pm2</span><span> startup</span></span> <span></span> <span><span>echo</span><span> "🌐 Configuring Nginx reverse proxy..."</span></span> <span><span>NGINX_CONF</span><span>=</span><span>"/etc/nginx/sites-available/{{app_name}}"</span></span> <span><span>sudo</span><span> tee</span><span> "</span><span>$NGINX_CONF</span><span>"</span><span> &gt;</span><span> /dev/null</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>server {</span></span> <span><span> listen 80;</span></span> <span><span> server_name {{domain}};</span></span> <span></span> <span><span> location / {</span></span> <span><span> proxy_pass http://localhost:{{port}};</span></span> <span><span> proxy_http_version 1.1;</span></span> <span><span> proxy_set_header Upgrade </span><span>\$</span><span>http_upgrade;</span></span> <span><span> proxy_set_header Connection 'upgrade';</span></span> <span><span> proxy_set_header Host </span><span>\$</span><span>host;</span></span> <span><span> proxy_cache_bypass </span><span>\$</span><span>http_upgrade;</span></span> <span><span> }</span></span> <span><span>}</span></span> <span><span>EOF</span></span> <span></span> <span><span>echo</span><span> "🔗 Enabling Nginx config and restarting..."</span></span> <span><span>sudo</span><span> ln</span><span> -sf</span><span> "</span><span>$NGINX_CONF</span><span>"</span><span> /etc/nginx/sites-enabled/</span></span> <span><span>sudo</span><span> nginx</span><span> -t</span><span> &amp;&amp; </span><span>sudo</span><span> systemctl</span><span> reload</span><span> nginx</span></span> <span></span> <span><span>echo</span><span> "✅ Deployment complete! 
Access your site at http://{{domain}}"</span></span></code><span></span><span></span></pre> <p>Here is what the <code>Deploy Gridsome Site</code> script does:</p> <ul> <li>Installs system dependencies and Node.js</li> <li>Clones your Gridsome project from a Git repository</li> <li>Builds the project</li> <li>Serves the site with PM2</li> <li>Sets up Nginx as a reverse proxy</li> </ul> <h2>Define the Variable Group</h2> <p>To make this script reusable across multiple projects or servers, you’ll define a variable group inside CloudRay. These variables dynamically fill in the placeholders like <code>{{repo_url}}</code>, <code>{{app_name}}</code>, <code>{{port}}</code>, and <code>{{domain}}</code>. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>.</p> <img src="/_astro/variables.DPQCEMA5.png" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>repo_url</code>:</strong> The URL of your GitHub repository</li> <li><strong><code>app_name</code>:</strong> The name of the app folder</li> <li><strong><code>port</code>:</strong> The port the site is served on</li> <li><strong><code>domain</code>:</strong> The domain pointing to the server</li> </ul> <p>With the variables set up, proceed to run the script with CloudRay.</p> <h2>Run the Deployment Script</h2> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>Once the script is saved, you can deploy your Gridsome site by creating a Runlog:</p> <ol> <li><strong>Navigate
to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier.</li> <li>Script: Choose the “Deploy Gridsome Site” script.</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/create-runlog.BNi1ZcC0.png" alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/output-runlog.Cpwux9tT.png" alt="Screenshot of the output of the Gridsome deployment script" /> <p>CloudRay will automatically connect to your server, run the <code>Deploy Gridsome Site</code> script, and provide live logs to track the process. If any errors occur, you can review the logs to troubleshoot the issue.</p> <p>Once the deployment is complete, you can visit <code>http://&lt;your_domain&gt;</code>. You should see your Gridsome site up and running.</p> <img src="/_astro/gridsome-site-running.BG2sSQiN.png" alt="Screenshot showing Gridsome Site running" /> <h2>Next Step</h2> <p>To improve your deployment and security:</p> <ul> <li>Use Let’s Encrypt to add HTTPS support</li> <li>Set up automatic redeployment with webhooks or CI/CD</li> </ul> <h2>Summary</h2> <p>With CloudRay, deploying your Gridsome site to Ubuntu becomes a structured and repeatable process.
Scripts remain organised, server access is centralised, and you can easily tweak configurations with variable groups.</p> <p>For more deployment guides and use cases, check out our <a href="/articles">CloudRay Guides</a> or explore the <a href="/docs/getting-started">CloudRay Docs</a>.</p>How to Deploy Your Hexo Sitehttps://cloudray.io/articles/how-to-deploy-your-hexo-sitehttps://cloudray.io/articles/how-to-deploy-your-hexo-siteLearn how to deploy your Hexo site to an Ubuntu server using a reusable Bash script and CloudRay’s centralised automation platform.Sat, 26 Jul 2025 00:00:00 GMT<p><a href="https://hexo.io/">Hexo</a> is a fast, simple, and powerful static site generator powered by Node.js. Many developers use Hexo to run personal blogs or documentation sites. In this guide, you’ll learn how to deploy your Hexo project to an Ubuntu server using <a href="https://cloudray.io">CloudRay</a>, a centralised platform that helps you automate server tasks through Bash scripting, manage infrastructure, and execute repeatable deployments.</p> <p>This guide assumes you have a basic Hexo project already created. 
However, if you’re new to Hexo, you can follow the <a href="https://hexo.io/docs/">Hexo Getting Started Guide</a> to scaffold your first site.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-a-deployment-script">Create a Deployment Script</a></li> <li><a href="#define-the-variable-group">Define the Variable Group</a></li> <li><a href="#run-the-deployment-script">Run the Deployment Script</a></li> <li><a href="#next-step">Next Step</a></li> <li><a href="#summary">Summary</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your <a href="/docs/machines">machine</a> is added to a CloudRay project and connected using the <a href="/docs/agent">CloudRay Agent</a>.</p> <h2>Create a Deployment Script</h2> <p>Now that your machine is connected to CloudRay, let’s create a reusable Bash script to automate the deployment process. You need to follow these steps to create the script in CloudRay.</p> <img src="/_astro/deployment-script.CmH8I7Xz.png" alt="Screenshot for deployment script of Hexo site" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Deploy Hexo Site</code>. You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># =============================</span></span> <span><span># Hexo Deployment Script</span></span> <span><span># =============================</span></span> <span></span> <span><span># =============================</span></span> <span><span># 1.
Install Dependencies</span></span> <span><span># =============================</span></span> <span><span>sudo</span><span> apt</span><span> update</span><span> &amp;&amp; </span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> curl</span><span> git</span><span> nginx</span></span> <span><span>curl</span><span> -fsSL</span><span> https://deb.nodesource.com/setup_lts.x</span><span> |</span><span> sudo</span><span> -E</span><span> bash</span><span> -</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> nodejs</span></span> <span><span>sudo</span><span> npm</span><span> install</span><span> -g</span><span> pm2</span><span> serve</span></span> <span><span>sudo</span><span> npm</span><span> install</span><span> -g</span><span> hexo-cli</span></span> <span></span> <span><span># =============================</span></span> <span><span># 2. Clone Repo</span></span> <span><span># =============================</span></span> <span><span>cd</span><span> /var/www/</span></span> <span><span>git</span><span> clone</span><span> "{{repo_url}}"</span><span> "{{app_name}}"</span></span> <span><span>cd</span><span> "/var/www/{{app_name}}"</span></span> <span></span> <span><span># =============================</span></span> <span><span># 3. Install &amp; Build Hexo</span></span> <span><span># =============================</span></span> <span><span>npm</span><span> install</span></span> <span><span>npx</span><span> hexo</span><span> generate</span></span> <span></span> <span><span># =============================</span></span> <span><span># 4. 
Serve with PM2</span></span> <span><span># =============================</span></span> <span><span>pm2</span><span> start</span><span> "serve -s public -l 3000"</span><span> --name</span><span> hexo-site</span></span> <span><span>pm2</span><span> save</span></span> <span><span>pm2</span><span> startup</span></span> <span></span> <span><span># =============================</span></span> <span><span># 5. Configure Nginx</span></span> <span><span># =============================</span></span> <span><span>sudo</span><span> bash</span><span> -c</span><span> "cat &gt; /etc/nginx/sites-available/hexo-site"</span><span> &lt;&lt;</span><span>EOL</span></span> <span><span>server {</span></span> <span><span> listen 80;</span></span> <span><span> server_name {{domain}};</span></span> <span></span> <span><span> location / {</span></span> <span><span> proxy_pass http://localhost:3000;</span></span> <span><span> proxy_http_version 1.1;</span></span> <span><span> proxy_set_header Upgrade </span><span>\$</span><span>http_upgrade;</span></span> <span><span> proxy_set_header Connection 'upgrade';</span></span> <span><span> proxy_set_header Host </span><span>\$</span><span>host;</span></span> <span><span> proxy_cache_bypass </span><span>\$</span><span>http_upgrade;</span></span> <span><span> }</span></span> <span><span>}</span></span> <span><span>EOL</span></span> <span></span> <span><span>sudo</span><span> ln</span><span> -s</span><span> /etc/nginx/sites-available/hexo-site</span><span> /etc/nginx/sites-enabled/</span></span> <span><span>sudo</span><span> nginx</span><span> -t</span></span> <span><span>sudo</span><span> systemctl</span><span> reload</span><span> nginx</span></span> <span></span> <span><span># =============================</span></span> <span><span># Done!</span></span> <span><span># =============================</span></span> <span><span>echo</span><span> "✅ Hexo site deployed at: http://{{domain}}"</span></span></code><span></span><span></span></pre> <p>This script 
handles the entire process of installing dependencies, building your Hexo project, and running it behind Nginx using PM2.</p> <h2>Define the Variable Group</h2> <p>To make this script reusable across multiple projects or servers, you’ll define a variable group inside CloudRay. These variables dynamically fill in the placeholders like <code>{{repo_url}}</code>, <code>{{app_name}}</code>, and <code>{{domain}}</code>. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>.</p> <img src="/_astro/variables.DG1RV3cI.png" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>repo_url</code>:</strong> The URL of your GitHub repository</li> <li><strong><code>app_name</code></strong>: The name of the app folder</li> <li><strong><code>domain</code>:</strong> The domain pointing to the server</li> </ul> <p>With the variables set up, proceed to run the script with CloudRay.</p> <h2>Run the Deployment Script</h2> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>Once the script is saved, you can deploy your Hexo site by creating a Runlog:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added
earlier.</li> <li>Script: Choose the “Deploy Hexo Site” script.</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/create-runlog.BHcmrpBb.png" alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/output-runlog.Iqy11G6_.png" alt="Screenshot of the output of Deploy Hexo script" /> <p>CloudRay will automatically connect to your server, run the <code>Deploy Hexo Site</code> script, and provide live logs to track the process. If any errors occur, you can review the logs to troubleshoot the issue.</p> <p>Once the deployment is complete, you can visit <code>http://&lt;your_domain&gt;</code>. You should see your Hexo site up and running.</p> <img src="/_astro/hexo-site-running.BXJK4udH.png" alt="Screenshot showing Hexo Site running" /> <h2>Next Step</h2> <p>To improve your deployment and security:</p> <ul> <li>Use Let’s Encrypt to add HTTPS support</li> <li>Set up automatic redeployment with webhooks or CI/CD</li> </ul> <h2>Summary</h2> <p>With CloudRay, deploying your Hexo site to Ubuntu becomes a structured and repeatable process. Scripts remain organised, server access is centralised, and you can easily tweak configurations with variable groups.</p> <p>For more deployment guides and use cases, check out our <a href="/articles">CloudRay Guides</a> or explore the <a href="/docs/getting-started">CloudRay Docs</a>.</p>How to Deploy Your VuePress Sitehttps://cloudray.io/articles/how-to-deploy-your-vuepress-sitehttps://cloudray.io/articles/how-to-deploy-your-vuepress-siteLearn how to deploy your VuePress site to an Ubuntu server using a reusable Bash script and CloudRay’s centralised automation platform.Tue, 22 Jul 2025 00:00:00 GMT<p><a href="https://vuepress.vuejs.org/">VuePress</a> is a static site generator built with Vue that’s great for technical documentation and blogs.
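Like the other deployment scripts in this series, the script used in this guide opens with `set -e`. The following self-contained snippet (illustrative only, not part of the deployment) shows what that option provides: the run stops at the first failing command, and an `ERR` trap can report the failure in the logs:

```shell
#!/bin/bash

# Demonstrate `set -e` with an ERR trap: the inner script stops at the first
# failing command instead of carrying on in a broken state.
demo() {
  bash -c 'set -e; trap "echo error: a step failed" ERR; echo "step 1 ok"; false; echo "step 2 ok"'
}

demo || true   # prints "step 1 ok" then "error: a step failed"; step 2 never runs
```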
In this guide, you’ll learn how to deploy your VuePress project to an Ubuntu server using <a href="https://cloudray.io">CloudRay</a>, a centralised platform that helps you automate server tasks through Bash scripting, manage infrastructure, and execute repeatable deployments.</p> <p>This guide assumes you have a basic VuePress project already created. However, if you’re new to VuePress, you can follow the <a href="https://vuepress.vuejs.org/guide/getting-started.html">VuePress Getting Started Guide</a> to scaffold your first site.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-a-deployment-script">Create a Deployment Script</a></li> <li><a href="#define-the-variable-group">Define the Variable Group</a></li> <li><a href="#run-the-deployment-script">Run the Deployment Script</a></li> <li><a href="#next-step">Next Step</a></li> <li><a href="#summary">Summary</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your <a href="/docs/machines">machine</a> is added to a CloudRay project and connected using the <a href="/docs/agent">CloudRay Agent</a>.</p> <h2>Create a Deployment Script</h2> <p>Now that your machine is connected to CloudRay, let’s create a reusable Bash script to automate the deployment process. You need to follow these steps to create the script in CloudRay.</p> <img src="/_astro/deployment-script.ChvPmT3H.png" alt="Screenshot for deployment script of VuePress site" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Deploy VuePress Site</code>.
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># =============================</span></span> <span><span># VuePress Deployment Script</span></span> <span><span># =============================</span></span> <span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># =============================</span></span> <span><span># 1. Install Dependencies</span></span> <span><span># =============================</span></span> <span><span>sudo</span><span> apt</span><span> update</span><span> &amp;&amp; </span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> curl</span><span> git</span><span> nginx</span></span> <span><span>curl</span><span> -fsSL</span><span> https://deb.nodesource.com/setup_lts.x</span><span> |</span><span> sudo</span><span> -E</span><span> bash</span><span> -</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> nodejs</span></span> <span><span>sudo</span><span> npm</span><span> install</span><span> -g</span><span> pm2</span><span> serve</span></span> <span></span> <span><span># =============================</span></span> <span><span># 2. Clone Repo</span></span> <span><span># =============================</span></span> <span><span>cd</span><span> /var/www/</span></span> <span><span>git</span><span> clone</span><span> "{{repo_url}}"</span><span> "{{app_name}}"</span></span> <span><span>cd</span><span> "/var/www/{{app_name}}"</span></span> <span></span> <span><span># =============================</span></span> <span><span># 3. Build VuePress Site</span></span> <span><span># =============================</span></span> <span><span>npm</span><span> install</span></span> <span><span>npm</span><span> run</span><span> build</span></span> <span></span> <span><span># =============================</span></span> <span><span># 4. 
Serve with PM2</span></span> <span><span># =============================</span></span> <span><span>cd</span><span> docs/.vuepress</span></span> <span><span>pm2</span><span> start</span><span> "serve -s dist -l 3000"</span><span> --name</span><span> vuepress-site</span></span> <span><span>pm2</span><span> save</span></span> <span><span>pm2</span><span> startup</span></span> <span></span> <span><span># =============================</span></span> <span><span># 5. Configure Nginx</span></span> <span><span># =============================</span></span> <span><span>sudo</span><span> bash</span><span> -c</span><span> "cat &gt; /etc/nginx/sites-available/vuepress-site"</span><span> &lt;&lt;</span><span>EOL</span></span> <span><span>server {</span></span> <span><span> listen 80;</span></span> <span><span> server_name {{domain_name}};</span></span> <span></span> <span><span> location / {</span></span> <span><span> proxy_pass http://localhost:3000;</span></span> <span><span> proxy_http_version 1.1;</span></span> <span><span> proxy_set_header Upgrade </span><span>\$</span><span>http_upgrade;</span></span> <span><span> proxy_set_header Connection 'upgrade';</span></span> <span><span> proxy_set_header Host </span><span>\$</span><span>host;</span></span> <span><span> proxy_cache_bypass </span><span>\$</span><span>http_upgrade;</span></span> <span><span> }</span></span> <span><span>}</span></span> <span><span>EOL</span></span> <span></span> <span><span>sudo</span><span> ln</span><span> -s</span><span> /etc/nginx/sites-available/vuepress-site</span><span> /etc/nginx/sites-enabled/</span></span> <span><span>sudo</span><span> nginx</span><span> -t</span></span> <span><span>sudo</span><span> systemctl</span><span> reload</span><span> nginx</span></span> <span></span> <span><span># =============================</span></span> <span><span># Done!</span></span> <span><span># =============================</span></span> <span><span>echo</span><span> "✅ VuePress deployed at: 
http://{{domain_name}}"</span></span></code><span></span><span></span></pre> <p>This script handles the entire process of installing dependencies, building your VuePress project, and running it behind Nginx using PM2.</p> <h2>Define the Variable Group</h2> <p>To make this script reusable across multiple projects or servers, you’ll define a variable group inside CloudRay. These variables dynamically fill in the placeholders like <code>{{repo_url}}</code>, <code>{{app_name}}</code>, and <code>{{domain_name}}</code>. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>.</p> <img src="/_astro/variables.DDJtpKMl.png" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>repo_url</code>:</strong> The URL of your GitHub repository</li> <li><strong><code>app_name</code>:</strong> The name of the app folder</li> <li><strong><code>domain_name</code>:</strong> The domain pointing to the server</li> </ul> <p>With the variables set up, proceed to run the script with CloudRay.</p> <h2>Run the Deployment Script</h2> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>Once the script is saved, you can deploy your VuePress site by creating a Runlog:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added
earlier.</li> <li>Script: Choose the “Deploy VuePress Site” script.</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/create-runlog.CAamKqiV.png" alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/output-runlog.wKcpbyaW.png" alt="Screenshot of the output of Deploy VuePress script" /> <p>CloudRay will automatically connect to your server, run the <code>Deploy VuePress Site</code> script, and provide live logs to track the process. If any errors occur, you can review the logs to troubleshoot the issue.</p> <p>Once the deployment is complete, you can visit <code>http://&lt;your_domain&gt;</code>. You should see your VuePress site up and running.</p> <img src="/_astro/vuepress-site-running.iLtG5AES.png" alt="Screenshot showing VuePress Site running" /> <h2>Next Step</h2> <p>To improve your deployment and security:</p> <ul> <li>Use Let’s Encrypt to add HTTPS support</li> <li>Set up automatic redeployment with webhooks or CI/CD</li> </ul> <h2>Summary</h2> <p>With CloudRay, deploying your VuePress site to Ubuntu becomes a structured and repeatable process.
Scripts remain organised, server access is centralised, and you can easily tweak configurations with variable groups.</p> <p>For more deployment guides and use cases, check out our <a href="/articles">CloudRay Guides</a> or explore the <a href="/docs/getting-started">CloudRay Docs</a>.</p>How to Deploy Your RedwoodJS Applicationhttps://cloudray.io/articles/how-to-deploy-redwoodjs-application-on-ubuntuhttps://cloudray.io/articles/how-to-deploy-redwoodjs-application-on-ubuntuLearn how to deploy your RedwoodJS Application to an Ubuntu server using a reusable Bash script and CloudRay’s centralised automation platform.Thu, 17 Jul 2025 00:00:00 GMT<p><a href="https://redwoodjs.com/">RedwoodJS</a> is a full-stack JavaScript framework that integrates frontend and backend development into a single unified workflow. It’s designed for building and deploying modern web applications using React, GraphQL, and Prisma.</p> <p>In this guide, we’ll walk through deploying a RedwoodJS application on an Ubuntu server using CloudRay. This article assumes you already have a RedwoodJS project hosted in a Git repository. 
If you’re just getting started with RedwoodJS, check out the <a href="https://docs.redwoodjs.com/docs/quick-start">Quick Start Guide</a> to scaffold your first app.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-a-deployment-script">Create a Deployment Script</a></li> <li><a href="#define-the-variable-group">Define the Variable Group</a></li> <li><a href="#run-the-deployment-script">Run the Deployment Script</a></li> <li><a href="#next-step">Next Step</a></li> <li><a href="#summary">Summary</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your <a href="/docs/machines">machine</a> is added to a CloudRay project and connected using the <a href="/docs/agent">CloudRay Agent</a>.</p> <h2>Create a Deployment Script</h2> <p>Now that your machine is connected to CloudRay, let’s create a reusable Bash script to automate the deployment process. You need to follow these steps to create the script in CloudRay.</p> <img src="/_astro/deployment-script.CmgQQwvQ.png" alt="Screenshot for deployment script of RedwoodJS application" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Deploy RedwoodJS Application</code>.
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># -----------------------------</span></span> <span><span># Update and install system packages</span></span> <span><span># -----------------------------</span></span> <span><span>sudo</span><span> apt</span><span> update</span><span> &amp;&amp; </span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> curl</span><span> git</span></span> <span></span> <span><span># -----------------------------</span></span> <span><span># Install Corepack for Yarn</span></span> <span><span># -----------------------------</span></span> <span><span>npm</span><span> install</span><span> -g</span><span> corepack</span></span> <span><span>COREPACK_ENABLE_DOWNLOAD_PROMPT</span><span>=</span><span>0</span><span> yarn</span><span> init</span><span> -2</span><span> --yes</span></span> <span></span> <span><span># -----------------------------</span></span> <span><span># Install Node.js</span></span> <span><span># -----------------------------</span></span> <span><span>curl</span><span> -fsSL</span><span> https://deb.nodesource.com/setup_lts.x</span><span> |</span><span> sudo</span><span> -E</span><span> bash</span><span> -</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> nodejs</span></span> <span></span> <span><span># -----------------------------</span></span> <span><span># Clone RedwoodJS App</span></span> <span><span># -----------------------------</span></span> <span><span>git</span><span> clone</span><span> {{repo_url}}</span><span> redwood-app</span></span> <span><span>cd</span><span> redwood-app</span></span> <span></span> <span><span># -----------------------------</span></span> <span><span># Install dependencies and build</span></span> <span><span># 
-----------------------------</span></span> <span><span>yarn</span><span> install</span></span> <span><span>yarn</span><span> rw</span><span> g</span><span> page</span><span> home</span><span> /</span></span> <span><span>yarn</span><span> rw</span><span> build</span><span> web</span></span> <span></span> <span><span># Install PM2 and serve globally</span></span> <span><span>npm</span><span> install</span><span> -g</span><span> pm2</span><span> serve</span></span> <span></span> <span><span># Start the RedwoodJS frontend (served from web/dist) on port 3000</span></span> <span><span>pm2</span><span> start</span><span> "serve -s web/dist -l 3000"</span><span> --name</span><span> redwood-frontend</span></span> <span></span> <span><span># Save the PM2 process list to be resurrected on reboot</span></span> <span><span>pm2</span><span> save</span></span> <span></span> <span><span># Generate and configure system startup script</span></span> <span><span>pm2</span><span> startup</span></span></code><span></span><span></span></pre> <p>Here is what the <code>Deploy RedwoodJS Application</code> script does:</p> <ul> <li>Installs required dependencies (Node.js, Git, Yarn)</li> <li>Clones your RedwoodJS project from a Git repository</li> <li>Installs dependencies and builds the app</li> <li>Serves the frontend from the <code>web/dist</code> directory</li> </ul> <h2>Define the Variable Group</h2> <p>Now, before running the deployment script, you need to define a value for the placeholder <code>{{repo_url}}</code> used in the script. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. 
This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.Dds9Weqd.png" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>repo_url</code>:</strong> The URL of your GitHub repository</li> </ul> <p>With the variable group set up, you can proceed to run the script with CloudRay.</p> <h2>Run the Deployment Script</h2> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>Once the script is saved, you can deploy your RedwoodJS application by creating a Runlog:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier.</li> <li>Script: Choose the “Deploy RedwoodJS Application”</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/create-runlog.CKTwilKt.png" alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/output-runlog.BjXzvMsl.png" alt="Screenshot of the output of the Deploy RedwoodJS Application script" /> <p>CloudRay will automatically connect to your server, run the <code>Deploy RedwoodJS Application</code> script, and provide live logs to track 
the process. If any errors occur, you can review the logs to troubleshoot the issue.</p> <p>Once the deployment is complete, you can visit <code>http://&lt;your-server-ip&gt;:3000</code>. You should see your RedwoodJS site up and running.</p> <h2>Next Step</h2> <p>To productionise your deployment:</p> <ul> <li>You can configure a <a href="https://docs.redwoodjs.com/docs/how-to/self-hosting-redwood/#nginx">Nginx reverse proxy</a> to serve on port 80</li> <li>Set up HTTPS with Let’s Encrypt</li> </ul> <h2>Summary</h2> <p>Deploying a RedwoodJS app with CloudRay streamlines server setup and project provisioning using Bash scripts and dynamic variables. You gain the ability to manage infrastructure and scale your app across environments more easily.</p> <p>For more deployment guides and use cases, check out our <a href="/articles">CloudRay Guides</a> or explore the <a href="/docs/getting-started">CloudRay Docs</a>.</p>How to Deploy Your Gatsby Sitehttps://cloudray.io/articles/how-to-deploy-your-gatsby-sitehttps://cloudray.io/articles/how-to-deploy-your-gatsby-siteLearn how to deploy your Gatsby site to an Ubuntu server using a reusable Bash script and CloudRay’s centralised automation platform.Wed, 16 Jul 2025 00:00:00 GMT<p><a href="https://www.gatsbyjs.com/">Gatsby</a> is a fast and modern React-based static site generator built for performance. In this guide, we’ll walk you through deploying a Gatsby project to an Ubuntu server using <a href="https://cloudray.io">CloudRay</a>, a centralised platform that helps you automate server tasks through Bash scripting, manage infrastructure, and execute repeatable deployments.</p> <p>This guide assumes you have a basic Gatsby project already created. 
However, if you don’t have a Gatsby project set up yet, you can follow the <a href="https://www.gatsbyjs.com/docs/quick-start/">Gatsby Quick Start Guide</a> to get started.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-a-deployment-script">Create a Deployment Script</a></li> <li><a href="#define-the-variable-group">Define the Variable Group</a></li> <li><a href="#run-the-deployment-script">Run the Deployment Script</a></li> <li><a href="#next-step">Next Step</a></li> <li><a href="#summary">Summary</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your <a href="/docs/machines">machine</a> is added to a CloudRay project and connected using the <a href="/docs/agent">CloudRay Agent</a>.</p> <h2>Create a Deployment Script</h2> <p>Now that your machine is connected to CloudRay, let’s create a reusable Bash script to automate the deployment process. You need to follow these steps to create the script in CloudRay.</p> <img src="/_astro/deployment-script.DMgCiRVd.png" alt="Screenshot for deployment script of Gatsby site" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Deploy Gatsby Site</code>. You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># =============================</span></span> <span><span># Gatsby Deployment Script</span></span> <span><span># =============================</span></span> <span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># =============================</span></span> <span><span># 1. 
Install Dependencies</span></span> <span><span># =============================</span></span> <span><span>sudo</span><span> apt</span><span> update</span><span> &amp;&amp; </span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> curl</span><span> git</span><span> nginx</span></span> <span></span> <span><span># Install Node.js</span></span> <span><span>curl</span><span> -fsSL</span><span> https://deb.nodesource.com/setup_lts.x</span><span> |</span><span> sudo</span><span> -E</span><span> bash</span><span> -</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> nodejs</span></span> <span></span> <span><span># Install PM2 and Serve</span></span> <span><span>sudo</span><span> npm</span><span> install</span><span> -g</span><span> pm2</span><span> serve</span></span> <span></span> <span><span># =============================</span></span> <span><span># 2. Clone the Repository</span></span> <span><span># =============================</span></span> <span><span>cd</span><span> /var/www</span></span> <span><span>sudo</span><span> git</span><span> clone</span><span> "{{repo_url}}"</span><span> "{{app_name}}"</span></span> <span><span>cd</span><span> "/var/www/{{app_name}}"</span></span> <span></span> <span><span># =============================</span></span> <span><span># 3. Install and Build Gatsby App</span></span> <span><span># =============================</span></span> <span><span>npm</span><span> install</span><span> &amp;&amp; </span><span>npm</span><span> run</span><span> build</span></span> <span></span> <span><span># =============================</span></span> <span><span># 4. 
Start with PM2</span></span> <span><span># =============================</span></span> <span><span>pm2</span><span> start</span><span> "serve -s public -l 3000"</span><span> --name</span><span> "{{app_name}}"</span></span> <span><span>pm2</span><span> save</span></span> <span><span>pm2</span><span> startup</span></span> <span></span> <span><span># =============================</span></span> <span><span># 5. Configure Nginx</span></span> <span><span># =============================</span></span> <span><span>sudo</span><span> bash</span><span> -c</span><span> "cat &gt; /etc/nginx/sites-available/{{app_name}}"</span><span> &lt;&lt;</span><span>EOL</span></span> <span><span>server {</span></span> <span><span> listen 80;</span></span> <span><span> server_name {{domain_name}};</span></span> <span></span> <span><span> location / {</span></span> <span><span> proxy_pass http://localhost:3000;</span></span> <span><span> proxy_http_version 1.1;</span></span> <span><span> proxy_set_header Upgrade </span><span>\$</span><span>http_upgrade;</span></span> <span><span> proxy_set_header Connection 'upgrade';</span></span> <span><span> proxy_set_header Host </span><span>\$</span><span>host;</span></span> <span><span> proxy_cache_bypass </span><span>\$</span><span>http_upgrade;</span></span> <span><span> }</span></span> <span><span>}</span></span> <span><span>EOL</span></span> <span></span> <span><span># Enable Nginx site</span></span> <span><span>sudo</span><span> ln</span><span> -sf</span><span> "/etc/nginx/sites-available/{{app_name}}"</span><span> "/etc/nginx/sites-enabled/{{app_name}}"</span></span> <span><span>sudo</span><span> nginx</span><span> -t</span><span> &amp;&amp; </span><span>sudo</span><span> systemctl</span><span> reload</span><span> nginx</span></span> <span></span> <span><span># =============================</span></span> <span><span># Done!</span></span> <span><span># =============================</span></span> <span><span>echo</span><span> "✅ Deployment complete. 
Visit: http://{{domain_name}}"</span></span></code><span></span><span></span></pre> <p>This script handles the entire process of installing dependencies, building your Gatsby project, and running it behind Nginx using PM2.</p> <h2>Define the Variable Group</h2> <p>To make this script reusable across multiple projects or servers, you’ll define a variable group inside CloudRay. These variables dynamically fill in the placeholders like <code>{{repo_url}}</code>, <code>{{app_name}}</code>, and <code>{{domain_name}}</code>. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>.</p> <img src="/_astro/variables.DAOHZoKP.png" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>repo_url</code>:</strong> The URL of your GitHub repository</li> <li><strong><code>app_name</code>:</strong> The name of the app folder</li> <li><strong><code>domain_name</code>:</strong> The domain pointing to the server</li> </ul> <p>With the variable group set up, you can proceed to run the script with CloudRay.</p> <h2>Run the Deployment Script</h2> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>Once the script is saved, you can deploy your Gatsby site by creating a Runlog:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure 
the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier.</li> <li>Script: Choose the “Deploy Gatsby Site”</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/create-runlog.P84U2xSX.png" alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/output-runlog.BbhFnfDl.png" alt="Screenshot of the output of Deploy Gatsby script" /> <p>CloudRay will automatically connect to your server, run the <code>Deploy Gatsby Site</code> script, and provide live logs to track the process. If any errors occur, you can review the logs to troubleshoot the issue.</p> <p>Once the deployment is complete, you can visit <code>http://&lt;your_domain&gt;</code>. You should see your Gatsby site up and running.</p> <img src="/_astro/gatsby-site-running.CFc89t7C.png" alt="Screenshot showing Gatsby Site running" /> <h2>Next Step</h2> <p>To improve your deployment and security:</p> <ul> <li>You can use Let’s Encrypt to add HTTPS support</li> <li>Set up automatic redeployment with webhooks or CI/CD</li> </ul> <h2>Summary</h2> <p>With CloudRay, deploying your Gatsby site to Ubuntu becomes a structured and repeatable process. 
Scripts remain organised, server access is centralised, and you can easily tweak configurations with variable groups.</p> <p>For more deployment guides and use cases, check out our <a href="/articles">CloudRay Guides</a> or explore the <a href="/docs/getting-started">CloudRay Docs</a>.</p>How to Deploy Your Hugo Sitehttps://cloudray.io/articles/deploy-hugo-sitehttps://cloudray.io/articles/deploy-hugo-siteLearn how to deploy your Hugo site to an Ubuntu server using a reusable Bash script and CloudRay’s centralised automation platform.Mon, 07 Jul 2025 00:00:00 GMT<p><a href="https://gohugo.io/">Hugo</a> is a fast and flexible static site generator that enables developers to build modern websites quickly. In this guide, we’ll walk through how to deploy a Hugo site to an Ubuntu server using <a href="https://cloudray.io">CloudRay</a>, a centralised platform for managing infrastructure and automating deployment using Bash scripts.</p> <p>This guide assumes you already have a Hugo project stored in a Git repository. If not, you can start by following <a href="https://gohugo.io/getting-started/quick-start/">Hugo’s Quick Start</a> guide to create one.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-a-deployment-script">Create a Deployment Script</a></li> <li><a href="#define-the-variable-group">Define the Variable Group</a></li> <li><a href="#run-the-deployment-script">Run the Deployment Script</a></li> <li><a href="#next-step">Next Step</a></li> <li><a href="#summary">Summary</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your <a href="/docs/machines">machine</a> is added to a CloudRay project and connected using the <a href="/docs/agent">CloudRay Agent</a>.</p> <h2>Create a Deployment Script</h2> <p>Now that your machine is connected to CloudRay, let’s create a reusable Bash script to automate the deployment process. 
You need to follow these steps to create the script in CloudRay.</p> <img src="/_astro/deployment-script.CXM1rZwB.png" alt="Screenshot for deployment script of Hugo site" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Deploy Hugo Site</code>. You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span><span> </span></span> <span></span> <span><span># === Commands ===</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> hugo</span><span> nginx</span><span> git</span><span> -y</span></span> <span></span> <span><span>sudo</span><span> mkdir</span><span> {{project_dir}}</span></span> <span><span>cd</span><span> {{project_dir}}</span></span> <span></span> <span><span>sudo</span><span> git</span><span> clone</span><span> --recurse-submodules</span><span> "{{repo}}"</span><span> .</span></span> <span></span> <span><span>sudo</span><span> hugo</span><span> -D</span></span> <span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> www-data:www-data</span><span> {{project_dir}}/public</span></span> <span></span> <span><span>sudo</span><span> tee</span><span> /etc/nginx/sites-available/{{domain}}</span><span> &gt;</span><span> /dev/null</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>server {</span></span> <span><span> listen 80;</span></span> <span><span> server_name {{domain}} www.{{domain}};</span></span> <span></span> <span><span> root {{project_dir}}/public;</span></span> <span><span> index index.html;</span></span> <span></span> <span><span> access_log /var/log/nginx/{{domain}}_access.log;</span></span> <span><span> error_log /var/log/nginx/{{domain}}_error.log;</span></span> <span></span> 
<span><span> location / {</span></span> <span><span> try_files </span><span>\$</span><span>uri </span><span>\$</span><span>uri/ =404;</span></span> <span><span> }</span></span> <span><span>}</span></span> <span><span>EOF</span></span> <span></span> <span><span>sudo</span><span> ln</span><span> -s</span><span> /etc/nginx/sites-available/{{domain}}</span><span> /etc/nginx/sites-enabled/</span></span> <span></span> <span><span>sudo</span><span> nginx</span><span> -t</span></span> <span><span>sudo</span><span> systemctl</span><span> reload</span><span> nginx</span></span> <span></span> <span><span>echo</span><span> "✅ Hugo site deployed and available at http://{{domain}}"</span></span></code><span></span><span></span></pre> <p>Here is what the <code>Deploy Hugo Site</code> script does:</p> <ul> <li>Installs Hugo, Git, and Nginx</li> <li>Clones your Hugo project from a Git repository</li> <li>Builds the project</li> <li>Configures Nginx to serve the generated site</li> </ul> <h2>Define the Variable Group</h2> <p>Now, before running the deployment script, you need to define values for the placeholders <code>{{domain}}</code>, <code>{{repo}}</code>, and <code>{{project_dir}}</code> used in the script. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. 
This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.DJ9C_Jyf.png" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>domain</code>:</strong> Your Hugo site domain name</li> <li><strong><code>repo</code>:</strong> The URL of your GitHub repository</li> <li><strong><code>project_dir</code>:</strong> Your Hugo deployment directory</li> </ul> <p>With the variable group set up, you can proceed to run the script with CloudRay.</p> <h2>Run the Deployment Script</h2> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>Once the script is saved, you can deploy your Hugo site by creating a Runlog:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier.</li> <li>Script: Choose the “Deploy Hugo Site”</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/create-runlog.DFVytlMZ.png" alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/output-runlog.CrV4YK9l.png" alt="Screenshot of the output of the Deploy Hugo Site script" 
/> <p>CloudRay will automatically connect to your server, run the <code>Deploy Hugo Site</code>, and provide live logs to track the process. If any errors occur, you can review the logs to troubleshoot the issue.</p> <p>Once the deployment is complete, you can visit <code>http://&lt;your_domain&gt;</code>. You should see your Hugo site up and running.</p> <img src="/_astro/hugo-site-running.BtkZvwP8.png" alt="Screenshot showing Hugo Site running" /> <h2>Next Step</h2> <p>To improve your deployment and security:</p> <ul> <li>You can use Let’s Encrypt to add HTTPS support</li> <li>Set up automatic redeployment with webhooks or CI/CD</li> </ul> <h2>Summary</h2> <p>With CloudRay, deploying your Hugo site to Ubuntu becomes a structured and repeatable process. Scripts remain organised, server access is centralised, and you can easily tweak configurations with variable groups.</p> <p>For more deployment guides and use cases, check out our <a href="/articles">CloudRay Guides</a> or explore the <a href="/docs/getting-started">CloudRay Docs</a>.</p>How to Install Statamic on Ubuntuhttps://cloudray.io/articles/install-statamic-on-ubuntuhttps://cloudray.io/articles/install-statamic-on-ubuntuLearn how to install Statamic to an Ubuntu server using a reusable Bash script and CloudRay’s centralised automation platform.Fri, 04 Jul 2025 00:00:00 GMT<p><a href="https://statamic.com/">Statamic</a> is a flat‑file CMS built on Laravel that lets you publish dynamic content without a traditional database. 
On Ubuntu you typically install PHP, Composer, the Statamic CLI, and then configure Nginx by hand.</p> <p>This guide shows how to turn that repeatable setup into a single Bash script you can run from <a href="https://cloudray.io">CloudRay</a> so every new server starts with an identical, working Statamic site.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-a-deployment-script">Create a Deployment Script</a></li> <li><a href="#define-the-variable-group">Define the Variable Group</a></li> <li><a href="#run-the-installation-script">Run the Installation Script</a></li> <li><a href="#next-step">Next Step</a></li> <li><a href="#summary">Summary</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your <a href="/docs/machines">machine</a> is added to a CloudRay project and connected using the <a href="/docs/agent">CloudRay Agent</a>.</p> <h2>Create a Deployment Script</h2> <p>Now that your machine is connected to CloudRay, let’s create a reusable Bash script to automate the installation process. You need to follow these steps to create the script in CloudRay.</p> <img src="/_astro/install-script.BQ8IlYch.png" alt="Screenshot for installation script for statamic" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Install Statamic Site</code>. 
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># === Exit on error ===</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># === UPDATE SYSTEM ===</span></span> <span><span>sudo</span><span> apt</span><span> update</span><span> -y</span></span> <span></span> <span><span># === CREATE USER NON-INTERACTIVELY ===</span></span> <span><span>sudo</span><span> useradd</span><span> -m</span><span> -s</span><span> /bin/bash</span><span> "{{username}}"</span></span> <span><span>echo</span><span> "{{username}}:{{password}}"</span><span> |</span><span> sudo</span><span> chpasswd</span></span> <span><span>sudo</span><span> usermod</span><span> -aG</span><span> sudo</span><span> "{{username}}"</span></span> <span></span> <span><span># === TEMPORARILY ALLOW passwordLESS SUDO FOR USER ===</span></span> <span><span>echo</span><span> "{{username}} ALL=(ALL) NOPASSWD:ALL"</span><span> |</span><span> sudo</span><span> tee</span><span> "/etc/sudoers.d/{{username}}"</span></span> <span><span>sudo</span><span> chmod</span><span> 0440</span><span> "/etc/sudoers.d/{{username}}"</span></span> <span></span> <span><span># === EXECUTE THE REST AS THE NEW USER ===</span></span> <span><span>sudo</span><span> -i</span><span> -u</span><span> "{{username}}"</span><span> bash</span><span> &lt;&lt;</span><span> 'EOF'</span></span> <span><span># Configurable domain and PHP version inside user context</span></span> <span></span> <span><span># === INSTALL REQUIRED PACKAGES ===</span></span> <span><span>sudo apt install -y php-common php-fpm php-json php-mbstring zip unzip php-zip php-cli php-xml php-tokenizer php-curl git nginx</span></span> <span></span> <span><span># === INSTALL COMPOSER ===</span></span> <span><span>curl -sS https://getcomposer.org/installer | php</span></span> <span><span>sudo mv composer.phar /usr/local/bin/composer</span></span> <span></span> 
<span><span># === CHECK COMPOSER VERSION ===</span></span> <span><span>composer --version</span></span> <span></span> <span><span># === INSTALL STATAMIC CLI ===</span></span> <span><span>composer global require statamic/cli</span></span> <span></span> <span><span># === SET PATH MANUALLY FOR CURRENT SESSION ===</span></span> <span><span>export PATH="$HOME/.config/composer/vendor/bin:$HOME/.composer/vendor/bin:$PATH"</span></span> <span></span> <span><span># === VERIFY STATAMIC INSTALLATION ===</span></span> <span><span>which statamic</span></span> <span><span>statamic -V</span></span> <span></span> <span><span># === CREATE PROJECT DIRECTORY ===</span></span> <span><span>cd /var/www</span></span> <span><span>sudo chown $USER:www-data /var/www</span></span> <span></span> <span><span># === CREATE NEW STATAMIC SITE ===</span></span> <span><span>script -q -c "statamic new --no-interaction {{domain}}"</span></span> <span></span> <span><span># === PERSIST PATH FOR FUTURE SESSIONS ===</span></span> <span><span>echo 'export PATH="$HOME/.config/composer/vendor/bin:$PATH"' &gt;&gt; ~/.bashrc</span></span> <span><span>echo 'export PATH="$HOME/.composer/vendor/bin:$PATH"' &gt;&gt; ~/.bashrc</span></span> <span><span>EOF</span></span> <span></span> <span><span># === SET CORRECT PERMISSIONS ===</span></span> <span><span>sudo</span><span> chmod</span><span> -R</span><span> 755</span><span> /var/www/{{domain}}</span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> www-data:www-data</span><span> /var/www/{{domain}}</span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> www-data:www-data</span><span> /var/www/{{domain}}/{storage,content,bootstrap/cache}</span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> www-data:www-data</span><span> /var/www/{{domain}}/storage</span><span> /var/www/{{domain}}/bootstrap/cache</span></span> <span><span>sudo</span><span> chmod</span><span> -R</span><span> ug+rw</span><span> 
/var/www/{{domain}}/storage</span><span> /var/www/{{domain}}/bootstrap/cache</span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> www-data:www-data</span><span> /var/www/{{domain}}/storage</span><span> /var/www/{{domain}}/bootstrap/cache</span></span> <span></span> <span><span># === SETUP NGINX CONFIGURATION ===</span></span> <span><span>sudo</span><span> tee</span><span> /etc/nginx/sites-available/{{domain}}</span><span> &gt;</span><span> /dev/null</span><span> &lt;&lt;</span><span> NGINX_CONF</span></span> <span><span>server {</span></span> <span><span> listen 80;</span></span> <span><span> server_name {{domain}} www.{{domain}};</span></span> <span><span> root /var/www/{{domain}}/public;</span></span> <span></span> <span><span> index index.html index.htm index.php;</span></span> <span><span> charset utf-8;</span></span> <span></span> <span><span> add_header X-Frame-Options "SAMEORIGIN";</span></span> <span><span> add_header X-XSS-Protection "1; mode=block";</span></span> <span><span> add_header X-Content-Type-Options "nosniff";</span></span> <span></span> <span><span> set </span><span>\$</span><span>try_location @static;</span></span> <span></span> <span><span> if (</span><span>\$</span><span>request_method != GET) {</span></span> <span><span> set </span><span>\$</span><span>try_location @not_static;</span></span> <span><span> }</span></span> <span></span> <span><span> if (</span><span>\$</span><span>args ~* "live-preview=(.*)") {</span></span> <span><span> set </span><span>\$</span><span>try_location @not_static;</span></span> <span><span> }</span></span> <span></span> <span><span> location / {</span></span> <span><span> try_files </span><span>\$</span><span>uri </span><span>\$</span><span>try_location;</span></span> <span><span> }</span></span> <span></span> <span><span> location @static {</span></span> <span><span> try_files /static</span><span>\$</span><span>{uri}_</span><span>\$</span><span>args.html </span><span>\$</span><span>uri 
</span><span>\$</span><span>uri/ /index.php?</span><span>\$</span><span>args;</span></span> <span><span> }</span></span> <span></span> <span><span> location @not_static {</span></span> <span><span> try_files </span><span>\$</span><span>uri /index.php?</span><span>\$</span><span>args;</span></span> <span><span> }</span></span> <span></span> <span><span> location = /favicon.ico { access_log off; log_not_found off; }</span></span> <span><span> location = /robots.txt { access_log off; log_not_found off; }</span></span> <span></span> <span><span> error_page 404 /index.php;</span></span> <span></span> <span><span> location ~ \.php</span><span>\$</span><span> {</span></span> <span><span> fastcgi_pass unix:/var/run/php/php{{php_version}}-fpm.sock;</span></span> <span><span> fastcgi_index index.php;</span></span> <span><span> fastcgi_param SCRIPT_FILENAME </span><span>\$</span><span>realpath_root</span><span>\$</span><span>fastcgi_script_name;</span></span> <span><span> include fastcgi_params;</span></span> <span><span> }</span></span> <span></span> <span><span> location ~ /\.(?!well-known).* {</span></span> <span><span> deny all;</span></span> <span><span> }</span></span> <span><span>}</span></span> <span><span>NGINX_CONF</span></span> <span></span> <span><span># === ENABLE SITE AND RELOAD NGINX ===</span></span> <span><span>sudo</span><span> ln</span><span> -s</span><span> /etc/nginx/sites-available/{{domain}}</span><span> /etc/nginx/sites-enabled/</span></span> <span><span>sudo</span><span> nginx</span><span> -t</span><span> &amp;&amp; </span><span>sudo</span><span> systemctl</span><span> reload</span><span> nginx</span></span> <span></span> <span></span> <span><span># === CLEANUP: Remove passwordless sudo access for security ===</span></span> <span><span>sudo</span><span> rm</span><span> -f</span><span> "/etc/sudoers.d/{{username}}"</span></span> <span></span> <span><span>echo</span><span> "✅ Statamic site successfully installed at 
http://{{domain}}"</span></span></code><span></span><span></span></pre> <p>Here is what the <code>Install Statamic Site</code> script does:</p> <ul> <li>Creates a non-root user and installs PHP, Composer and the Statamic CLI</li> <li>Generates a fresh Statamic site</li> <li>Sets correct file ownership and writes an Nginx virtual-host file</li> <li>Reloads Nginx and prints the site URL when finished</li> </ul> <h2>Define the Variable Group</h2> <p>Now, before running the deployment script, you need to define values for the placeholders <code>{{username}}</code>, <code>{{password}}</code>, <code>{{domain}}</code>, and <code>{{php_version}}</code> used in the script. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.BPFCW798.png" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “New Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>username</code>:</strong> Linux user that owns the web root</li> <li><strong><code>password</code>:</strong> Password for that user (used once then revoked)</li> <li><strong><code>domain</code>:</strong> Public domain or IP of the site</li> <li><strong><code>php_version</code>:</strong> PHP version to install (for example, <code>8.3</code>)</li> </ul> <p>With the variables set up, you can proceed to run the script with CloudRay.</p> <h2>Run the Installation Script</h2> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>Once the script is saved, you can install Statamic by creating a
Runlog:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier.</li> <li>Script: Choose the “Install Statamic Site” script</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/create-runlog.CMpa09Gq.png" alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/output-runlog.bBbqgNZO.png" alt="Screenshot of the output of the Install Statamic Site script" /> <p>CloudRay will automatically connect to your server, run the <code>Install Statamic Site</code> script, and provide live logs to track the process. If any errors occur, you can review the logs to troubleshoot the issue.</p> <p>Once the deployment is complete, you can visit <code>http://&lt;your_domain&gt;</code>. You should see your Statamic site up and running.</p> <img src="/_astro/statamic-running.BYpaFLtM.png" alt="Screenshot showing Statamic running" /> <h2>Next Step</h2> <p>To improve your deployment and security:</p> <ul> <li>Use Let’s Encrypt to add HTTPS support</li> <li>Use additional CloudRay scripts to automate database dumps, log rotation, or resource checks.</li> </ul> <h2>Summary</h2> <p>With CloudRay, automating the installation of Statamic on Ubuntu becomes a structured and repeatable process. Scripts remain organised, server access is centralised, and you can easily tweak configurations with variable groups.
Storing this workflow in CloudRay means it’s versioned, reusable, and one‑click reproducible on any new Ubuntu machine.</p> <p>For more installation guides and use cases, check out our <a href="/articles">CloudRay Guides</a> or explore the <a href="/docs/getting-started">CloudRay Docs</a>.</p>How to Deploy Your Astro Sitehttps://cloudray.io/articles/how-to-deploy-your-astro-sitehttps://cloudray.io/articles/how-to-deploy-your-astro-siteLearn how to deploy your Astro site to an Ubuntu server using a reusable Bash script and CloudRay’s centralised automation platform.Mon, 23 Jun 2025 00:00:00 GMT<p><a href="https://astro.build/">Astro</a> is a modern static site builder that enables fast, optimised frontend delivery using a component-based architecture. In this guide, we walk you through deploying an Astro project to an Ubuntu server using <a href="https://cloudray.io">CloudRay</a>, a platform for managing infrastructure and running automation Bash scripts at scale.</p> <p>This guide assumes you have a basic Astro project already created. 
However, if you’ve not created an Astro project yet, check out the <a href="https://docs.astro.build/en/getting-started/">Astro Getting Started guide</a> to learn more and choose from the many examples to build your own Astro project.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-a-deployment-script">Create a Deployment Script</a></li> <li><a href="#define-the-variable-group">Define the Variable Group</a></li> <li><a href="#run-the-deployment-script">Run the Deployment Script</a></li> <li><a href="#next-step">Next Step</a></li> <li><a href="#summary">Summary</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your <a href="/docs/machines">machine</a> is added to a CloudRay project and connected using the <a href="/docs/agent">CloudRay Agent</a>.</p> <h2>Create a Deployment Script</h2> <p>Now that your machine is connected to CloudRay, let’s create a reusable Bash script to automate the deployment process. Follow these steps to create the script in CloudRay:</p> <img src="/_astro/deployment-script.Ct1rO0RP.png" alt="Screenshot for deployment script of Astro site" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Deploy Astro Site</code>. 
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit immediately if a command exits with a non-zero status</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># -----------------------------</span></span> <span><span># Install Node.js and dependencies</span></span> <span><span># -----------------------------</span></span> <span><span>apt</span><span> update</span><span> &amp;&amp; </span><span>apt</span><span> install</span><span> -y</span><span> curl</span><span> unzip</span><span> git</span></span> <span><span>curl</span><span> -fsSL</span><span> https://deb.nodesource.com/setup_18.x</span><span> |</span><span> bash</span><span> -</span></span> <span><span>apt</span><span> install</span><span> -y</span><span> nodejs</span></span> <span></span> <span><span># -----------------------------</span></span> <span><span># Clone the Astro Project</span></span> <span><span># -----------------------------</span></span> <span><span>rm</span><span> -rf</span><span> {{clone_dir}}</span></span> <span><span>git</span><span> clone</span><span> {{repo_url}}</span><span> {{clone_dir}}</span></span> <span><span>cd</span><span> {{clone_dir}}</span></span> <span></span> <span><span># -----------------------------</span></span> <span><span># Build the Astro Project</span></span> <span><span># -----------------------------</span></span> <span><span>npm</span><span> install</span></span> <span><span>npm</span><span> run</span><span> build</span></span> <span></span> <span><span># -----------------------------</span></span> <span><span># Deploy to Target Directory</span></span> <span><span># -----------------------------</span></span> <span><span>mkdir</span><span> -p</span><span> {{deploy_dir}}</span></span> <span><span>cp</span><span> -r</span><span> dist/</span><span>*</span><span> {{deploy_dir}}</span></span> <span></span> <span><span># 
-----------------------------</span></span> <span><span># Install and Configure Nginx</span></span> <span><span># -----------------------------</span></span> <span><span>apt</span><span> install</span><span> -y</span><span> nginx</span></span> <span><span>rm</span><span> -f</span><span> /etc/nginx/sites-enabled/default</span></span> <span></span> <span><span>cat</span><span> &gt;</span><span> /etc/nginx/sites-available/astro-site</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>server {</span></span> <span><span> listen 80;</span></span> <span><span> server_name _;</span></span> <span></span> <span><span> root {{deploy_dir}};</span></span> <span><span> index index.html;</span></span> <span></span> <span><span> location / {</span></span> <span><span> try_files </span><span>\$</span><span>uri </span><span>\$</span><span>uri/ =404;</span></span> <span><span> }</span></span> <span><span>}</span></span> <span><span>EOF</span></span> <span></span> <span><span>ln</span><span> -s</span><span> /etc/nginx/sites-available/astro-site</span><span> /etc/nginx/sites-enabled/astro-site</span></span> <span><span>nginx</span><span> -t</span><span> &amp;&amp; </span><span>systemctl</span><span> restart</span><span> nginx</span></span></code><span></span><span></span></pre> <p>Here is what the <code>Deploy Astro Site</code> script does:</p> <ul> <li>Installs Node.js and dependencies</li> <li>Clones your Astro project from a Git repository</li> <li>Builds the Astro site</li> <li>Copies the production files to <code>{{deploy_dir}}</code></li> <li>Installs and configures Nginx</li> </ul> <h2>Define the Variable Group</h2> <p>Now, before running the deployment script, you need to define values for the placeholders <code>{{repo_url}}</code>, <code>{{deploy_dir}}</code>, and <code>{{clone_dir}}</code> used in the script. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. 
This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.DmpwLo6a.png" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “New Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>repo_url</code>:</strong> The URL of your GitHub repository</li> <li><strong><code>deploy_dir</code>:</strong> Your Astro deployment directory</li> <li><strong><code>clone_dir</code>:</strong> The local directory the repository is cloned into</li> </ul> <p>With the variables set up, you can proceed to run the script with CloudRay.</p> <h2>Run the Deployment Script</h2> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>Once the script is saved, you can deploy your Astro site by creating a Runlog:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier.</li> <li>Script: Choose the “Deploy Astro Site” script</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/create-runlog.DsncdSeN.png" alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/output-runlog.ixLuF6VS.png" alt="Screenshot of the output of 
the Deploy Astro Site script" /> <p>CloudRay will automatically connect to your server, run the <code>Deploy Astro Site</code> script, and provide live logs to track the process. If any errors occur, you can review the logs to troubleshoot the issue.</p> <p>Once the deployment is complete, you can visit <code>http://&lt;your-server-ip&gt;</code>. You should see your Astro site up and running.</p> <h2>Next Step</h2> <p>To improve your deployment and security:</p> <ul> <li>You can use Let’s Encrypt to add HTTPS support</li> <li>Set up automatic redeployment with webhooks or CI/CD</li> </ul> <h2>Summary</h2> <p>With CloudRay, deploying your Astro site to an Ubuntu server becomes a structured and repeatable process. Scripts remain organised, server access is centralised, and you can easily tweak configurations with variable groups.</p> <p>For more deployment guides and use cases, check out our <a href="/articles">CloudRay Guides</a> or explore the <a href="/docs/getting-started">CloudRay Docs</a>.</p>How to Install Ghost CMS on Ubuntu 24.04https://cloudray.io/articles/how-to-install-ghost-cms-on-ubuntu-24-04https://cloudray.io/articles/how-to-install-ghost-cms-on-ubuntu-24-04Learn how to use reusable Bash scripts to install Ghost CMS on Ubuntu with Nginx MySQL and Ghost CLI for secure automated setupFri, 20 Jun 2025 00:00:00 GMT<p>Ghost CMS is a blazing-fast, modern publishing platform for building professional blogs and newsletters. It’s built on Node.js and designed to be simple, secure, and extendable. 
In this guide, you’ll learn how to install and deploy Ghost CMS on Ubuntu 24.04 using Nginx, MySQL, and the official Ghost CLI.</p> <p>For teams managing multiple servers or performing repeatable setups, <a href="https://cloudray.io">CloudRay</a> can help automate the installation and deployment of Ghost CMS using Bash scripts, without needing to SSH into each server manually.</p> <h2>Contents</h2> <ul> <li><a href="#update-the-system--create-a-ghost-user">Update the System &amp; Create a Ghost User</a></li> <li><a href="#install-and-configure-nginx">Install and Configure Nginx</a></li> <li><a href="#install-and-configure-mysql">Install and Configure MySQL</a></li> <li><a href="#install-nodejs-and-ghost-cli">Install Node.js and Ghost CLI</a></li> <li><a href="#create-ghost-directory-and-install-ghost">Create Ghost Directory and Install Ghost</a></li> </ul> <h2>Update the System &amp; Create a Ghost User</h2> <p>To begin, update your system’s package index and create a dedicated user for managing Ghost. 
This helps isolate your CMS from the root user and enhances security.</p> <ol> <li>Update the APT package index</li> </ol> <pre><code><span><span>sudo</span><span> apt</span><span> update</span></span></code><span></span><span></span></pre> <ol> <li>Create a non-root user named <code>ghost</code></li> </ol> <pre><code><span><span>sudo</span><span> adduser</span><span> ghost</span></span></code><span></span><span></span></pre> <p>When prompted, enter a strong password and optionally fill in the user details.</p> <ol> <li>Grant the <code>ghost</code> user sudo privileges</li> </ol> <pre><code><span><span>sudo</span><span> adduser</span><span> ghost</span><span> sudo</span></span></code><span></span><span></span></pre> <ol> <li>Switch to the <code>ghost</code> user</li> </ol> <pre><code><span><span>su</span><span> -</span><span> ghost</span></span></code><span></span><span></span></pre> <p>From this point on, you will operate as the <code>ghost</code> user for the remainder of the installation.</p> <h2>Install and Configure Nginx</h2> <p>Ghost CMS uses Nginx as a reverse proxy to serve your site over HTTP or HTTPS.</p> <ol> <li>Install Nginx</li> </ol> <pre><code><span><span>sudo</span><span> apt</span><span> install</span><span> nginx</span><span> -y</span></span></code><span></span><span></span></pre> <ol> <li>Allow Nginx through the firewall</li> </ol> <pre><code><span><span>sudo</span><span> ufw</span><span> allow</span><span> 'Nginx Full'</span></span></code><span></span><span></span></pre> <ol> <li>Confirm Nginx is running</li> </ol> <pre><code><span><span>sudo</span><span> systemctl</span><span> status</span><span> nginx</span></span></code><span></span><span></span></pre> <p>You should see output showing that the Nginx service is active (running).</p> <img src="/_astro/confirm-nginx.BiWhZ6Fc.jpg" alt="Confirm Nginx status" /> <p>If it’s not active, start the Nginx service:</p> <pre><code><span><span>sudo</span><span> systemctl</span><span> 
start</span><span> nginx</span></span></code><span></span><span></span></pre> <h2>Install and Configure MySQL</h2> <p>Ghost CMS uses MySQL to store all its content and metadata. Let’s install and set up a new database for Ghost.</p> <ol> <li>Install MySQL Server</li> </ol> <pre><code><span><span>sudo</span><span> apt</span><span> install</span><span> mysql-server</span><span> -y</span></span></code><span></span><span></span></pre> <ol> <li>Access the MySQL shell</li> </ol> <pre><code><span><span>sudo</span><span> mysql</span></span></code><span></span><span></span></pre> <ol> <li>Create a dedicated database and user for Ghost</li> </ol> <pre><code><span><span>CREATE</span><span> USER</span><span> '</span><span>ghostuser</span><span>'@</span><span>'localhost'</span><span> IDENTIFIED </span><span>BY</span><span> 'your_strong_password'</span><span>;</span></span> <span><span>CREATE</span><span> DATABASE</span><span> ghostdb</span><span>;</span></span> <span><span>GRANT</span><span> ALL </span><span>ON</span><span> ghostdb.</span><span>*</span><span> TO</span><span> 'ghostuser'</span><span>@</span><span>'localhost'</span><span>;</span></span> <span><span>FLUSH PRIVILEGES;</span></span> <span><span>EXIT;</span></span></code><span></span><span></span></pre> <p>Replace <code>your_strong_password</code> with a strong password of your choice.</p> <p>Looking to automate your MySQL setup? 
Check out this guide on <a href="https://cloudray.io/articles/deploy-mysql-server">automating MySQL installation in CloudRay</a>.</p> <h2>Install Node.js and Ghost CLI</h2> <p>Ghost CMS is built on Node.js, so you’ll need to install Node and the Ghost CLI to manage your Ghost instance.</p> <ol> <li>Install Node.js and npm</li> </ol> <pre><code><span><span>sudo</span><span> apt</span><span> install</span><span> nodejs</span><span> npm</span><span> -y</span></span></code><span></span><span></span></pre> <ol> <li>Install Ghost CLI globally using npm</li> </ol> <pre><code><span><span>sudo</span><span> npm</span><span> install</span><span> -g</span><span> ghost-cli@latest</span></span></code><span></span><span></span></pre> <ol> <li>Verify the installation</li> </ol> <pre><code><span><span>ghost</span><span> --version</span></span></code><span></span><span></span></pre> <p>This command confirms that Ghost CLI was installed correctly. You should see the version number in the output.</p> <h2>Create Ghost Directory and Install Ghost</h2> <p>Now that your environment is ready, it’s time to create a directory where Ghost will live and set the appropriate permissions.</p> <ol> <li>Create the Ghost installation directory</li> </ol> <pre><code><span><span>sudo</span><span> mkdir</span><span> -p</span><span> /var/www/ghost</span></span></code><span></span><span></span></pre> <ol> <li>Change the directory ownership to the <code>ghost</code> user</li> </ol> <pre><code><span><span>sudo</span><span> chown</span><span> ghost:ghost</span><span> /var/www/ghost</span></span></code><span></span><span></span></pre> <ol> <li>Give the right permissions to the directory</li> </ol> <pre><code><span><span>sudo</span><span> chmod</span><span> 775</span><span> /var/www/ghost</span></span></code><span></span><span></span></pre> <ol> <li>Navigate to the Ghost directory</li> </ol> <pre><code><span><span>cd</span><span> /var/www/ghost/</span></span></code><span></span><span></span></pre> <ol> 
<li>Install Ghost using the CLI</li> </ol> <pre><code><span><span>ghost</span><span> install</span></span></code><span></span><span></span></pre> <p>The CLI will guide you through configuration steps like setting your blog URL, configuring MySQL credentials, and setting up Nginx and SSL.</p> <img src="/_astro/ghost-setup.ChcbqSCs.jpg" alt="Screenshot showing Ghost setup" /> <p>If all dependencies are met, Ghost will install and start automatically.</p> <img src="/_astro/ghost-setup2.aArH4ist.jpg" alt="Screenshot showing Ghost setup" /> <p>You can then visit your Ghost CMS website at your domain.</p> <img src="/_astro/ghost-website.zbc9CpNL.jpg" alt="Screenshot showing Ghost setup" /> <p>Manually setting up Ghost CMS across multiple servers can be time-consuming, especially when repeating the same steps on each server. <a href="https://cloudray.io">CloudRay</a> simplifies this process by allowing you to create reusable Bash scripts that can be executed remotely across your infrastructure.</p> <p>With CloudRay, you can automate the entire Ghost CMS setup:</p> <ul> <li>Create and configure the <code>ghost</code> user</li> <li>Install required packages like Node.js, Nginx, and MySQL</li> <li>Set up the Ghost CLI and automate the Ghost installation</li> <li>Configure Nginx reverse proxy and SSL</li> </ul> <p>Before you automate the installation process, connect your server to CloudRay using the <a href="/docs/agent">CloudRay agent</a>. 
This allows you to run bash scripts directly from the dashboard without SSH access.</p>Automate Installation of LAMP Stack on Ubuntu Using Bash Scripthttps://cloudray.io/articles/automate-installation-of-lamp-stack-on-ubuntu-using-bash-scripthttps://cloudray.io/articles/automate-installation-of-lamp-stack-on-ubuntu-using-bash-scriptLearn how to automatically install a LAMP stack on Ubuntu 22.04 using a Bash script and streamline deployment with CloudRayThu, 19 Jun 2025 00:00:00 GMT<p>Setting up a LAMP stack (Linux, Apache, MySQL, PHP) on a fresh Ubuntu server is often the first step when provisioning a web server. While it’s a common task, doing it manually across multiple servers or environments is time-consuming and error-prone.</p> <p>This guide shows you how to automatically install the LAMP stack on Ubuntu 24.04 using a Bash script. Whether you’re configuring a single droplet or orchestrating a fleet of cloud instances, using a shell script to automate LAMP installation simplifies the process and enforces consistency. You will learn how to run the scripts using <a href="https://cloudray.io">CloudRay</a>, a centralised platform that helps you organise, run, and manage your bash scripts more effectively.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#creating-the-automation-script">Creating the Automation Script</a></li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-script-with-cloudray">Running the Script with CloudRay</a></li> <li><a href="#related-guides">Related Guides</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your target servers are connected to CloudRay. 
If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/agent">Agent docs</a> to add and manage your server.</p> <h2>Creating the Automation Script</h2> <p>To automate this process across one or more Ubuntu servers, start by adding your <code>LAMP Stack Installation</code> to your CloudRay project.</p> <p>Follow these steps to create the script in CloudRay:</p> <img src="/_astro/automation-script.jELQDYzy.jpg" alt="Screenshot of the automation script for LAMP stack" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>LAMP Stack Installation</code></li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># -------------------------</span></span> <span><span># Update and Install Apache</span></span> <span><span># -------------------------</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> apache2</span><span> -y</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> apache2</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> apache2</span></span> <span><span>sudo</span><span> systemctl</span><span> status</span><span> apache2</span></span> <span><span>sudo</span><span> ufw</span><span> allow</span><span> 80/tcp</span></span> <span></span> <span><span># -------------------------</span></span> <span><span># Install and Secure MySQL</span></span> <span><span># -------------------------</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> mysql-server</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> mysql</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> mysql</span></span> 
<span><span>sudo</span><span> systemctl</span><span> status</span><span> mysql</span></span> <span><span>printf</span><span> "y\n2\ny\ny\ny\ny\n"</span><span> |</span><span> sudo</span><span> mysql_secure_installation</span></span> <span></span> <span><span># -------------------------</span></span> <span><span># Configure MySQL Root User</span></span> <span><span># -------------------------</span></span> <span><span>sudo</span><span> mysql</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>ALTER USER 'root'@'localhost' IDENTIFIED BY '{{db_password}}';</span></span> <span><span>FLUSH PRIVILEGES;</span></span> <span><span>EXIT</span></span> <span><span>EOF</span></span> <span></span> <span><span># -------------------------</span></span> <span><span># Create Database and User</span></span> <span><span># -------------------------</span></span> <span><span>sudo</span><span> mysql</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>CREATE DATABASE {{db_name}};</span></span> <span><span>CREATE USER '{{db_user}}'@'localhost' IDENTIFIED BY '{{db_password}}';</span></span> <span><span>GRANT ALL PRIVILEGES ON {{db_name}}.* TO '{{db_user}}'@'localhost';</span></span> <span><span>FLUSH PRIVILEGES;</span></span> <span><span>EXIT</span></span> <span><span>EOF</span></span> <span></span> <span><span># -------------------------</span></span> <span><span># Install PHP 8.3 and Extensions</span></span> <span><span># -------------------------</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> php</span><span> php-fpm</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> php-mysql</span><span> php-opcache</span><span> php-cli</span><span> libapache2-mod-php</span></span> <span></span> <span><span># -------------------------</span></span> <span><span># Start and Enable PHP 8.3 FPM</span></span> <span><span># -------------------------</span></span> 
<span><span>sudo</span><span> systemctl</span><span> start</span><span> php8.3-fpm</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> php8.3-fpm</span></span> <span><span>sudo</span><span> systemctl</span><span> status</span><span> php8.3-fpm</span></span> <span></span> <span><span># -------------------------</span></span> <span><span># Configure Apache to Use PHP 8.3 FPM</span></span> <span><span># -------------------------</span></span> <span><span>sudo</span><span> a2enmod</span><span> proxy_fcgi</span><span> setenvif</span></span> <span><span>sudo</span><span> a2enconf</span><span> php8.3-fpm</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> apache2</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> php8.3-fpm</span></span> <span></span> <span><span># -------------------------</span></span> <span><span># Remove Default Apache Site</span></span> <span><span># -------------------------</span></span> <span><span>sudo</span><span> rm</span><span> -rf</span><span> /etc/apache2/sites-enabled/000-default.conf</span></span> <span><span>sudo</span><span> rm</span><span> -rf</span><span> /etc/apache2/sites-available/000-default.conf</span></span> <span></span> <span><span># -------------------------</span></span> <span><span># Create Apache Virtual Host</span></span> <span><span># -------------------------</span></span> <span><span>sudo</span><span> tee</span><span> /etc/apache2/sites-available/{{domain_name}}.conf</span><span> &gt;</span><span> /dev/null</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>&lt;VirtualHost *:80&gt;</span></span> <span><span>ServerAdmin webmaster@{{domain_name}}</span></span> <span><span>ServerName {{domain_name}}</span></span> <span><span>DocumentRoot /var/www/{{domain_name}}</span></span> <span></span> <span><span>&lt;Directory /var/www/{{domain_name}}&gt;</span></span> <span><span> Options Indexes 
FollowSymLinks</span></span> <span><span> AllowOverride All</span></span> <span><span> Require all granted</span></span> <span><span>&lt;/Directory&gt;</span></span> <span></span> <span><span>&lt;FilesMatch \.php$&gt;</span></span> <span><span> SetHandler "proxy:unix:/var/run/php/php8.3-fpm.sock|fcgi://localhost/"</span></span> <span><span>&lt;/FilesMatch&gt;</span></span> <span></span> <span><span>ErrorLog </span><span>\$</span><span>{APACHE_LOG_DIR}/{{domain_name}}_error.log</span></span> <span><span>CustomLog </span><span>\$</span><span>{APACHE_LOG_DIR}/{{domain_name}}_access.log combined</span></span> <span><span>&lt;/VirtualHost&gt;</span></span> <span><span>EOF</span></span> <span></span> <span><span># -------------------------</span></span> <span><span># Enable New Site</span></span> <span><span># -------------------------</span></span> <span><span>sudo</span><span> a2ensite</span><span> {{domain_name}}.conf</span></span> <span><span>sudo</span><span> apache2ctl</span><span> configtest</span></span> <span></span> <span><span># -------------------------</span></span> <span><span># Create Web Root and Test PHP Page</span></span> <span><span># -------------------------</span></span> <span><span>sudo</span><span> mkdir</span><span> -p</span><span> /var/www/{{domain_name}}</span></span> <span><span>sudo</span><span> tee</span><span> /var/www/{{domain_name}}/info.php</span><span> &gt;</span><span> /dev/null</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>&lt;?php</span></span> <span><span>phpinfo();</span></span> <span><span>?&gt;</span></span> <span><span>EOF</span></span> <span></span> <span><span># -------------------------</span></span> <span><span># Restart Apache</span></span> <span><span># -------------------------</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> apache2</span></span></code><span></span><span></span></pre> <p>Once saved, this script will be ready to execute against any Ubuntu server you 
connect to CloudRay.</p> <h2>Create a Variable Group</h2> <p>To avoid hardcoding sensitive information like the database name, user, password, or domain across multiple servers, CloudRay lets you create variable groups.</p> <p>This script uses variables like <code>{{db_name}}</code>, <code>{{db_user}}</code>, <code>{{db_password}}</code>, and <code>{{domain_name}}</code> because CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. This allows you to use placeholders in your scripts, making them dynamic and reusable across different servers.</p> <p>To provide values for these variables, you’ll need to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>. Here’s how:</p> <img src="/_astro/variables.CXBDkF0K.jpg" alt="Screenshot of adding a new variable group" /> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “New Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>db_name</code>:</strong> The name of the database</li> <li><strong><code>db_user</code>:</strong> The name of the database user</li> <li><strong><code>db_password</code>:</strong> The password for the database user</li> <li><strong><code>domain_name</code>:</strong> The domain name of your LAMP stack server</li> </ul> <h2>Running the Script with CloudRay</h2> <p>Once your script is ready and saved in your CloudRay project, follow these steps to execute it on your Ubuntu server:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier.</li> 
<li>Script: Choose the “LAMP Stack Installation” script</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/new-runlog.CW5AXxQz.jpg" alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/install-runlog-output.BBId0mT9.jpg" alt="Screenshot of the output of the install LAMP stack script" /> <p>CloudRay will securely connect to your server, execute the Bash script, and display real-time output logs. This allows you to monitor each step of the process and troubleshoot any issues directly from the dashboard.</p> <p>Once the script completes successfully, open your browser and visit your domain to verify that Apache and PHP are working correctly.</p> <img src="/_astro/webpage-confirm.BtdtYrq4.jpg" alt="Screenshot of the PHP info page confirming successful LAMP installation" /> <p>You should see the PHP information page, which confirms that your LAMP stack was installed and configured properly.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div> <h2>Related Guides</h2> <ul> <li><a href="/articles/deploy-ruby-on-rails">Deploy Ruby on Rails</a></li> <li><a href="/articles/deploy-jenkins-with-docker-compose">Deploy Jenkins with Docker Compose</a></li> <li><a href="/articles/deploy-laravel">Deploy Laravel</a></li> <li><a href="/articles/deploy-sonarqube">Deploy SonarQube</a></li> <li><a href="/articles/deploy-static-website-from-github">Deploy Static Website from GitHub</a></li> <li><a href="/articles/deploy-nextjs-application">Deploy Next.js Application</a></li> </ul>How to Install Elastic Stack on Ubuntu 24.10https://cloudray.io/articles/how-to-install-elastic-stack-on-ubuntu-24-10https://cloudray.io/articles/how-to-install-elastic-stack-on-ubuntu-24-10Learn how to install and configure the 
Elastic Stack (Elasticsearch, Logstash, Kibana, and Filebeat) on Ubuntu 24.10 to monitor and analyse logs effectively.Mon, 02 Jun 2025 00:00:00 GMT<p>Elastic Stack, formerly known as ELK Stack (Elasticsearch, Logstash, and Kibana), is a powerful suite of open-source tools for searching, analysing, and visualising data in real-time. This guide explains how to install and set up Elastic Stack on Ubuntu.</p> <p>For teams managing multiple servers or performing repeatable setups, <a href="https://cloudray.io">CloudRay</a> can help automate the installation of Elastic Stack using Bash scripts, without needing to SSH into each server manually.</p> <h2>Contents</h2> <ul> <li><a href="#install-java-for-elastic-stack">Install Java for Elastic Stack</a></li> <li><a href="#install-and-configure-elasticsearch">Install and Configure Elasticsearch</a></li> <li><a href="#install-logstash">Install Logstash</a></li> <li><a href="#install-and-configure-kibana">Install and Configure Kibana</a></li> <li><a href="#install-and-configure-filebeat">Install and Configure Filebeat</a></li> <li><a href="#automate-the-installation-of-elastic-stack-with-cloudray">Automate the Installation of Elastic Stack with CloudRay</a></li> </ul> <h2>Install Java for Elastic Stack</h2> <p>Elastic Stack requires Java to run. 
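Before installing, you can check whether a suitable JDK is already present on the machine. The sketch below is illustrative; it only inspects the first line of <code>java --version</code> output and assumes an OpenJDK-style version string:

```shell
#!/usr/bin/env bash
# Preflight: report whether Java is already installed before running the install steps.
if command -v java >/dev/null 2>&1; then
  # OpenJDK 11+ prints e.g. "openjdk 11.0.21 2023-10-17" as the first line
  echo "Java already present: $(java --version 2>/dev/null | head -n 1)"
else
  echo "Java not found; proceed with the OpenJDK install steps below."
fi
```
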
Follow these steps to install Java.</p> <ol> <li>Update the server’s package index</li> </ol> <pre><code><span><span>sudo</span><span> apt</span><span> update</span></span></code><span></span><span></span></pre> <ol> <li>Install the package that allows APT to access repositories over HTTPS</li> </ol> <pre><code><span><span>sudo</span><span> apt</span><span> install</span><span> apt-transport-https</span><span> -y</span></span></code><span></span><span></span></pre> <ol> <li>Install OpenJDK 11 using the APT package manager</li> </ol> <pre><code><span><span>sudo</span><span> apt</span><span> install</span><span> openjdk-11-jdk</span><span> -y</span></span></code><span></span><span></span></pre> <ol> <li>Verify the Java installation</li> </ol> <pre><code><span><span>java</span><span> --version</span></span></code><span></span><span></span></pre> <p>Your output should look similar to this:</p> <img src="/_astro/verify-java.BKUvdxa0.jpg" alt="Verifying Java installation" /> <ol> <li>Set the <code>JAVA_HOME</code> environment variable</li> </ol> <pre><code><span><span>sudo</span><span> nano</span><span> /etc/environment</span></span></code><span></span><span></span></pre> <p>Add the following line at the end of the file:</p> <pre><code><span><span>JAVA_HOME</span><span>=</span><span>"/usr/lib/jvm/java-11-openjdk-amd64"</span></span></code><span></span><span></span></pre> <p>Save the file.</p> <ol> <li>Reload the environment file and confirm the new variable</li> </ol> <pre><code><span><span>source</span><span> /etc/environment</span></span> <span><span>echo</span><span> $JAVA_HOME</span></span></code><span></span><span></span></pre> <h2>Install and Configure Elasticsearch</h2> <p>Elasticsearch is the core engine of the Elastic Stack. It provides distributed search and analytics capabilities across all types of data. 
Follow the steps below to install and configure Elasticsearch on your Ubuntu server.</p> <ol> <li>Import the Elasticsearch GPG key</li> </ol> <pre><code><span><span>wget</span><span> -qO</span><span> -</span><span> https://artifacts.elastic.co/GPG-KEY-elasticsearch</span><span> |</span><span> sudo</span><span> gpg</span><span> --dearmor</span><span> -o</span><span> /usr/share/keyrings/elasticsearch-keyring.gpg</span></span></code><span></span><span></span></pre> <ol> <li>Add the Elasticsearch APT repository</li> </ol> <pre><code><span><span>echo</span><span> "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main"</span><span> |</span><span> sudo</span><span> tee</span><span> /etc/apt/sources.list.d/elastic-8.x.list</span></span></code><span></span><span></span></pre> <ol> <li>Update the APT package index</li> </ol> <pre><code><span><span>sudo</span><span> apt</span><span> update</span></span></code><span></span><span></span></pre> <ol> <li>Install Elasticsearch</li> </ol> <pre><code><span><span>sudo</span><span> apt</span><span> install</span><span> elasticsearch</span><span> -y</span></span></code><span></span><span></span></pre> <ol> <li>Start and enable the Elasticsearch service</li> </ol> <pre><code><span><span>sudo</span><span> systemctl</span><span> start</span><span> elasticsearch</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> elasticsearch</span></span></code><span></span><span></span></pre> <ol> <li>Verify that Elasticsearch is running</li> </ol> <pre><code><span><span>sudo</span><span> systemctl</span><span> status</span><span> elasticsearch</span></span></code><span></span><span></span></pre> <p>You should see output indicating that the service is active and running.</p> <img src="/_astro/confirm-elasticsearch.DuRMz03Y.jpg" alt="Confirm elastic search status" /> <ol> <li>Configure Elasticsearch</li> </ol> <pre><code><span><span>sudo</span><span> 
nano</span><span> /etc/elasticsearch/elasticsearch.yml</span></span></code><span></span><span></span></pre> <p>Inside the file, locate the <code>network</code> and <code>discovery</code> sections, then make the following changes:</p> <ul> <li>Set Elasticsearch to listen on all network interfaces</li> </ul> <pre><code><span><span>network.host:</span><span> 0.0.0.0</span></span></code><span></span><span></span></pre> <ul> <li>Set the discovery seed hosts to an empty array (for single-node setups)</li> </ul> <pre><code><span><span>discovery.seed_hosts:</span><span> []</span></span></code><span></span><span></span></pre> <img src="/_astro/network-setup.CD1V5uPD.jpg" alt="Confirm elastic search status" /> <ul> <li>For basic development environments, you can disable the security layer</li> </ul> <pre><code><span><span>xpack.security.enabled:</span><span> false</span></span></code><span></span><span></span></pre> <p>This is not recommended for production use.</p> <ol> <li>Restart the Elasticsearch service to apply changes</li> </ol> <pre><code><span><span>sudo</span><span> systemctl</span><span> restart</span><span> elasticsearch</span></span></code><span></span><span></span></pre> <ol> <li>Run the following command to check if the Elasticsearch instance is reachable</li> </ol> <pre><code><span><span>curl</span><span> -X</span><span> GET</span><span> "localhost:9200"</span></span></code><span></span><span></span></pre> <p>You should receive a JSON response showing Elasticsearch version details and cluster status.</p> <img src="/_astro/elasticsearch-success.fsFKPTXl.jpg" alt="successful check of Elasticsearch" /> <h2>Install Logstash</h2> <p>Logstash is a powerful data processing pipeline that ingests, transforms, and sends data to your desired destination, typically Elasticsearch. It allows you to parse logs or structured data and filter them before storage or visualisation. 
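Pipeline definitions live in <code>/etc/logstash/conf.d/</code>. For later reference, a minimal Beats-to-Elasticsearch pipeline looks like the sketch below (the filename is hypothetical; port 5044 matches the Filebeat output configured later in this guide, and the index pattern follows Elastic's documented Filebeat example):

```
# /etc/logstash/conf.d/02-beats-pipeline.conf  (hypothetical filename)
input {
  beats {
    port => 5044                        # Filebeat ships logs here
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+yyyy.MM.dd}"
  }
}
```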
Follow these steps to install and start Logstash on your Ubuntu server.</p> <ol> <li>Install Logstash</li> </ol> <pre><code><span><span>sudo</span><span> apt</span><span> install</span><span> logstash</span><span> -y</span></span></code><span></span><span></span></pre> <ol> <li>Start and enable the Logstash service</li> </ol> <pre><code><span><span>sudo</span><span> systemctl</span><span> start</span><span> logstash</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> logstash</span></span></code><span></span><span></span></pre> <ol> <li>Check the Logstash service status</li> </ol> <pre><code><span><span>sudo</span><span> systemctl</span><span> status</span><span> logstash</span></span></code><span></span><span></span></pre> <p>You should see that the Logstash service is active (running).</p> <img src="/_astro/confirm-logstash.Bym4rsBy.jpg" alt="Confirm logstash status" /> <h2>Install and Configure Kibana</h2> <p>Kibana is the visualisation layer of the Elastic Stack. It provides a web-based interface for exploring and visualising data stored in Elasticsearch. After installing Kibana, you can access dashboards, perform searches, and monitor your logs in real time. 
Follow the steps below to install and configure Kibana on your Ubuntu server.</p> <ol> <li>Install Kibana</li> </ol> <pre><code><span><span>sudo</span><span> apt</span><span> install</span><span> kibana</span><span> -y</span></span></code><span></span><span></span></pre> <ol> <li>Start and enable the Kibana service</li> </ol> <pre><code><span><span>sudo</span><span> systemctl</span><span> start</span><span> kibana</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> kibana</span></span></code><span></span><span></span></pre> <ol> <li>Check the Kibana service status</li> </ol> <pre><code><span><span>sudo</span><span> systemctl</span><span> status</span><span> kibana</span></span></code><span></span><span></span></pre> <p>You should see an active (running) status if the service started successfully.</p> <img src="/_astro/confirm-kibana.B9mRopoA.jpg" alt="Confirm Kibana status" /> <ol> <li>Configure Kibana by opening the Kibana configuration file</li> </ol> <pre><code><span><span>sudo</span><span> nano</span><span> /etc/kibana/kibana.yml</span></span></code><span></span><span></span></pre> <p>Uncomment and modify the following lines to allow Kibana to bind to any interface and connect to your local Elasticsearch instance:</p> <pre><code><span><span>server.port:</span><span> 5601</span></span> <span><span>server.host:</span><span> "0.0.0.0"</span></span> <span><span>elasticsearch.hosts:</span><span> [</span><span>"http://localhost:9200"</span><span>]</span></span></code><span></span><span></span></pre> <img src="/_astro/settings-kibana.CuG4QHwe.jpg" alt="Adjusting Kibana settings configuration" /> <p>Save and close the file</p> <ol> <li>Restart Kibana to apply the changes</li> </ol> <pre><code><span><span>sudo</span><span> systemctl</span><span> restart</span><span> kibana</span></span></code><span></span><span></span></pre> <ol> <li>Adjust the firewall to open port 5601:</li> </ol> <pre><code><span><span>sudo</span><span> 
ufw</span><span> allow</span><span> 5601/tcp</span></span></code><span></span><span></span></pre> <ol> <li>Once restarted, Kibana will be accessible via your server’s IP address on port <code>5601</code></li> </ol> <pre><code><span><span>http://your_server_ip:5601</span></span></code><span></span><span></span></pre> <img src="/_astro/successful-kibana.DPKixuXb.jpg" alt="Kibana installation successful" /> <h2>Install and Configure Filebeat</h2> <p>Filebeat is a lightweight log shipper that forwards and centralises log data. It collects logs and forwards them to Logstash for processing or directly to Elasticsearch for indexing and visualisation in Kibana. Follow these steps to install and configure Filebeat on your Ubuntu server.</p> <ol> <li>Install Filebeat from the Elastic repository:</li> </ol> <pre><code><span><span>sudo</span><span> apt</span><span> install</span><span> filebeat</span><span> -y</span></span></code><span></span><span></span></pre> <ol> <li>Configure Filebeat by editing the configuration file:</li> </ol> <pre><code><span><span>sudo</span><span> nano</span><span> /etc/filebeat/filebeat.yml</span></span></code><span></span><span></span></pre> <ul> <li>Comment out the Elasticsearch output section to disable direct shipping to Elasticsearch</li> </ul> <pre><code><span><span>#output.elasticsearch:</span></span> <span><span># hosts: ["localhost:9200"]</span></span></code><span></span><span></span></pre> <ul> <li>Uncomment the Logstash output section and set it to point to your local Logstash instance</li> </ul> <pre><code><span><span>output.logstash:</span></span> <span><span> hosts:</span><span> [</span><span>"localhost:5044"</span><span>]</span></span></code><span></span><span></span></pre> <img src="/_astro/filebeat-configuration.C_GWljV2.jpg" alt="Filebeat settings" /> <ol> <li>Enable the system module to collect logs from the system itself</li> </ol> <pre><code><span><span>sudo</span><span> filebeat</span><span> modules</span><span> 
enable</span><span> system</span></span></code><span></span><span></span></pre> <ol> <li>Restart the Elasticsearch service to apply changes</li> </ol> <pre><code><span><span>sudo</span><span> systemctl</span><span> restart</span><span> elasticsearch</span></span></code><span></span><span></span></pre> <ol> <li>Initialise index management (this will temporarily enable Elasticsearch output just for the setup)</li> </ol> <pre><code><span><span>sudo</span><span> filebeat</span><span> setup</span><span> --index-management</span><span> -E</span><span> output.logstash.enabled=</span><span>false</span><span> -E</span><span> 'output.elasticsearch.hosts=["localhost:9200"]'</span></span></code><span></span><span></span></pre> <ol> <li>Start and enable Filebeat to run on system boot:</li> </ol> <pre><code><span><span>sudo</span><span> systemctl</span><span> start</span><span> filebeat</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> filebeat</span></span></code><span></span><span></span></pre> <ol> <li>You can confirm that Filebeat is successfully shipping logs by checking indices in Elasticsearch</li> </ol> <pre><code><span><span>curl</span><span> -XGET</span><span> "localhost:9200/_cat/indices?v"</span></span></code><span></span><span></span></pre> <p>You should see indices prefixed with <code>filebeat-</code> in the output.</p> <img src="/_astro/indices-filebeat.COGfHnUj.jpg" alt="Filebeat showing indices" /> <h2>Automate the Installation of Elastic Stack with CloudRay</h2> <p>Manually setting up the Elastic Stack across multiple servers can be time-consuming, especially when repeating the same steps on each server. 
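Verifying the result is just as repetitive: the same four service-status checks must be run on every host. As a rough sketch, those checks can be consolidated into one script (the <code>CHECK_CMD</code> indirection is an illustration so the loop can also be exercised without systemd):

```shell
#!/usr/bin/env bash
# Consolidated health check for the Elastic Stack services.
# CHECK_CMD defaults to systemd but can be overridden for testing.
CHECK_CMD="${CHECK_CMD:-systemctl is-active --quiet}"

check_stack() {
  local failed=0 svc
  for svc in "$@"; do
    if $CHECK_CMD "$svc"; then
      echo "OK:   $svc"
    else
      echo "DOWN: $svc"
      failed=1
    fi
  done
  return "$failed"
}

# On a real server: check_stack elasticsearch logstash kibana filebeat
```

The function exits non-zero if any service is down, so it can also serve as a scheduled check.
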
<a href="https://cloudray.io">CloudRay</a> simplifies this process by allowing you to create reusable Bash scripts that can be executed remotely across your infrastructure.</p> <p>With CloudRay, you can automate the entire Elastic Stack setup — from installing Java, Elasticsearch, Logstash, Kibana, and Filebeat without logging into each server individually.</p> <p>Before you automate the installation process, connect your server to CloudRay using the <a href="/docs/agent">CloudRay agent</a>. This allows you to run bash scripts directly from the dashboard without SSH access.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>How to Automate PostgreSQL Backup to Amazon S3https://cloudray.io/articles/automate-postgres-backup-to-amazon-s3https://cloudray.io/articles/automate-postgres-backup-to-amazon-s3Learn how to automate the backup of a PostgreSQL database to an Amazon S3 bucket using a reusable Bash script and CloudRay’s centralised automation platform.Mon, 26 May 2025 00:00:00 GMT<p>Automating PostgreSQL backups to Amazon S3 provides a reliable, scalable solution for data protection. S3’s durability and versioning capabilities make it ideal for storing critical database backups offsite.</p> <p>In this article, you will learn how to setup a secure and automated backup process of your PostgreSQL database to an Amazon S3 bucket. To automate the backup process, we will use the built <a href="https://cloudray.io/docs/schedules">CloudRay’s schedule</a> which simplifies running recurring backup script task without manual interventions.</p> <p>This guide assumes you already have a PostgreSQL server up and running. 
If not, you can follow the setup guide on <a href="https://cloudray.io/articles/setting-up-postgres-database">Setting Up a PostgreSQL Database</a></p> <h2>Contents</h2> <ul> <li><a href="#creating-amazon-s3-bucket">Creating Amazon S3 Bucket</a></li> <li><a href="#creating-an-iam-user">Creating an IAM User</a></li> <li><a href="#manual-backup-of-postgresql-database-to-s3">Manual Backup of PostgreSQL Database to S3</a></li> <li><a href="#automate-and-schdule-postgresql-backups-to-amazon-s3-using-cloudray">Automate and Schedule PostgreSQL Backups to Amazon S3 Using CloudRay</a> <ul> <li><a href="#scheduling-postgresql-database-backup-to-amazon-s3-using-cloudray">Scheduling PostgreSQL Database Backup to Amazon S3 Using CloudRay</a></li> </ul> </li> <li><a href="#wrapping-up">Wrapping Up</a></li> <li><a href="#related-guides">Related Guides</a></li> </ul> <h2>Creating Amazon S3 Bucket</h2> <p>First, you create the S3 bucket that will store the PostgreSQL backup files.</p> <ol> <li> <p>In the AWS Console, navigate to the <a href="https://us-east-1.console.aws.amazon.com/s3">Amazon S3 service</a></p> </li> <li> <p>Click on <strong>Create bucket</strong>. You will see the general configuration options</p> </li> <li> <p>Provide a globally unique bucket name, such as “cloudray-postgres-backups”</p> </li> </ol> <img src="/_astro/create-s3.Cz_DtAlt.jpg" alt="Screenshot of creating an S3 bucket on the AWS console" /> <ol> <li>Leave other options at their default values or customise as needed (e.g., enabling versioning or setting up encryption)</li> </ol> <img src="/_astro/defaults-s3.BRY0vgVk.jpg" alt="Screenshot showing defaults setting of s3" /> <ol> <li>Click <strong>Create bucket</strong></li> </ol> <p>This bucket will serve as the storage location for your PostgreSQL backups. 
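Bucket names must be 3–63 characters of lowercase letters, digits, hyphens, and dots, starting and ending with a letter or digit. A quick client-side check can catch invalid names before the console or API rejects them; this is a rough sketch only, since the S3 API remains the authority and also enforces global uniqueness:

```shell
#!/usr/bin/env bash
# Rough client-side validation of S3 bucket naming rules.
valid_bucket_name() {
  local name="$1"
  [ "${#name}" -ge 3 ] && [ "${#name}" -le 63 ] || return 1
  # lowercase letters, digits, dots, hyphens; alphanumeric at both ends
  printf '%s' "$name" | grep -Eq '^[a-z0-9][a-z0-9.-]*[a-z0-9]$' || return 1
  case "$name" in *..*) return 1 ;; esac   # no consecutive dots
  return 0
}

valid_bucket_name "cloudray-postgres-backups" && echo "bucket name looks valid"
```
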
You will reference the created bucket name when assigning permissions to the IAM user.</p> <h2>Creating an IAM User</h2> <p>To securely upload backups to S3, it’s best practice to create a dedicated user with minimal, scoped-down permissions. Begin by creating an IAM group and attaching an inline policy that grants access to a specific S3 bucket. Then create a user and assign it to that group.</p> <p>Here are the steps to get it done:</p> <ol> <li> <p>Navigate to the <a href="https://us-east-1.console.aws.amazon.com/iam/">IAM section of the AWS Management Console</a></p> </li> <li> <p>Click on <strong>User groups</strong> → <strong>Create group</strong>. Name the group (for example, “s3-backup”). Skip attaching policies for now, and proceed to the next step</p> </li> </ol> <img src="/_astro/create-group.4i3Iq7W9.jpg" alt="Screenshot of AWS console creating group" /> <ol> <li>Once the group is created, open it and go to the <strong>Permissions</strong> tab.</li> </ol> <img src="/_astro/locate-permission.DPGjbp2m.jpg" alt="Screenshot to locate permission tab on AWS console" /> <ol> <li>Click <strong>Add permissions</strong> → <strong>Create inline policy</strong></li> </ol> <img src="/_astro/adding-permission.CaXEw7e_.jpg" alt="Screenshot of adding permission on AWS console" /> <ol> <li>Choose the JSON tab and paste the following policy, replacing <code>your-bucket-name</code> with the actual bucket name created earlier:</li> </ol> <pre><code><span><span>{</span></span> <span><span> "Version": "2012-10-17",</span></span> <span><span> "Statement": [</span></span> <span><span> {</span></span> <span><span> "Effect": "Allow",</span></span> <span><span> "Action": [</span></span> <span><span> "s3:ListAllMyBuckets"</span></span> <span><span> ],</span></span> <span><span> "Resource": "*"</span></span> <span><span> },</span></span> <span><span> {</span></span> <span><span> "Effect": "Allow",</span></span> <span><span> "Action": [</span></span> <span><span> 
"s3:ListBucket",</span></span> <span><span> "s3:GetBucketLocation",</span></span> <span><span> "s3:GetBucketAcl"</span></span> <span><span> ],</span></span> <span><span> "Resource": "arn:aws:s3:::your-bucket-name"</span></span> <span><span> },</span></span> <span><span> {</span></span> <span><span> "Effect": "Allow",</span></span> <span><span> "Action": [</span></span> <span><span> "s3:GetObject",</span></span> <span><span> "s3:PutObject",</span></span> <span><span> "s3:DeleteObject",</span></span> <span><span> "s3:GetObjectAcl",</span></span> <span><span> "s3:PutObjectAcl",</span></span> <span><span> "s3:ListBucketVersions",</span></span> <span><span> "s3:ListBucketMultipartUploads",</span></span> <span><span> "s3:AbortMultipartUpload",</span></span> <span><span> "s3:ListMultipartUploadParts"</span></span> <span><span> ],</span></span> <span><span> "Resource": "arn:aws:s3:::your-bucket-name/*"</span></span> <span><span> }</span></span> <span><span> ]</span></span> <span><span>}</span></span></code><span></span><span></span></pre> <img src="/_astro/create-permissions.r0aHndnK.jpg" alt="Screenshot of creating permission" /> <ol> <li>Give your policy a name (for example <strong>s3-backup-policy</strong>) and save the policy</li> </ol> <img src="/_astro/save-policy.CN90l3sl.jpg" alt="image showing saved policy" /> <ol> <li> <p>Navigate to Users → Create users</p> </li> <li> <p>Set the username to <strong>backup-s3-user</strong>. Toogle on the <strong>Provide user access to the AWS Management Console - optional</strong>. Then, create a new password for the created user. 
Click on <strong>Next</strong></p> </li> </ol> <img src="/_astro/create-user-process.CL17cSRA.jpg" alt="screenshot of the user creation process" /> <ol> <li>On the permissions screen, grant the user permissions by selecting the group (<strong>s3-backup</strong>) to which the user will be added</li> </ol> <img src="/_astro/user-permission.DupcJUVg.jpg" alt="screenshot assigning user a permission" /> <ol> <li> <p>Review and create the user</p> </li> <li> <p>Once these are done, you need the Access key and Secret key for programmatic access to AWS resources. To get these keys, go back to <strong>Users</strong>, then click on the user created earlier. Click on <strong>Create access key</strong>.</p> </li> </ol> <img src="/_astro/create-access-key.DI3jHixd.jpg" alt="screenshot creating access key" /> <ol> <li>For the use case, select the <strong>Command Line Interface (CLI)</strong>. Then click on Next</li> </ol> <img src="/_astro/select-cli.CNr9p2m_.jpg" alt="screenshot of selecting CLI" /> <ol> <li>Finally, create the access key and download the <code>.csv</code> file</li> </ol> <img src="/_astro/Download-access-key.rKHu19Rt.jpg" alt="screenshot of downloading the access key" /> <div> <p>NOTE</p> <p>For detailed guidance on setting up AWS CLI credentials, refer to this <a href="https://cloudray.io/articles/aws-cli-setup-guide">AWS CLI setup guide</a>. 
This should be done on the server where your PostgreSQL database is present.</p> </div> <h2>Manual Backup of PostgreSQL Database to S3</h2> <p>Before automating the backup process, it’s important to understand how to create and upload a PostgreSQL backup to your S3 bucket.</p> <p>First, log in to the PostgreSQL database:</p> <pre><code><span><span>sudo</span><span> -u</span><span> postgres</span><span> psql</span></span></code><span></span><span></span></pre> <p>Let’s confirm the databases present:</p> <pre><code><span><span>\l</span></span></code><span></span><span></span></pre> <p>Your result should be similar to the one below</p> <img src="/_astro/show-databases.DWMbDcVM.jpg" alt="screenshot showing output to database present" /> <p>Next, exit from the database:</p> <pre><code><span><span>\q</span></span></code><span></span><span></span></pre> <p>Then use <code>pg_dump</code> to export the database into a <code>.sql</code> file:</p> <pre><code><span><span>pg_dump</span><span> -U</span><span> postgres</span><span> -h</span><span> localhost</span><span> -d</span><span> ecommerce_db</span><span> &gt;</span><span> ecommerce_db_backup.sql</span></span></code><span></span><span></span></pre> <p>You will be prompted to enter your database password. This will create the <code>ecommerce_db_backup.sql</code> file on the server.</p> <p>Additionally, if you want to back up all the databases present, use the following command:</p> <pre><code><span><span>pg_dumpall</span><span> -U</span><span> postgres</span><span> -h</span><span> localhost</span><span> --clean</span><span> &gt;</span><span> all-databases-backup.sql</span></span></code><span></span><span></span></pre> <p>This command will back up all your databases into a single file on the server.</p> <p>Before transferring the backups to S3, ensure you have the AWS CLI configured. 
If you haven’t done that yet, run the following command:</p> <pre><code><span><span>aws</span><span> configure</span></span></code><span></span><span></span></pre> <img src="/_astro/configure-cli.RC3NVmE9.jpg" alt="screenshot showing configured CLI" /> <p>Here you provide:</p> <ul> <li>Access Key ID</li> <li>Secret Access Key</li> <li>Default region (same as your S3 bucket)</li> <li>Output format (e.g., json)</li> </ul> <p>Next, use the AWS CLI <code>s3 cp</code> command to upload the backup file. For instance, let’s upload the <code>ecommerce_db_backup.sql</code> file to S3</p> <pre><code><span><span>aws</span><span> s3</span><span> cp</span><span> ecommerce_db_backup.sql</span><span> s3://cloudray-postgres-backups</span></span></code><span></span><span></span></pre> <p>You can verify your upload by visiting your S3 buckets and clicking on the bucket you created earlier</p> <img src="/_astro/successful-backup-s3.CoZEV9kN.jpg" alt="screenshot showing successful backup to s3" /> <p>You can also verify the successful backup in the CLI by listing the content of the bucket:</p> <pre><code><span><span>aws</span><span> s3</span><span> ls</span><span> s3://cloudray-postgres-backups</span></span></code><span></span><span></span></pre> <p>Your output should be similar to the one below:</p> <img src="/_astro/confirmations-on-cli.BX6EJs3z.jpg" alt="screenshot showing successful backup to s3" /> <h2>Automate and Schedule PostgreSQL Backups to Amazon S3 Using CloudRay</h2> <p>Once you have manually verified your backup process, the next step is automation. This removes the need for constant manual effort and reduces the chances of human error.</p> <p>CloudRay offers a flexible scheduling feature that allows you to run scripts at specified intervals. This is particularly useful for automating recurring tasks like backing up PostgreSQL databases.</p> <p>If you haven’t already, go to <a href="https://app.cloudray.io">https://app.cloudray.io</a> and sign up for an account. 
Once signed in, you will have access to CloudRay’s dashboards for managing scripts and schedules. Additionally, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server.</p> <p>To automate backups, first create a reusable script in CloudRay. This script will perform the backup and can be used across different schedules</p> <p>Here are the steps to take:</p> <img src="/_astro/backup-script._hLaDePr.jpg" alt="screenshot of backup script" /> <ol> <li>Navigate to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name the script: <strong>Backup Postgres Database</strong></li> <li>Paste in the following code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># ===== Generate Timestamp =====</span></span> <span><span>BACKUP_FILE</span><span>=</span><span>"{{db_name}}_$(</span><span>date</span><span> +%F_%H-%M-%S)"</span></span> <span></span> <span><span># ===== Security Checks =====</span></span> <span><span>if</span><span> [ </span><span>-z</span><span> "{{db_pass}}"</span><span> ]; </span><span>then</span></span> <span><span> echo</span><span> "❌ Error: PostgreSQL password not set. 
Use a secure method to provide the password."</span></span> <span><span> exit</span><span> 1</span></span> <span><span>fi</span></span> <span></span> <span><span># ===== Ensure Backup Directory Exists =====</span></span> <span><span>mkdir</span><span> -p</span><span> "{{backup_dir}}"</span></span> <span></span> <span><span># ===== Create Backup =====</span></span> <span><span>echo</span><span> "Creating backup of database: {{db_name}}"</span></span> <span><span>if</span><span> [ </span><span>"{{compress}}"</span><span> =</span><span> true</span><span> ]; </span><span>then</span></span> <span><span> BACKUP_PATH</span><span>=</span><span>"{{backup_dir}}/${</span><span>BACKUP_FILE</span><span>}.sql.gz"</span></span> <span><span> PGPASSWORD</span><span>=</span><span>"{{db_pass}}"</span><span> pg_dump</span><span> -U</span><span> "{{db_user}}"</span><span> -h</span><span> "{{db_host}}"</span><span> -d</span><span> "{{db_name}}"</span><span> --clean</span><span> |</span><span> gzip</span><span> &gt;</span><span> "</span><span>$BACKUP_PATH</span><span>"</span></span> <span><span>else</span></span> <span><span> BACKUP_PATH</span><span>=</span><span>"{{backup_dir}}/${</span><span>BACKUP_FILE</span><span>}.sql"</span></span> <span><span> PGPASSWORD</span><span>=</span><span>"{{db_pass}}"</span><span> pg_dump</span><span> -U</span><span> "{{db_user}}"</span><span> -h</span><span> "{{db_host}}"</span><span> -d</span><span> "{{db_name}}"</span><span> --clean</span><span> &gt;</span><span> "</span><span>$BACKUP_PATH</span><span>"</span></span> <span><span>fi</span></span> <span></span> <span><span># Check backup success</span></span> <span><span>if</span><span> [ </span><span>$?</span><span> -ne</span><span> 0</span><span> ]; </span><span>then</span></span> <span><span> echo</span><span> "❌ Backup failed!"</span></span> <span><span> exit</span><span> 1</span></span> <span><span>fi</span></span> <span><span>echo</span><span> "✅ Backup created: 
</span><span>$BACKUP_PATH</span><span>"</span></span> <span></span> <span><span># ===== Upload to S3 =====</span></span> <span><span>echo</span><span> "Uploading to S3 bucket: {{s3_bucket}}"</span></span> <span><span>if</span><span> [ </span><span>"{{compress}}"</span><span> =</span><span> true</span><span> ]; </span><span>then</span></span> <span><span> aws</span><span> s3</span><span> cp</span><span> "</span><span>$BACKUP_PATH</span><span>"</span><span> "s3://{{s3_bucket}}/${</span><span>BACKUP_FILE</span><span>}.sql.gz"</span></span> <span><span> aws</span><span> s3</span><span> cp</span><span> "</span><span>$BACKUP_PATH</span><span>"</span><span> "s3://{{s3_bucket}}/{{s3_latest_backup}}"</span></span> <span><span>else</span></span> <span><span> aws</span><span> s3</span><span> cp</span><span> "</span><span>$BACKUP_PATH</span><span>"</span><span> "s3://{{s3_bucket}}/${</span><span>BACKUP_FILE</span><span>}.sql"</span></span> <span><span> aws</span><span> s3</span><span> cp</span><span> "</span><span>$BACKUP_PATH</span><span>"</span><span> "s3://{{s3_bucket}}/latest_backup.sql"</span></span> <span><span>fi</span></span> <span></span> <span><span>if</span><span> [ </span><span>$?</span><span> -eq</span><span> 0</span><span> ]; </span><span>then</span></span> <span><span> echo</span><span> "✅ Backup successfully uploaded to S3"</span></span> <span><span>else</span></span> <span><span> echo</span><span> "❌ Failed to upload backup to S3"</span></span> <span><span> exit</span><span> 1</span></span> <span><span>fi</span></span> <span></span> <span><span># ===== Apply Retention Policy =====</span></span> <span><span>echo</span><span> "Cleaning up backups older than {{retention_days}} days..."</span></span> <span><span>if</span><span> [ </span><span>"{{compress}}"</span><span> =</span><span> true</span><span> ]; </span><span>then</span></span> <span><span> find</span><span> "{{backup_dir}}"</span><span> -type</span><span> f</span><span> -name</span><span> 
"*.sql.gz"</span><span> -mtime</span><span> +"{{retention_days}}"</span><span> -delete</span></span> <span><span>else</span></span> <span><span> find</span><span> "{{backup_dir}}"</span><span> -type</span><span> f</span><span> -name</span><span> "*.sql"</span><span> -mtime</span><span> +"{{retention_days}}"</span><span> -delete</span></span> <span><span>fi</span></span> <span></span> <span><span>echo</span><span> "✨ Backup process completed successfully"</span></span></code><span></span><span></span></pre> <p>Here is what this script does:</p> <ul> <li>Creates a timestamp for PostgreSQL backup</li> <li>Secures the Database access</li> <li>Automate S3 upload of the backups</li> <li>Manages local storage by the implementation of retention policy</li> <li>Provides a clear status updates on the backup process</li> </ul> <p>Next, before running the scripts, you need to define values for the placeholders <code>{{db_user}}</code>, <code>{{db_pass}}</code>, <code>{{db_name}}</code>, <code>{{backup_dir}}</code>, <code>{{s3_bucket}}</code>, <code>{{retention_days}}</code>, and <code>{{s3_latest_backup}}</code> used in the scrips. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. 
This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.Bf99XPmd.jpg" alt="CloudRay variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a Variable Group:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>db_name</code>:</strong> The name of the database to back up</li> <li><strong><code>db_user</code>:</strong> The database user name</li> <li><strong><code>db_pass</code>:</strong> The password for the database user</li> <li><strong><code>backup_dir</code>:</strong> The directory where PostgreSQL database backups will be stored</li> <li><strong><code>s3_bucket</code>:</strong> The name of the S3 bucket where the backups will be saved</li> <li><strong><code>retention_days</code>:</strong> The number of days to keep local backups before they are deleted from the server</li> <li><strong><code>s3_latest_backup</code>:</strong> The object name used for the latest backup in the S3 bucket</li> </ul> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>To run the <code>Backup Postgres Database</code> script, follow these steps:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier</li> <li>Script: Choose the “Backup Postgres Database” script</li> <li>Variable Group (optional): Select the variable group you created earlier</li> </ul> <img src="/_astro/run-backup-database-script.C4owoYDa.jpg"
alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/result-backup-script.72CZXAgG.jpg" alt="Screenshot of the output of the backup script" /> <p>CloudRay will automatically connect to your server, run the <code>Backup Postgres Database</code>, and provide live logs to track the process. If any errors occur, you can review the logs to troubleshoot the issue.</p> <p>To comfirm successful backup to Amazon S3, check your S3 bucket.</p> <img src="/_astro/confirmation-s3-cloudray.IXwLRr4Y.jpg" alt="Screenshot of confirmations on AWS" /> <h3>Scheduling PostgreSQL Database Backup to Amazon S3 Using CloudRay</h3> <p>CloudRay also offers <a href="https://cloudray.io/docs/schedules">Schedules</a>, allowing you to execute scripts automatically at specific intervals or times. This feature is useful for tasks such as automating database backups.</p> <p>For example, if you want to back up your PostgreSQL database to S3 on monday of every week at 1:00 AM, you can configure a CloudRay schedule to handle this automatically.</p> <p>Here are the steps to achieve this:</p> <ol> <li><strong>Navigate to Schedules:</strong> In your CloudRay dashboard, go to the “Schedules” tab.</li> </ol> <img src="/_astro/locating-schedule.3eGh4hx5.jpg" alt="Screenshot of the location of Schedules in CloudRay's Interface" /> <ol> <li><strong>Click “Add Schedule”:</strong> Start creating a new schedule.</li> </ol> <img src="/_astro/Setup-schedules.D36qiA3z.jpg" alt="Screenshot of the location of Schedules in CloudRay's Interface" /> <ol> <li><strong>Submit Schedule:</strong> Click “Submit” to activate your new schedule.</li> </ol> <img src="/_astro/schedules-enabled.BNje0CIz.jpg" alt="Screenshot of the location of enabled schedule" /> <p>CloudRay will automatically execute the backup script at the scheduled time, ensuring that your database is regularly backed up to your S3 bucket 
without manual intervention.</p> <h2>Wrapping Up</h2> <p>Automating PostgreSQL backups to Amazon S3 ensures your data is consistently protected, versioned, and stored offsite with minimal manual effort. By combining AWS services with CloudRay’s powerful scripting and scheduling tools, you can create a reliable and reusable backup workflow. Whether you’re managing a single database or multiple servers, this setup offers flexibility, scalability, and peace of mind.</p> <p>Start today by signing up at <a href="https://app.cloudray.io">https://app.cloudray.io</a> and manage your bash scripts in a centralised platform.</p> <h2>Related Guides</h2> <ul> <li><a href="https://cloudray.io/articles/automate-mysql-backup-to-amazon-s3">How to Automate MySQL Backup to Amazon S3</a></li> </ul>How to Automate AWS EC2, Backups and Monitoring Using Bash Scriptshttps://cloudray.io/articles/automate-aws-infrastrucuture-with-bash-scripthttps://cloudray.io/articles/automate-aws-infrastrucuture-with-bash-scriptLearn how to automate AWS EC2, EBS snapshots, and CPU monitoring using Bash scriptsFri, 23 May 2025 00:00:00 GMT<p>Automating AWS infrastructure is a key practice for modern DevOps teams and cloud engineers. While Infrastructure as Code (IaC) tools like <a href="https://developer.hashicorp.com/terraform">Terraform</a> and <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html">CloudFormation</a> have become the industry standard, Bash scripting remains a powerful and accessible way to automate AWS tasks. <a href="/articles/bash-vs-python">Bash scripts</a> are especially useful for teams that want lightweight, fast, and scriptable workflows without the overhead of learning a new domain-specific language (DSL).</p> <p>In this article, you will learn how to automate common AWS tasks using plain Bash scripts combined with the AWS CLI. 
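</p> <p>One pattern worth noting before diving in: the scripts in this article check the exit status after important CLI calls. As an illustrative sketch (the <code>run_step</code> helper below is our own convention, not part of the article’s scripts or the AWS CLI), that check can be factored into a small function:</p>

```shell
#!/bin/bash
# Hypothetical helper: announce a step, run it, and surface failures.
run_step() {
  local description="$1"; shift
  echo "[$(date '+%H:%M:%S')] $description"
  if ! "$@"; then
    echo "❌ Step failed: $description" >&2
    return 1
  fi
}

# Demo with an ordinary command standing in for an AWS CLI call:
run_step "Creating scratch directory" mkdir -p /tmp/aws-demo-scratch
```

<p>Each AWS call in the scripts below could be wrapped the same way, at the cost of slightly less explicit control flow.</p> <p>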
Whether you’re launching EC2 instances, taking automated EBS snapshots, or setting up monitoring scripts to track CPU utilisation, Bash provides a direct and flexible approach to get things done quickly. At the end, you will see a real-world use case showing how to apply Bash-based automation more effectively using <a href="https://app.cloudray.io/">CloudRay</a>, a centralised platform for managing, scheduling, executing, and organising your scripts across environments.</p> <h2>Contents</h2> <ul> <li><a href="#automation-use-cases">Automation Use Cases</a> <ul> <li><a href="#1-automating-ec2-instance-launch-with-bash-script">1. Automating EC2 Instance Launch with Bash Script</a></li> <li><a href="#2-automating-ebs-volume-backups-with-bash-script">2. Automating EBS Volume Backups with Bash Script</a></li> <li><a href="#3-monitoring-ec2-cpu-utilisation-with-bash-script">3. Monitoring EC2 CPU Utilisation with Bash Script</a></li> </ul> </li> <li><a href="#real-world-use-case-of-automating-aws-infrastructure-using-cloudray">Real World Use Case of Automating AWS Infrastructure using CloudRay</a> <ul> <li><a href="#ec2-instance-launch-with-auto-tagging-and-bootstrapping">EC2 Instance Launch with Auto-Tagging and Bootstrapping</a></li> <li><a href="#running-the-script-on-a-schedule-with-cloudray">Running the Script on a Schedule with CloudRay</a></li> </ul> </li> <li><a href="#wrapping-up">Wrapping Up</a></li> </ul> <h2>Automation Use Cases</h2> <p>AWS automation is not limited to large-scale infrastructure provisioning. With just Bash and the AWS CLI, you can automate a variety of real-world tasks such as launching instances, backing up data, and monitoring system performance. These use cases are useful when you need quick scripts to integrate into cron jobs, CI/CD pipelines, or even internal tools.</p> <p>To follow along, ensure that your AWS CLI is properly installed and configured. 
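</p> <p>A quick preflight check can confirm this. The sketch below only verifies that the <code>aws</code> binary is on your <code>PATH</code>; <code>aws sts get-caller-identity</code> (left commented out) is the standard read-only call for confirming that your credentials actually work:</p>

```shell
#!/bin/bash
# Preflight: is the AWS CLI installed on this machine?
if command -v aws >/dev/null 2>&1; then
  AWS_CLI_PRESENT="yes"
  echo "aws CLI found: $(aws --version 2>&1)"
  # With credentials configured, this prints your account ID and ARN:
  # aws sts get-caller-identity
else
  AWS_CLI_PRESENT="no"
  echo "aws CLI not found - install it before continuing"
fi
```

<p>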
If not, refer to the <a href="https://cloudray.io/articles/aws-cli-setup-guide#installing-the-aws-cli">AWS CLI Setup Guide</a> for a complete walkthrough on installing the CLI, creating key pairs, and configuring credentials.</p> <p>Below are some of the most practical AWS automation tasks you can implement using Bash scripts.</p> <h3>1. Automating EC2 Instance Launch with Bash Script</h3> <p>Automating the creation, termination, and monitoring of EC2 instances is a common AWS infrastructure management task. With a simple Bash script, you can reduce manual steps and improve repeatability, especially when managing development, staging, or even short-lived workloads for testing.</p> <p>To begin, create a bash script file named <code>launch-ec2</code>:</p> <pre><code><span><span>nano</span><span> launch-ec2</span></span></code><span></span><span></span></pre> <p>Add the following script to the file to launch an EC2 instance, wait for it to become available, retrieve its public IP address, and list the attached EBS volume:</p> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Launch a new EC2 instance</span></span> <span><span># Replace the AMI ID, key name, and security group ID below with your own</span></span> <span><span>INSTANCE_ID</span><span>=</span><span>$(</span><span>aws</span><span> ec2</span><span> run-instances</span><span> \</span></span> <span><span> --image-id</span><span> ami-084568db4383264d4</span><span> \</span></span> <span><span> --count</span><span> 1</span><span> \</span></span> <span><span> --instance-type</span><span> t2.micro</span><span> \</span></span> <span><span> --key-name</span><span> my-production-key</span><span> \</span></span> <span><span> --security-group-ids</span><span> sg-0269249118de8b4fc</span><span> \</span></span> <span><span> --query</span><span> 'Instances[0].InstanceId'</span><span> \</span></span> <span><span> --output</span><span> 
text</span><span>)</span></span> <span></span> <span><span>echo</span><span> "Launched EC2 Instance with ID: </span><span>$INSTANCE_ID</span><span>"</span></span> <span></span> <span><span># Wait until instance is running</span></span> <span><span>aws</span><span> ec2</span><span> wait</span><span> instance-running</span><span> --instance-ids</span><span> $INSTANCE_ID</span></span> <span><span>echo</span><span> "Instance is now running."</span></span> <span></span> <span><span># Fetch public IP address</span></span> <span><span>PUBLIC_IP</span><span>=</span><span>$(</span><span>aws</span><span> ec2</span><span> describe-instances</span><span> \</span></span> <span><span> --instance-ids</span><span> $INSTANCE_ID </span><span>\</span></span> <span><span> --query</span><span> 'Reservations[0].Instances[0].PublicIpAddress'</span><span> \</span></span> <span><span> --output</span><span> text</span><span>)</span></span> <span></span> <span><span>echo</span><span> "Public IP Address: </span><span>$PUBLIC_IP</span><span>"</span></span> <span></span> <span><span># Get associated EBS Volume ID</span></span> <span><span>VOLUME_ID</span><span>=</span><span>$(</span><span>aws</span><span> ec2</span><span> describe-instances</span><span> \</span></span> <span><span> --instance-ids</span><span> $INSTANCE_ID </span><span>\</span></span> <span><span> --query</span><span> 'Reservations[0].Instances[0].BlockDeviceMappings[0].Ebs.VolumeId'</span><span> \</span></span> <span><span> --output</span><span> text</span><span>)</span></span> <span></span> <span><span>echo</span><span> "EBS Volume attached to instance: </span><span>$VOLUME_ID</span><span>"</span></span></code><span></span><span></span></pre> <p>Here is what the script does:</p> <ul> <li>Launches an EC2 instance with a specified AMI, instance type, key pair, and security group</li> <li>Waits for the instance to become active before proceeding</li> <li>Retrieves and displays the public IP for SSH access or web server testing</li> 
<li>Fetches the EBS Volume ID for later automation (e.g., backup snapshots or monitoring usage)</li> </ul> <div> <p>TIP</p> <p>Make sure you replace the AMI and security group in the script with your own. To get the AMI, navigate to the EC2 Console, select AMIs from the sidebar, and use filters to locate the Amazon Linux or Ubuntu image you would like to use, then copy its AMI ID.</p> </div> <p>Next, make your script executable:</p> <pre><code><span><span>chmod</span><span> +x</span><span> launch-ec2</span></span></code><span></span><span></span></pre> <p>Finally, run the script:</p> <pre><code><span><span>./launch-ec2</span></span></code><span></span><span></span></pre> <p>Your result should look similar to the output below:</p> <img src="/_astro/automate-ec2-output.CrtdqI1Z.jpg" alt="screenshot showing output of EC2 automation on terminal" /> <p>This shows that the instance was created successfully, and both the IP address and the EBS volume are displayed. To confirm further, you can check the AWS console.</p> <img src="/_astro/console-output.CwU_Ub1O.jpg" alt="screenshot showing output of EC2 automation on console" /> <p>You can see the EC2 instance running successfully.</p> <h3>2. Automating EBS Volume Backups with Bash Script</h3> <p>Another critical automation task is backing up your Elastic Block Store (EBS) volumes. 
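</p> <p>A small detail worth settling up front is how you timestamp backups. The script below uses a human-readable description; a filename-safe variant (shown here as a sketch, not taken from the script) is handy whenever the timestamp becomes part of an object or file name:</p>

```shell
#!/bin/bash
# Two timestamp styles for backups: a readable description (as used in
# backup-ebs.sh below) and a filename-safe stamp for naming artifacts.
DESCRIPTION="Backup on $(date '+%Y-%m-%d %H:%M:%S')"
STAMP="$(date '+%Y%m%d_%H%M%S')"

echo "$DESCRIPTION"
echo "snapshot_${STAMP}"
```

<p>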
Regular backups ensure you can recover your data in the event of accidental deletion, instance failure, or security breaches.</p> <p>With a Bash script, you can create snapshots of your EBS volumes on demand or integrate them into a scheduled cron job for automated backups.</p> <p>To get started, create a script named <code>backup-ebs.sh</code>:</p> <pre><code><span><span>nano</span><span> backup-ebs.sh</span></span></code><span></span><span></span></pre> <p>Now add the following script to automate the creation of a snapshot for a given volume and tag it for easier identification:</p> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Configuration</span></span> <span><span>VOLUME_ID</span><span>=</span><span>"vol-04fa2bf1eb229e072"</span><span> # Replace with your volume ID</span></span> <span><span>DESCRIPTION</span><span>=</span><span>"Backup on $(</span><span>date</span><span> '+%Y-%m-%d %H:%M:%S')"</span></span> <span><span>TAG_KEY</span><span>=</span><span>"Purpose"</span></span> <span><span>TAG_VALUE</span><span>=</span><span>"AutomatedBackup"</span></span> <span></span> <span><span>echo</span><span> "Creating snapshot of volume: </span><span>$VOLUME_ID</span><span>"</span></span> <span></span> <span><span># Create snapshot</span></span> <span><span>SNAPSHOT_ID</span><span>=</span><span>$(</span><span>aws</span><span> ec2</span><span> create-snapshot</span><span> \</span></span> <span><span> --volume-id</span><span> $VOLUME_ID </span><span>\</span></span> <span><span> --description</span><span> "</span><span>$DESCRIPTION</span><span>"</span><span> \</span></span> <span><span> --query</span><span> 'SnapshotId'</span><span> \</span></span> <span><span> --output</span><span> text</span><span>)</span></span> <span></span> <span><span>echo</span><span> "Snapshot created with ID: </span><span>$SNAPSHOT_ID</span><span>"</span></span> <span></span> <span><span># Add tags to the snapshot</span></span> <span><span>aws</span><span> 
ec2</span><span> create-tags</span><span> \</span></span> <span><span> --resources</span><span> $SNAPSHOT_ID </span><span>\</span></span> <span><span> --tags</span><span> Key=</span><span>$TAG_KEY</span><span>,Value=</span><span>$TAG_VALUE</span></span> <span></span> <span><span>echo</span><span> "Snapshot </span><span>$SNAPSHOT_ID</span><span> tagged with </span><span>$TAG_KEY</span><span>=</span><span>$TAG_VALUE</span><span>"</span></span></code><span></span><span></span></pre> <p>Here is what the script does:</p> <ul> <li>Takes a snapshot of a specified EBS volume using the AWS CLI</li> <li>Adds a human-readable description that includes the date and time of the backup</li> <li>Applies tags to the snapshot so you can easily search or filter for it in the AWS console</li> </ul> <div> <p>TIP</p> <p>To find your EBS Volume ID, go to the EC2 Console → Volumes and look under the “Volume ID” column. Be sure to copy the correct volume attached to your running instance.</p> </div> <p>Again, make the script executable:</p> <pre><code><span><span>chmod</span><span> +x</span><span> backup-ebs.sh</span></span></code><span></span><span></span></pre> <p>Finally, run the script:</p> <pre><code><span><span>./backup-ebs.sh</span></span></code><span></span><span></span></pre> <p>If successful, the output should display the snapshot ID along with a confirmation that it has been tagged.</p> <img src="/_astro/automate-ebs-output.D5BVvf96.jpg" alt="screenshot showing output of EBS automation on terminal" /> <p>This backup script is a great candidate for <a href="https://cloudray.io/docs/schedules">CloudRay’s scheduler feature</a>, allowing you to run it every day, week, or hour without needing a separate server or cron job setup.</p> <h3>3. Monitoring EC2 CPU Utilisation with Bash Script</h3> <p>System performance monitoring is essential for maintaining the health and stability of your applications. 
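</p> <p>One Bash detail to be aware of first: CloudWatch reports CPU figures as floating-point numbers, and Bash arithmetic is integer-only, so the monitoring script below pipes the comparison through <code>bc</code>. If <code>bc</code> is not installed on your machine, <code>awk</code> can perform the same check — a sketch with a hard-coded sample value standing in for the CloudWatch query:</p>

```shell
#!/bin/bash
# Floating-point threshold check without bc: the awk program exits 0
# when u > t, so the if-branch runs. The value is a hard-coded sample.
CPU_UTILISATION="82.4"
CPU_THRESHOLD=70

if awk -v u="$CPU_UTILISATION" -v t="$CPU_THRESHOLD" 'BEGIN { exit !(u > t) }'; then
  RESULT="alert"
else
  RESULT="ok"
fi
echo "$RESULT"
```

<p>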
While AWS CloudWatch provides detailed metrics and dashboards, you can also automate metric checks using a simple Bash script.</p> <p>One common metric to monitor is CPU utilisation. By querying CloudWatch, we can track when an EC2 instance’s CPU usage spikes above a defined threshold and respond accordingly.</p> <p>Start by creating a script file named <code>monitor-cpu.sh</code>:</p> <pre><code><span><span>nano</span><span> monitor-cpu.sh</span></span></code><span></span><span></span></pre> <p>Then paste the following code into the file:</p> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Configuration</span></span> <span><span>INSTANCE_ID</span><span>=</span><span>"i-044166a99d5666bfc"</span><span> # Replace with your instance ID</span></span> <span><span>CPU_THRESHOLD</span><span>=</span><span>70</span><span> # Trigger alert if CPU &gt; 70%</span></span> <span><span>TIME_RANGE_MINUTES</span><span>=</span><span>60</span><span> # How far back to check</span></span> <span></span> <span><span># Fetch CPU utilisation (the CloudWatch metric name uses the US spelling)</span></span> <span><span>CPU_UTILISATION</span><span>=</span><span>$(</span><span>aws</span><span> cloudwatch</span><span> get-metric-statistics</span><span> \</span></span> <span><span> --namespace</span><span> AWS/EC2</span><span> \</span></span> <span><span> --metric-name</span><span> CPUUtilization</span><span> \</span></span> <span><span> --statistics</span><span> Maximum</span><span> \</span></span> <span><span> --period</span><span> 300</span><span> \</span></span> <span><span> --start-time</span><span> $(</span><span>date</span><span> -u</span><span> -d</span><span> "</span><span>$TIME_RANGE_MINUTES</span><span> minutes ago"</span><span> +%Y-%m-%dT%H:%M:%S</span><span>) </span><span>\</span></span> <span><span> --end-time</span><span> $(</span><span>date</span><span> -u</span><span> +%Y-%m-%dT%H:%M:%S</span><span>) </span><span>\</span></span> <span><span> --dimensions</span><span> 
Name=InstanceId,Value=</span><span>$INSTANCE_ID </span><span>\</span></span> <span><span> --query</span><span> 'Datapoints | sort_by(@, &amp;Timestamp)[-1].Maximum'</span><span> \</span></span> <span><span> --output</span><span> text</span><span>)</span></span> <span></span> <span><span># Show retrieved metric</span></span> <span><span>echo</span><span> "CPU Utilisation for instance </span><span>$INSTANCE_ID</span><span>: </span><span>$CPU_UTILISATION</span><span>%"</span></span> <span></span> <span><span># Check against threshold</span></span> <span><span>if</span><span> (( $(echo </span><span>"</span><span>$CPU_UTILISATION</span><span> &gt; </span><span>$CPU_THRESHOLD</span><span>"</span><span> |</span><span> bc </span><span>-</span><span>l) )); </span><span>then</span></span> <span><span> echo</span><span> "⚠️ High CPU alert: </span><span>$INSTANCE_ID</span><span> at </span><span>$CPU_UTILISATION</span><span>%"</span></span> <span><span>else</span></span> <span><span> echo</span><span> "✅ CPU usage is within safe range."</span></span> <span><span>fi</span></span></code><span></span><span></span></pre> <p>Here is what the script does:</p> <ul> <li>Retrieves the maximum CPU utilisation from the last hour for a specific EC2 instance using CloudWatch metrics</li> <li>Compares it to a defined threshold (for example, 70%)</li> <li>Prints an alert if the usage exceeds the threshold, or a success message if within range</li> </ul> <p>Make the script executable:</p> <pre><code><span><span>chmod</span><span> +x</span><span> monitor-cpu.sh</span></span></code><span></span><span></span></pre> <p>Finally, run the script:</p> <pre><code><span><span>./monitor-cpu.sh</span></span></code><span></span><span></span></pre> <p>Your result should be similar to the output below:</p> <img src="/_astro/check-cpu-usage.B-uIP41X.jpg" alt="screenshot showing output of the CPU monitoring script on terminal" /> <p>This lightweight monitoring script is perfect for integrating with a scheduled task on CloudRay. 
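</p> <p>A breach alert does not have to stay in the terminal. As a hedged sketch, the payload for a Slack incoming webhook (which accepts a JSON body with a <code>text</code> field) could be built like this — the webhook URL is a placeholder, so the <code>curl</code> call is left commented out:</p>

```shell
#!/bin/bash
# Build a Slack-style alert payload for a CPU threshold breach.
INSTANCE_ID="i-0123456789abcdef0"   # sample instance ID
CPU_UTILISATION="82.4"              # sample metric value
SLACK_WEBHOOK_URL="https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

PAYLOAD=$(printf '{"text":"High CPU alert: %s at %s%%"}' "$INSTANCE_ID" "$CPU_UTILISATION")
echo "$PAYLOAD"

# With a real webhook URL:
# curl -s -X POST -H 'Content-Type: application/json' -d "$PAYLOAD" "$SLACK_WEBHOOK_URL"
```

<p>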
You can set it to run every 15 minutes and trigger custom actions like Slack alerts, emails, or remediation scripts whenever thresholds are breached.</p> <h2>Real World Use Case of Automating AWS Infrastructure using CloudRay</h2> <p>While Bash scripting gives you a powerful tool to automate AWS tasks locally, managing and reusing these scripts across environments becomes tedious without a centralised system. That is where CloudRay comes in.</p> <p>CloudRay provides a Scripts dashboard where you can centrally manage, execute, and reuse your infrastructure automation scripts without relying on manual CLI commands or scattered cron jobs. It supports scheduling, allowing you to trigger EC2 provisioning or any AWS operation at predefined times.</p> <p>Let’s walk through a real-world scenario where a DevOps engineer needs to launch a pre-configured EC2 instance every morning for development testing.</p> <h3>EC2 Instance Launch with Auto-Tagging and Bootstrapping</h3> <p>Before getting started, make sure your target servers are connected to CloudRay. 
If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server.</p> <p>You can follow the steps below to create the script in CloudRay:</p> <img src="/_astro/launch-script.DNIpLSFv.jpg" alt="screenshot showing script creation in CloudRay" /> <ol> <li>Create a CloudRay account at <a href="https://app.cloudray.io/">https://app.cloudray.io/</a></li> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Launch EC2 for Daily Dev Testing</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Optional: user data script (e.g., install NGINX on launch)</span></span> <span><span>USER_DATA_SCRIPT</span><span>=</span><span>'#!/bin/bash</span></span> <span><span>sudo apt update -y</span></span> <span><span>sudo apt install -y nginx</span></span> <span><span>sudo systemctl enable nginx</span></span> <span><span>sudo systemctl start nginx</span></span> <span><span>'</span></span> <span></span> <span><span>echo</span><span> "[$(</span><span>date</span><span>)] Starting EC2 launch process..."</span><span> |</span><span> tee</span><span> -a</span><span> {{log_file}}</span></span> <span></span> <span><span># Launch EC2 instance</span></span> <span><span>INSTANCE_ID</span><span>=</span><span>$(</span><span>aws</span><span> ec2</span><span> run-instances</span><span> \</span></span> <span><span> --image-id</span><span> "{{ami_id}}"</span><span> \</span></span> <span><span> --count</span><span> 1</span><span> \</span></span> <span><span> --instance-type</span><span> "{{instance_type}}"</span><span> \</span></span> <span><span> --key-name</span><span> "{{key_name}}"</span><span> \</span></span> <span><span> --security-group-ids</span><span> "{{security_group_id}}"</span><span> \</span></span> <span><span> --block-device-mappings</span><span> 
"[{</span><span>\"</span><span>DeviceName</span><span>\"</span><span>:</span><span>\"</span><span>/dev/xvda</span><span>\"</span><span>,</span><span>\"</span><span>Ebs</span><span>\"</span><span>:{</span><span>\"</span><span>VolumeSize</span><span>\"</span><span>:{{volume_size}}}}]"</span><span> \</span></span> <span><span> --tag-specifications</span><span> "ResourceType=instance,Tags=[{Key=Name,Value={{tag_name}}},{Key=Environment,Value={{environment}}}]"</span><span> \</span></span> <span><span> --user-data</span><span> "$(</span><span>echo</span><span> -n</span><span> "</span><span>$USER_DATA_SCRIPT</span><span>" </span><span>|</span><span> base64</span><span> -w</span><span> 0</span><span>)"</span><span> \</span></span> <span><span> --query</span><span> 'Instances[0].InstanceId'</span><span> \</span></span> <span><span> --output</span><span> text</span><span>)</span></span> <span></span> <span><span>if</span><span> [[ </span><span>-z</span><span> "</span><span>$INSTANCE_ID</span><span>"</span><span> ]]; </span><span>then</span></span> <span><span> echo</span><span> "[$(</span><span>date</span><span>)] Failed to launch instance."</span><span> |</span><span> tee</span><span> -a</span><span> {{log_file}}</span></span> <span><span> exit</span><span> 1</span></span> <span><span>fi</span></span> <span></span> <span><span>echo</span><span> "[$(</span><span>date</span><span>)] Launched instance: </span><span>$INSTANCE_ID</span><span>"</span><span> |</span><span> tee</span><span> -a</span><span> {{log_file}}</span></span> <span></span> <span><span># Wait until running</span></span> <span><span>echo</span><span> "[$(</span><span>date</span><span>)] Waiting for instance to enter running state..."</span><span> |</span><span> tee</span><span> -a</span><span> {{log_file}}</span></span> <span><span>aws</span><span> ec2</span><span> wait</span><span> instance-running</span><span> --instance-ids</span><span> "</span><span>$INSTANCE_ID</span><span>"</span></span> 
<span><span>echo</span><span> "[$(</span><span>date</span><span>)] Instance is running."</span><span> |</span><span> tee</span><span> -a</span><span> {{log_file}}</span></span> <span></span> <span><span># Fetch public IP</span></span> <span><span>PUBLIC_IP</span><span>=</span><span>$(</span><span>aws</span><span> ec2</span><span> describe-instances</span><span> \</span></span> <span><span> --instance-ids</span><span> "</span><span>$INSTANCE_ID</span><span>"</span><span> \</span></span> <span><span> --query</span><span> 'Reservations[0].Instances[0].PublicIpAddress'</span><span> \</span></span> <span><span> --output</span><span> text</span><span>)</span></span> <span></span> <span><span>LAUNCH_TIME</span><span>=</span><span>$(</span><span>aws</span><span> ec2</span><span> describe-instances</span><span> \</span></span> <span><span> --instance-ids</span><span> "</span><span>$INSTANCE_ID</span><span>"</span><span> \</span></span> <span><span> --query</span><span> 'Reservations[0].Instances[0].LaunchTime'</span><span> \</span></span> <span><span> --output</span><span> text</span><span>)</span></span> <span></span> <span><span>echo</span><span> "[$(</span><span>date</span><span>)] Public IP Address: </span><span>$PUBLIC_IP</span><span>"</span><span> |</span><span> tee</span><span> -a</span><span> {{log_file}}</span></span> <span><span>echo</span><span> "[$(</span><span>date</span><span>)] Launch Time: </span><span>$LAUNCH_TIME</span><span>"</span><span> |</span><span> tee</span><span> -a</span><span> {{log_file}}</span></span> <span></span> <span><span># Output summary</span></span> <span><span>echo</span><span> ""</span></span> <span><span>echo</span><span> "================= EC2 Instance Launched ================="</span></span> <span><span>echo</span><span> "Instance ID : </span><span>$INSTANCE_ID</span><span>"</span></span> <span><span>echo</span><span> "Public IP : </span><span>$PUBLIC_IP</span><span>"</span></span> <span><span>echo</span><span> "Launch Time : 
</span><span>$LAUNCH_TIME</span><span>"</span></span> <span><span>echo</span><span> "Tag Name : {{tag_name}}"</span></span> <span><span>echo</span><span> "Environment : {{environment}}"</span></span> <span><span>echo</span><span> "========================================================"</span></span></code><span></span><span></span></pre> <p>This script provisions a fully tagged EC2 instance, installs NGINX on launch, and logs key metadata for auditing or monitoring.</p> <p>Before running the script, you need to define values for the placeholders <code>{{ami_id}}</code>, <code>{{instance_type}}</code>, <code>{{key_name}}</code>, <code>{{security_group_id}}</code>, <code>{{volume_size}}</code>, <code>{{tag_name}}</code>, <code>{{environment}}</code>, and <code>{{log_file}}</code> used in the script. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.gXXlqUgQ.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a Variable Group:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>ami_id</code>:</strong> The AMI ID of the image to launch (for example, Ubuntu or another operating system)</li> <li><strong><code>instance_type</code>:</strong> The EC2 instance type to use</li> <li><strong><code>key_name</code>:</strong> The name of the key pair in your AWS account</li> <li><strong><code>security_group_id</code>:</strong> Your security group ID</li> <li><strong><code>volume_size</code>:</strong> The size of your EBS volume in GB</li> <li><strong><code>tag_name</code>:</strong> The value of the Name tag applied to the instance</li> <li><strong><code>environment</code>:</strong> The value of the Environment tag</li> <li><strong><code>log_file</code>:</strong> The path of the log file that the script appends its output to</li> </ul> <p>You can choose to run the script using 
<a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>.</p> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>To run the <code>Launch EC2 for Daily Dev Testing</code>, follow these steps:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier.</li> <li>Script: Choose the “Launch EC2 for Daily Dev Testing”</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/create-runlog.D1MW52tC.jpg" alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/output-runlog.DnWfa8VN.jpg" alt="Screenshot of the output automation script" /> <p>CloudRay will automatically connect to your server, run the <code>Launch EC2 for Daily Dev Testing</code>, and provide live logs to track the process. 
If any errors occur, you can review the logs to troubleshoot the issue.</p> <p>You can also verify the instance from the AWS console.</p> <img src="/_astro/verify-auth-aws-console.C_LDF4Hc.jpg" alt="Screenshot of the launched instance in the AWS console" /> <h3>Running the Script on a Schedule with CloudRay</h3> <p>CloudRay also offers <a href="https://cloudray.io/docs/schedules">Schedules</a>, allowing you to execute scripts automatically at specific intervals or times.</p> <p>To execute this script daily at 8:00 AM without manual effort:</p> <ol> <li><strong>Navigate to Schedules:</strong> In your CloudRay dashboard, go to the “Schedules” tab.</li> </ol> <img src="/_astro/locating-schedule.3eGh4hx5.jpg" alt="Screenshot of the location of Schedules in CloudRay's Interface" /> <ol> <li><strong>Click “Add Schedule”:</strong> Start creating a new schedule.</li> </ol> <img src="/_astro/setup-schedules.Bj3WC4e_.jpg" alt="Screenshot of setting up a schedule in CloudRay's Interface" /> <ol> <li><strong>Submit Schedule:</strong> Click “Submit” to activate your new schedule.</li> </ol> <img src="/_astro/enable-schedules.Bow5pu39.jpg" alt="Screenshot of the enabled schedule" /> <p>CloudRay will automatically execute the script at the scheduled time, ensuring that your EC2 instance is launched every day at 8:00 AM.</p> <h2>Wrapping Up</h2> <p>Bash scripting provides a lightweight yet powerful way to automate AWS infrastructure tasks like EC2 provisioning, EBS backups, and performance monitoring. By combining these scripts with CloudRay’s scheduling and central management, you can build reliable, automated workflows without complex tooling.
Start with the examples provided, customize them for your needs, and explore more automation possibilities.</p> <p>Start today by signing up at <a href="https://app.cloudray.io">https://app.cloudray.io</a> and manage your bash scripts in a centralised platform.</p>How to Automate MySQL Backup to Amazon S3https://cloudray.io/articles/automate-mysql-backup-to-amazon-s3https://cloudray.io/articles/automate-mysql-backup-to-amazon-s3Learn how to automate the backup of a MySQL database to an Amazon S3 bucket using a reusable Bash script and CloudRay’s centralised automation platform.Mon, 19 May 2025 00:00:00 GMT<p>Automating MySQL backups and storing them on Amazon S3 is a practical way to ensure your data is safe, retrievable, and versioned offsite. S3 provides scalable, durable storage, making it an excellent choice for backup solutions.</p> <p>In this article, you will learn how to set up a secure, automated backup process for your MySQL database to an Amazon S3 bucket. To automate the backup process, we will use the built-in <a href="https://cloudray.io/docs/schedules">CloudRay Schedules</a> feature, which simplifies running recurring backup scripts without manual intervention.</p> <p>This guide assumes you already have a MySQL server up and running.
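</p> <p>Along with MySQL itself, the steps below rely on <code>mysqldump</code>, <code>gzip</code>, and the AWS CLI being available on the server. As a quick sanity check (a hedged sketch, not part of the official setup), you can confirm they are present before continuing:</p>

```shell
#!/bin/bash
# Report whether each tool used later in this guide is installed.
# This only prints one status line per tool; it installs nothing.
for tool in mysqldump gzip aws; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool found"
  else
    echo "missing: $tool - install it before continuing"
  fi
done
```

<p>Any tool reported as missing can be installed from your distribution's package manager before moving on.</p> <p>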
If not, you can follow the setup guide on <a href="https://cloudray.io/articles/deploy-mysql-server">How to Deploy MySQL Server</a>.</p> <h2>Contents</h2> <ul> <li><a href="#creating-amazon-s3-bucket">Creating Amazon S3 Bucket</a></li> <li><a href="#creating-an-iam-user">Creating an IAM User</a></li> <li><a href="#manual-backup-of-mysql-database-to-s3">Manual Backup of MySQL Database to S3</a></li> <li><a href="#automate-and-schdule-mysql-backups-to-amazon-s3-using-cloudray">Automate and Schedule MySQL Backups to Amazon S3 Using CloudRay</a> <ul> <li><a href="#scheduling-mysql-database-backup-to-amazon-s3-using-cloudray">Scheduling MySQL Database Backup to Amazon S3 Using CloudRay</a></li> </ul> </li> <li><a href="#wrapping-up">Wrapping Up</a></li> </ul> <h2>Creating Amazon S3 Bucket</h2> <p>First, you create the S3 bucket that will store the MySQL backup files.</p> <ol> <li> <p>In the AWS Console, navigate to the <a href="https://us-east-1.console.aws.amazon.com/s3">Amazon S3 service</a></p> </li> <li> <p>Click on Create Bucket. You will see the general configuration options</p> </li> <li> <p>Provide a globally unique bucket name, such as “cloudray-mysql-backups”</p> </li> </ol> <img src="/_astro/create-s3.D6u-B-vy.jpg" alt="Screenshot of adding permission on AWS console" /> <ol> <li>Leave other options at their default values or customise as needed (e.g., enabling versioning or setting up encryption)</li> </ol> <img src="/_astro/defaults-s3.BHGm_K-h.jpg" alt="Screenshot showing defaults setting of s3" /> <ol> <li>Click <strong>Create bucket</strong></li> </ol> <p>This bucket will serve as the storage location for your MySQL backups. You will reference the created bucket name when assigning permissions to the IAM user.</p> <h2>Creating an IAM User</h2> <p>To securely upload backups to S3, it’s best practice to create a dedicated user with minimal, scoped-down permissions.
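</p> <p>The console walkthrough below can also be expressed as AWS CLI calls. The following is a hedged sketch, not part of the original guide: it assumes the inline policy from step 5 has been saved locally as <code>backup-policy.json</code> (a hypothetical file name), and the <code>run</code> wrapper only echoes each command so the sketch is safe to execute as-is. Remove the <code>echo</code> to run the calls for real (this requires credentials with IAM permissions).</p>

```shell
#!/bin/bash
# Dry-run sketch of the console steps as AWS CLI calls.
# run() only prints each command; drop the echo to execute them.
run() { echo "+ $*"; }

run aws iam create-group --group-name s3-backup
run aws iam put-group-policy --group-name s3-backup \
  --policy-name s3-backup-policy --policy-document file://backup-policy.json
run aws iam create-user --user-name backup-s3-user
run aws iam add-user-to-group --group-name s3-backup --user-name backup-s3-user
run aws iam create-access-key --user-name backup-s3-user
```

<p>The console route below is equivalent; scripting it is mainly useful if you set up the same backup user across several AWS accounts.</p> <p>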
First, you create an IAM group and attach an inline policy that grants access to a specific S3 bucket. Then, you create a user and assign it to that group.</p> <p>Here are the steps:</p> <ol> <li> <p>Navigate to the <a href="https://us-east-1.console.aws.amazon.com/iam/">IAM section of the AWS Management Console</a></p> </li> <li> <p>Click on <strong>User groups</strong> → <strong>Create group</strong>. Name the group (for example, “s3-backup”). Skip attaching policies for now, and proceed to the next step</p> </li> </ol> <img src="/_astro/create-group.CvZvqr_B.jpg" alt="Screenshot of AWS console creating group" /> <ol> <li>Once the group is created, open it and go to the <strong>Permissions</strong> tab.</li> </ol> <img src="/_astro/locate-permission.DBfAaI0O.jpg" alt="Screenshot to locate permission tab on AWS console" /> <ol> <li>Click <strong>Add permissions</strong> → <strong>Create inline policy</strong></li> </ol> <img src="/_astro/adding-permission.C-hCtNmP.jpg" alt="Screenshot of adding permission on AWS console" /> <ol> <li>Choose the JSON tab and paste the following policy, replacing <code>your-bucket-name</code> with the actual bucket name created earlier:</li> </ol> <pre><code><span><span>{</span></span> <span><span> "Version": "2012-10-17",</span></span> <span><span> "Statement": [</span></span> <span><span> {</span></span> <span><span> "Effect": "Allow",</span></span> <span><span> "Action": [</span></span> <span><span> "s3:ListAllMyBuckets"</span></span> <span><span> ],</span></span> <span><span> "Resource": "*"</span></span> <span><span> },</span></span> <span><span> {</span></span> <span><span> "Effect": "Allow",</span></span> <span><span> "Action": [</span></span> <span><span> "s3:ListBucket",</span></span> <span><span> "s3:GetBucketLocation",</span></span> <span><span> "s3:GetBucketAcl"</span></span> <span><span> ],</span></span> <span><span> "Resource": "arn:aws:s3:::your-bucket-name"</span></span> <span><span>
},</span></span> <span><span> {</span></span> <span><span> "Effect": "Allow",</span></span> <span><span> "Action": [</span></span> <span><span> "s3:GetObject",</span></span> <span><span> "s3:PutObject",</span></span> <span><span> "s3:DeleteObject",</span></span> <span><span> "s3:GetObjectAcl",</span></span> <span><span> "s3:PutObjectAcl",</span></span> <span><span> "s3:ListBucketVersions",</span></span> <span><span> "s3:ListBucketMultipartUploads",</span></span> <span><span> "s3:AbortMultipartUpload",</span></span> <span><span> "s3:ListMultipartUploadParts"</span></span> <span><span> ],</span></span> <span><span> "Resource": "arn:aws:s3:::your-bucket-name/*"</span></span> <span><span> }</span></span> <span><span> ]</span></span> <span><span>}</span></span></code><span></span><span></span></pre> <img src="/_astro/create-permissions.BB6kAu7u.jpg" alt="Screenshot of creating permission" /> <ol> <li>Give your policy a name (for example <strong>s3-backup-policy</strong>) and save the policy</li> </ol> <img src="/_astro/save-policy.C-FA-mbd.jpg" alt="image showing saved policy" /> <ol> <li> <p>Navigate to Users → Create users</p> </li> <li> <p>Set the username to <strong>backup-s3-user</strong>. Toggle on <strong>Provide user access to the AWS Management Console - optional</strong>. Then, create a new password for the user. Click on <strong>Next</strong></p> </li> </ol> <img src="/_astro/create-user-process.CJ3KknxQ.jpg" alt="screenshot of the user creation process" /> <ol> <li>On the permissions screen, grant the user permissions by selecting the group (<strong>s3-backup</strong>) to which the user will be added</li> </ol> <img src="/_astro/user-permission.-llYOy11.jpg" alt="screenshot assigning user a permission" /> <ol> <li> <p>Review and create the user</p> </li> <li> <p>Once this is done, you need the access key and secret key for programmatic access to AWS resources.
To get these keys, go back to <strong>Users</strong>, then click on the user created earlier. Click on <strong>Create access key</strong>.</p> </li> </ol> <img src="/_astro/create-access-key.DBHw8VaW.jpg" alt="screenshot creating access key" /> <ol> <li>For the use case, select <strong>Command Line Interface (CLI)</strong>, then click on <strong>Next</strong></li> </ol> <img src="/_astro/select-cli.BDz_OTIZ.jpg" alt="screenshot of selecting CLI" /> <ol> <li>Finally, create the access key and download the <code>.csv</code> file</li> </ol> <img src="/_astro/Download-access-key.BfIX9Zuu.jpg" alt="screenshot of downloading the access key" /> <div> <p>NOTE</p> <p>For detailed guidance on setting up AWS CLI credentials, refer to this <a href="https://cloudray.io/articles/aws-cli-setup-guide">AWS CLI setup guide</a>. This should be done on the server where your MySQL database is present.</p> </div> <h2>Manual Backup of MySQL Database to S3</h2> <p>Before automating the backup process, it’s important to understand how to create and upload a MySQL backup to your S3 bucket.</p> <p>First, log in to the MySQL database:</p> <pre><code><span><span>mysql</span><span> -u</span><span> root</span><span> -p</span></span></code><span></span><span></span></pre> <p>Let’s confirm the databases present:</p> <pre><code><span><span>SHOW DATABASES;</span></span></code><span></span><span></span></pre> <p>Your output should be similar to the one below:</p> <img src="/_astro/show-databases.BYENtwW_.jpg" alt="screenshot showing output of the databases present" /> <p>Next, exit from the database:</p> <pre><code><span><span>Exit;</span></span></code><span></span><span></span></pre> <p>Use <code>mysqldump</code> to export the database into a <code>.sql</code> file:</p> <pre><code><span><span>mysqldump</span><span> -u</span><span> root</span><span> -p</span><span> blog_db</span><span> &gt;</span><span> blog_db_backup.sql</span></span></code><span></span><span></span></pre> <p>You will be prompted to enter your database
password. This will create the <code>blog_db_backup.sql</code> file on the server.</p> <p>Additionally, if you want to back up all the databases present, use the following command:</p> <pre><code><span><span>mysqldump</span><span> -u</span><span> root</span><span> -p</span><span> --all-databases</span><span> &gt;</span><span> all-databases-backup.sql</span></span></code><span></span><span></span></pre> <p>This command will back up all your databases into a single file on the server.</p> <p>Before transferring the backups to S3, ensure you have the AWS CLI configured. If you haven’t done that yet, run the following command:</p> <pre><code><span><span>aws</span><span> configure</span></span></code><span></span><span></span></pre> <img src="/_astro/configure-cli.Cwz_I0O7.jpg" alt="screenshot showing configured CLI" /> <p>Here you provide:</p> <ul> <li>Access Key ID</li> <li>Secret Access Key</li> <li>Default region (same as your S3 bucket)</li> <li>Output format (e.g., json)</li> </ul> <p>Next, use the AWS CLI <code>s3 cp</code> command to upload the backup file.
For instance, let’s upload the <code>blog_db_backup.sql</code> file to S3:</p> <pre><code><span><span>aws</span><span> s3</span><span> cp</span><span> blog_db_backup.sql</span><span> s3://cloudray-mysql-backups</span></span></code><span></span><span></span></pre> <p>You can verify your upload by visiting your S3 buckets and clicking on the bucket you created earlier.</p> <img src="/_astro/successful-backup-s3.CoXKsc8n.jpg" alt="screenshot showing successful backup to s3" /> <p>You can also verify the successful backup in the CLI by listing the contents of the bucket:</p> <pre><code><span><span>aws</span><span> s3</span><span> ls</span><span> s3://cloudray-mysql-backups</span></span></code><span></span><span></span></pre> <p>Your output should be similar to the one below:</p> <img src="/_astro/confirmations-on-cli.BC25WuLK.jpg" alt="screenshot showing successful backup to s3" /> <h2>Automate and Schedule MySQL Backups to Amazon S3 Using CloudRay</h2> <p>Once you have manually verified your backup process, the next step is automation. This removes the need for constant manual effort and reduces the chances of human error.</p> <p>CloudRay offers a flexible scheduling feature that allows you to run scripts at specified intervals. This is particularly useful for automating recurring tasks like backing up MySQL databases.</p> <p>If you haven’t already, go to <a href="https://app.cloudray.io">https://app.cloudray.io</a> and sign up for an account. Once signed in, you will have access to CloudRay’s dashboards for managing scripts and schedules. Additionally, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server.</p> <p>To automate backups, first create a reusable script in CloudRay.
This script will perform the backup and can be used across different schedules.</p> <p>Here are the steps to take:</p> <img src="/_astro/backup-script.mW4CzjZw.jpg" alt="screenshot of backup script" /> <ol> <li>Navigate to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name the script: <strong>Backup MySQL Database</strong></li> <li>Paste in the following code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error, including failures inside pipelines</span></span> <span><span>set</span><span> -eo pipefail</span></span> <span></span> <span></span> <span><span># ===== Generate Timestamp =====</span></span> <span><span>BACKUP_FILE</span><span>=</span><span>"{{db_name}}_$(</span><span>date</span><span> +%F_%H-%M-%S)"</span></span> <span></span> <span><span># ===== Security Checks =====</span></span> <span><span>if</span><span> [ </span><span>-z</span><span> "{{db_pass}}" ]; </span><span>then</span></span> <span><span> echo</span><span> "❌ Error: MySQL password not set. Use a secure method to provide the password."</span></span> <span><span> exit</span><span> 1</span></span> <span><span>fi</span></span> <span></span> <span><span># ===== Ensure Backup Directory Exists =====</span></span> <span><span>mkdir</span><span> -p</span><span> "{{backup_dir}}"</span></span> <span></span> <span><span># ===== Create Backup =====</span></span> <span><span>echo</span><span> "Creating backup of database: {{db_name}}"</span></span> <span><span>if</span><span> [ "{{compress}}" </span><span>=</span><span> true</span><span> ]; </span><span>then</span></span> <span><span> BACKUP_PATH</span><span>=</span><span>"{{backup_dir}}/${</span><span>BACKUP_FILE</span><span>}.sql.gz"</span></span> <span><span> mysqldump</span><span> --user=</span><span>{{db_user}} --password={{db_pass}} --single-transaction --quick {{db_name}} | gzip &gt; "$BACKUP_PATH"</span></span> <span><span>else</span></span> <span><span> 
BACKUP_PATH</span><span>=</span><span>"{{backup_dir}}/${</span><span>BACKUP_FILE</span><span>}.sql"</span></span> <span><span> mysqldump</span><span> --user=</span><span>{{db_user}} --password={{db_pass}} --single-transaction --quick {{db_name}} &gt; "$BACKUP_PATH"</span></span> <span><span>fi</span></span> <span></span> <span><span># Check backup success</span></span> <span><span>if</span><span> [ </span><span>$?</span><span> -ne</span><span> 0</span><span> ]; </span><span>then</span></span> <span><span> echo</span><span> "❌ Backup failed!"</span></span> <span><span> exit</span><span> 1</span></span> <span><span>fi</span></span> <span><span>echo</span><span> "✅ Backup created: </span><span>$BACKUP_PATH</span><span>"</span></span> <span></span> <span><span># ===== Upload to S3 =====</span></span> <span><span>echo</span><span> "Uploading to S3 bucket: {{s3_bucket}}"</span></span> <span><span>if</span><span> [ "{{compress}}" </span><span>=</span><span> true</span><span> ]; </span><span>then</span></span> <span><span> aws</span><span> s3</span><span> cp</span><span> "</span><span>$BACKUP_PATH</span><span>"</span><span> "s3://{{s3_bucket}}/${</span><span>BACKUP_FILE</span><span>}.sql.gz"</span></span> <span><span> aws</span><span> s3</span><span> cp</span><span> "</span><span>$BACKUP_PATH</span><span>"</span><span> "s3://{{s3_bucket}}/{{s3_latest_backup}}"</span></span> <span><span>else</span></span> <span><span> aws</span><span> s3</span><span> cp</span><span> "</span><span>$BACKUP_PATH</span><span>"</span><span> "s3://{{s3_bucket}}/${</span><span>BACKUP_FILE</span><span>}.sql"</span></span> <span><span> aws</span><span> s3</span><span> cp</span><span> "</span><span>$BACKUP_PATH</span><span>"</span><span> "s3://{{s3_bucket}}/{{s3_latest_backup}}"</span></span> <span><span>fi</span></span> <span></span> <span><span>if</span><span> [ </span><span>$?</span><span> -eq</span><span> 0</span><span> ]; </span><span>then</span></span> <span><span> echo</span><span> "✅ Backup 
successfully uploaded to S3"</span></span> <span><span>else</span></span> <span><span> echo</span><span> "❌ Failed to upload backup to S3"</span></span> <span><span> exit</span><span> 1</span></span> <span><span>fi</span></span> <span></span> <span><span># ===== Apply Retention Policy =====</span></span> <span><span>echo</span><span> "Cleaning up backups older than {{retention_days}} days..."</span></span> <span><span>if</span><span> [ "{{compress}}" </span><span>=</span><span> true</span><span> ]; </span><span>then</span></span> <span><span> find</span><span> {{backup_dir}}</span><span> -type</span><span> f</span><span> -name</span><span> "*.sql.gz"</span><span> -mtime</span><span> +{{retention_days}}</span><span> -delete</span></span> <span><span>else</span></span> <span><span> find</span><span> {{backup_dir}}</span><span> -type</span><span> f</span><span> -name</span><span> "*.sql"</span><span> -mtime</span><span> +{{retention_days}}</span><span> -delete</span></span> <span><span>fi</span></span> <span></span> <span><span>echo</span><span> "✨ Backup process completed successfully"</span></span></code><span></span><span></span></pre> <p>Here is what this script does:</p> <ul> <li>Creates a timestamped MySQL backup</li> <li>Checks that the database password is set</li> <li>Automates the S3 upload of the backups</li> <li>Manages local storage by applying a retention policy</li> <li>Provides clear status updates on the backup process</li> </ul> <p>Next, before running the scripts, you need to define values for the placeholders <code>{{db_user}}</code>, <code>{{db_pass}}</code>, <code>{{db_name}}</code>, <code>{{backup_dir}}</code>, <code>{{s3_bucket}}</code>, <code>{{retention_days}}</code>, and <code>{{s3_latest_backup}}</code> used in the scripts. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>.
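</p> <p>Conceptually, the substitution is a text rendering pass that happens before the script reaches the server. The sketch below imitates it with <code>sed</code> purely for illustration; the variable values are made up, and CloudRay's actual rendering uses Liquid, not <code>sed</code>:</p>

```shell
#!/bin/bash
# Illustrative only: render Liquid-style placeholders the way a
# template pass might, using sed. The example values are hypothetical.
template='mysqldump --user={{db_user}} {{db_name}} > {{backup_dir}}/backup.sql'
rendered=$(printf '%s\n' "$template" \
  | sed -e 's/{{db_user}}/backup_user/' \
        -e 's/{{db_name}}/blog_db/' \
        -e 's|{{backup_dir}}|/var/backups/mysql|')
printf '%s\n' "$rendered"
# -> mysqldump --user=backup_user blog_db > /var/backups/mysql/backup.sql
```

<p>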
This allows you to use variables dynamically across different servers.</p> <img src="/_astro/backup-script.mW4CzjZw.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a Variable Group:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>db_name</code>:</strong> The name of the database to back up</li> <li><strong><code>db_user</code>:</strong> The MySQL user performing the backup</li> <li><strong><code>db_pass</code>:</strong> The password for the database user</li> <li><strong><code>backup_dir</code>:</strong> The directory where MySQL database backups will be stored</li> <li><strong><code>s3_bucket</code>:</strong> The name of the S3 bucket where the backups will be saved</li> <li><strong><code>retention_days</code>:</strong> The number of days local backups are kept before being deleted from the server</li> <li><strong><code>s3_latest_backup</code>:</strong> The object name used for the latest backup copy in the S3 bucket</li> </ul> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>To run the <code>Backup MySQL Database</code> script, follow these steps:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier</li> <li>Script: Choose the “Backup MySQL Database” script</li> <li>Variable Group (optional): Select the variable group you created earlier</li> </ul> <img 
src="/_astro/run-backup-database-script.BL2GzzKS.jpg" alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/result-backup-script.BnKpsQD2.jpg" alt="Screenshot of the output of the backup script" /> <p>CloudRay will automatically connect to your server, run the <code>Backup MySQL Database</code> script, and provide live logs to track the process. If any errors occur, you can review the logs to troubleshoot the issue.</p> <p>To confirm a successful backup to Amazon S3, check your S3 bucket.</p> <img src="/_astro/confirmation-s3-cloudray.CFdbUAyv.jpg" alt="Screenshot of the backup confirmed in the S3 bucket" /> <h3>Scheduling MySQL Database Backup to Amazon S3 Using CloudRay</h3> <p>CloudRay also offers <a href="https://cloudray.io/docs/schedules">Schedules</a>, allowing you to execute scripts automatically at specific intervals or times. This feature is useful for tasks such as automating database backups.</p> <p>For example, if you want to back up your MySQL database to S3 every Monday at 1:00 AM, you can configure a CloudRay schedule to handle this automatically.</p> <p>Here are the steps to achieve this:</p> <ol> <li><strong>Navigate to Schedules:</strong> In your CloudRay dashboard, go to the “Schedules” tab.</li> </ol> <img src="/_astro/locating-schedule.3eGh4hx5.jpg" alt="Screenshot of the location of Schedules in CloudRay's Interface" /> <ol> <li><strong>Click “Add Schedule”:</strong> Start creating a new schedule.</li> </ol> <img src="/_astro/Setup-schedules.BhqKe5w1.jpg" alt="Screenshot of setting up a schedule in CloudRay's Interface" /> <ol> <li><strong>Submit Schedule:</strong> Click “Submit” to activate your new schedule.</li> </ol> <img src="/_astro/schedules-enabled.B0PE8Kx7.jpg" alt="Screenshot of the enabled schedule" /> <p>CloudRay will automatically execute the backup script at the scheduled time, 
ensuring that your database is regularly backed up to your S3 bucket without manual intervention.</p> <h2>Wrapping Up</h2> <p>Automating MySQL backups to Amazon S3 ensures your data is consistently protected, versioned, and stored offsite with minimal manual effort. By combining AWS services with CloudRay’s powerful scripting and scheduling tools, you can create a reliable and reusable backup workflow. Whether you’re managing a single database or multiple servers, this setup offers flexibility, scalability, and peace of mind.</p> <p>Start today by signing up at <a href="https://app.cloudray.io">https://app.cloudray.io</a> and manage your bash scripts in a centralised platform.</p>Automate the Installation of Prometheushttps://cloudray.io/articles/install-prometheushttps://cloudray.io/articles/install-prometheusLearn how to automate the installation and configuration of Prometheus monitoring using CloudRayTue, 13 May 2025 00:00:00 GMT<p>Prometheus requires proper configuration of storage paths, user permissions, and network access to function as an effective monitoring solution. 
This guide demonstrates how to automate Prometheus installation with <a href="https://app.cloudray.io/">CloudRay</a>, implementing secure service configuration, proper file permissions, and systemd service management.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#prometheus-installation-script">Prometheus Installation Script</a></li> <li><a href="#prometheus-service-script">Prometheus Service Script</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-scripts-to-install-prometheus-with-cloudray">Running the Scripts to Install Prometheus with CloudRay</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before beginning Prometheus deployment, ensure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server.</p> <div> <p>NOTE</p> <p>These scripts target Ubuntu/Debian systems. For RHEL-based distributions, replace apt commands with appropriate <code>yum</code> or <code>dnf</code> equivalents. 
The Prometheus configuration directory path is consistent across Linux distributions.</p> </div> <h2>Create the Automation Script</h2> <p>Two Bash scripts are required for complete Prometheus deployment:</p> <ol> <li><strong>Prometheus Installation Script:</strong> This script handles binary installation and filesystem setup</li> <li><strong>Prometheus Service Script:</strong> This script configures the systemd service and runtime parameters</li> </ol> <h3>Prometheus Installation Script</h3> <p>This script performs the initial Prometheus deployment with production-ready defaults:</p> <img src="/_astro/install-script.MWbPfdlj.jpg" alt="Screenshot of adding a new install script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Prometheus Installation Script</code>. You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit immediately on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update package list</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span></span> <span><span># Download Prometheus tarball</span></span> <span><span>wget</span><span> https://github.com/prometheus/prometheus/releases/download/v{{prom_version}}/prometheus-{{prom_version}}.linux-amd64.tar.gz</span></span> <span></span> <span><span># Extract the tarball</span></span> <span><span>tar</span><span> xvf</span><span> prometheus-{{prom_version}}.linux-amd64.tar.gz</span></span> <span></span> <span><span># Move Prometheus binaries</span></span> <span><span>sudo</span><span> mv</span><span> prometheus-{{prom_version}}.linux-amd64/prometheus</span><span> /usr/local/bin/</span></span> <span><span>sudo</span><span> mv</span><span> prometheus-{{prom_version}}.linux-amd64/promtool</span><span> /usr/local/bin/</span></span> <span></span> 
<span><span># Create Prometheus directories</span></span> <span><span>sudo</span><span> mkdir</span><span> -p</span><span> /etc/prometheus</span></span> <span><span>sudo</span><span> mkdir</span><span> -p</span><span> /var/lib/prometheus</span></span> <span></span> <span><span># Move config files</span></span> <span><span>sudo</span><span> mv</span><span> prometheus-{{prom_version}}.linux-amd64/prometheus.yml</span><span> /etc/prometheus/prometheus.yml</span></span> <span></span> <span><span># Show installed versions</span></span> <span><span>prometheus</span><span> --version</span></span> <span><span>promtool</span><span> --version</span></span> <span></span> <span><span># Create Prometheus user and group</span></span> <span><span>sudo</span><span> groupadd</span><span> --system</span><span> {{prom_group}}</span></span> <span><span>sudo</span><span> useradd</span><span> -s</span><span> /sbin/nologin</span><span> --system</span><span> -g</span><span> {{prom_group}}</span><span> {{prom_user}}</span></span> <span></span> <span><span># Set permissions</span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> {{prom_user}}:{{prom_group}}</span><span> /etc/prometheus</span><span> /var/lib/prometheus</span></span> <span><span>sudo</span><span> chmod</span><span> -R</span><span> 775</span><span> /etc/prometheus</span><span> /var/lib/prometheus</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what the <code>Prometheus Installation Script</code> does:</p> <ul> <li>Downloads specified Prometheus version</li> <li>Installs binaries to <code>/usr/local/bin</code></li> <li>Creates dedicated system user and group</li> <li>Sets secure directory permissions</li> </ul> <h3>Prometheus Service Script</h3> <p>This script configures Prometheus as a systemd service:</p> <img src="/_astro/configure-script.CZ8PZv_o.jpg" alt="Screenshot of configuring Prometheus" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New 
Script</strong></li> <li>Name: <code>Prometheus Service Script</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit immediately on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Create systemd service file</span></span> <span><span>sudo</span><span> bash</span><span> -c</span><span> "cat &gt; /etc/systemd/system/prometheus.service"</span><span> &lt;&lt;</span><span>EOL</span></span> <span><span>[Unit]</span></span> <span><span>Description=Prometheus</span></span> <span><span>Wants=network-online.target</span></span> <span><span>After=network-online.target</span></span> <span></span> <span><span>[Service]</span></span> <span><span>User={{prom_user}}</span></span> <span><span>Group={{prom_group}}</span></span> <span><span>Restart=always</span></span> <span><span>Type=simple</span></span> <span><span>ExecStart=/usr/local/bin/prometheus </span><span>\\</span></span> <span><span> --config.file=/etc/prometheus/prometheus.yml </span><span>\\</span></span> <span><span> --storage.tsdb.path=/var/lib/prometheus/ </span><span>\\</span></span> <span><span> --web.console.templates=/etc/prometheus/consoles </span><span>\\</span></span> <span><span> --web.console.libraries=/etc/prometheus/console_libraries </span><span>\\</span></span> <span><span> --web.listen-address=0.0.0.0:9090</span></span> <span></span> <span><span>[Install]</span></span> <span><span>WantedBy=multi-user.target</span></span> <span><span>EOL</span></span> <span></span> <span><span># Reload systemd daemon configs</span></span> <span><span>sudo</span><span> systemctl</span><span> daemon-reload</span></span> <span></span> <span><span># Start and enable Prometheus service</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> prometheus</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> prometheus</span></span> <span></span> 
<span><span># Show service status</span></span> <span><span>sudo</span><span> systemctl</span><span> status</span><span> prometheus</span><span> --no-pager</span></span> <span></span> <span><span>echo</span><span> "✅ Prometheus service created and started."</span></span></code><span></span><span></span></pre> <p>This is what the <code>Prometheus Service Script</code> does:</p> <ul> <li>Creates a systemd service unit with automatic restart</li> <li>Runs Prometheus under the dedicated user and group</li> <li>Uses the standard data storage location</li> <li>Binds the web interface on port 9090</li> </ul> <h2>Create a Variable Group</h2> <p>Now, before running the scripts, you need to define values for the placeholders <code>{{prom_version}}</code>, <code>{{prom_user}}</code>, and <code>{{prom_group}}</code> used in the scripts. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.C9lVosA4.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>prom_version</code>:</strong> Prometheus release version</li> <li><strong><code>prom_user</code>:</strong> System user for Prometheus</li> <li><strong><code>prom_group</code>:</strong> System group for Prometheus</li> </ul> <p>With the variables set up, proceed to run the scripts with CloudRay.</p> <h2>Running the Scripts to Install Prometheus with CloudRay</h2> <p>Now that everything is set up, you can use CloudRay to automate the 
installation of Prometheus.</p> <p>You can choose to run the scripts individually or execute them all at once using <a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>. Since there are multiple scripts, using CloudRay playlists will help automate the execution sequence and save time.</p> <p>Here are the steps to follow:</p> <ol> <li><strong>Navigate to “Script Playlists”:</strong> Click on the Scripts tab in the CloudRay interface</li> </ol> <img src="/_astro/script-playlist.DWH35jzv.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Click “Add Script Playlist”:</strong> This initiates the creation of a new playlist</li> <li><strong>Provide a Name:</strong> Give your playlist a unique name (For example “Automate Prometheus Installation”)</li> <li><strong>Add Scripts in Order:</strong> Select and add the scripts sequentially</li> </ol> <img src="/_astro/create-script-playlist.BjyEwWWi.jpg" alt="Screenshot of creating a script playlist" /> <ol> <li><strong>Save the Playlist:</strong> Click “create playlist” to store your new playlist.</li> </ol> <p>Once your script playlist is created, proceed with execution:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Provide the necessary details:</li> </ol> <img src="/_astro/run-playlist-script.CDRWNw7F.jpg" alt="Screenshot of creating a new runlog" /> <ul> <li>Server: Select the server where Prometheus will be installed (For example “prom-server”)</li> <li>Script Playlist: Choose the playlist you created (For example “Automate Prometheus Installation”)</li> <li>Variable Group: Select the variable group you set up earlier</li> </ul> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution</li> </ol> <img
src="/_astro/result-automation-scripts.qmpzlLNs.jpg" alt="Screenshot of the results of all the scripts from the script playlist" /> <p>Once the scripts run successfully, Prometheus is fully installed. You can now open the Prometheus web UI at your server’s IP address on port <code>9090</code> (server-IP:9090).</p> <img src="/_astro/browser-display.BpHVMVej.jpg" alt="Screenshot of the successful installation of Prometheus" /> <p>This shows that Prometheus is running successfully.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>How to Automate Grafana Installation with Nginxhttps://cloudray.io/articles/install-grafanahttps://cloudray.io/articles/install-grafanaStep-by-step guide to automate Grafana deployments with Nginx reverse proxy using CloudRay’s scripting toolsMon, 12 May 2025 00:00:00 GMT<p>This guide demonstrates how to automate Grafana deployments using <a href="https://app.cloudray.io/">CloudRay’s script automation</a>.
The deployment addresses these core requirements:</p> <ul> <li>Secure default configuration with disabled anonymous access</li> <li>Nginx reverse proxy configuration</li> <li>Service isolation with proper systemd management</li> </ul> <p>The approach prevents common security issues by disabling anonymous access by default, using a reverse proxy as an additional security layer, and implementing proper service management.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#install-grafana-and-nginx">Install Grafana and Nginx</a></li> <li><a href="#configure-grafana">Configure Grafana</a></li> </ul> </li> <li><a href="#running-the-scripts-to-install-grafana-with-cloudray">Running the Scripts to Install Grafana with CloudRay</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before starting automation, ensure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your servers.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can adapt the scripts to fit your specific installation needs and environment.
Additionally, if you’re using a different version or a different distribution, adjust the commands accordingly.</p> </div> <h2>Create the Automation Script</h2> <p>Two Bash scripts are required for deployment:</p> <ol> <li><strong>Install Grafana and Nginx:</strong> This script handles package installation and base configuration</li> <li><strong>Configure Grafana:</strong> This script applies security settings and service configuration</li> </ol> <h3>Install Grafana and Nginx</h3> <p>This script performs the initial server setup including package installation and reverse proxy configuration.</p> <img src="/_astro/install-script.xIZxw0cr.jpg" alt="Screenshot of adding a new install script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Install Grafana and Nginx</code></li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit immediately if a command exits with a non-zero status</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update package list</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span></span> <span><span># Install required dependencies</span></span> <span><span>sudo</span><span> apt-get</span><span> install</span><span> -y</span><span> gnupg2</span><span> curl</span><span> software-properties-common</span></span> <span></span> <span><span># Add Grafana GPG key</span></span> <span><span>curl</span><span> https://packages.grafana.com/gpg.key</span><span> |</span><span> sudo</span><span> apt-key</span><span> add</span><span> -</span></span> <span></span> <span><span># Add Grafana repository</span></span> <span><span>printf</span><span> '\n'</span><span> |</span><span> sudo</span><span> add-apt-repository</span><span> "deb https://packages.grafana.com/oss/deb stable main"</span></span> <span></span> <span><span># Update package list
again</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span></span> <span><span># Install Grafana</span></span> <span><span>sudo</span><span> apt</span><span> -y</span><span> install</span><span> grafana</span></span> <span></span> <span><span># Start and enable Grafana service</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> grafana-server</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> grafana-server</span></span> <span><span>sudo</span><span> systemctl</span><span> status</span><span> grafana-server</span><span> --no-pager</span></span> <span></span> <span><span># Install Nginx</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> nginx</span></span> <span></span> <span><span># Start and enable Nginx service</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> nginx</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> nginx</span></span> <span><span>sudo</span><span> systemctl</span><span> status</span><span> nginx</span><span> --no-pager</span></span> <span></span> <span><span># Remove default Nginx site</span></span> <span><span>sudo</span><span> unlink</span><span> /etc/nginx/sites-enabled/default</span></span> <span></span> <span><span># Create Grafana Nginx config</span></span> <span><span>sudo</span><span> bash</span><span> -c</span><span> "cat &gt; /etc/nginx/sites-available/grafana.conf"</span><span> &lt;&lt;</span><span>EOL</span></span> <span><span>server {</span></span> <span><span> listen 80;</span></span> <span><span> location / {</span></span> <span><span> proxy_pass http://localhost:3000;</span></span> <span><span> }</span></span> <span><span>}</span></span> <span><span>EOL</span></span> <span></span> <span><span># Enable the Grafana Nginx site</span></span> <span><span>sudo</span><span> ln</span><span> -s</span><span> 
/etc/nginx/sites-available/grafana.conf</span><span> /etc/nginx/sites-enabled/grafana.conf</span></span> <span></span> <span><span># Test Nginx configuration</span></span> <span><span>sudo</span><span> service</span><span> nginx</span><span> configtest</span></span> <span></span> <span><span># Restart Nginx</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> nginx</span></span> <span></span> <span><span>echo</span><span> "✅ Grafana and Nginx installation completed."</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what each command in the <code>Install Grafana and Nginx</code> script does:</p> <ul> <li>Refreshes the package lists</li> <li>Adds the official Grafana repository</li> <li>Installs Grafana and Nginx</li> <li>Configures Nginx as a reverse proxy on port 80</li> <li>Validates the Nginx configuration before restarting</li> </ul> <h3>Configure Grafana</h3> <p>This script applies security-focused configuration to the Grafana installation.</p> <img src="/_astro/configure-script.Bc-4JwgD.jpg" alt="CloudRay script configuration for securing Grafana" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Configure Grafana</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit immediately if a command exits with a non-zero status</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update grafana.ini settings</span></span> <span><span>sudo</span><span> sed</span><span> -i</span><span> 's/;allow_sign_up = true/allow_sign_up = false/'</span><span> /etc/grafana/grafana.ini</span></span> <span><span>sudo</span><span> sed</span><span> -i</span><span> 's/;enabled = false/enabled = false/'</span><span> /etc/grafana/grafana.ini</span></span> <span></span> <span><span># Restart Grafana to apply changes</span></span> <span><span>sudo</span><span>
systemctl</span><span> restart</span><span> grafana-server</span></span> <span><span>sudo</span><span> systemctl</span><span> status</span><span> grafana-server</span><span> --no-pager</span></span> <span></span> <span><span>echo</span><span> "✅ Grafana configuration updated successfully."</span></span></code><span></span><span></span></pre> <p>This is what the <code>Configure Grafana</code> script does:</p> <ul> <li>Disables anonymous access by default</li> <li>Restricts sign-ups (to be enabled per requirement)</li> <li>Ensures changes are applied with a service restart</li> </ul> <h2>Running the Scripts to Install Grafana with CloudRay</h2> <p>Now that everything is set up, you can use CloudRay to automate the installation of Grafana.</p> <p>You can choose to run the scripts individually or execute them all at once using <a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>. Since there are multiple scripts, using CloudRay playlists will help automate the execution sequence and save time.</p> <p>Here are the steps to follow:</p> <ol> <li><strong>Navigate to “Script Playlists”:</strong> Click on the Scripts tab in the CloudRay interface</li> </ol> <img src="/_astro/script-playlist.CVHog8Rg.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Click “Add Script Playlist”:</strong> This initiates the creation of a new playlist</li> <li><strong>Provide a Name:</strong> Give your playlist a unique name (For example “Automate Grafana Installation”)</li> <li><strong>Add Scripts in Order:</strong> Select and add the scripts sequentially</li> </ol> <img src="/_astro/create-script-playlist.CA7nrblR.jpg" alt="Screenshot of creating a script playlist" /> <ol> <li><strong>Save the Playlist:</strong> Click “create playlist” to store your new playlist.</li> </ol> <p>Once your script playlist is created, proceed with execution:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs
section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Provide the necessary details:</li> </ol> <img src="/_astro/run-playlist-script.CS9JWh5t.jpg" alt="Screenshot of creating a new runlog" /> <ul> <li>Server: Select the server where Grafana will be installed</li> <li>Script Playlist: Choose the playlist you created (For example “Automate Grafana Installation”)</li> <li>Variable Group: None needed here, as this guide’s scripts don’t use variables</li> </ul> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution</li> </ol> <img src="/_astro/result-automation-scripts.CAGBqw2U.jpg" alt="Screenshot of the results of all the scripts from the script playlist" /> <p>Once the scripts run successfully, Grafana is fully deployed. You can now reach Grafana through the Nginx reverse proxy (http://SERVER_IP) or directly on port <code>3000</code> (http://SERVER_IP:3000).</p> <img src="/_astro/browser-display.D05CYjmc.jpg" alt="Screenshot of the output on the browser" /> <p>Grafana is now deployed and managed with CloudRay. Log in with the default username <strong>admin</strong> and password <strong>admin</strong>, then change these default credentials immediately as a best practice.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>How to Configure and Setup AWS CLI and Key Pairshttps://cloudray.io/articles/aws-cli-setup-guidehttps://cloudray.io/articles/aws-cli-setup-guideLearn how to configure the AWS CLI and securely manage EC2 key pairs for infrastructure automationThu, 01 May 2025 00:00:00 GMT<p>Automating AWS infrastructure requires proper setup of the AWS Command Line Interface (CLI), secure key management, and an understanding of core components like AMIs.
This guide walks through configuring the AWS CLI and generating and managing EC2 key pairs.</p> <h2>Contents</h2> <ul> <li><a href="#installing-the-aws-cli">Installing the AWS CLI</a> <ul> <li><a href="#linuxmacos-installation">Linux/macOS installation</a></li> <li><a href="#windows-installation">Windows installation</a></li> </ul> </li> <li><a href="#configuring-aws-cli">Configuring AWS CLI</a></li> <li><a href="#managing-ec2-key-pairs">Managing EC2 Key pairs</a></li> </ul> <h2>Installing the AWS CLI</h2> <p>The AWS CLI is the primary tool for interacting with AWS services programmatically. Below are installation instructions for different operating systems.</p> <h3>Linux/macOS installation</h3> <ol> <li>Install unzip (if not installed) and download the latest AWS CLI version:</li> </ol> <pre><code><span><span>sudo</span><span> apt</span><span> install</span><span> unzip</span></span> <span><span>curl</span><span> "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"</span><span> -o</span><span> "awscliv2.zip"</span></span></code><span></span><span></span></pre> <ol> <li>Extract the archive:</li> </ol> <pre><code><span><span>unzip</span><span> awscliv2.zip</span></span></code><span></span><span></span></pre> <ol> <li>Run the installer with sudo:</li> </ol> <pre><code><span><span>sudo</span><span> ./aws/install</span></span></code><span></span><span></span></pre> <ol> <li>Verify the installation:</li> </ol> <pre><code><span><span>aws</span><span> --version</span></span></code><span></span><span></span></pre> <p>Your expected output should look like this:</p> <pre><code><span><span>aws-cli/2.27.4</span><span> Python/3.13.2</span><span> Linux/6.11.0-9-generic</span><span> exe/x86_64.ubuntu.24</span></span></code><span></span><span></span></pre> <h3>Windows installation</h3> <ol> <li>Download the AWS CLI MSI installer.</li> <li>Run the installer and follow the prompts.</li> <li>Open PowerShell and verify installation:</li> </ol>
<pre><code><span><span>aws</span><span> --version</span></span></code><span></span><span></span></pre> <p>Alternatively, you can use a preconfigured <a href="https://cloudray.io/docs/scripts">CloudRay automation script</a> to install and configure the AWS CLI across multiple environments for consistency.</p> <h2>Configuring AWS CLI</h2> <p>Before using the AWS CLI, you need to configure authentication credentials.</p> <ol> <li>Log in to the <a href="https://us-east-1.console.aws.amazon.com/iam">AWS IAM Console</a></li> <li>Navigate to Users → Select your IAM user → Security credentials.</li> </ol> <img src="/_astro/access-key.3yF0GR6J.jpg" alt="Screenshot of Access key location on AWS console" /> <ol> <li> <p>Under Access keys, click Create access key.</p> </li> <li> <p>Choose CLI (Command Line Interface) as the use case and click Next.</p> </li> </ol> <img src="/_astro/use-case.BMy0yRes.jpg" alt="Screenshot of use case selection" /> <ol> <li>You can give your access key a tag (optional) and click “Create access key”</li> </ol> <img src="/_astro/create-access-key.BmpyHEqW.jpg" alt="Screenshot to create the access key" /> <ol> <li>Copy the access key ID and secret access key from the AWS console</li> </ol> <img src="/_astro/get-access-key.B4k9_hyb.jpg" alt="Screenshot to copy and get access key" /> <p>Additionally, you can download the .csv file or copy the keys manually.</p> <ol> <li>Run the configuration wizard:</li> </ol> <pre><code><span><span>aws</span><span> configure</span></span></code><span></span><span></span></pre> <p>You will be prompted for:</p> <ul> <li><strong>AWS Access Key ID</strong> - A unique identifier for your IAM user</li> <li><strong>AWS Secret Access Key</strong> - A secret password tied to your access key</li> <li><strong>Default region name</strong> - The AWS region where resources will be created (e.g., us-east-1)</li> <li><strong>Default output format</strong> - How responses are displayed (json, text, or table)</li> </ul> <img
src="/_astro/get-access-key.B4k9_hyb.jpg" alt="Screenshot of the terminal representation of AWS configuration" /> <ol> <li>Verify that the CLI can authenticate with AWS:</li> </ol> <pre><code><span><span>aws</span><span> sts</span><span> get-caller-identity</span></span></code><span></span><span></span></pre> <img src="/_astro/verify-aws-cli.zwJ5MkYI.jpg" alt="Screenshot of the terminal representation of AWS configuration" /> <p>A successful response includes your AWS account ID and IAM user ARN.</p> <h2>Managing EC2 Key pairs</h2> <p>SSH key pairs are essential for secure access to your EC2 instances. This section covers creating, managing, and using key pairs with AWS CLI. Here is how to create a new key pair using AWS CLI:</p> <ol> <li>First, generate a key pair using AWS CLI:</li> </ol> <pre><code><span><span>aws</span><span> ec2</span><span> create-key-pair</span><span> \</span></span> <span><span> --key-name</span><span> "my-production-key"</span><span> \</span></span> <span><span> --key-type</span><span> ed25519</span><span> \</span></span> <span><span> --query</span><span> 'KeyMaterial'</span><span> \</span></span> <span><span> --output</span><span> text</span><span> &gt;</span><span> my-production-key.pem</span></span></code><span></span><span></span></pre> <ol> <li>Set proper file permissions:</li> </ol> <pre><code><span><span>chmod</span><span> 400</span><span> my-production-key.pem</span></span></code><span></span><span></span></pre> <p>Here is what each key parameter represents:</p> <ul> <li><code>--key-name</code>: Unique identifier for your key</li> <li><code>--key-type</code>: Choose between <code>rsa</code> (default) or the more secure <code>ed25519</code></li> <li><code>KeyMaterial</code>: The private key content (saved to .pem file)</li> </ul> <p>Additionally, here is how to list all available key pairs in your region:</p> <pre><code><span><span>aws</span><span> ec2</span><span> describe-key-pairs</span><span> \</span></span> <span><span>
--query</span><span> 'KeyPairs[*].KeyName'</span><span> \</span></span> <span><span> --output</span><span> table</span></span></code><span></span><span></span></pre> <img src="/_astro/verify-key-output.BpdF-VUZ.jpg" alt="Screenshot of the terminal output listing key pairs" /> <p>This shows the keys present in your AWS user account.</p> <p>Finally, here is how to delete a key pair:</p> <ol> <li>First, verify no instances are using the key:</li> </ol> <pre><code><span><span>aws</span><span> ec2</span><span> describe-instances</span><span> \</span></span> <span><span> --filters</span><span> "Name=key-name,Values=my-production-key"</span><span> \</span></span> <span><span> --query</span><span> "Reservations[].Instances[].InstanceId"</span></span></code><span></span><span></span></pre> <ol> <li>Delete the key pair:</li> </ol> <pre><code><span><span>aws</span><span> ec2</span><span> delete-key-pair</span><span> --key-name</span><span> "my-production-key"</span></span></code><span></span><span></span></pre> <div> <p>NOTE</p> <p>You can locate Amazon Machine Images (AMIs) in the <a href="https://console.aws.amazon.com/ec2/">EC2 Console</a> under AMIs in the left navigation. Use the search filters to find official AWS-provided images or community AMIs. For production environments, always verify AMI sources and use the most recent stable versions.</p> </div> <p>While manual AWS CLI configuration works for individual setups, managing infrastructure at scale requires automation.
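As one hedged sketch of what such automation can look like (the helper function and key name below are examples, not part of this guide's toolchain, and nothing here is executed against AWS — the function only prints the create/delete commands covered above so they can be reviewed first):

```shell
# Hypothetical helper: print the AWS CLI commands that would rotate an
# EC2 key pair, reusing the create-key-pair/delete-key-pair calls shown
# earlier. Nothing talks to AWS; review the output before running it.
rotate_key_cmds() {
  key="$1"
  echo "aws ec2 create-key-pair --key-name ${key}-new --key-type ed25519 --query KeyMaterial --output text > ${key}-new.pem"
  echo "chmod 400 ${key}-new.pem"
  echo "aws ec2 delete-key-pair --key-name ${key}"
}

rotate_key_cmds my-production-key
```

Once the printed commands look right (and the instance check from the delete steps has passed), they can be run manually or placed in a scheduled script.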
<a href="https://app.cloudray.io/">CloudRay</a> provides centralised management for Bash scripts, enabling teams to securely automate and schedule AWS operations like instance provisioning, key rotation, and infrastructure monitoring.</p> <p>For a practical implementation of these concepts, see our guide on Automating AWS Infrastructure with Bash and CLI, which covers EC2 lifecycle management, automated backups, and monitoring solutions.</p>Automate the Installation of FTP Serverhttps://cloudray.io/articles/install-ftp-serverhttps://cloudray.io/articles/install-ftp-serverLearn how to deploy a secure vsftpd server with automated user provisioning and TLS configuration using CloudRayWed, 30 Apr 2025 00:00:00 GMT<p>This guide covers the automated deployment of vsftpd (Very Secure FTP Daemon) using <a href="https://app.cloudray.io/">CloudRay</a>’s scripting capabilities. This implementation ensures secure, consistent file transfer services with built-in user isolation and SSL encryption.</p> <p>CloudRay’s Bash scripting engine handles the entire deployment lifecycle from package installation to firewall configuration while maintaining compliance with security best practices.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#ftp-server-installation-script">FTP Server Installation Script</a></li> <li><a href="#ftp-configuration-script">FTP Configuration Script</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-scripts-with-cloudray">Running the Scripts with CloudRay</a> <ul> <li><a href="#running-ftp-server-installation-script">Running FTP Server Installation Script</a></li> <li><a href="#running-the-ftp-configuration-script">Running the FTP Configuration Script</a></li> </ul> </li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started with your
automation, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your servers.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can adapt the scripts to fit your specific installation needs and environment. Additionally, if you’re using a different version or a different distribution, adjust the commands accordingly.</p> </div> <h2>Create the Automation Script</h2> <p>To automate the installation of an FTP server, you will need two Bash scripts:</p> <ol> <li><strong>FTP Server Installation:</strong> This script handles the core vsftpd installation and user provisioning</li> <li><strong>FTP Configuration Script:</strong> This script implements security hardening and shared storage</li> </ol> <p>Let’s begin with the installation of the FTP server.</p> <h3>FTP Server Installation Script</h3> <p>To create the FTP Server Installation Script, you need to follow these steps:</p> <img src="/_astro/install-script.-j_axVYb.jpg" alt="Screenshot of adding a new install script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>FTP Server Installation Script</code>.
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span>echo</span><span> "🛠️ Starting FTP server base installation..."</span></span> <span></span> <span><span># Update system and install vsftpd</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> vsftpd</span><span> lftp</span><span> -y</span><span> # lftp included for testing</span></span> <span></span> <span><span># Start and enable service</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> vsftpd</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> vsftpd</span></span> <span></span> <span><span># Configure SSL (only if not already configured)</span></span> <span><span>if</span><span> [ </span><span>!</span><span> -f</span><span> /etc/ssl/private/vsftpd.pem ]; </span><span>then</span></span> <span><span> echo</span><span> "🔐 Generating SSL certificate..."</span></span> <span><span> sudo</span><span> mkdir</span><span> -p</span><span> /etc/ssl/private</span></span> <span><span> sudo</span><span> openssl</span><span> req</span><span> -x509</span><span> -nodes</span><span> -days</span><span> 3650</span><span> -newkey</span><span> rsa:2048</span><span> \</span></span> <span><span> -keyout</span><span> /etc/ssl/private/vsftpd.pem</span><span> \</span></span> <span><span> -out</span><span> /etc/ssl/private/vsftpd.pem</span><span> \</span></span> <span><span> -subj</span><span> "/C=US/ST=California/L=San Francisco/O=My Company/OU=IT/CN=ftp.server.com"</span></span> <span><span> sudo</span><span> chmod</span><span> 600</span><span> /etc/ssl/private/vsftpd.pem</span></span> <span><span> echo</span><span> "✅ SSL certificate created"</span></span> 
<span><span>else</span></span> <span><span> echo</span><span> "ℹ️ SSL certificate already exists - skipping generation"</span></span> <span><span>fi</span></span> <span></span> <span><span># Backup original config</span></span> <span><span>sudo</span><span> cp</span><span> /etc/vsftpd.conf</span><span> /etc/vsftpd.conf.bak</span></span> <span></span> <span><span># Configure vsftpd with user whitelisting</span></span> <span><span>sudo</span><span> bash</span><span> -c</span><span> 'cat &gt; /etc/vsftpd.conf &lt;&lt;EOF</span></span> <span><span>listen=NO</span></span> <span><span>listen_ipv6=YES</span></span> <span><span>anonymous_enable=NO</span></span> <span><span>local_enable=YES</span></span> <span><span>write_enable=YES</span></span> <span><span>dirmessage_enable=YES</span></span> <span><span>use_localtime=YES</span></span> <span><span>xferlog_enable=YES</span></span> <span><span>connect_from_port_20=YES</span></span> <span><span>chroot_local_user=YES</span></span> <span><span>secure_chroot_dir=/var/run/vsftpd/empty</span></span> <span><span>pam_service_name=vsftpd</span></span> <span><span>rsa_cert_file=/etc/ssl/private/vsftpd.pem</span></span> <span><span>rsa_private_key_file=/etc/ssl/private/vsftpd.pem</span></span> <span><span>ssl_enable=YES</span></span> <span><span>user_sub_token=\$USER</span></span> <span><span>local_root=/home/\$USER/ftp</span></span> <span><span>pasv_min_port=30000</span></span> <span><span>pasv_max_port=31000</span></span> <span><span>userlist_enable=YES</span></span> <span><span>userlist_file=/etc/vsftpd.userlist</span></span> <span><span>userlist_deny=NO</span></span> <span><span>allow_anon_ssl=NO</span></span> <span><span>force_local_data_ssl=YES</span></span> <span><span>force_local_logins_ssl=YES</span></span> <span><span>EOF'</span></span> <span></span> <span><span># Restart service</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> vsftpd</span></span> <span></span> <span><span># Configure 
firewall</span></span> <span><span>sudo</span><span> ufw</span><span> allow</span><span> ssh</span></span> <span><span>sudo</span><span> ufw</span><span> allow</span><span> 20,21,990/tcp</span></span> <span><span>sudo</span><span> ufw</span><span> allow</span><span> 30000:31000/tcp</span></span> <span><span>echo</span><span> "y"</span><span> |</span><span> sudo</span><span> ufw</span><span> enable</span></span> <span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what the <code>FTP Server Installation Script</code> does:</p> <ul> <li>Refreshes the package lists</li> <li>Installs vsftpd with TLS/SSL support</li> <li>Generates SSL certificates automatically</li> <li>Enforces FTPS (FTP over SSL/TLS)</li> <li>Opens the required firewall ports, including the passive-mode range</li> </ul> <h3>FTP Configuration Script</h3> <p>Next, you need to create users and grant them the necessary permissions. To do so, follow steps similar to the above:</p> <img src="/_astro/configure-script.CIvbeRYh.jpg" alt="Screenshot of the FTP configuration script" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>FTP Configuration Script</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Create FTP user</span></span> <span><span>sudo</span><span> adduser</span><span> --disabled-password</span><span> --gecos</span><span> ""</span><span> "{{ftp_user}}"</span></span> <span><span>echo</span><span> "{{ftp_user}}:{{ftp_pass}}"</span><span> |</span><span> sudo</span><span> chpasswd</span></span> <span></span> <span><span># Setup directory structure</span></span> <span><span>sudo</span><span> mkdir</span><span> -p</span><span> "/home/{{ftp_user}}/ftp/upload"</span></span> <span><span>sudo</span><span> chown</span><span> nobody:nogroup</span><span> "/home/{{ftp_user}}/ftp"</span></span> <span><span>sudo</span><span>
chmod</span><span> a-w</span><span> "/home/{{ftp_user}}/ftp"</span></span> <span><span>sudo</span><span> chown</span><span> "{{ftp_user}}:{{ftp_user}}"</span><span> "/home/{{ftp_user}}/ftp/upload"</span></span> <span></span> <span><span># Update userlist</span></span> <span><span>sudo</span><span> touch</span><span> /etc/vsftpd.userlist</span></span> <span><span>grep</span><span> -qxF</span><span> "{{ftp_user}}"</span><span> /etc/vsftpd.userlist</span><span> ||</span><span> echo</span><span> "{{ftp_user}}"</span><span> |</span><span> sudo</span><span> tee</span><span> -a</span><span> /etc/vsftpd.userlist</span></span> <span></span> <span><span>echo</span><span> "FTP configuration for user '{{ftp_user}}' completed successfully!"</span></span></code><span></span><span></span></pre> <p>This is what the <code>FTP Configuration Script</code> does:</p> <ul> <li>Creates chroot-jailed users from variables</li> <li>Sets the user’s password with <code>chpasswd</code></li> <li>Prepares directory structure with proper permissions</li> <li>Implements user whitelisting</li> </ul> <h2>Create a Variable Group</h2> <p>Now, before running the scripts, you need to define values for the placeholders <code>{{ftp_user}}</code> and <code>{{ftp_pass}}</code> used in the scripts. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>.
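As a rough local illustration of what that rendering produces (CloudRay performs the real Liquid rendering; the sed call below only mimics a simple placeholder swap, and the value alice is an example):

```shell
# Simulate how a {{ftp_user}} placeholder is filled in from a variable
# group value. Illustration only; CloudRay renders real Liquid templates.
template='sudo adduser --disabled-password --gecos "" "{{ftp_user}}"'

# Suppose the variable group defines ftp_user=alice:
rendered=$(printf '%s' "$template" | sed 's/{{ftp_user}}/alice/g')
printf '%s\n' "$rendered"
# -> sudo adduser --disabled-password --gecos "" "alice"
```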
This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.D-98Alu_.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>ftp_user</code>:</strong> The FTP username</li> <li><strong><code>ftp_pass</code>:</strong> The FTP user’s password</li> </ul> <p>Now that the variables are set up, you can run the scripts with CloudRay.</p> <h2>Running the Scripts with CloudRay</h2> <p>The FTP server deployment follows a two-phase approach that separates infrastructure setup from user management. This modular design allows for efficient scaling and maintenance. You run the <code>FTP Server Installation</code> script once per server to install the FTP server. 
Then you run the <code>FTP Configuration Script</code> separately to create and configure each user on the FTP server.</p> <h3>Running the FTP Server Installation Script</h3> <p>Follow these steps:</p> <ol> <li>Navigate to <strong>Runlogs</strong> in your CloudRay project</li> <li>Click <strong>Run a Script</strong></li> </ol> <img src="/_astro/installation-runlog.CHlXoPCe.jpg" alt="Screenshot of creating the setup runlog" /> <ol> <li>Configure the runlog: <ul> <li><strong>Server:</strong> Select your target server</li> <li><strong>Script:</strong> Choose “FTP Server Installation”</li> <li><strong>Variable Group:</strong> Select the variable group you created earlier</li> </ul> </li> <li>Run the script: Click on “Run” to execute the script on your server</li> </ol> <img src="/_astro/result-install-runlog.BW1IFn6t.jpg" alt="CloudRay script execution log for FTP server installation" /> <p>CloudRay will connect to your server, run the <code>FTP Server Installation</code> script, and show you the live output as the script executes. This one-time setup installs all required system packages and services. 
You only need to run this script once when setting up a new server to install FTP.</p> <h3>Running the FTP Configuration Script</h3> <p>To create and configure a user on the FTP server:</p> <ol> <li>Navigate to Runlogs &gt; Run a Script</li> </ol> <img src="/_astro/configure-runlog.rVoBHg00.jpg" alt="Screenshot of creating the configure runlog" /> <ol> <li>Configure the runlog: <ul> <li><strong>Server:</strong> Select the same server where you ran the setup</li> <li><strong>Script:</strong> Choose “FTP Configuration Script”</li> <li><strong>Variable Group:</strong> Select your predefined variables (or create new ones for this user)</li> </ul> </li> <li>Click Run Now to execute the script</li> </ol> <img src="/_astro/result-configure-runlog.D5XotHXk.jpg" alt="Screenshot of the result of the configure script" /> <p>Once the script runs successfully, your FTP server user will be created and configured. You can test connectivity using <a href="https://filezilla-project.org/">FileZilla</a>.</p> <img src="/_astro/browser-display.SB_AKf2E.jpg" alt="Screenshot of the successful connection on FileZilla" /> <p>This shows that the FTP server is working correctly.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>How to Automate WordPress Multi-Site Backupshttps://cloudray.io/articles/automate-wordpress-multi-site-backupshttps://cloudray.io/articles/automate-wordpress-multi-site-backupsAutomate WordPress multi-site backups with CloudRay using scheduled scripts for databases and files, retention policies, and secure, reliable protection.Tue, 29 Apr 2025 00:00:00 GMT<p>This guide demonstrates how to implement an automated WordPress backup solution for enterprise-grade protection using CloudRay. 
You will learn how to create isolated backups for WordPress sites (files and database), implement retention policies to manage storage usage, and schedule automated backups using <a href="https://cloudray.io/docs/schedules">CloudRay’s Schedules</a> feature.</p> <div> <p>IMPORTANT</p> <p>Before implementing these backup solutions, ensure you have multiple WordPress sites deployed with isolated users and databases. If you haven’t set this up yet, follow our comprehensive guide on <a href="/articles/deploy-multi-wordpress-sites-on-one-server">Deploy Multiple WordPress Sites on One Server</a>.</p> </div> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-backup-scripts">Create Backup Scripts</a> <ul> <li><a href="#backup-wordpress-sites-database-script">Backup WordPress Sites Database Script</a></li> <li><a href="#backup-wordpress-sites-files-script">Backup WordPress Sites Files Script</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#how-to-schedule-automated-wordpress-backups-for-databases">How to Schedule Automated WordPress Backups for Databases</a></li> <li><a href="#how-to-schedule-automated-wordpress-backups-for-filesystem">How to Schedule Automated WordPress Backups for Filesystem</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started with your automation, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can adapt the scripts to fit your specific installation needs and environment. 
Additionally, if you’re using a different version or a different distribution, adjust the commands accordingly.</p> </div> <h2>Create Backup Scripts</h2> <p>To fully protect your WordPress multi-site installation, you can implement two specialised backup solutions. The <code>Backup WordPress Sites Database Script</code> securely exports all database content with transaction consistency, while the <code>Backup WordPress Sites Files Script</code> preserves your complete file structure including themes, plugins, and uploads.</p> <p>To automate backups, you will need two Bash scripts:</p> <ul> <li><strong>Backup WordPress Sites Database Script:</strong> This script creates compressed, transaction-safe exports of each WordPress database while verifying data integrity and maintaining proper user isolation</li> <li><strong>Backup WordPress Sites Files Script:</strong> This script archives all critical WordPress files while excluding temporary data, with built-in verification to ensure backup reliability</li> </ul> <p>These WordPress backup scripts solve critical challenges for multi-site administrators by providing:</p> <ul> <li>Scheduled WordPress backups that run without manual intervention</li> <li>Secure WordPress backups with integrity verification</li> <li>A complete WordPress backup solution covering both database and files</li> </ul> <h3>Backup WordPress Sites Database Script</h3> <p>To create the automated database backup script that will protect your WordPress sites’ critical data, follow these steps in CloudRay:</p> <img src="/_astro/db-backup-script.pgd4bc-Y.jpg" alt="Screenshot of adding a new database backup script" /> <ol> <li>Go to Scripts → New Script</li> <li>Name it <code>Backup WordPress Sites Database Script</code></li> <li>Add the following script:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> 
<span><span># Create backup directory if not exists</span></span> <span><span>sudo</span><span> mkdir</span><span> -p</span><span> "{{backup_dir}}"</span><span> ||</span><span> { </span><span>echo</span><span> "Failed to create backup directory"</span><span>; </span><span>exit</span><span> 1</span><span>; }</span></span> <span></span> <span><span>{</span></span> <span><span> echo</span><span> "=== Starting Database Backup at $(</span><span>date</span><span>) ==="</span></span> <span><span> echo</span><span> "Host: $(</span><span>hostname</span><span>)"</span></span> <span><span> echo</span><span> "MySQL Version: $(</span><span>mysql</span><span> --version</span><span>)"</span></span> <span></span> <span><span> # Verify MySQL connectivity</span></span> <span><span> if</span><span> !</span><span> mysql</span><span> -u</span><span> "{{db_user1}}"</span><span> -p</span><span>"{{db_pass1}}"</span><span> -e</span><span> "SHOW DATABASES;"</span><span> &gt;</span><span>/dev/null</span><span> 2&gt;&amp;1</span><span>; </span><span>then</span></span> <span><span> echo</span><span> "ERROR: Cannot connect to MySQL server"</span></span> <span><span> exit</span><span> 1</span></span> <span><span> fi</span></span> <span></span> <span><span> # Backup first WordPress database</span></span> <span><span> echo</span><span> "Backing up database {{db1}} to {{backup_dir}}/{{db1}}_db_{{timestamp}}.sql.gz"</span></span> <span><span> if</span><span> mysqldump</span><span> --single-transaction</span><span> --quick</span><span> \</span></span> <span><span> -u</span><span> "{{db_user1}}"</span><span> -p</span><span>"{{db_pass1}}"</span><span> "{{db1}}"</span><span> \</span></span> <span><span> |</span><span> gzip</span><span> &gt;</span><span> "{{backup_dir}}/{{db1}}_db_{{timestamp}}.sql.gz"</span><span>; </span><span>then</span></span> <span></span> <span><span> if</span><span> gzip</span><span> -t</span><span> "{{backup_dir}}/{{db1}}_db_{{timestamp}}.sql.gz"</span><span>; 
</span><span>then</span></span> <span><span> echo</span><span> "SUCCESS: {{db1}} backup (Size: $(</span><span>du</span><span> -h</span><span> "{{backup_dir}}/{{db1}}_db_{{timestamp}}.sql.gz" </span><span>|</span><span> cut</span><span> -f1</span><span>))"</span></span> <span><span> zgrep</span><span> -q</span><span> "wp_options"</span><span> "{{backup_dir}}/{{db1}}_db_{{timestamp}}.sql.gz"</span><span> ||</span><span> echo</span><span> "WARNING: Missing WordPress tables in {{db1}} backup"</span></span> <span><span> else</span></span> <span><span> echo</span><span> "ERROR: Backup verification failed for {{db1}}"</span></span> <span><span> sudo</span><span> rm</span><span> -f</span><span> "{{backup_dir}}/{{db1}}_db_{{timestamp}}.sql.gz"</span></span> <span><span> fi</span></span> <span><span> else</span></span> <span><span> echo</span><span> "ERROR: Database backup failed for {{db1}}"</span></span> <span><span> fi</span></span> <span></span> <span><span> # Apply retention policy</span></span> <span><span> echo</span><span> "Cleaning up backups older than {{retention_days}} days..."</span></span> <span><span> find</span><span> "{{backup_dir}}"</span><span> -name</span><span> "*_db_*.sql.gz"</span><span> -mtime</span><span> +{{retention_days}}</span><span> -delete</span></span> <span></span> <span><span> echo</span><span> "=== Database Backup Completed at $(</span><span>date</span><span>) ==="</span></span> <span><span> echo</span><span> "Remaining backups:"</span></span> <span><span> ls</span><span> -lh</span><span> "{{backup_dir}}"/</span><span>*</span><span>_db_</span><span>*</span><span>.sql.gz</span><span> 2&gt;</span><span>/dev/null</span><span> ||</span><span> echo</span><span> "No database backups found"</span></span> <span><span> echo</span><span> "Disk usage: $(</span><span>df</span><span> -h</span><span> {{backup_dir}})"</span></span> <span><span>} </span><span>|</span><span> tee</span><span> -a</span><span> 
"{{db_log_file}}"</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what the <code>Backup WordPress Sites Database Script</code> does:</p> <ul> <li>Creates transaction-consistent backups using <code>mysqldump --single-transaction</code> to avoid database locking during backups</li> <li>Compresses database dumps with <code>gzip</code> to minimise storage usage while maintaining data integrity</li> <li>Verifies backup completeness by checking for essential WordPress tables like wp_options</li> <li>Implements automatic retention by deleting backups older than the specified number of days</li> <li>Provides detailed logging with timestamps, success/failure status, and backup sizes</li> <li>Validates MySQL connectivity before attempting backups to prevent partial failures</li> <li>Maintains isolated backups for each WordPress site according to your multi-site architecture</li> <li>Includes corruption checks using <code>gzip -t</code> to automatically detect and remove invalid backups</li> </ul> <h3>Backup WordPress Sites Files Script</h3> <p>Similarly, to create your automated WordPress files backup script in CloudRay, follow these steps to create the script:</p> <img src="/_astro/file-backup-script.J_HjGA7J.jpg" alt="Screenshot of adding a new file backup script" /> <ol> <li>Go to Scripts → New Script</li> <li>Name it <code>Backup WordPress Sites Files Script</code></li> <li>Add the following script:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Create backup directory if not exists</span></span> <span><span>sudo</span><span> mkdir</span><span> -p</span><span> "{{backup_dir}}"</span><span> ||</span><span> { </span><span>echo</span><span> "Failed to create backup directory"</span><span>; </span><span>exit</span><span> 1</span><span>; }</span></span> <span></span> <span><span>{</span></span> 
<span><span> echo</span><span> "=== Starting Backup at $(</span><span>date</span><span>) ==="</span></span> <span></span> <span><span> for</span><span> USER </span><span>in</span><span> {{user1}}; do</span></span> <span><span> WP_DIR</span><span>=</span><span>"/home/</span><span>$USER</span><span>/public_html"</span></span> <span></span> <span><span> # Verify user directory exists</span></span> <span><span> if</span><span> [ </span><span>!</span><span> -d</span><span> "</span><span>$WP_DIR</span><span>"</span><span> ]; </span><span>then</span></span> <span><span> echo</span><span> "ERROR: Directory </span><span>$WP_DIR</span><span> not found for user </span><span>$USER</span><span>"</span></span> <span><span> continue</span></span> <span><span> fi</span></span> <span></span> <span><span> BACKUP_FILE</span><span>=</span><span>"{{backup_dir}}/${</span><span>USER</span><span>}_files_{{timestamp}}.tar.gz"</span></span> <span></span> <span><span> echo</span><span> "Backing up </span><span>$WP_DIR</span><span> to </span><span>$BACKUP_FILE</span><span>"</span></span> <span></span> <span><span> # Create backup with verbose output</span></span> <span><span> if</span><span> sudo</span><span> tar</span><span> -czvf</span><span> "</span><span>$BACKUP_FILE</span><span>"</span><span> \</span></span> <span><span> --exclude=</span><span>'cache/*'</span><span> \</span></span> <span><span> --exclude=</span><span>'.git/*'</span><span> \</span></span> <span><span> -C</span><span> /</span><span> "home/</span><span>$USER</span><span>/public_html"</span><span> 2&gt;&amp;1</span><span>; </span><span>then</span></span> <span></span> <span><span> # Verify backup integrity</span></span> <span><span> if</span><span> sudo</span><span> gzip</span><span> -t</span><span> "</span><span>$BACKUP_FILE</span><span>"</span><span>; </span><span>then</span></span> <span><span> echo</span><span> "SUCCESS: Backup created for </span><span>$USER</span><span> (Size: $(</span><span>du</span><span> 
-h</span><span> "</span><span>$BACKUP_FILE</span><span>" </span><span>|</span><span> cut</span><span> -f1</span><span>))"</span></span> <span><span> else</span></span> <span><span> echo</span><span> "ERROR: Backup verification failed for </span><span>$USER</span><span>"</span></span> <span><span> sudo</span><span> rm</span><span> -f</span><span> "</span><span>$BACKUP_FILE</span><span>"</span></span> <span><span> fi</span></span> <span><span> else</span></span> <span><span> echo</span><span> "ERROR: Backup failed for </span><span>$USER</span><span>"</span></span> <span><span> fi</span></span> <span><span> done</span></span> <span></span> <span><span> echo</span><span> "=== Backup completed at $(</span><span>date</span><span>) ==="</span></span> <span><span> echo</span><span> "Disk usage: $(</span><span>df</span><span> -h</span><span> {{backup_dir}})"</span></span> <span><span>} </span><span>|</span><span> tee</span><span> -a</span><span> "{{log_file}}"</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what the <code>Backup WordPress Sites Files Script</code> does:</p> <ul> <li>Creates compressed archives of each WordPress site’s <code>public_html</code> directory using tar+gzip for efficient storage</li> <li>Maintains file permissions by running as <code>sudo</code> to properly backup all WordPress files</li> <li>Excludes non-essential files like <code>cache</code> and <code>git</code> directories to reduce backup size</li> <li>Verifies backup integrity with <code>gzip -t</code> to ensure archives are not corrupted</li> <li>Provides detailed logging including timestamps, file sizes, and success/failure status</li> <li>Handles multiple sites by iterating through all configured WordPress users</li> <li>Checks directory existence before attempting backups to prevent errors</li> <li>Tracks disk usage after completion to monitor storage consumption</li> <li>Preserves directory structure using -C flag for proper restoration and outputs verbose 
progress during archive creation for monitoring</li> </ul> <h2>Create a Variable Group</h2> <p>These backup scripts build on the same variables used in the guide <a href="/articles/deploy-multi-wordpress-sites-on-one-server">Deploy Multiple WordPress Sites</a>. However, you need to define values for the following placeholders <code>{{backup_dir}}</code>, <code>{{timestamp}}</code>, <code>{{log_file}}</code>, <code>{{db_log_file}}</code>, and <code>{{retention_days}}</code> used in the scripts.</p> <p>Follow these steps to edit the variable group and include the new variables to ensure that these new values are automatically substituted when the script runs:</p> <img src="/_astro/variables.CSLIrbh-.jpg" alt="Screenshot of adding a new variable group" /> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Edit the Variable Group:</strong> Click on the variable group you created earlier for the WordPress sites and edit it</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>backup_dir</code>:</strong> The central directory for all backups</li> <li><strong><code>timestamp</code>:</strong> The time format used to name the backups</li> <li><strong><code>log_file</code>:</strong> The log file for filesystem backups</li> <li><strong><code>db_log_file</code>:</strong> The log file for database backups</li> <li><strong><code>retention_days</code>:</strong> Auto-delete backups older than this many days</li> </ul> <p>Now that the variables are set up, proceed to create backup schedules with CloudRay.</p> <h2>How to Schedule Automated WordPress Backups for Databases</h2> <p>CloudRay offers <a href="https://cloudray.io/docs/schedules">Schedules</a>, allowing you to execute scripts automatically at specific intervals or times. 
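As an aside, the retention step driven by <code>retention_days</code> can be rehearsed in a throwaway directory before any schedule runs it against real backups. This sketch hard-codes a 7-day retention and assumes GNU <code>find</code> and <code>touch</code>:

```shell
# Rehearsal of the retention policy in a disposable directory.
# retention_days is hard-coded to 7 here; backup names are made up.
backup_dir=$(mktemp -d)
touch -d "10 days ago" "$backup_dir/site1_db_20250101.sql.gz"   # stale backup
touch "$backup_dir/site1_db_20250425.sql.gz"                    # fresh backup
find "$backup_dir" -name "*_db_*.sql.gz" -mtime +7 -delete
ls "$backup_dir"    # only the fresh backup remains
```

The expression matches the one in the database script: a file must both fit the backup naming pattern and be older than the cutoff before it is deleted.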
This feature is useful for tasks such as automating database backups.</p> <p>For example, if you want to back up your WordPress sites’ databases on the first day of every month at 12:00 AM, you can configure a CloudRay schedule to handle this automatically.</p> <p>Here are the steps to achieve this:</p> <ol> <li><strong>Navigate to Schedules:</strong> In your CloudRay dashboard, go to the “Schedules” tab.</li> </ol> <img src="/_astro/locating-schedule.3eGh4hx5.jpg" alt="Screenshot of the location of Schedules in CloudRay's Interface" /> <ol> <li><strong>Click “Add Schedule”:</strong> Start creating a new schedule.</li> </ol> <img src="/_astro/Setup-schedules.D3Mu-jqG.jpg" alt="Screenshot of the location of Schedules in CloudRay's Interface" /> <ol> <li><strong>Submit Schedule:</strong> Click “Submit” to activate your new schedule.</li> </ol> <img src="/_astro/schedules-enabled.Dm8wk4CU.jpg" alt="Screenshot of the location of enabled schedule" /> <p>CloudRay will automatically execute the backup script at the scheduled time, ensuring that your WordPress sites’ databases are regularly backed up without manual intervention.</p> <h2>How to Schedule Automated WordPress Backups for Filesystem</h2> <p>For comprehensive WordPress protection, you can implement a weekly filesystem backup schedule alongside your database backups.</p> <p>For example, if you want to back up your WordPress sites’ filesystem every Sunday at 2:00 AM, you can configure a CloudRay schedule. 
Similarly, here is how to achieve this:</p> <ol> <li><strong>Click “Add Schedule”:</strong> Start creating a new schedule.</li> </ol> <img src="/_astro/Setup-schedules2.B2qvZonv.jpg" alt="Screenshot of the location of Schedules in CloudRay's Interface" /> <ol> <li><strong>Submit Schedule:</strong> Click “Submit” to activate your new schedule.</li> </ol> <img src="/_astro/schedules-enabled2.D7_Hq9aY.jpg" alt="Screenshot of the location of enabled schedule" /> <p>With this schedule in place, CloudRay will automatically perform weekly backups of your WordPress files every Sunday at 2:00 AM, ensuring your site content remains protected without manual intervention.</p> <div> <p>TIP</p> <p>These scripts are designed for easy reuse across multiple WordPress sites. Simply edit the variable group to include additional database credentials (<code>db2</code>, <code>db_user2</code>, etc.) or users (<code>user2</code>, <code>user3</code>) as needed. CloudRay will automatically apply the updated variables when the scheduled backups execute, allowing the same scripts to manage backups for any number of WordPress installations.</p> </div> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>Deploy Multiple WordPress Sites on One Serverhttps://cloudray.io/articles/deploy-multi-wordpress-sites-on-one-serverhttps://cloudray.io/articles/deploy-multi-wordpress-sites-on-one-serverDeploy multiple WordPress sites on one server with Caddy using CloudRay. 
Isolate users and databases, automate SSL, and optimize resources.Mon, 28 Apr 2025 00:00:00 GMT<p>This guide outlines a method for deploying isolated WordPress instances with enhanced security measures, leveraging automated provisioning through bash scripts executed via <a href="https://app.cloudray.io/">CloudRay</a>.</p> <p>The deployment model addresses three core requirements:</p> <ul> <li>User-level process isolation where each WordPress instance runs under its own Linux system account</li> <li>Database security via separate MySQL accounts</li> <li>Automated SSL certificate provisioning</li> </ul> <p>This approach prevents cross-site contamination from common WordPress threats including malware infections via vulnerable plugins, credential stuffing attacks, and file inclusion exploits.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#server-environment-setup">Server Environment Setup</a></li> <li><a href="#deploy-wordpress">Deploy Wordpress</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-script-with-cloudray">Running the Script with CloudRay</a> <ul> <li><a href="#running-server-environment-setup-script">Running Server Environment Setup Script</a></li> <li><a href="#running-the-deploy-wordpress-script">Running the Deploy WordPress Script</a></li> </ul> </li> <li><a href="#restart-wordpress-server-optional">Restart WordPress Server (Optional)</a></li> <li><a href="#managing-wordpress-files-after-deployment">Managing WordPress Files After Deployment</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started with your automation, make sure your target servers are connected to CloudRay. 
If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can adapt the scripts to fit your specific installation needs and environment. Additionally, if you’re using a different version or a different distribution, adjust the commands accordingly.</p> </div> <h2>Create the Automation Script</h2> <p>To streamline the deployment and management processes, you’ll need two Bash scripts:</p> <ol> <li><strong>Server Environment Setup</strong>: Installs dependencies, creates users, and configures the server environment</li> <li><strong>Deploy Wordpress</strong>: Downloads WordPress, configures databases, and sets up Caddy virtual hosts</li> </ol> <h3>Server Environment Setup</h3> <p>This script prepares your server by installing all required dependencies and configuring the base environment for isolated WordPress installations. To create the <code>Server Environment Setup</code> script, you need to follow these steps:</p> <img src="/_astro/setup-script.5JUm2WAp.jpg" alt="CloudRay WordPress server setup script configuration" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Server Environment Setup</code>. 
You can give it any name of your choice.</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update and upgrade</span></span> <span><span>sudo</span><span> apt</span><span> update</span><span> -y</span></span> <span></span> <span><span># Install PHP and required modules</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> php</span><span> php-common</span><span> php-mysql</span><span> php-xml</span><span> php-xmlrpc</span><span> php-curl</span><span> php-gd</span><span> php-imagick</span><span> php-cli</span><span> php-dev</span><span> php-imap</span><span> php-mbstring</span><span> php-soap</span><span> php-zip</span><span> php-intl</span><span> php-cgi</span><span> php8.3-fpm</span><span> unzip</span></span> <span></span> <span><span># Install Caddy web server</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> caddy</span></span></code><span></span><span></span></pre> <p>Below is a breakdown of what the <code>Server Environment Setup</code> does:</p> <ul> <li>Updates all system packages to their latest versions</li> <li>Installs PHP 8.3 with FPM pools optimised for WordPress</li> <li>Deploys Caddy with automatic HTTPS configuration</li> </ul> <p>This script assumes MySQL is already installed. For configuring a remote MySQL server instead, see our <a href="/articles/deploy-mysql-server">MySQL Server Guide</a>.</p> <h3>Deploy Wordpress</h3> <p>Once your server environment is prepared, this script handles the actual WordPress deployment process for each isolated user account. 
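Before running it, you may want a quick pre-flight check that the basic tooling the deploy step shells out to is present. This is a hypothetical helper, not part of CloudRay; adjust the command list to your stack:

```shell
# Hypothetical pre-flight check: collect any required command missing
# from PATH. The list here is illustrative; on a real target the deploy
# step also needs wget, mysql, and php-fpm available.
missing=""
for cmd in tar gzip sed; do
  command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
done
if [ -z "$missing" ]; then
  echo "all listed tools present"
else
  echo "missing:$missing"
fi
```

Any missing server-side pieces (PHP-FPM, MySQL, Caddy) would normally have been installed by the <code>Server Environment Setup</code> script.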
It takes care of downloading WordPress, configuring database connections, setting appropriate permissions, and configuring Caddy to serve each site securely.</p> <p>To create the Deploy WordPress Script, follow these steps:</p> <img src="/_astro/deploy-script.D131O584.jpg" alt="Automated WordPress deployment script in CloudRay dashboard" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Deploy Wordpress</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Add users with passwords (non-interactive)</span></span> <span><span>echo</span><span> -e</span><span> "{{pass}}\n{{pass}}"</span><span> |</span><span> sudo</span><span> adduser</span><span> --gecos</span><span> ""</span><span> {{user}}</span><span> --disabled-password</span></span> <span><span>echo</span><span> "{{user}}:{{pass}}"</span><span> |</span><span> sudo</span><span> chpasswd</span></span> <span></span> <span><span># Create web directories</span></span> <span><span>sudo</span><span> mkdir</span><span> -p</span><span> /home/{{user}}/public_html</span></span> <span></span> <span><span># Install MySQL and create databases &amp; users</span></span> <span><span>sudo</span><span> mysql</span><span> -u</span><span> root</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>CREATE DATABASE {{db}};</span></span> <span><span>CREATE USER '{{db_user}}'@'localhost' IDENTIFIED BY '{{db_pass}}';</span></span> <span><span>GRANT ALL PRIVILEGES ON {{db}}.* TO '{{db_user}}'@'localhost';</span></span> <span><span>FLUSH PRIVILEGES;</span></span> <span><span>EOF</span></span> <span></span> <span><span># Configure PHP-FPM pool</span></span> <span><span>sudo</span><span> tee</span><span> /etc/php/8.3/fpm/pool.d/{{user}}.conf</span><span> &gt;</span><span> /dev/null</span><span> 
&lt;&lt;</span><span>EOF</span></span> <span><span>[{{user}}]</span></span> <span><span>user = {{user}}</span></span> <span><span>group = www-data</span></span> <span><span>listen = /run/php/{{user}}.sock</span></span> <span><span>listen.owner = www-data</span></span> <span><span>listen.group = www-data</span></span> <span><span>pm = dynamic</span></span> <span><span>pm.max_children = 5</span></span> <span><span>pm.start_servers = 2</span></span> <span><span>pm.min_spare_servers = 1</span></span> <span><span>pm.max_spare_servers = 3</span></span> <span><span>chdir = /</span></span> <span><span>security.limit_extensions = .php</span></span> <span><span>php_admin_value[error_log] = /var/log/php8.3-fpm/{{user}}-error.log</span></span> <span><span>php_admin_flag[log_errors] = on</span></span> <span><span>EOF</span></span> <span></span> <span><span># Restart PHP-FPM service</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> php8.3-fpm</span></span> <span></span> <span><span># Set ownership and permissions</span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> {{user}}:{{user}}</span><span> /home/{{user}}</span></span> <span><span>sudo</span><span> chmod</span><span> 755</span><span> /home/{{user}}/public_html</span></span> <span></span> <span><span># Download and configure WordPress for USER</span></span> <span><span>sudo</span><span> -u</span><span> {{user}}</span><span> bash</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>cd ~/public_html</span></span> <span><span>wget https://wordpress.org/latest.tar.gz</span></span> <span><span>tar xzf latest.tar.gz --strip-components=1</span></span> <span><span>rm latest.tar.gz</span></span> <span><span>cp wp-config-sample.php wp-config.php</span></span> <span><span>sed -i "s/database_name_here/{{db}}/" wp-config.php</span></span> <span><span>sed -i "s/username_here/{{user}}/" wp-config.php</span></span> <span><span>sed -i "s/password_here/{{db_pass}}/" 
wp-config.php</span></span> <span><span>EOF</span></span> <span></span> <span><span># Set final permissions</span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> {{user}}:www-data</span><span> /home/{{user}}/public_html</span></span> <span><span>sudo</span><span> find</span><span> /home/{{user}}/public_html</span><span> -type</span><span> d</span><span> -exec</span><span> chmod</span><span> 750</span><span> {}</span><span> \;</span></span> <span><span>sudo</span><span> find</span><span> /home/{{user}}/public_html</span><span> -type</span><span> f</span><span> -exec</span><span> chmod</span><span> 640</span><span> {}</span><span> \;</span></span> <span><span>sudo</span><span> chmod</span><span> 600</span><span> /home/{{user}}/public_html/wp-config.php</span></span> <span><span>sudo</span><span> chmod</span><span> 710</span><span> /home/{{user}}</span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> {{user}}:www-data</span><span> /home/{{user}}</span></span> <span><span>sudo</span><span> chmod</span><span> 750</span><span> /home/{{user}}</span></span> <span><span>sudo</span><span> chmod</span><span> 755</span><span> /home/{{user}}/public_html</span></span> <span><span>sudo</span><span> -u</span><span> www-data</span><span> ls</span><span> -la</span><span> /home/{{user}}/public_html</span></span> <span></span> <span><span># Setup Caddy configuration</span></span> <span><span>sudo</span><span> tee</span><span> /etc/caddy/Caddyfile</span><span> &gt;</span><span> /dev/null</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>{{domain}} {</span></span> <span><span> root * /home/{{user}}/public_html</span></span> <span><span> php_fastcgi unix//run/php/{{user}}.sock</span></span> <span><span> file_server</span></span> <span><span> encode gzip</span></span> <span><span>}</span></span> <span><span>EOF</span></span> <span></span> <span><span># Reload Caddy and check services</span></span> <span><span>sudo</span><span> 
systemctl</span><span> reload</span><span> caddy</span></span> <span><span>sudo</span><span> systemctl</span><span> status</span><span> php8.3-fpm</span><span> caddy</span></span></code><span></span><span></span></pre> <p>This is what the <code>Deploy WordPress</code> script does:</p> <ul> <li>Creates a Linux user (for example, <code>user1</code>) with a dedicated directory</li> <li>Sets strict file ownership (user:www-data) and socket permissions</li> <li>Sets ownership and permissions for the user web directory</li> <li>Downloads and extracts WordPress for the user under its isolated directory</li> <li>Configures unique database credentials in <code>wp-config.php</code> for each instance</li> <li>Applies secure file and directory permissions to protect sensitive files and limit access</li> <li>Generates a Caddyfile with virtual host configurations for domains, mapping each to its respective PHP-FPM pool</li> <li>Reloads Caddy and verifies service status to apply the changes</li> </ul> <p>With this script, your WordPress sites will be fully deployed, isolated by user accounts, and securely served over HTTPS via Caddy.</p> <h2>Create a Variable Group</h2> <p>Before running the scripts, you need to define values for the placeholders <code>{{user}}</code>, <code>{{pass}}</code>, <code>{{db}}</code>, <code>{{db_user}}</code>, <code>{{db_pass}}</code>, and <code>{{domain}}</code> used in the scripts. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>.
This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.C0sswrB0.jpg" alt="CloudRay variable group setup for WordPress deployment parameters" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>user</code>:</strong> The Linux system user for the WordPress site</li> <li><strong><code>pass</code>:</strong> The password for the Linux user</li> <li><strong><code>db</code>:</strong> The name of the MySQL database for the WordPress site</li> <li><strong><code>db_user</code>:</strong> The MySQL username</li> <li><strong><code>db_pass</code>:</strong> The password for the database user</li> <li><strong><code>domain</code>:</strong> The domain name associated with the WordPress site</li> </ul> <h2>Running the Script with CloudRay</h2> <p>You can run the scripts individually or execute them all at once using <a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>.
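Because substitution is purely textual, a quick local check can confirm that every placeholder a script references has a matching variable in the group. This sketch is illustrative only (not a CloudRay feature) and runs against a few representative lines written to a temporary file:

```shell
# Illustrative only, not a CloudRay feature: list the unique Liquid
# placeholders a script references so you can confirm the variable
# group defines each one.
cat > /tmp/deploy-snippet.sh <<'EOF'
sudo chown -R {{user}}:www-data /home/{{user}}/public_html
mysql -u {{db_user}} -p{{db_pass}} {{db}} -e 'SELECT 1'
curl -Is https://{{domain}}
EOF

# Extract and deduplicate the placeholder names
grep -o '{{[a-z_]*}}' /tmp/deploy-snippet.sh | sort -u
```

Any placeholder printed here that is missing from the variable group would otherwise reach the server unrendered.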
The deployment process involves two distinct phases:</p> <ol> <li><strong>Server Environment Setup:</strong> Run once per server to install dependencies</li> <li><strong>WordPress Deployment:</strong> Run separately for each site you want to deploy</li> </ol> <h3>Running the Server Environment Setup Script</h3> <p>Follow these steps:</p> <ol> <li>Navigate to <strong>Runlogs</strong> in your CloudRay project</li> <li>Click <strong>Run a Script</strong></li> </ol> <img src="/_astro/setup-runlog.qjrDxAgM.jpg" alt="screenshot of creating the setup runlog" /> <ol> <li>Configure the runlog: <ul> <li><strong>Server:</strong> Select your target server</li> <li><strong>Script:</strong> Choose “Server Environment Setup”</li> <li><strong>Variable Group:</strong> Select the variable group you created earlier</li> </ul> </li> <li>Run the script: Click on “Run” to execute the script on your server</li> </ol> <img src="/_astro/result-setup-runlog.DEFezDxQ.jpg" alt="CloudRay script execution log for WordPress server preparation" /> <p>CloudRay will connect to your server, run the <code>Server Environment Setup</code> script, and show you the live output as the script executes. This one-time setup installs all required system packages and services.
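To double-check the result from a shell on the server, you can query the services the setup installed. A small sketch, assuming the PHP 8.3, Caddy, and MySQL service names used in this guide:

```shell
# Post-setup sanity check (sketch): report whether each service is active.
# Service names are assumptions from the setup above; adjust to your stack.
checked=0
for svc in php8.3-fpm caddy mysql; do
  if command -v systemctl >/dev/null 2>&1; then
    # is-active prints a state such as "active" or "inactive"
    state=$(systemctl is-active "$svc" 2>/dev/null || true)
    echo "$svc: ${state:-unknown}"
  else
    echo "$svc: systemctl not available on this machine"
  fi
  checked=$((checked + 1))
done
```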
You only need to run this script once when setting up a new server.</p> <h3>Running the Deploy WordPress Script</h3> <p>For each WordPress site you want to deploy:</p> <ol> <li>Navigate to Runlogs &gt; Run a Script</li> </ol> <img src="/_astro/deploy-runlog.DXtfRnva.jpg" alt="Screenshot of creating the deploy runlog" /> <ol> <li> <p>Configure the runlog:</p> <ul> <li><strong>Server:</strong> Select the same server where you ran the setup</li> <li><strong>Script:</strong> Choose “Deploy WordPress”</li> <li><strong>Variable Group:</strong> Select your predefined variables (or create new ones for this site)</li> </ul> </li> <li> <p>Click Run Now to deploy</p> </li> </ol> <img src="/_astro/result-deploy-runlog.D2-thj4W.jpg" alt="Screenshot of the result of all the script from the script playlist" /> <p>You can now visit one of the WordPress sites using your domains (for example, <a href="https://site1.com">https://site1.com</a>, <a href="https://site2.com">https://site2.com</a>, etc.).</p> <img src="/_astro/browser-display.DTKc47x_.jpg" alt="WordPress installation success page on Chrome browser" /> <p>Your WordPress sites are now deployed and managed with CloudRay.</p> <p>If you wish to deploy additional WordPress sites on your server using CloudRay, the process is straightforward. Simply update the variables in your Variable Group, such as <code>user</code>, <code>pass</code>, <code>db</code>, <code>db_user</code>, <code>db_pass</code>, and <code>domain</code>, to reflect the new site’s details. Once these variables are set, rerun the existing <code>Deploy WordPress</code> script through CloudRay.
This approach allows you to provision new, isolated WordPress instances efficiently, without modifying the underlying scripts.</p> <div> <p>TIP</p> <p>You can create multiple variable groups in CloudRay to store configurations for different WordPress sites, making it easy to redeploy or manage them individually.</p> </div> <h2>Restart WordPress Server (Optional)</h2> <p>For maintenance or troubleshooting, use this script to safely restart all WordPress services while minimising downtime. It preserves active connections and verifies each component.</p> <ol> <li>Go to Scripts &gt; New Script in CloudRay</li> <li>Name: “Restart WordPress Services”</li> </ol> <img src="/_astro/restart-server.B1UKIbJH.jpg" alt="Screenshot of restart script" /> <ol> <li>Paste the following code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># CloudRay: WordPress Service Restarter</span></span> <span><span># Safely reloads PHP-FPM, Caddy, and verifies MySQL</span></span> <span></span> <span><span>echo</span><span> "🔄 Starting WordPress service restart sequence..."</span></span> <span></span> <span><span># Graceful PHP-FPM reload (maintains active connections)</span></span> <span><span>sudo</span><span> systemctl</span><span> reload</span><span> php8.3-fpm</span><span> &amp;&amp; </span><span>\</span></span> <span><span>echo</span><span> "✅ PHP-FPM reloaded (zero-downtime)"</span></span> <span></span> <span><span># Hard Caddy restart (ensures config reload)</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> caddy</span><span> &amp;&amp; </span><span>\</span></span> <span><span>echo</span><span> "✅ Caddy restarted with fresh config"</span></span> <span></span> <span><span># MySQL status check (no restart)</span></span> <span><span>mysql_status</span><span>=</span><span>$(</span><span>sudo</span><span> systemctl</span><span> is-active</span><span> mysql</span><span>)</span></span> <span><span>[ 
</span><span>"</span><span>$mysql_status</span><span>"</span><span> =</span><span> "active"</span><span> ] &amp;&amp; </span><span>\</span></span> <span><span>echo</span><span> "✅ MySQL verified (running)"</span><span> ||</span><span> \</span></span> <span><span>echo</span><span> "⚠️ MySQL status: </span><span>$mysql_status</span><span>"</span></span> <span></span> <span><span># Post-restart verification</span></span> <span><span>echo</span><span> -e</span><span> "\n🔍 Service Status Summary:"</span></span> <span><span>sudo</span><span> systemctl</span><span> status</span><span> php8.3-fpm</span><span> caddy</span><span> mysql</span><span> --no-pager</span></span> <span></span> <span><span># WordPress connectivity test</span></span> <span><span>echo</span><span> -e</span><span> "\n🌐 Testing WordPress at {{domain}}..."</span></span> <span><span>curl</span><span> -Is</span><span> https://{{domain}}</span><span> |</span><span> grep</span><span> -E</span><span> "HTTP|Location"</span><span> ||</span><span> \</span></span> <span><span>echo</span><span> "Failed to connect to WordPress"</span></span> <span></span> <span><span>echo</span><span> -e</span><span> "\n🟢 Restart completed at $(</span><span>date</span><span> +'%Y-%m-%d %H:%M:%S')"</span></span></code><span></span><span></span></pre> <p>Run this script with the same variable group as your WordPress site to automatically test the correct domain.</p> <h2>Managing WordPress Files After Deployment</h2> <p>Once your WordPress sites are deployed, you may need to manage files directly. For secure file transfers, we recommend setting up an FTP server with user isolation that matches your WordPress account structure. 
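Whichever transfer method you choose, uploaded files should keep the permission scheme the deploy script applied. The following sketch demonstrates that scheme on a throwaway directory; in real use the paths would live under /home/{{user}}/public_html, and ownership would also be set to the site user and www-data, which requires root:

```shell
# Demonstrate the deploy script's permission scheme on a throwaway
# directory (real deployments apply this under the site's public_html).
demo=$(mktemp -d)
mkdir -p "$demo/wp-content/uploads"
printf 'fake image data' > "$demo/wp-content/uploads/image.jpg"

find "$demo" -type d -exec chmod 750 {} \;   # directories: rwxr-x---
find "$demo" -type f -exec chmod 640 {} \;   # files:       rw-r-----

# Show the resulting mode of an uploaded file
stat -c '%a %n' "$demo/wp-content/uploads/image.jpg"
```

Files uploaded with looser modes (for example 777) would weaken the per-site isolation the deployment establishes.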
Our guide on Automating FTP Server Installation with CloudRay shows how to configure vsftpd with the same user accounts created for WordPress, enabling secure file management through clients like FileZilla while maintaining your security isolation.</p> <p>The multi-site deployment approach used here provides several advantages:</p> <ul> <li><strong>Resource Efficiency:</strong> Hosting multiple isolated WordPress instances on a single server optimises hardware utilisation</li> <li><strong>Simplified Management:</strong> Centralised administration through CloudRay while maintaining site independence</li> <li><strong>Cost Effective:</strong> Eliminates the need for separate servers for each WordPress installation</li> <li><strong>Consistent Security:</strong> Uniform security policies with individualised access controls</li> </ul> <p>For production environments, consider implementing our <a href="/articles/automate-wordpress-multi-site-backups">Automated WordPress Backup Solution</a> to protect all sites simultaneously while maintaining their isolated structures.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>Automate the Installation of Redis Serverhttps://cloudray.io/articles/install-redis-serverhttps://cloudray.io/articles/install-redis-serverLearn how to automate Redis installation configuration and secure setup with password binding and service management using Bash scriptsTue, 22 Apr 2025 00:00:00 GMT<p>Redis deployments require proper configuration of network binding, memory management, and authentication to prevent unauthorised access. This guide demonstrates how to automate Redis Server installation with <a href="https://app.cloudray.io/">CloudRay</a>. 
This automation with CloudRay implements secure password authentication, proper daemon configuration, network binding restrictions, and service lifecycle management.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#redis-installation-script">Redis Installation Script</a></li> <li><a href="#redis-configuration-script">Redis Configuration Script</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-scripts-to-install-redis-with-cloudray">Running the Scripts to Install Redis with CloudRay</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before beginning Redis deployment, ensure your target servers are connected to CloudRay. Follow the <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage servers if not already configured.</p> <div> <p>NOTE</p> <p>These scripts target Ubuntu/Debian systems. For RHEL-based distributions, replace <code>apt</code> commands with the appropriate <code>yum</code> or <code>dnf</code> equivalents. The Redis configuration file path may differ on non-Debian systems.</p> </div> <h2>Create the Automation Script</h2> <p>Two Bash scripts are required for complete Redis deployment:</p> <ol> <li><strong>Redis Installation Script:</strong> This script handles package installation and base configuration</li> <li><strong>Redis Configuration Script:</strong> This script implements security settings and authentication</li> </ol> <p>Let’s begin with the Redis installation script.</p> <h3>Redis Installation Script</h3> <p>This script performs the initial Redis Server deployment with production-ready defaults.
Follow these steps to create the script:</p> <img src="/_astro/install-redis.BRbLju9o.png" alt="Screenshot of adding a new install script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Redis Installation Script</code>. You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on any error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update package list</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span></span> <span><span># Install Redis server</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> redis-server</span><span> -y</span></span> <span></span> <span><span># Display installed Redis version</span></span> <span><span>redis-server</span><span> --version</span></span> <span></span> <span><span># Modify redis.conf:</span></span> <span><span># - Uncomment 'bind 127.0.0.1 -::1'</span></span> <span><span># - Ensure daemonize is 'yes'</span></span> <span><span>sudo</span><span> sed</span><span> -i</span><span> 's/^# *bind 127.0.0.1 ::1/bind 127.0.0.1 ::1/'</span><span> /etc/redis/redis.conf</span></span> <span><span>sudo</span><span> sed</span><span> -i</span><span> 's/^# *daemonize no/daemonize yes/'</span><span> /etc/redis/redis.conf</span></span> <span></span> <span><span># Enable Redis service to start on boot</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> redis-server.service</span></span> <span></span> <span><span># Start Redis service</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> redis</span></span> <span></span> <span><span># Check Redis service status</span></span> <span><span>sudo</span><span> systemctl</span><span> status</span><span> redis</span><span> 
--no-pager</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what each command in the <code>Redis Installation Script</code> does:</p> <ul> <li>Installs Redis from system packages</li> <li>Configures loopback interface binding</li> <li>Enables daemon mode for service management</li> <li>Verifies successful service start</li> </ul> <h3>Redis Configuration Script</h3> <p>This script implements security controls and tests basic functionality.</p> <img src="/_astro/configuration-script.BEWae1po.png" alt="Screenshot of the Redis configuration script" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Redis Configuration Script</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on any error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Modify redis.conf:</span></span> <span><span># - Uncomment and set requirepass to the strong password</span></span> <span><span>sudo</span><span> sed</span><span> -i</span><span> "s/^# *requirepass .*/requirepass {{redis_password}}/"</span><span> /etc/redis/redis.conf</span></span> <span></span> <span><span># Restart Redis service to apply changes</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> redis</span></span> <span></span> <span><span># Connect to Redis CLI and test authentication &amp; set/get commands</span></span> <span><span>redis-cli</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>auth {{redis_password}}</span></span> <span><span>select 1</span></span> <span><span>set testkey "Hello from CloudRay!"</span></span> <span><span>get testkey</span></span> <span><span>EOF</span></span> <span></span> <span><span>echo</span><span> "✅ Redis authentication and test command successful."</span></span></code><span></span><span></span></pre> <p>This is what the <code>Redis Configuration Script</code> does:</p> <ul> <li>Enables
password authentication</li> <li>Restarts the service to apply security settings</li> <li>Validates the configuration with test operations</li> </ul> <h2>Create a Variable Group</h2> <p>The configuration script requires a password variable. Create a variable group to store this securely. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.CfWiHQaC.png" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li>Navigate to <strong>Variable Groups</strong> under Scripts</li> <li>Create a new group named <code>Redis Deployment Variables</code></li> <li><strong>Add the following variable:</strong></li> </ol> <ul> <li><strong><code>redis_password</code>:</strong> The password for Redis authentication</li> </ul> <p>With the variables set up, proceed to run the scripts with CloudRay.</p> <h2>Running the Scripts to Install Redis with CloudRay</h2> <p>The Redis deployment consists of two dependent scripts that should execute sequentially.
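Before orchestrating the run, it may help to see what the configuration script’s key line actually does. This local illustration applies the same sed substitution to a throwaway file, with a sample value standing in for the {{redis_password}} variable:

```shell
# Local illustration of the requirepass rewrite; uses a temporary file
# instead of /etc/redis/redis.conf and a sample password instead of
# the {{redis_password}} variable.
conf=$(mktemp)
echo '# requirepass foobared' > "$conf"

# Same substitution pattern as the configuration script
sed -i 's/^# *requirepass .*/requirepass s3cret-example/' "$conf"
cat "$conf"   # -> requirepass s3cret-example
```

On the server, the rendered password takes the place of the sample value, and the subsequent Redis restart makes authentication mandatory.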
<a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a> provide an efficient way to orchestrate this workflow.</p> <p>Here are the steps to follow:</p> <ol> <li><strong>Navigate to “Script Playlists”:</strong> Click on the Scripts tab in the CloudRay interface</li> </ol> <img src="/_astro/script-playlist.CGdUSO3y.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Click “Add Script Playlist”:</strong> This initiates the creation of a new playlist</li> <li><strong>Provide a Name:</strong> Give your playlist a unique name (for example, “Automate Redis Server Installation”)</li> <li><strong>Add Scripts in Order:</strong> Select and add the scripts sequentially</li> </ol> <img src="/_astro/create-script-playlist.Vkhj2ESg.png" alt="Screenshot of creating the script playlist in CloudRay" /> <ol> <li><strong>Save the Playlist:</strong> Click “Create Playlist” to store your new playlist.</li> </ol> <p>Once your script playlist is created, proceed with execution:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Provide the necessary details:</li> </ol> <img src="/_astro/run-playlist-script.DJ3HOJp2.png" alt="Screenshot of creating a new runlog" /> <ul> <li>Server: Select the server where Redis will be installed</li> <li>Script Playlist: Choose the playlist you created (for example, “Automate Redis Server Installation”)</li> <li>Variable Group: Select the variable group you set up earlier</li> </ul> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution</li> </ol> <img src="/_astro/result-automation-scripts.DfsP7d9e.png" alt="Screenshot of the result of all the script from the script playlist" /> <p>After the scripts execute successfully, validate the Redis installation by opening the Runlog of the <code>Redis Configuration Script</code>.</p> <img src="/_astro/result.DqLQTW96.png" alt="Screenshot of the result of all the script from the script playlist" /> <p>As seen in the output, Redis is working correctly.</p> <div> <p>IMPORTANT</p> <p>For production environments, we recommend running the installation playlist on multiple nodes, configuring Redis Sentinel for high availability, and setting up monitoring.</p> </div> <p>For other database deployments using CloudRay, check out our guides on:</p> <ul> <li><a href="/articles/deploy-mysql-server">Automating MySQL Server Installation</a></li> <li><a href="/articles/setting-up-postgres-database">Setting up PostgreSQL</a></li> <li><a href="/articles/install-mongodb">Automating MongoDB Installation</a></li> </ul> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>Automating Web App Deployment with Terraform, GitHub, and CloudRayhttps://cloudray.io/articles/automate-web-app-deployment-using-terraform-and-cloudrayhttps://cloudray.io/articles/automate-web-app-deployment-using-terraform-and-cloudrayLearn how to automate the provisioning and deployment of a Node.js app using Terraform, CloudRay, and GitHub Actions with zero manual interventionMon, 07 Apr 2025 00:00:00 GMT<p><a href="https://app.cloudray.io/">CloudRay</a> makes it easy to automate application deployment across your infrastructure. Instead of juggling manual steps or switching between tools, you can integrate CloudRay with <a href="https://www.terraform.io/">Terraform</a> and GitHub to build a seamless deployment pipeline.</p> <p>In this tutorial, you will learn how to use Terraform to provision infrastructure on DigitalOcean, and then configure CloudRay to automate the deployment and management of a Node.js application.
You will also learn how to implement a simple CI/CD pipeline using GitHub Actions and a <a href="https://cloudray.io/docs/incoming-webhooks">CloudRay webhook</a>, so every push to your repository automatically triggers a deployment.</p> <p>By the end, you’ll have a fully automated system that provisions a server, deploys your web app, and keeps it updated with zero manual effort.</p> <h2>Contents</h2> <ul> <li><a href="#prerequisites">Prerequisites</a></li> <li><a href="#provisioning-infrastructure-with-terraform">Provisioning Infrastructure with Terraform</a></li> <li><a href="#configuring-cloudray-for-automated-deployments">Configuring CloudRay for Automated Deployments</a> <ul> <li><a href="#creating-deployment-script-and-cloudray-webhook">Creating Deployment Script and CloudRay Webhook</a></li> <li><a href="#integrating-cloudray-webhook-in-terraform">Integrating CloudRay Webhook in Terraform</a></li> </ul> </li> <li><a href="#setting-up-continuous-deployment-with-github-and-cloudray">Setting Up Continuous Deployment with GitHub and CloudRay</a></li> <li><a href="#conclusion">Conclusion</a></li> </ul> <h2>Prerequisites</h2> <p>Before you begin, ensure you have the following set up:</p> <ul> <li>Terraform installed: You can install Terraform by following the <a href="https://developer.hashicorp.com/terraform/install">official instructions</a></li> <li>CloudRay account: Sign up at <a href="https://app.cloudray.io">https://app.cloudray.io</a></li> <li>DigitalOcean Personal Access Token: Create one via your <a href="https://docs.digitalocean.com/reference/api/create-personal-access-token/">DigitalOcean control panel</a>. You will need this to authenticate Terraform with DigitalOcean</li> <li>SSH key added to your DigitalOcean account: Create and upload a key using <a href="https://cloudray.io/docs/server-keys">this guide</a>.
Make note of the name you assign, as it will be used in your Terraform configuration</li> <li>GitHub repository with a sample Node.js app: For this tutorial, we’ll use a sample Node.js app stored in GitHub. You can fork and <a href="https://github.com/GeoSegun/node-application-cloudray.git">clone this starter app</a> or use your own</li> </ul> <h2>Provisioning Infrastructure with Terraform</h2> <p>Terraform lets you define infrastructure as code and supports a wide range of platforms through installable providers. Each provider acts as a bridge between Terraform and the APIs of the service you’re provisioning, such as DigitalOcean in our case.</p> <p>We will start by using Terraform to provision a virtual machine (droplet) on DigitalOcean. This will serve as the host for our Node.js application.</p> <p>First, create a project directory to house your infrastructure configuration files and navigate into it:</p> <pre><code><span><span>mkdir</span><span> infra-cloudray</span><span> &amp;&amp; </span><span>cd</span><span> infra-cloudray</span></span></code><span></span><span></span></pre> <p>Next, before running any Terraform command, set the following environment variables to pass in your private SSH key and DigitalOcean API token securely.
Replace the token value with your own generated token from the DigitalOcean dashboard:</p> <pre><code><span><span>export</span><span> TF_VAR_pvt_key</span><span>=</span><span>"~/.ssh/id_ed25519"</span></span> <span><span>export</span><span> TF_VAR_do_token</span><span>=</span><span>"dop_v1_XXXXXXXXXXXXXXXXXXXXXXXXXXXX"</span></span></code><span></span><span></span></pre> <div> <p>TIP</p> <p><code>TF_VAR_</code> prefix allows you to pass environment variables to Terraform as input variables</p> </div> <p>Create a file named <code>provider.tf</code> which stores the configuration of the provider:</p> <pre><code><span><span>nano</span><span> provider.tf</span></span></code><span></span><span></span></pre> <p>Then add the following configuration into the file:</p> <pre><code><span><span>terraform</span><span> {</span></span> <span><span> required_providers</span><span> {</span></span> <span><span> digitalocean</span><span> =</span><span> {</span></span> <span><span> source </span><span>=</span><span> "digitalocean/digitalocean"</span></span> <span><span> version </span><span>=</span><span> "~&gt; 2.0"</span></span> <span><span> }</span></span> <span><span> }</span></span> <span><span>}</span></span> <span></span> <span><span>provider</span><span> "digitalocean"</span><span> {</span></span> <span><span> token</span><span> =</span><span> var</span><span>.</span><span>do_token</span></span> <span><span>}</span></span> <span></span> <span><span>data</span><span> "digitalocean_ssh_key"</span><span> "my_key"</span><span> {</span></span> <span><span> name</span><span> =</span><span> "my_key"</span></span> <span><span>}</span></span></code><span></span><span></span></pre> <p>This file sets up the Terraform provider configuration, which tells Terraform to use the DigitalOcean plugin and specifies your authentication token. 
Replace <code>my_key</code> with the exact name you used when uploading your SSH key to DigitalOcean.</p> <p>Create the second file, named <code>variables.tf</code>:</p> <pre><code><span><span>nano</span><span> variables.tf</span></span></code><span></span><span></span></pre> <p>Add the following configuration to the file:</p> <pre><code><span><span>variable</span><span> "do_token"</span><span> {</span></span> <span><span> description</span><span> =</span><span> "DigitalOcean API token"</span></span> <span><span> type</span><span> =</span><span> string</span></span> <span><span> sensitive</span><span> =</span><span> true</span></span> <span><span>}</span></span> <span></span> <span><span>variable</span><span> "pvt_key"</span><span> {</span></span> <span><span> description</span><span> =</span><span> "Path to the private SSH key"</span></span> <span><span> type</span><span> =</span><span> string</span></span> <span><span> default</span><span> =</span><span> "~/.ssh/id_ed25519"</span></span> <span><span>}</span></span></code><span></span><span></span></pre> <p>This file declares input variables, including the DigitalOcean API token and your SSH private key path.</p> <p>Finally, create the main infrastructure file named <code>www-cloudray.tf</code>:</p> <pre><code><span><span>nano</span><span> www-cloudray.tf</span></span></code><span></span><span></span></pre> <p>Similarly, add the following configuration inside the file:</p> <pre><code><span><span>resource</span><span> "digitalocean_droplet"</span><span> "www-cloudray"</span><span> {</span></span> <span><span> image</span><span> =</span><span> "ubuntu-24-10-x64"</span></span> <span><span> name</span><span> =</span><span> "www-cloudray"</span></span> <span><span> region</span><span> =</span><span> "nyc3"</span></span> <span><span> size</span><span> =</span><span> "s-1vcpu-1gb"</span></span> <span><span> ssh_keys</span><span> =</span><span> [</span></span> <span><span>
data</span><span>.</span><span>digitalocean_ssh_key</span><span>.</span><span>my_key</span><span>.</span><span>id</span></span> <span><span> ]</span></span> <span></span> <span><span> connection</span><span> {</span></span> <span><span> host</span><span> =</span><span> self</span><span>.</span><span>ipv4_address</span></span> <span><span> user</span><span> =</span><span> "root"</span></span> <span><span> type</span><span> =</span><span> "ssh"</span></span> <span><span> private_key</span><span> =</span><span> file</span><span>(var</span><span>.</span><span>pvt_key)</span></span> <span><span> timeout</span><span> =</span><span> "2m"</span></span> <span><span> }</span></span> <span><span>}</span></span></code><span></span><span></span></pre> <p>This is the main infrastructure file where we define our server. We use Ubuntu 24.10 and connect via SSH using your uploaded key. Furthermore, the configuration tells Terraform to create a droplet named “www-cloudray”, use the SSH key you added to DigitalOcean, and automatically connect over SSH for further provisioning.</p> <p>Now, initialize the Terraform working directory:</p> <pre><code><span><span>terraform</span><span> init</span></span></code><span></span><span></span></pre> <img src="/_astro/terraform-init.DMcvn4Vr.jpg" alt="Screenshot of successful initialisation" /> <p>Next, run <code>terraform plan</code> to see the execution plan:</p> <pre><code><span><span>terraform</span><span> plan</span></span></code><span></span><span></span></pre> <img src="/_astro/terraform-plan.BOFT3PsX.jpg" alt="Screenshot of terraform plan output" /> <p>The <code>+ resource "digitalocean_droplet" "www-cloudray"</code> line shows that Terraform will create a droplet resource named <code>www-cloudray</code>.
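As an optional refinement (standard Terraform CLI behaviour, not specific to this guide), you can save the reviewed plan to a file so that apply executes exactly what you inspected. The snippet is guarded so it does nothing on a machine without this Terraform project:

```shell
# Optional: persist the reviewed plan, then apply that exact plan file.
# Guarded so the snippet is a no-op where Terraform or the config is absent.
if command -v terraform >/dev/null 2>&1 && [ -f provider.tf ]; then
  terraform plan -out=tfplan   # save the plan you are reviewing
  terraform apply tfplan       # apply exactly that saved plan
else
  echo "No Terraform project here; skipping"
fi
```

Applying a saved plan file skips the interactive confirmation, since Terraform executes the plan verbatim rather than re-planning.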
Then, apply the configuration by running the command:</p> <pre><code><span><span>terraform</span><span> apply</span></span></code><span></span><span></span></pre> <p>When prompted, type <code>yes</code> to confirm.</p> <p>After deployment, you can inspect the created infrastructure and get the server’s IP address using:</p> <pre><code><span><span>terraform</span><span> show</span><span> terraform.tfstate</span></span></code><span></span><span></span></pre> <img src="/_astro/terraform-inspect.DKZ35Wjt.jpg" alt="Screenshot of showing server details" /> <p>Now, you can <a href="https://cloudray.io/docs/servers">add the server to CloudRay</a>.</p> <h2>Configuring CloudRay for Automated Deployments</h2> <p>Now that your DigitalOcean droplet is provisioned and added to CloudRay, let’s automate deployments using CloudRay’s script orchestration. This section covers:</p> <ul> <li>Creating a deployment script and webhook</li> <li>Modifying Terraform to trigger deployment of the application</li> </ul> <p>Let’s get started.</p> <h3>Creating Deployment Script and CloudRay Webhook</h3> <p>To create the deployment script, follow these steps:</p> <img src="/_astro/deploy-script.D1wDH0ub.jpg" alt="Screenshot of adding a new deployment script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Deploy App Script</code>.
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update package lists</span></span> <span><span>sudo</span><span> apt</span><span> update</span><span> -y</span></span> <span></span> <span><span># Install Nginx</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> nginx</span><span> </span></span> <span></span> <span><span># Install Node.js and npm</span></span> <span><span>curl</span><span> -fsSL</span><span> https://deb.nodesource.com/setup_20.x</span><span> |</span><span> sudo</span><span> -E</span><span> bash</span><span> -</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> nodejs</span></span> <span></span> <span><span># Install PM2</span></span> <span><span>sudo</span><span> npm</span><span> install</span><span> -g</span><span> pm2</span></span> <span></span> <span><span># Clean and clone repo</span></span> <span><span>sudo</span><span> rm</span><span> -rf</span><span> "{{app_dir}}"</span></span> <span><span>sudo</span><span> mkdir</span><span> -p</span><span> "{{app_dir}}"</span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> $USER</span><span>:</span><span>$USER </span><span>"{{app_dir}}"</span></span> <span><span>git</span><span> clone</span><span> "{{repo_url}}"</span><span> "{{app_dir}}"</span></span> <span></span> <span><span># Install dependencies</span></span> <span><span>cd</span><span> "{{app_dir}}/app"</span></span> <span><span>npm</span><span> install</span></span> <span></span> <span><span># Start application</span></span> <span><span>pm2</span><span> start</span><span> server.js</span><span> --name</span><span> nodeapp</span></span> 
<span><span>pm2</span><span> save</span></span> <span><span>pm2</span><span> startup</span></span> <span></span> <span><span>sudo</span><span> bash</span><span> -c</span><span> "cat &gt; /etc/nginx/sites-available/nodeapp"</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>server {</span></span> <span><span> listen 80;</span></span> <span><span> server_name {{domain}} www.{{domain}};</span></span> <span><span> location / {</span></span> <span><span> proxy_pass http://127.0.0.1:3000;</span></span> <span><span> proxy_http_version 1.1;</span></span> <span><span> proxy_set_header Upgrade </span><span>\$</span><span>http_upgrade;</span></span> <span><span> proxy_set_header Connection 'upgrade';</span></span> <span><span> proxy_set_header Host </span><span>\$</span><span>host;</span></span> <span><span> proxy_cache_bypass </span><span>\$</span><span>http_upgrade;</span></span> <span><span> }</span></span> <span><span>}</span></span> <span><span>EOF</span></span> <span></span> <span><span># Enable config</span></span> <span><span>sudo</span><span> ln</span><span> -sf</span><span> /etc/nginx/sites-available/nodeapp</span><span> /etc/nginx/sites-enabled/</span></span> <span><span>sudo</span><span> nginx</span><span> -t</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> nginx</span></span> <span></span> <span><span># SSL with Certbot</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> certbot</span><span> python3-certbot-nginx</span></span> <span><span>sudo</span><span> certbot</span><span> --nginx</span><span> -d</span><span> {{domain}}</span><span> -d</span><span> www.{{domain}}</span><span> --email</span><span> {{email}}</span><span> --agree-tos</span><span> --non-interactive</span></span> <span><span>sudo</span><span> systemctl</span><span> reload</span><span> nginx</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what each command in the 
<code>Deploy App Script</code> does:</p> <ul> <li>Sets up the web server and runtime environment</li> <li>Clones your Node.js repository and installs dependencies</li> <li>Runs the app as a managed background service</li> <li>Routes web traffic to your Node.js app</li> <li>Automatically provisions Let’s Encrypt SSL certificates</li> </ul> <p>Before using the script, you need to define values for the placeholders <code>{{app_dir}}</code>, <code>{{repo_url}}</code>, <code>{{domain}}</code>, and <code>{{email}}</code> used in it. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.DthQ6BU5.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a Variable Group:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>app_dir</code>:</strong> The application install path</li> <li><strong><code>repo_url</code>:</strong> The GitHub repository URL</li> <li><strong><code>domain</code>:</strong> The registered domain name for SSL certificate configuration</li> <li><strong><code>email</code>:</strong> The email address associated with the SSL certificate (used for renewal alerts)</li> </ul> <p>Since the script and variables are set up, proceed to create a webhook for the deployment script.</p> <p>To create an <a href="https://cloudray.io/docs/incoming-webhooks">Incoming Webhook</a> in CloudRay, follow these steps:</p> <ul> <li>In CloudRay, navigate to and click “Incoming Webhooks”</li> <li>Click “New Incoming Webhook” and fill 
in the details to create the webhook</li> </ul> <img src="/_astro/create-webhook.D4o2HffA.jpg" alt="Screenshot of creating a webhook" /> <ul> <li>Click “Create New Webhook”. This will generate a unique URL for your webhook</li> </ul> <img src="/_astro/create-webhook.D4o2HffA.jpg" alt="Screenshot of created webhook" /> <p>Now that your deployment script and CloudRay webhook are ready, the final step is to make sure Terraform can trigger that webhook.</p> <h3>Integrating CloudRay Webhook in Terraform</h3> <p>We can update our Terraform configuration to trigger the CloudRay webhook after the droplet is successfully provisioned. Instead of using the <code>remote-exec</code> provisioner, we use a <code>null_resource</code> block with a <code>local-exec</code> provisioner, which runs the webhook call locally from the machine running Terraform.</p> <p>Update your <code>www-cloudray.tf</code> file:</p> <pre><code><span><span>nano</span><span> www-cloudray.tf</span></span></code><span></span><span></span></pre> <p>Add the following <code>null_resource</code> configuration outside the droplet block (the <code>&lt;&lt;-EOT</code> form allows the indented closing marker):</p> <pre><code><span><span>...</span></span> <span><span>resource</span><span> "null_resource"</span><span> "trigger_cloudray_webhook"</span><span> {</span></span> <span><span> provisioner</span><span> "local-exec"</span><span> {</span></span> <span><span> command</span><span> =</span><span> &lt;&lt;-EOT</span></span> <span><span> curl -X POST \</span></span> <span><span> https://api.cloudray.io/w/58b3b37d-a77c-4b22-a6bc-ef3e2af8af34 \</span></span> <span><span> -H "Content-Type: application/json" \</span></span> <span><span> -d '{"test": false}'</span></span> <span><span> EOT</span></span> <span><span> }</span></span> <span></span> <span><span> depends_on</span><span> =</span><span> [digitalocean_droplet</span><span>.</span><span>www-cloudray]</span></span> <span><span>}</span></span> <span><span>...</span></span></code><span></span><span></span></pre> <p>This tells Terraform to 
trigger the CloudRay webhook only after the <code>www-cloudray</code> droplet has been successfully created.</p> <p>Your updated <code>www-cloudray.tf</code> file should now look like this:</p> <img src="/_astro/edited-terraform-file.B-H7gRqw.jpg" alt="Screenshot of the new www-cloudray.tf file" /> <p>This approach ensures that the webhook is triggered reliably during the provisioning process without the need for SSH access into the droplet.</p> <p>Before applying your changes, reinitialize your Terraform project and upgrade any modules or providers:</p> <pre><code><span><span>terraform</span><span> init</span><span> -upgrade</span></span></code><span></span><span></span></pre> <p>Then apply the configuration:</p> <pre><code><span><span>terraform</span><span> apply</span></span></code><span></span><span></span></pre> <p>Once <code>terraform apply</code> completes successfully, head over to your <a href="https://cloudray.io/docs/runlogs">CloudRay Runlog</a> to confirm that the webhook was received and the deployment job ran successfully.</p> <img src="/_astro/result-automation-scripts.DxuDoloi.jpg" alt="Screenshot of successful Runlog" /> <p>Open your browser and navigate to your configured domain (e.g., <a href="https://www.mydomain.com">https://www.mydomain.com</a>). You should now see your application live and running on the newly provisioned droplet.</p> <img src="/_astro/running-application.C1Ka77Ef.jpg" alt="Screenshot of the running application" /> <h2>Setting Up Continuous Deployment with GitHub and CloudRay</h2> <p>To automate your deployment pipeline, we’ll integrate GitHub Actions with CloudRay so that any push to the <code>main</code> branch triggers a webhook, which then updates your server with the latest code.</p> <p>First, let’s create the script on CloudRay that will update the application. 
You can follow a similar process to the one above and use this code:</p> <img src="/_astro/git-action-deploy.DYpe3zAf.jpg" alt="Screenshot of the update script setup" /> <pre><code><span><span>#!/bin/bash</span></span> <span><span>set</span><span> -e</span><span> # Exit immediately if any command fails</span></span> <span></span> <span><span># Navigate to project root</span></span> <span><span>cd</span><span> /var/www/nodeapp</span></span> <span></span> <span><span>echo</span><span> "➡️ Pulling latest changes..."</span></span> <span><span>git</span><span> pull</span><span> origin</span><span> main</span></span> <span></span> <span><span># Move to app directory</span></span> <span><span>cd</span><span> app</span></span> <span></span> <span><span>echo</span><span> "📦 Installing dependencies..."</span></span> <span><span>npm</span><span> install</span></span> <span></span> <span><span>echo</span><span> "🔄 Restarting application..."</span></span> <span><span>pm2</span><span> restart</span><span> nodeapp</span><span> ||</span><span> (</span><span>pm2</span><span> delete</span><span> nodeapp</span><span> &amp;&amp; </span><span>pm2</span><span> start</span><span> server.js</span><span> --name</span><span> nodeapp</span><span>)</span></span></code><span></span><span></span></pre> <p>Next, we also create a webhook following the same steps discussed earlier.</p> <img src="/_astro/new-webhook.DbJ--Vnm.jpg" alt="Screenshot of creating the update webhook" /> <p>Then go to the project repository (the Node.js application), create a directory and file:</p> <pre><code><span><span>mkdir</span><span> -p</span><span> .github/workflows</span></span> <span><span>touch</span><span> .github/workflows/deploy.yml</span></span></code><span></span><span></span></pre> <p>Then add the following content to <code>.github/workflows/deploy.yml</code>:</p> <pre><code><span><span>name</span><span>: </span><span>Deploy to CloudRay</span></span> <span></span> <span><span>on</span><span>:</span></span> <span><span> 
push</span><span>:</span></span> <span><span> branches</span><span>: [ </span><span>main</span><span> ]</span></span> <span></span> <span><span>jobs</span><span>:</span></span> <span><span> deploy</span><span>:</span></span> <span><span> runs-on</span><span>: </span><span>ubuntu-latest</span></span> <span><span> steps</span><span>:</span></span> <span><span> - </span><span>name</span><span>: </span><span>Trigger CloudRay Webhook</span></span> <span><span> env</span><span>:</span></span> <span><span> WEBHOOK_URL</span><span>: </span><span>${{ secrets.CLOUDRAY_WEBHOOK }}</span></span> <span><span> run</span><span>: </span><span>|</span></span> <span><span> curl -X POST "$WEBHOOK_URL" \</span></span> <span><span> -H "Content-Type: application/json" \</span></span> <span><span> -d '{"test": false}'</span></span></code><span></span><span></span></pre> <p>To add your CloudRay webhook to GitHub secrets, follow these steps:</p> <img src="/_astro/secret-steps.C7Y1QsPY.jpg" alt="Screenshot of the steps for adding a GitHub secret" /> <ul> <li>Go to your GitHub repo Settings → Secrets and variables → Actions</li> <li>Click “New repository secret”</li> </ul> <img src="/_astro/create-secret.XWQfre7P.jpg" alt="Screenshot of creating a new repository secret" /> <ul> <li>Name: <code>CLOUDRAY_WEBHOOK</code></li> <li>Value: Paste your CloudRay webhook URL</li> <li>Click “Add secret”</li> </ul> <p>Finally, let’s test the pipeline. Make a change to the application (modify the <code>index.html</code> file):</p> <pre><code><span><span>&lt;!</span><span>DOCTYPE</span><span> html</span><span>&gt;</span></span> <span><span>&lt;</span><span>html</span><span> lang</span><span>=</span><span>"en"</span><span>&gt;</span></span> <span><span>&lt;</span><span>head</span><span>&gt;</span></span> <span><span> &lt;</span><span>meta</span><span> charset</span><span>=</span><span>"UTF-8"</span><span>&gt;</span></span> <span><span> &lt;</span><span>meta</span><span> name</span><span>=</span><span>"viewport"</span><span> 
content</span><span>=</span><span>"width=device-width, initial-scale=1.0"</span><span>&gt;</span></span> <span><span> &lt;</span><span>title</span><span>&gt;Welcome&lt;/</span><span>title</span><span>&gt;</span></span> <span><span>&lt;/</span><span>head</span><span>&gt;</span></span> <span><span>&lt;</span><span>body</span><span>&gt;</span></span> <span><span> &lt;</span><span>h1</span><span>&gt;Greetings from CloudRay&lt;/</span><span>h1</span><span>&gt;</span></span> <span><span> &lt;</span><span>p</span><span>&gt;This update was deployed automatically via GitHub → CloudRay 🎉&lt;/</span><span>p</span><span>&gt;</span></span> <span><span>&lt;/</span><span>body</span><span>&gt;</span></span> <span><span>&lt;/</span><span>html</span><span>&gt;</span></span></code><span></span><span></span></pre> <p>Commit and push to main:</p> <pre><code><span><span>git</span><span> add</span><span> .</span></span> <span><span>git</span><span> commit</span><span> -m</span><span> "Test CI/CD pipeline"</span></span> <span><span>git</span><span> push</span><span> origin</span><span> main</span></span></code><span></span><span></span></pre> <p>Watch the process: GitHub Actions will show the workflow running.</p> <img src="/_astro/show-workflow.DIJiZGIp.jpg" alt="Screenshot of the GitHub Actions workflow running" /> <p>Additionally, CloudRay will display the script execution in Run Logs.</p> <img src="/_astro/successful-runlog.XX74287p.jpg" alt="Screenshot of the script execution in CloudRay Run Logs" /> <p>Finally, visit your domain to see changes live within seconds.</p> <img src="/_astro/live-changes.k7Anyihi.jpg" alt="Screenshot of the live changes in the browser" /> <h2>Conclusion</h2> <p>By following this guide, you’ve successfully built a complete infrastructure automation and CI/CD pipeline that combines Terraform, DigitalOcean, CloudRay, and GitHub Actions.</p> <p>The entire process from server creation to application updates now happens automatically whenever you push code 
changes, giving you more time to focus on development rather than deployment tasks.</p> <p>Ready to streamline your own deployment workflow? <a href="https://app.cloudray.io">Sign up for CloudRay today</a> and experience the power of infrastructure automation firsthand.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>Automate the Installation of Cockpithttps://cloudray.io/articles/install-cockpithttps://cloudray.io/articles/install-cockpitLearn how to automate Cockpit installation and security setup on Rocky Linux 9 using Bash scripts for consistent SSL firewall and admin configurationSun, 06 Apr 2025 00:00:00 GMT<p>Automating Cockpit deployment on Rocky Linux 9 ensures consistent, secure server management with minimal manual intervention. <a href="https://app.cloudray.io/">CloudRay</a> simplifies this process by handling dependencies, configurations, and security hardening through Bash scripted workflows.</p> <p>This guide walks through automating Cockpit setup with SSL encryption and firewall rules using CloudRay.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#install-cockpit-script">Install Cockpit Script</a></li> <li><a href="#secure-cockpit-script">Secure Cockpit Script</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-scripts-to-install-cockpit-with-cloudray">Running the Scripts to Install Cockpit with CloudRay</a></li> <li><a href="#troubleshooting">Troubleshooting</a></li> <li><a href="#related-guide">Related Guide</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started with your automation, make sure your target servers are connected to CloudRay. 
If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can adapt the scripts to fit your specific installation needs and environment. Additionally, if you’re using a different version or a different distribution, adjust the commands accordingly.</p> </div> <h2>Create the Automation Script</h2> <p>To automate the installation of Cockpit, you will need two Bash scripts:</p> <ol> <li><strong>Install Cockpit Script:</strong> This script installs and sets up Cockpit</li> <li><strong>Secure Cockpit Script:</strong> This script configures firewall rules, enables HTTPS encryption with Let’s Encrypt certificates, and secures remote access to the Cockpit web interface</li> </ol> <p>Let’s begin with the installation of Cockpit.</p> <h3>Install Cockpit Script</h3> <p>To create the Install Cockpit Script, you need to follow these steps:</p> <img src="/_astro/install-script.D0BQrGA5.jpg" alt="Screenshot of adding a new install script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Install Cockpit Script</code>. 
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on Error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update system</span></span> <span><span>sudo</span><span> dnf</span><span> update</span><span> -y</span></span> <span></span> <span><span># Install Cockpit</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> cockpit</span><span> -y</span></span> <span></span> <span><span># Enable and start Cockpit service</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> cockpit.socket</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> cockpit.socket</span></span> <span></span> <span><span># Check Cockpit status</span></span> <span><span>sudo</span><span> systemctl</span><span> status</span><span> cockpit.socket</span></span> <span></span> <span><span># Create an admin user for Cockpit</span></span> <span><span>sudo</span><span> adduser</span><span> {{admin_user}}</span></span> <span><span>echo</span><span> "{{admin_pass}}"</span><span> |</span><span> sudo</span><span> passwd</span><span> {{admin_user}}</span><span> --stdin</span></span> <span><span>sudo</span><span> usermod</span><span> -aG</span><span> wheel</span><span> {{admin_user}}</span></span> <span></span> <span><span>echo</span><span> "Cockpit installation and basic setup completed!"</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what each command in the <code>Install Cockpit Script</code> does:</p> <ul> <li>Updates all installed packages to the latest version</li> <li>Installs Cockpit and dependencies</li> <li>Creates a privileged user with variables for credentials</li> </ul> <h3>Secure Cockpit Script</h3> <p>Next, you need to secure the Cockpit installation on your server. 
To do so, follow steps similar to the ones above:</p> <img src="/_astro/secure-script.BRoLalRf.jpg" alt="Screenshot of securing cockpit" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Secure Cockpit Script</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Install Firewalld</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> firewalld</span><span> -y</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> firewalld</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> firewalld</span></span> <span></span> <span><span># Configure firewall rules</span></span> <span><span>sudo</span><span> firewall-cmd</span><span> --add-service=http</span><span> --permanent</span></span> <span><span>sudo</span><span> firewall-cmd</span><span> --permanent</span><span> --add-port=9090/tcp</span></span> <span><span>sudo</span><span> firewall-cmd</span><span> --reload</span></span> <span></span> <span><span># Install EPEL and Certbot</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> epel-release</span><span> -y</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> certbot</span><span> -y</span></span> <span></span> <span><span># Obtain SSL certificate</span></span> <span><span>sudo</span><span> certbot</span><span> certonly</span><span> --standalone</span><span> -d</span><span> {{domain}}</span><span> -m</span><span> {{email}}</span><span> --agree-tos</span><span> --no-eff-email</span><span> --non-interactive</span></span> <span></span> <span><span># Test certificate renewal</span></span> <span><span>sudo</span><span> certbot</span><span> renew</span><span> --dry-run</span></span> <span></span> <span><span># Link SSL certificates 
to Cockpit</span></span> <span><span>sudo</span><span> ln</span><span> -sf</span><span> /etc/letsencrypt/live/{{domain}}/fullchain.pem</span><span> {{cockpit_cert_dir}}/certificate.cert</span></span> <span><span>sudo</span><span> ln</span><span> -sf</span><span> /etc/letsencrypt/live/{{domain}}/privkey.pem</span><span> {{cockpit_cert_dir}}/certificate.key</span></span> <span></span> <span><span># Restart Cockpit to apply SSL</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> cockpit</span></span> <span></span> <span><span>echo</span><span> "Cockpit security configuration completed!"</span></span></code><span></span><span></span></pre> <p>This is what the <code>Secure Cockpit Script</code> does:</p> <ul> <li>Installs and configures firewall rules</li> <li>Sets up Let’s Encrypt SSL certificates</li> <li>Enforces HTTPS for Cockpit’s web interface</li> </ul> <h2>Create a Variable Group</h2> <p>Now, before running the scripts, you need to define values for the placeholders <code>{{admin_user}}</code>, <code>{{admin_pass}}</code>, <code>{{domain}}</code>, <code>{{email}}</code>, and <code>{{cockpit_cert_dir}}</code> used in the scripts. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. 
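</p> <p>CloudRay performs this substitution automatically at run time. If you want to preview locally how a script will render before uploading it, you can approximate simple placeholder substitution with <code>sed</code>. This sketch handles only plain <code>{{name}}</code> placeholders, not full Liquid, and the file name and values below are hypothetical examples:</p>

```shell
# Roughly emulate CloudRay's placeholder substitution for a local preview.
# Only literal {{domain}} and {{email}} tokens are replaced here.
render_preview() {
  sed -e 's/{{domain}}/cockpit.example.com/g' \
      -e 's/{{email}}/admin@example.com/g' "$1"
}
```

<p>Running <code>render_preview secure-cockpit.sh</code> prints the script with those example values filled in, which makes it easier to spot a forgotten placeholder.</p> <p>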
This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.CCSDnFHU.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>admin_user</code>:</strong> The username for the Cockpit administrator account</li> <li><strong><code>admin_pass</code>:</strong> The password for the Cockpit administrator account</li> <li><strong><code>domain</code>:</strong> The registered domain name for SSL certificate configuration</li> <li><strong><code>email</code>:</strong> The email address associated with the SSL certificate (used for renewal alerts)</li> <li><strong><code>cockpit_cert_dir</code>:</strong> The directory path where Cockpit stores its SSL certificates</li> </ul> <p>Since the variables are set up, proceed to run the scripts with CloudRay.</p> <h2>Running the Scripts to Install Cockpit with CloudRay</h2> <p>Now that everything is set up, you can use CloudRay to automate the installation of Cockpit.</p> <p>You can choose to run the scripts individually or execute them all at once using <a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>. 
Since there are multiple scripts, using CloudRay playlists will help automate the execution sequence and save time.</p> <p>Here are the steps to follow:</p> <ol> <li><strong>Navigate to “Script Playlists”:</strong> Click on the Scripts tab in the CloudRay interface</li> </ol> <img src="/_astro/script-playlist.DWH35jzv.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol start="2"> <li><strong>Click “Add Script Playlist”:</strong> This initiates the creation of a new playlist</li> <li><strong>Provide a Name:</strong> Give your playlist a unique name (for example, “Automate Cockpit Deployment and Management”)</li> <li><strong>Add Scripts in Order:</strong> Select and add the scripts sequentially</li> </ol> <img src="/_astro/create-script-playlist.nwUYtvsi.jpg" alt="Screenshot of creating a script playlist" /> <ol start="5"> <li><strong>Save the Playlist:</strong> Click “create playlist” to store your new playlist.</li> </ol> <p>Once your script playlist is created, proceed with execution:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Provide the necessary details:</li> </ol> <img src="/_astro/run-playlist-script.DelW_oGp.jpg" alt="Screenshot of creating a new runlog" /> <ul> <li>Server: Select the server where Cockpit will be installed</li> <li>Script Playlist: Choose the playlist you created (for example, “Automate Cockpit Deployment and Management”)</li> <li>Variable Group: Select the variable group you set up earlier</li> </ul> <ol start="4"> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution</li> </ol> <img src="/_astro/result-automation-scripts.enGxxgCV.jpg" alt="Screenshot of the result of all the scripts from the script playlist" /> <p>Once the script runs successfully, your Cockpit will be fully deployed. 
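</p> <p>Before opening the browser, you can optionally confirm from a terminal that the web interface responds. A small sketch (the domain below is an example; <code>-k</code> skips certificate validation, which is only useful while DNS or the certificate is still propagating):</p>

```shell
# Print the first line of the HTTP response from Cockpit's web interface.
check_cockpit() {
  curl -kIs "https://$1:9090" | head -n 1
}
```

<p>Calling <code>check_cockpit cockpit.mydomain.com</code> should print an HTTP status line if Cockpit is reachable.</p> <p>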
You can now visit your Cockpit using your domain on port <code>9090</code> (<a href="https://cockpit.mydomain.com:9090">https://cockpit.mydomain.com:9090</a>).</p> <img src="/_astro/browser-display.DZ0SaZgI.jpg" alt="Screenshot of the output on the browser" /> <p>Your Cockpit is now deployed and managed with CloudRay.</p> <h2>Troubleshooting</h2> <p>If you encounter issues during Cockpit installation or setup, consider the following:</p> <ul> <li><strong>Cockpit Web Interface Not Loading:</strong> Check if the Cockpit service is running with <code>sudo systemctl status cockpit.socket</code> and restart it using <code>sudo systemctl restart cockpit.socket</code></li> <li><strong>SSL Certificate Errors:</strong> Verify certificate validity with <code>sudo certbot certificates</code> and renew if needed using <code>sudo certbot renew</code></li> <li><strong>Firewall Blocking Access:</strong> Ensure port 9090 is open by running <code>sudo firewall-cmd --list-ports | grep 9090</code> and add it if missing: <code>sudo firewall-cmd --add-port=9090/tcp --permanent &amp;&amp; sudo firewall-cmd --reload</code></li> </ul> <p>If the issue persists, consult the <a href="https://cockpit-project.org/documentation.html">official Cockpit documentation</a> for further assistance.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div> <h2>Related Guides</h2> <ul> <li><a href="/articles/install-apache-airflow">Install Apache Airflow</a></li> <li><a href="/articles/install-k3s">Install K3s</a></li> <li><a href="/articles/install-nagios">Install Nagios</a></li> <li><a href="/articles/monitor-remote-hosts-with-nagios">Monitor Remote Host with Nagios</a></li> </ul>Automate the Installation of MongoDBhttps://cloudray.io/articles/install-mongodbhttps://cloudray.io/articles/install-mongodbLearn how to automate MongoDB installation user creation and scheduled backups on Ubuntu using Bash scripts 
for secure consistent deploymentsSat, 05 Apr 2025 00:00:00 GMT<p>Automating MongoDB deployment on remote servers can significantly streamline your database management process. <a href="https://app.cloudray.io/">CloudRay</a> provides automation capabilities that eliminate manual configuration steps while ensuring consistency across environments.</p> <p>In this guide, you will learn how to automate MongoDB installations and manage your entire deployment through CloudRay’s centralised platform. Finally, you will learn how to schedule automated backups for data protection.</p> <p>By the end of this article, you will have a fully automated MongoDB deployment with built-in security and maintenance workflows.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#assumptions">Assumptions</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#mongodb-installation-script">MongoDB Installation Script</a></li> <li><a href="#user-configuration-script">User Configuration Script</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-script-with-cloudray">Running the Script with CloudRay</a></li> <li><a href="#scheduling-mongodb-database-backup-with-cloudrays-schedules-optional">Scheduling MongoDB Database Backup with CloudRay’s Schedules (Optional)</a></li> <li><a href="#related-guides">Related Guides</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started with your automation, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. 
You can adapt the scripts to fit your specific installation needs and environment.</p> </div> <h2>Assumptions</h2> <ul> <li>This guide assumes you’re using <strong>Ubuntu 24.04 LTS</strong> as your server’s operating system. If you’re using a different version or a different distribution, adjust the commands accordingly.</li> </ul> <h2>Create the Automation Script</h2> <p>We’ll use three specialized scripts to automate the MongoDB deployment:</p> <ol> <li><strong>MongoDB Installation Script</strong>: This script will handle MongoDB package installation and basic security configuration</li> <li><strong>User Configuration Script</strong>: This script will create administrative and application users with proper roles</li> <li><strong>Backup Script</strong>: This script will configure automated database backups with retention policies</li> </ol> <p>Let’s begin with the MongoDB Installation Script.</p> <h3>MongoDB Installation Script</h3> <p>To create the MongoDB Installation Script, you need to follow these steps:</p> <img src="/_astro/installation-script.CS2Giq54.jpg" alt="Screenshot of adding a new setup script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>MongoDB Installation Script</code>. 
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit immediately on errors</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Install MongoDB</span></span> <span><span>curl</span><span> -fsSL</span><span> https://www.mongodb.org/static/pgp/server-{{mongo_version}}.asc</span><span> |</span><span> sudo</span><span> gpg</span><span> -o</span><span> /usr/share/keyrings/mongodb-server-{{mongo_version}}.gpg</span><span> --dearmor</span></span> <span></span> <span><span>echo</span><span> "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-{{mongo_version}}.gpg ] https://repo.mongodb.org/apt/ubuntu noble/mongodb-org/{{mongo_version}} multiverse"</span><span> |</span><span> sudo</span><span> tee</span><span> /etc/apt/sources.list.d/mongodb-org-{{mongo_version}}.list</span></span> <span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> mongodb-org</span></span> <span></span> <span><span># Start and enable MongoDB</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> mongod</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> mongod</span></span> <span></span> <span><span># Verify installation</span></span> <span><span>mongod</span><span> --version</span></span> <span><span>sudo</span><span> systemctl</span><span> status</span><span> mongod</span></span> <span></span> <span><span># Enable authentication in MongoDB configuration</span></span> <span><span>sudo</span><span> sed</span><span> -i</span><span> '/#security:/a security:\n authorization: enabled'</span><span> /etc/mongod.conf</span></span> <span></span> <span><span># Restart MongoDB to apply changes</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> 
mongod</span></span> <span></span> <span><span>echo</span><span> "MongoDB installation and configuration completed successfully!"</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what each command in the <code>MongoDB Installation Script</code> does:</p> <ul> <li>Adds MongoDB’s official repository with GPG verification</li> <li>Installs the latest MongoDB packages</li> <li>Enables and starts the MongoDB service</li> <li>Configures mandatory authentication</li> <li>Verifies successful installation</li> </ul> <h3>User Configuration Script</h3> <p>After MongoDB installation, we need to properly configure database users with least-privilege access. This script creates:</p> <ul> <li>An administrative user with global management privileges</li> <li>An application user with restricted database access</li> </ul> <p>Similarly, follow these steps to create the user configuration script:</p> <img src="/_astro/setup-database-script.D4UVJotq.jpg" alt="Screenshot of setting up the database" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>User Configuration Script</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Create admin user</span></span> <span><span>mongosh</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>use admin</span></span> <span></span> <span><span>db.createUser({</span></span> <span><span> user: "{{admin_user}}",</span></span> <span><span> pwd: "{{admin_pass}}",</span></span> <span><span> roles: [</span></span> <span><span> { role: "userAdminAnyDatabase", db: "admin" },</span></span> <span><span> { role: "readWriteAnyDatabase", db: "admin" },</span></span> <span><span> { role: "dbAdminAnyDatabase", db: "admin" }</span></span> <span><span> ]</span></span> <span><span>})</span></span> <span></span> <span><span>exit</span></span> 
<span><span>EOF</span></span> <span></span> <span><span># Authenticate with admin user and create application user</span></span> <span><span>mongosh</span><span> -u</span><span> "{{admin_user}}"</span><span> -p</span><span> "{{admin_pass}}"</span><span> --authenticationDatabase</span><span> admin</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>use {{db_name}}</span></span> <span></span> <span><span>db.createUser({</span></span> <span><span> user: "{{db_user}}",</span></span> <span><span> pwd: "{{db_pass}}",</span></span> <span><span> roles: [ { role: "readWrite", db: "{{db_name}}" } ]</span></span> <span><span>})</span></span> <span></span> <span><span>exit</span></span> <span><span>EOF</span></span> <span></span> <span><span># Insert test data</span></span> <span><span>mongosh</span><span> -u</span><span> "{{db_user}}"</span><span> -p</span><span> "{{db_pass}}"</span><span> --authenticationDatabase</span><span> "{{db_name}}"</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>use {{db_name}}</span></span> <span></span> <span><span>db.messages.insertOne({ message: "Greetings from CloudRay" })</span></span> <span></span> <span><span>exit</span></span> <span><span>EOF</span></span> <span></span> <span><span>echo</span><span> "MongoDB users have been successfully created!"</span></span></code><span></span><span></span></pre> <p>This is what the <code>User Configuration Script</code> does:</p> <ul> <li>Creates an administrative user with user management, database administration, and read/write privileges across all databases</li> <li>Creates an application-specific user restricted to a single database, with read/write privileges only (no admin rights)</li> <li>Inserts test data to verify that the new user works</li> </ul> <h2>Create a Variable Group</h2> <p>Now, before running the scripts, you need to define values for the placeholders <code>{{mongo_version}}</code>, <code>{{admin_user}}</code>, <code>{{admin_pass}}</code>, <code>{{db_user}}</code>,
<code>{{db_pass}}</code>, <code>{{db_name}}</code>, and <code>{{backup_dir}}</code> used in the scripts. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.ULhI9fsK.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>mongo_version</code>:</strong> The MongoDB version to be installed</li> <li><strong><code>admin_user</code>:</strong> The administrative username</li> <li><strong><code>admin_pass</code>:</strong> The password of the administrative user</li> <li><strong><code>db_user</code>:</strong> The application user</li> <li><strong><code>db_pass</code>:</strong> The application user’s password</li> <li><strong><code>db_name</code>:</strong> The application database name</li> <li><strong><code>backup_dir</code>:</strong> The MongoDB backup directory</li> </ul> <p>Now that the variables are set up, you can proceed to run the scripts with CloudRay.</p> <h2>Running the Script with CloudRay</h2> <p>You can choose to run the scripts individually or execute them all at once using <a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>.
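CloudRay renders the Liquid placeholders server-side, so nothing needs to be installed locally. If you want to preview the effect before a real run, the following sketch mimics the substitution with `sed`, using a hypothetical `mongo_version` value (both the value and the snippet of the repository line are illustrative only):

```shell
# Illustration only: CloudRay substitutes variable-group values into scripts
# server-side via Liquid templating. This sed call mimics that rendering
# locally with a made-up value, so you can preview the line that actually runs.
mongo_version="8.0"   # hypothetical value stored in the variable group

template='https://repo.mongodb.org/apt/ubuntu noble/mongodb-org/{{mongo_version}} multiverse'
rendered=$(printf '%s\n' "$template" | sed "s/{{mongo_version}}/$mongo_version/g")
echo "$rendered"
```

This is only a preview aid; in CloudRay, the rendered value comes from the variable group you created above.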
Since there are multiple scripts, using CloudRay playlists will help automate the execution sequence and save time.</p> <p>Here are the steps to follow:</p> <ol> <li><strong>Navigate to “Script Playlists”:</strong> Click on the Scripts tab in the CloudRay interface</li> </ol> <img src="/_astro/script-playlist.DfmWvJte.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Click “Add Script Playlist”:</strong> This initiates the creation of a new playlist</li> <li><strong>Provide a Name:</strong> Give your playlist a unique name (for example, “MongoDB Setup Automation and Management”)</li> <li><strong>Add Scripts in Order:</strong> Select and add the scripts sequentially</li> </ol> <img src="/_astro/create-script-playlist.BqDzQlNP.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Save the Playlist:</strong> Click “create playlist” to store your new playlist.</li> </ol> <p>Once your script playlist is created, proceed with execution:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog</li> <li><strong>Configure the Runlog</strong>: Provide the necessary details:</li> </ol> <img src="/_astro/run-playlist-script.DFqsyTkv.jpg" alt="Screenshot of creating a new runlog" /> <ul> <li>Server: Select the server where you need MongoDB to be installed</li> <li>Script Playlist: Choose the playlist you created (for example, “MongoDB Setup Automation and Management”)</li> <li>Variable Group: Select the variable group you set up earlier</li> </ul> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution</li> </ol> <img src="/_astro/result-automation-scripts.BcmYGRZa.jpg" alt="Screenshot of the result of all the script from the script playlist" /> <p>Your MongoDB deployment is now set up and managed with CloudRay. That’s it!
Happy deploying!</p> <h2>Scheduling MongoDB Database Backup with CloudRay’s Schedules (Optional)</h2> <p>CloudRay also offers <a href="https://cloudray.io/docs/schedules">Schedules</a>, allowing you to execute scripts automatically at specific intervals or times. This feature is useful for tasks such as automating database backups.</p> <p>To automate MongoDB database backups with CloudRay, you first need to create a backup script that the scheduler will execute.</p> <p>You can follow similar steps as the previous ones to create the backup script:</p> <img src="/_astro/database-backup-script.DDcbvsnV.jpg" alt="Screenshot of backing up the database" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Backup MongoDB Database</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set -e</span></span> <span></span> <span><span># Ensure backup directory exists</span></span> <span><span>mkdir -p "{{backup_dir}}"</span></span> <span></span> <span><span># Build the backup file name once so the success message matches the file on disk</span></span> <span><span>BACKUP_FILE="{{backup_dir}}/{{db_name}}_$(date +%F_%H-%M-%S).gz"</span></span> <span></span> <span><span># Perform compressed backup; running mongodump as the if-condition lets us</span></span> <span><span># report failure ourselves instead of set -e aborting the script silently</span></span> <span><span>if mongodump --authenticationDatabase "{{db_name}}" \</span></span> <span><span>  -u "{{db_user}}" \</span></span> <span><span>  -p "{{db_pass}}" \</span></span> <span><span>  --db "{{db_name}}" \</span></span> <span><span>  --gzip \</span></span> <span><span>  --archive="$BACKUP_FILE"; then</span></span> <span><span>  echo "Backup successful: $BACKUP_FILE"</span></span> <span><span>  # Optional: Delete backups older than 30 days</span></span> <span><span>  find "{{backup_dir}}" -name "{{db_name}}_*.gz" -type f -mtime +30 -delete</span></span> <span><span>else</span></span> <span><span>  echo "Backup failed!"</span></span> <span><span>  exit 1</span></span> <span><span>fi</span></span></code><span></span><span></span></pre> <p>This is what the <code>Backup MongoDB Database</code> script does:</p> <ul> <li>Creates a compressed, timestamped backup of the application database</li> <li>Applies an automatic retention policy by deleting backups older than 30 days</li> <li><strong>Checks whether the backup succeeded:</strong> if it did, it prints the backup path; otherwise, it reports the failure and exits with an error code.</li> </ul> <p>For example, if you want to back up your database on the first day of every month at 12:00 AM, you can configure a CloudRay schedule to handle this automatically.</p> <p>Here are the steps to achieve this:</p> <ol> <li><strong>Navigate to Schedules:</strong> In your CloudRay dashboard, go to the “Schedules” tab.</li> </ol> <img src="/_astro/locating-schedule.ZeGZeXGN.jpg" alt="Screenshot of the location of Schedules in CloudRay's Interface" /> <ol> <li><strong>Click “Add Schedule”:</strong> Start creating a new schedule.</li> </ol> <img src="/_astro/Setup-schedules.BdhMV52i.jpg" alt="Screenshot of the location of Schedules in CloudRay's Interface" /> <ol> <li><strong>Submit Schedule:</strong> Click “Submit” to activate your new schedule.</li> </ol> <img src="/_astro/schedules-enabled.B32gMdZv.jpg" alt="Screenshot of the location of enabled schedule" /> <p>CloudRay will automatically execute the backup script at the scheduled time, ensuring that your database is regularly backed up without manual intervention.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay
</span> </span></a> </div> </div> <h2>Related Guides</h2> <ul> <li><a href="/articles/setting-up-postgres-database">Setting Up PostgreSQL</a></li> <li><a href="/articles/deploy-mysql-server">Deploy MySQL Server</a></li> <li><a href="/articles/deploy-phpmyadmin">Automate phpMyAdmin Deployment</a></li> </ul>Deploy a Laravel Application with Caddyhttps://cloudray.io/articles/deploy-laravelhttps://cloudray.io/articles/deploy-laravelLearn how to automate and manage the deployment and setup of a Laravel application using CloudRayWed, 02 Apr 2025 00:00:00 GMT<p><a href="https://app.cloudray.io/">CloudRay</a> simplifies infrastructure deployment through automation, making it an ideal choice for managing Laravel applications with Caddy. It automates the entire deployment process, reducing manual effort and ensuring a seamless, repeatable setup.</p> <p>In this guide, you will learn the process of deploying a Laravel application with Caddy using CloudRay. You will create a detailed <a href="/articles/script-automation-guide">automation script</a> for setting up the system, installing dependencies, deploying Laravel, and configuring Caddy as the web server.
Caddy simplifies the process by automatically handling HTTPS (SSL/TLS) with Let’s Encrypt, ensuring your application is secure by default.</p> <p>By the end of this guide, you will have a fully functional Laravel application hosted on an optimised server environment with automatic SSL support provided by Caddy.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#assumptions">Assumptions</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#system-setup-script">System Setup Script</a></li> <li><a href="#install-composer-and-database-setup-script">Install Composer and Database Setup Script</a></li> <li><a href="#laravel-deployment-script">Laravel Deployment Script</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-script-with-cloudray">Running the Script with CloudRay</a></li> <li><a href="#troubleshooting">Troubleshooting</a></li> <li><a href="#related-guides">Related Guides</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can adapt the scripts to fit your specific installation needs and environment. Additionally, if you’re using a different version or a different distribution, adjust the commands accordingly.</p> </div> <h2>Assumptions</h2> <ul> <li>This guide assumes you’re using <strong>Rocky Linux 9</strong> as your server’s operating system.
If you’re using a different version or a different distribution, adjust the commands accordingly.</li> </ul> <h2>Create the Automation Script</h2> <p>To streamline the deployment process, you can use three automation scripts:</p> <ol> <li><strong>System Setup Script:</strong> Installs all the Laravel application and system dependencies</li> <li><strong>Install Composer and Database Setup Script:</strong> Installs Composer and sets up the Laravel database</li> <li><strong>Laravel Deployment Script:</strong> Automates cloning, configuring, and deploying a Laravel app with Caddy</li> </ol> <p>Let’s begin with the System Setup Script.</p> <h3>System Setup Script</h3> <p>To create the System Setup Script, you need to follow these steps:</p> <img src="/_astro/system-setup-script.D32qyDzB.jpg" alt="Screenshot of adding a new system setup script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>System Setup Script</code>.
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update system</span></span> <span><span>sudo</span><span> dnf</span><span> update</span><span> -y</span></span> <span></span> <span><span># Install required software</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> -y</span><span> mariadb-server</span><span> php</span><span> php-fpm</span><span> php-common</span><span> php-xml</span><span> php-mbstring</span><span> php-json</span><span> php-zip</span><span> php-mysqlnd</span><span> curl</span><span> unzip</span><span> nano</span></span> <span></span> <span><span># Start and enable services</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> mariadb</span><span> php-fpm</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> mariadb</span><span> php-fpm</span></span> <span></span> <span><span># Configure PHP-FPM</span></span> <span><span>sudo</span><span> sed</span><span> -i</span><span> 's/^;listen.owner =.*/listen.owner = www-data/'</span><span> /etc/php-fpm.d/www.conf</span></span> <span><span>sudo</span><span> sed</span><span> -i</span><span> 's/^;listen.group =.*/listen.group = www-data/'</span><span> /etc/php-fpm.d/www.conf</span></span> <span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> php-fpm</span></span> <span></span> <span><span># Set SELinux to permissive mode</span></span> <span><span>sudo</span><span> setenforce</span><span> 0</span></span> <span></span> <span><span># Add /usr/local/bin to secure path in sudoers</span></span> <span><span>sudo</span><span> sed</span><span> -i</span><span> '/Defaults\s\+secure_path = /s|\(.*\)|\1:/usr/local/bin|'</span><span> /etc/sudoers</span></span> <span></span> 
<span><span>echo</span><span> "System setup complete!"</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what each command in the <code>System Setup Script</code> does:</p> <ul> <li>Updates the system to the latest packages</li> <li>Installs MariaDB, PHP, and dependencies required for Laravel</li> <li>Starts and enables services to launch on boot</li> <li>Configures PHP-FPM to use www-data as the owner</li> <li>Configures SELinux to be permissive (optional, for easier troubleshooting)</li> <li>Adds <code>/usr/local/bin</code> to the secure path for sudo commands</li> </ul> <h3>Install Composer and Database Setup Script</h3> <p>Next, you need to install Composer and set up the database for the Laravel application. To do so, follow similar steps as the above:</p> <img src="/_astro/installation-script.Bx3SfPNS.jpg" alt="Screenshot of installing composer and database setup" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Install Composer and Database Setup Script</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Install Composer</span></span> <span><span>curl</span><span> -sS</span><span> https://getcomposer.org/installer</span><span> |</span><span> php</span></span> <span><span>sudo</span><span> mv</span><span> composer.phar</span><span> /usr/local/bin/composer</span></span> <span><span>sudo</span><span> chmod</span><span> +x</span><span> /usr/local/bin/composer</span></span> <span></span> <span><span># Verify Composer installation</span></span> <span><span>composer</span><span> --version</span></span> <span></span> <span><span># Configure MySQL database</span></span> <span><span>sudo</span><span> mysql</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>CREATE DATABASE {{db_name}};</span></span> <span><span>CREATE USER
'{{db_user}}'@'localhost' IDENTIFIED BY '{{db_pass}}';</span></span> <span><span>GRANT ALL PRIVILEGES ON {{db_name}}.* TO '{{db_user}}'@'localhost';</span></span> <span><span>FLUSH PRIVILEGES;</span></span> <span><span>EOF</span></span> <span></span> <span><span>echo</span><span> "Composer and MySQL setup complete!"</span></span></code><span></span><span></span></pre> <p>This is what the <code>Install Composer and Database Setup Script</code> does:</p> <ul> <li>Installs Composer, a dependency manager for PHP</li> <li>Verifies Composer installation to ensure it’s available system-wide</li> <li>Configures the MariaDB database by creating a database and user and granting the necessary permissions</li> </ul> <h3>Laravel Deployment Script</h3> <p>The final script automates the cloning, configuration, and deployment of your Laravel application with Caddy. This script will handle the deployment process, ensuring your application is ready to serve traffic.</p> <p>To create the Laravel Deployment Script, follow these steps:</p> <img src="/_astro/deployment-script.IsmUlkJg.jpg" alt="Screenshot of deploying Laravel application" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Laravel Deployment Script</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Install Git</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> git</span><span> -y</span></span> <span></span> <span><span># Install Caddy</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> 'dnf-command(copr)'</span><span> -y</span></span> <span><span>sudo</span><span> dnf</span><span> copr</span><span> enable</span><span> @caddy/caddy</span><span> -y</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> caddy</span><span> -y</span></span>
<span></span> <span><span># Start and enable Caddy</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> caddy</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> caddy</span></span> <span></span> <span><span># Clone Laravel project from GitHub</span></span> <span><span>if</span><span> [ </span><span>!</span><span> -d</span><span> "/var/www/html/{{repo_name}}"</span><span> ]; </span><span>then</span></span> <span><span> echo</span><span> "Cloning Laravel repository from GitHub..."</span></span> <span><span> sudo</span><span> git</span><span> clone</span><span> https://{{github_access_token}}@github.com/{{github_user}}/{{repo_name}}.git</span><span> /var/www/html/{{repo_name}}</span></span> <span><span>else</span></span> <span><span> echo</span><span> "Repository already exists. Fetching latest changes..."</span></span> <span><span> cd</span><span> /var/www/html/{{repo_name}}</span></span> <span><span> sudo</span><span> git</span><span> fetch</span><span> --all</span></span> <span><span> echo</span><span> "Resetting to the latest version of the main branch..."</span></span> <span><span> sudo</span><span> git</span><span> reset</span><span> --hard</span><span> origin/main</span></span> <span><span>fi</span></span> <span></span> <span><span># Set correct permissions</span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> www-data:www-data</span><span> /var/www/html/{{repo_name}}/storage</span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> www-data:www-data</span><span> /var/www/html/{{repo_name}}/bootstrap/cache</span></span> <span><span>sudo</span><span> chmod</span><span> -R</span><span> 775</span><span> /var/www/html/{{repo_name}}/storage</span></span> <span><span>sudo</span><span> chmod</span><span> -R</span><span> 775</span><span> /var/www/html/{{repo_name}}/bootstrap/cache</span></span> <span></span> <span><span># Update Laravel environment 
file</span></span> <span><span>if</span><span> [ </span><span>-f</span><span> "/var/www/html/{{repo_name}}/.env"</span><span> ]; </span><span>then</span></span> <span><span> echo</span><span> "Updating existing .env file..."</span></span> <span><span> # Update existing variables</span></span> <span><span> sudo</span><span> sed</span><span> -i</span><span> "s|^APP_URL=.*|APP_URL={{domain}}|"</span><span> /var/www/html/{{repo_name}}/.env</span></span> <span><span> sudo</span><span> sed</span><span> -i</span><span> "s|^DB_CONNECTION=.*|DB_CONNECTION=mysql|"</span><span> /var/www/html/{{repo_name}}/.env</span></span> <span><span> sudo</span><span> sed</span><span> -i</span><span> "s|^DB_HOST=.*|DB_HOST=127.0.0.1|"</span><span> /var/www/html/{{repo_name}}/.env</span></span> <span><span> sudo</span><span> sed</span><span> -i</span><span> "s|^DB_PORT=.*|DB_PORT=3306|"</span><span> /var/www/html/{{repo_name}}/.env</span></span> <span><span> sudo</span><span> sed</span><span> -i</span><span> "s|^DB_DATABASE=.*|DB_DATABASE={{db_name}}|"</span><span> /var/www/html/{{repo_name}}/.env</span></span> <span><span> sudo</span><span> sed</span><span> -i</span><span> "s|^DB_USERNAME=.*|DB_USERNAME={{db_user}}|"</span><span> /var/www/html/{{repo_name}}/.env</span></span> <span><span> sudo</span><span> sed</span><span> -i</span><span> "s|^DB_PASSWORD=.*|DB_PASSWORD={{db_pass}}|"</span><span> /var/www/html/{{repo_name}}/.env</span></span> <span><span>else</span></span> <span><span> echo</span><span> "Creating new .env file..."</span></span> <span><span> cat</span><span> &lt;&lt;</span><span>EOL</span><span> |</span><span> sudo</span><span> tee</span><span> /var/www/html/{{repo_name}}/.env</span></span> <span><span>APP_URL={{domain}}</span></span> <span></span> <span><span>DB_CONNECTION=mysql</span></span> <span><span>DB_HOST=127.0.0.1</span></span> <span><span>DB_PORT=3306</span></span> <span><span>DB_DATABASE={{db_name}}</span></span> <span><span>DB_USERNAME={{db_user}}</span></span> 
<span><span>DB_PASSWORD={{db_pass}}</span></span> <span><span>EOL</span></span> <span><span>fi</span></span> <span></span> <span><span># Run Laravel setup commands</span></span> <span><span>cd</span><span> /var/www/html/{{repo_name}}</span></span> <span><span>sudo</span><span> php</span><span> artisan</span><span> key:generate</span></span> <span><span>sudo</span><span> php</span><span> artisan</span><span> migrate</span></span> <span></span> <span><span># Configure Caddy</span></span> <span><span>cat</span><span> &lt;&lt;</span><span>EOL</span><span> |</span><span> sudo</span><span> tee</span><span> /etc/caddy/Caddyfile</span></span> <span><span>{{domain}} {</span></span> <span><span> root * /var/www/html/{{repo_name}}/public # Serve files from the Laravel public directory</span></span> <span><span> php_fastcgi unix//run/php-fpm/www.sock # Pass PHP requests to PHP-FPM</span></span> <span><span> file_server # Serve static files</span></span> <span><span>}</span></span> <span><span>EOL</span></span> <span></span> <span><span># Start and enable Caddy</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> caddy</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> caddy</span></span> <span></span> <span><span># Install Firewall &amp; Configure Rules</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> -y</span><span> firewalld</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> firewalld</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> firewalld</span></span> <span><span>sudo</span><span> firewall-cmd</span><span> --zone=public</span><span> --permanent</span><span> --add-service=http</span></span> <span><span>sudo</span><span> firewall-cmd</span><span> --zone=public</span><span> --permanent</span><span> --add-service=https</span></span> <span><span>sudo</span><span> firewall-cmd</span><span> 
--reload</span></span> <span></span> <span><span>echo</span><span> "Laravel deployment and SSL setup complete!"</span></span></code><span></span><span></span></pre> <p>This is what the <code>Laravel Deployment Script</code> does:</p> <ul> <li>Installs Git for repository cloning</li> <li>Defines the GitHub repository and target directory</li> <li>Clones the Laravel project or updates it if it already exists</li> <li>Sets proper permissions for storage and cache directories</li> <li>Runs Laravel migration and key generation to prepare the application</li> <li>Deploys the application with Caddy</li> <li>Installs firewalld and opens the HTTP and HTTPS ports</li> </ul> <h2>Create a Variable Group</h2> <p>Now, before running the scripts, you need to define values for the placeholders <code>{{db_name}}</code>, <code>{{db_user}}</code>, <code>{{db_pass}}</code>, <code>{{github_access_token}}</code>, <code>{{github_user}}</code>, <code>{{repo_name}}</code>, and <code>{{domain}}</code> used in the scripts. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>.
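As a concrete illustration of that rendering, here is roughly what the Caddyfile written by the deployment script looks like once hypothetical values stand in for the domain and repository placeholders. This is a sketch only; CloudRay performs the real substitution server-side, and no Caddy installation is needed for the preview:

```shell
# Illustration only: previewing the Caddyfile the deployment script writes,
# with hypothetical values in place of the domain and repo_name placeholders.
domain="myapp.com"      # hypothetical variable-group value
repo_name="myapp"       # hypothetical variable-group value

caddyfile=$(cat <<EOF
$domain {
    root * /var/www/html/$repo_name/public
    php_fastcgi unix//run/php-fpm/www.sock
    file_server
}
EOF
)
printf '%s\n' "$caddyfile"
```

Previewing the rendered site block this way makes it easy to spot a wrong document root or socket path before the script ever touches a server.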
This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.DsAhf636.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>db_name</code>:</strong> The database name for the Laravel application</li> <li><strong><code>db_user</code>:</strong> The database user for the Laravel application</li> <li><strong><code>db_pass</code>:</strong> The password for the database user</li> <li><strong><code>github_access_token</code>:</strong> Your GitHub personal access token for cloning the repository</li> <li><strong><code>github_user</code>:</strong> Your GitHub username</li> <li><strong><code>repo_name</code>:</strong> The name of the GitHub repository</li> <li><strong><code>domain</code>:</strong> The domain name for the Laravel application, e.g., <code>myapp.com</code></li> </ul> <p>With the variables in place, you can proceed to run the scripts with CloudRay.</p> <h2>Running the Script with CloudRay</h2> <p>Now that everything is set up, you can use CloudRay to automate the deployment of your Laravel application.</p> <p>You can choose to run the scripts individually or execute them all at once using <a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>.
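Before a real run, the update-or-create logic the deployment script applies to the Laravel `.env` file can be exercised safely. The sketch below repeats the script's sed pattern against a scratch file, with hypothetical values standing in for the placeholders:

```shell
# Sketch of the deployment script's .env update pattern, run against a scratch
# file instead of the real application .env. All values are hypothetical.
env_file=$(mktemp)
printf 'APP_URL=http://localhost\nDB_DATABASE=old_db\n' > "$env_file"

# Same sed form as the script: replace the whole line for each known key
sed -i "s|^APP_URL=.*|APP_URL=https://myapp.com|" "$env_file"
sed -i "s|^DB_DATABASE=.*|DB_DATABASE=laravel_db|" "$env_file"

app_url=$(grep '^APP_URL=' "$env_file")
db_database=$(grep '^DB_DATABASE=' "$env_file")
echo "$app_url"
echo "$db_database"
rm -f "$env_file"
```

Because each sed call replaces the entire line for a key, re-running the deployment is idempotent: existing keys are overwritten rather than duplicated. (GNU sed's `-i` flag is assumed, matching the guide's Linux target.)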
Since there are multiple scripts, using CloudRay playlists will help automate the execution sequence and save time.</p> <p>Here are the steps to follow:</p> <ol> <li><strong>Navigate to “Script Playlists”:</strong> Click on the Scripts tab in the CloudRay interface</li> </ol> <img src="/_astro/script-playlist.DWH35jzv.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Click “Add Script Playlist”:</strong> This initiates the creation of a new playlist</li> <li><strong>Provide a Name:</strong> Give your playlist a unique name (for example, “Automate Deployment and Management of Laravel Application”)</li> <li><strong>Add Scripts in Order:</strong> Select and add the scripts sequentially</li> </ol> <img src="/_astro/create-script-playlist.BEHXJPql.jpg" alt="Screenshot of adding scripts to a new playlist" /> <ol> <li><strong>Save the Playlist:</strong> Click “Create Playlist” to store your new playlist.</li> </ol> <p>Once your script playlist is created, proceed with execution:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog</li> <li><strong>Configure the Runlog</strong>: Provide the necessary details:</li> </ol> <img src="/_astro/run-playlist-script.B4YClK_t.jpg" alt="Screenshot of creating a new runlog" /> <ul> <li>Server: Select the server where your Laravel application will be installed</li> <li>Script Playlist: Choose the playlist you created (for example, “Automate Deployment and Management of Laravel Application”)</li> <li>Variable Group: Select the variable group you set up earlier</li> </ul> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution</li> </ol> <img src="/_astro/result-automation-scripts.3GeHxe40.jpg" alt="Screenshot of the result of all the scripts from the script playlist" /> <p>Your Laravel application is now seamlessly deployed and 
managed with CloudRay. You can access it by visiting <code>https://myapp.com</code>. That’s it! Happy deploying!</p> <h2>Troubleshooting</h2> <p>If you encounter issues during deployment, consider the following:</p> <ul> <li><strong>Caddy Fails to Start:</strong> Check the Caddy service status with <code>sudo systemctl status caddy</code> and restart it using <code>sudo systemctl restart caddy</code></li> <li><strong>PHP-FPM Not Working:</strong> Ensure PHP-FPM is running with <code>sudo systemctl status php-fpm</code> and restart it using <code>sudo systemctl restart php-fpm</code></li> <li><strong>SSL Certificate Not Issued:</strong> Verify your domain’s DNS records and ensure ports 80 and 443 are open in the firewall.</li> <li><strong>Laravel Application Not Loading:</strong> Check the <code>.env</code> file for correct database credentials and ensure the storage and <code>bootstrap/cache</code> directories have the correct permissions.</li> <li><strong>Database Connection Issues:</strong> Verify the database credentials in the <code>.env</code> file and ensure MariaDB is running with <code>sudo systemctl status mariadb</code></li> </ul> <p>If the issue persists, consult the <a href="https://caddyserver.com/docs/">Caddy Documentation</a> or the <a href="https://laravel.com/docs/12.x/readme">Laravel Documentation</a> for further assistance.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div> <h2>Related Guides</h2> <ul> <li><a href="/articles/deploy-ruby-on-rails">Deploy Ruby on Rails</a></li> <li><a href="/articles/deploy-jenkins-with-docker-compose">Deploy Express Application</a></li> <li><a href="/articles/deploy-sonarqube">Deploy SonarQube</a></li> <li><a href="/articles/deploy-static-website-from-github">Deploy Static Website from GitHub</a></li> <li><a href="/articles/deploy-nextjs-application">Deploy Next.js Application</a></li> <li><a 
href="/articles/deploy-phpmyadmin">How to Deploy phpMyAdmin</a></li> </ul>Deploy Your Next.js Applicationhttps://cloudray.io/articles/deploy-nextjs-applicationhttps://cloudray.io/articles/deploy-nextjs-applicationLearn how to deploy your Next.js application to an Ubuntu server using a reusable Bash script and CloudRay’s centralised automation platform.Mon, 31 Mar 2025 00:00:00 GMT<p><a href="https://app.cloudray.io/">CloudRay</a> offers a streamlined approach to automating and managing deployments across various environments. In this guide, we will walk through the process of deploying a Next.js application using CloudRay automation Bash scripts.</p> <p>By the end of this article, you will have a fully functional Next.js application running behind a Caddy reverse proxy with process management handled by PM2.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#assumptions">Assumptions</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#setup-nextjs-script">Setup Nextjs Script</a></li> <li><a href="#deploy-nextjs-script">Deploy Nextjs Script</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-script-with-cloudray">Running the Script with CloudRay</a> <ul> <li><a href="#run-the-setup-nextjs-script">Run the Setup Nextjs Script</a></li> <li><a href="#run-the-deploy-nextjs-script">Run the Deploy Nextjs Script</a></li> </ul> </li> <li><a href="#troubleshooting">Troubleshooting</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. 
You can adapt the scripts to fit your specific installation needs and environment.</p> </div> <h2>Assumptions</h2> <ul> <li>This guide assumes you’re using <strong>Ubuntu 24.04 LTS</strong> as your server’s operating system. If you’re using a different version or a different distribution, adjust the commands accordingly</li> </ul> <h2>Create the Automation Script</h2> <p>To automate the setup and deployment, you’ll need two Bash scripts:</p> <ol> <li><strong>Setup Nextjs Script</strong>: This script will install the application dependencies and build the Next.js application</li> <li><strong>Deploy Nextjs Script</strong>: This script will automate the deployment process by installing and configuring Caddy as a reverse proxy and securing the domain</li> </ol> <p>Let’s begin with the Setup Nextjs Script.</p> <h3>Setup Nextjs Script</h3> <p>To create the Setup Nextjs Script, you need to follow these steps:</p> <img src="/_astro/setup-script.dsMsMrOV.jpg" alt="Screenshot of adding a new setup script for Next.js" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Setup Nextjs Script</code>. 
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Install Node.js and npm</span></span> <span><span>echo</span><span> "Installing Node.js and npm..."</span></span> <span><span>sudo</span><span> apt</span><span> update</span><span> &amp;&amp; </span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> nodejs</span><span> npm</span></span> <span></span> <span><span># Install PM2 globally</span></span> <span><span>echo</span><span> "Installing PM2..."</span></span> <span><span>sudo</span><span> npm</span><span> install</span><span> -g</span><span> pm2</span></span> <span></span> <span><span># Clone the Next.js application</span></span> <span><span>echo</span><span> "Cloning Next.js repository..."</span></span> <span><span>sudo</span><span> mkdir</span><span> -p</span><span> /var/www</span></span> <span><span>cd</span><span> /var/www</span></span> <span><span>sudo</span><span> git</span><span> clone</span><span> https://{{github_access_token}}@github.com/{{github_user}}/{{repo_name}}.git</span></span> <span></span> <span><span># Install dependencies and build the application</span></span> <span><span>cd</span><span> {{repo_name}}</span></span> <span><span>echo</span><span> "Installing dependencies and building the application..."</span></span> <span><span>npm</span><span> install</span></span> <span><span>npm</span><span> run</span><span> build</span></span> <span></span> <span><span>echo</span><span> "Setup completed successfully!"</span></span></code><span></span><span></span></pre> <p>Here is what the <code>Setup Nextjs Script</code> does:</p> <ul> <li>Installs Node.js, npm, and PM2</li> <li>Clones the Next.js repository from GitHub</li> <li>Installs dependencies and builds the Next.js application</li> </ul> <h3>Deploy Nextjs Script</h3> 
<p>Next, to automate the deployment process, create the Deploy Nextjs Script by following these steps:</p> <img src="/_astro/deploy-script.5TRL6P72.jpg" alt="Screenshot of deploying Next.js" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Deploy Nextjs Script</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Install dependencies and Caddy</span></span> <span><span>echo</span><span> "Installing Caddy and dependencies..."</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> debian-keyring</span><span> debian-archive-keyring</span><span> apt-transport-https</span><span> curl</span></span> <span><span>curl</span><span> -1sLf</span><span> 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key'</span><span> |</span><span> sudo</span><span> gpg</span><span> --dearmor</span><span> -o</span><span> /usr/share/keyrings/caddy-stable-archive-keyring.gpg</span></span> <span><span>curl</span><span> -1sLf</span><span> 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt'</span><span> |</span><span> sudo</span><span> tee</span><span> /etc/apt/sources.list.d/caddy-stable.list</span></span> <span><span>sudo</span><span> apt</span><span> update</span><span> &amp;&amp; </span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> caddy</span></span> <span></span> <span><span># Configure Caddy reverse proxy</span></span> <span><span>echo</span><span> "Configuring Caddy reverse proxy..."</span></span> <span><span>sudo</span><span> bash</span><span> -c</span><span> "cat &gt; /etc/caddy/Caddyfile"</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>{{domain}} {</span></span> <span><span> reverse_proxy localhost:3000</span></span> <span><span>}</span></span> <span><span>EOF</span></span> 
<span></span> <span><span># Restart Caddy service</span></span> <span><span>echo</span><span> "Restarting Caddy..."</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> caddy</span></span> <span></span> <span><span># Navigate to the repository directory</span></span> <span><span>cd</span><span> /var/www/{{repo_name}}</span></span> <span></span> <span><span># Fetch the latest changes from the remote repository</span></span> <span><span>echo</span><span> "Fetching latest changes from GitHub..."</span></span> <span><span>git</span><span> fetch</span><span> --all</span></span> <span></span> <span><span># Reset the local repository to match the remote main branch</span></span> <span><span>echo</span><span> "Resetting to the latest version of the main branch..."</span></span> <span><span>git</span><span> reset</span><span> --hard</span><span> origin/main</span></span> <span></span> <span><span># Install dependencies and build the application</span></span> <span><span>echo</span><span> "Installing dependencies and building the application..."</span></span> <span><span>npm</span><span> install</span></span> <span><span>npm</span><span> run</span><span> build</span></span> <span></span> <span><span># Start the Next.js application using PM2</span></span> <span><span>echo</span><span> "Starting Next.js application..."</span></span> <span><span>pm2</span><span> start</span><span> npm</span><span> --name</span><span> "nextjs"</span><span> --</span><span> start</span></span> <span></span> <span><span># Configure PM2 to restart on boot</span></span> <span><span>pm2</span><span> startup</span></span> <span><span>pm2</span><span> save</span></span> <span></span> <span><span>echo</span><span> "Deployment completed successfully!"</span></span></code><span></span><span></span></pre> <p>This is what the <code>Deploy Nextjs Script</code> does:</p> <ul> <li>Installs Caddy and sets up a reverse proxy to the Next.js application</li> <li>Fetches the latest 
changes from all branches in the remote repository and resets the local repository to match the latest commit on the remote <code>main</code> branch</li> <li>Starts the Next.js application using PM2</li> <li>Ensures the Next.js process restarts automatically on reboot</li> </ul> <h2>Create a Variable Group</h2> <p>Now, before running the scripts, you need to define values for the placeholders <code>{{github_access_token}}</code>, <code>{{github_user}}</code>, <code>{{repo_name}}</code>, and <code>{{domain}}</code> used in the scripts. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.BAAjsIE_.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>github_access_token</code>:</strong> This is your GitHub personal access token for cloning the repository</li> <li><strong><code>github_user</code>:</strong> This is your GitHub username</li> <li><strong><code>repo_name</code>:</strong> This is the name of the GitHub repository</li> <li><strong><code>domain</code>:</strong> Your application’s domain name, e.g., <code>myapp.example.com</code></li> </ul> <p>With the variables set up, you can now run the scripts with CloudRay.</p> <h2>Running the Script with CloudRay</h2> <p>You can choose to run the scripts individually or execute them all at once using <a 
href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>. If you prefer to run them individually, follow these steps:</p> <h3>Run the Setup Nextjs Script</h3> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>To run the <code>Setup Nextjs Script</code>, follow these steps:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier.</li> <li>Script: Choose the “Setup Nextjs Script”</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/run-setup-script.DwmV8YM5.jpg" alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/result-setup-script.D0LHLum0.jpg" alt="Screenshot of the output of the setup Nextjs script" /> <p>CloudRay will automatically connect to your server, run the <code>Setup Nextjs Script</code>, and provide live logs to track the process. If any errors occur, you can review the logs to troubleshoot the issue.</p> <h3>Run the Deploy Nextjs Script</h3> <p>After successfully running the <code>Setup Nextjs Script</code>, the next step is to execute the Deployment Script using CloudRay’s Runlogs. 
This script will configure and deploy your application on the server.</p> <p>To run the Deployment Script, follow these steps:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Provide the necessary details:</li> </ol> <ul> <li>Server: Select the same server used earlier.</li> <li>Script: Choose the “Deploy Nextjs Script”.</li> <li>Variable Group: Select the variable group you created earlier.</li> </ul> <img src="/_astro/run-deploy-script.psGU7BTk.jpg" alt="Screenshot of running the deploy Next.js script" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to deploy your application.</li> </ol> <img src="/_astro/result-deploy-script.8HPrGkYK.jpg" alt="Screenshot of the output of the Deployment script" /> <p>Your Next.js application is now securely deployed and managed with CloudRay. That’s it! 
Happy deploying!</p> <h2>Troubleshooting</h2> <p>If you encounter issues during deployment, consider the following:</p> <ul> <li>Ensure your domain is properly registered and that its A record points to your server’s IP address</li> <li>Verify that all necessary environment variables are correctly set in your CloudRay Variable Group</li> <li>Check that your firewall allows traffic on the required ports (e.g., 80, 443)</li> </ul> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>How to Deploy SonarQube on Ubuntuhttps://cloudray.io/articles/deploy-sonarqubehttps://cloudray.io/articles/deploy-sonarqubeLearn how to automate SonarQube setup on Ubuntu with reusable Bash scripts for PostgreSQL, Java installation, and configuration via CloudRaySun, 30 Mar 2025 00:00:00 GMT<p><a href="https://app.cloudray.io/">CloudRay</a> simplifies cloud infrastructure automation by allowing users to execute Bash scripts on remote servers.</p> <p>In this guide, you will learn how to automate the deployment of SonarQube on an Ubuntu server using CloudRay. 
SonarQube is an open-source platform for continuous inspection of code quality, providing static code analysis and detecting bugs, vulnerabilities, and code smells.</p> <p>By the end of this guide, SonarQube will be running on your Ubuntu server, fully automated through CloudRay.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#assumptions">Assumptions</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#install-and-configure-postgresql-script">Install and Configure PostgreSQL Script</a></li> <li><a href="#install-java">Install Java</a></li> <li><a href="#install-and-configure-sonarqube-script">Install and Configure SonarQube Script</a></li> <li><a href="#configure-system-script">Configure System Script</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-script-with-cloudray">Running the Script with CloudRay</a></li> <li><a href="#troubleshooting">Troubleshooting</a></li> <li><a href="#related-guides">Related Guides</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can adapt the scripts to fit your specific installation needs and environment.</p> </div> <h2>Assumptions</h2> <ul> <li>This guide assumes you’re using <strong>Ubuntu 24.04 LTS</strong> as your server’s operating system. 
If you’re using a different version or a different distribution, adjust the commands accordingly.</li> </ul> <h2>Create the Automation Script</h2> <p>To streamline the setup and management processes, you’ll need four Bash scripts:</p> <ol> <li><strong>Install and Configure PostgreSQL:</strong> This script installs PostgreSQL, creates a SonarQube database, and sets up a user with appropriate privileges</li> <li><strong>Install Java:</strong> This script installs Java 17, which is required to run SonarQube</li> <li><strong>Install and Configure SonarQube:</strong> This script downloads, extracts, and configures SonarQube to work with PostgreSQL</li> <li><strong>Configure System:</strong> This script adjusts system settings for performance optimization and sets up SonarQube as a systemd service</li> </ol> <p>Let’s begin with the installation and configuration of PostgreSQL.</p> <h3>Install and Configure PostgreSQL Script</h3> <p>To create the Install and Configure PostgreSQL script, follow these steps:</p> <img src="/_astro/install-configure-postgres.KoGBxxSm.jpg" alt="Screenshot of script for installing and configuring postgresql" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Install and Configure PostgreSQL</code>. 
You can give it any name of your choice.</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on any error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span>echo</span><span> "Updating package lists..."</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span></span> <span><span>echo</span><span> "Adding PostgreSQL repository..."</span></span> <span><span>sudo</span><span> sh</span><span> -c</span><span> 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" &gt; /etc/apt/sources.list.d/pgdg.list'</span></span> <span></span> <span><span>echo</span><span> "Adding PostgreSQL repository key..."</span></span> <span><span>wget</span><span> -qO-</span><span> https://www.postgresql.org/media/keys/ACCC4CF8.asc</span><span> |</span><span> sudo</span><span> tee</span><span> /etc/apt/trusted.gpg.d/pgdg.asc</span><span> &amp;</span><span>&gt;</span><span>/dev/null</span></span> <span></span> <span><span>echo</span><span> "Updating package lists..."</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span></span> <span><span>echo</span><span> "Installing PostgreSQL..."</span></span> <span><span>sudo</span><span> apt-get</span><span> -y</span><span> install</span><span> postgresql</span><span> postgresql-contrib</span></span> <span></span> <span><span>echo</span><span> "Enabling and starting PostgreSQL..."</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> postgresql</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> postgresql</span></span> <span></span> <span><span>echo</span><span> "Setting password for the 'postgres' user..."</span></span> <span><span>echo</span><span> "postgres:{{db_postgres_pass}}"</span><span> |</span><span> sudo</span><span> chpasswd</span></span> <span></span> 
<span><span>echo</span><span> "Switching to PostgreSQL user to configure database..."</span></span> <span><span>sudo</span><span> -u</span><span> postgres</span><span> bash</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>createuser {{db_user}}</span></span> <span><span>psql -c "ALTER USER {{db_user}} WITH ENCRYPTED PASSWORD '{{db_pass}}';"</span></span> <span><span>psql -c "CREATE DATABASE {{db_name}} OWNER {{db_user}};"</span></span> <span><span>psql -c "GRANT ALL PRIVILEGES ON DATABASE {{db_name}} TO {{db_user}};"</span></span> <span><span>EOF</span></span> <span></span> <span><span>echo</span><span> "PostgreSQL installation and database setup completed!"</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what each command in the <code>Install and Configure PostgreSQL</code> does:</p> <ul> <li>Updates the package lists to get the latest version of available packages</li> <li>Adds the official PostgreSQL repository to the system</li> <li>Downloads and adds the PostgreSQL repository’s GPG key for verifying package authenticity</li> <li>Updates the package lists again to include the newly added PostgreSQL repository</li> <li>Installs PostgreSQL and additional utilities for managing the database</li> <li>Enables PostgreSQL to start automatically on system boot and starts the PostgreSQL service</li> <li>Switches to the PostgreSQL user (<code>postgres</code>) and performs the following: creates a new database user with a password, creates a database and assigns its owner, and grants full access to the user on the database</li> </ul> <h3>Install Java</h3> <p>After the installation and configuration of PostgreSQL, you need to install Java.</p> <p>Similarly, follow these steps to install Java:</p> <img src="/_astro/install-java.D_aYxutV.jpg" alt="Screenshot of installing java" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Install Java</code></li> <li>Add code:</li> </ol> 
<pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on any error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span>echo</span><span> "Downloading Java 17 repository key..."</span></span> <span><span>sudo</span><span> mkdir</span><span> -p</span><span> /etc/apt/keyrings</span></span> <span><span>wget</span><span> -O</span><span> -</span><span> https://packages.adoptium.net/artifactory/api/gpg/key/public</span><span> |</span><span> sudo</span><span> tee</span><span> /etc/apt/keyrings/adoptium.asc</span></span> <span></span> <span><span>echo</span><span> "Adding Adoptium repository for Java 17..."</span></span> <span><span>echo</span><span> "deb [signed-by=/etc/apt/keyrings/adoptium.asc] https://packages.adoptium.net/artifactory/deb $(</span><span>awk</span><span> -F=</span><span> '/^VERSION_CODENAME/{print$2}' /etc/os-release) main"</span><span> |</span><span> sudo</span><span> tee</span><span> /etc/apt/sources.list.d/adoptium.list</span></span> <span></span> <span><span>echo</span><span> "Updating package lists..."</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span></span> <span><span>echo</span><span> "Installing Java 17..."</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> temurin-17-jdk</span><span> -y</span></span> <span></span> <span><span>echo</span><span> "Setting Java 17 as the default..."</span></span> <span><span>sudo</span><span> update-alternatives</span><span> --set</span><span> java</span><span> /usr/lib/jvm/temurin-17-jdk-amd64/bin/java</span></span> <span></span> <span><span>echo</span><span> "Verifying Java installation..."</span></span> <span><span>/usr/bin/java</span><span> --version</span></span> <span></span> <span><span>echo</span><span> "Java 17 installation completed!"</span></span></code><span></span><span></span></pre> <p>This is what the <code>Install Java</code> script does:</p> <ul> <li>Downloads and saves the GPG key for the Adoptium repository (Java 17 provider) and adds the Adoptium repository to the system</li> <li>Updates the package lists to recognize the newly added 
Adoptium repository</li> <li>Installs Java 17 from the Adoptium repository and sets Java 17 as the default version for the system</li> </ul> <h3>Install and Configure SonarQube Script</h3> <p>Moving forward, you need to create the install and configure SonarQube script.</p> <p>You need to follow similar steps:</p> <img src="/_astro/install-configure-sonarqube.BmPaZjyy.jpg" alt="Screenshot of installing and configuring sonarqube" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Install and Configure SonarQube</code></li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on any error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span>echo</span><span> "Installing required utilities..."</span></span> <span><span>sudo</span><span> apt-get</span><span> install</span><span> zip</span><span> -y</span></span> <span></span> <span><span>echo</span><span> "Downloading SonarQube..."</span></span> <span><span>sudo</span><span> wget</span><span> https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-9.9.0.65466.zip</span></span> <span></span> <span><span>echo</span><span> "Extracting SonarQube archive..."</span></span> <span><span>sudo</span><span> unzip</span><span> sonarqube-9.9.0.65466.zip</span><span> -d</span><span> /opt</span></span> <span><span>sudo</span><span> mv</span><span> /opt/sonarqube-9.9.0.65466</span><span> /opt/sonarqube</span></span> <span></span> <span><span>echo</span><span> "Creating SonarQube user..."</span></span> <span><span>sudo</span><span> groupadd</span><span> {{sonar_user}}</span></span> <span><span>sudo</span><span> useradd</span><span> -c</span><span> "User to run SonarQube"</span><span> -d</span><span> /opt/sonarqube</span><span> -g</span><span> {{sonar_user}}</span><span> {{sonar_user}}</span></span> <span></span> <span><span>echo</span><span> "Setting 
permissions..."</span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> {{sonar_user}}:{{sonar_user}}</span><span> /opt/sonarqube</span></span> <span></span> <span><span>echo</span><span> "Configuring SonarQube database connection..."</span></span> <span><span>sudo</span><span> sed</span><span> -i</span><span> "s/^#sonar.jdbc.username=/sonar.jdbc.username={{db_user}}/"</span><span> {{sonarqube_config}}</span></span> <span><span>sudo</span><span> sed</span><span> -i</span><span> "s/^#sonar.jdbc.password=/sonar.jdbc.password={{db_pass}}/"</span><span> {{sonarqube_config}}</span></span> <span><span>echo</span><span> "sonar.jdbc.url=jdbc:postgresql://localhost:5432/{{db_name}}"</span><span> |</span><span> sudo</span><span> tee</span><span> -a</span><span> {{sonarqube_config}}</span></span> <span></span> <span><span>echo</span><span> "SonarQube installation completed!"</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what each command in the <code>Install and Configure SonarQube</code> does:</p> <ul> <li>Installs the zip package, which is required for extracting SonarQube files</li> <li>Downloads the SonarQube package and extracts the SonarQube package into the <code>/opt</code> directory</li> <li>Renames the extracted folder for easier access</li> <li>Creates a new user group. 
Then, creates a new user and assigns them to the new group</li> <li>Changes ownership of the SonarQube directory to the sonar user</li> <li>Configures the SonarQube database username and database</li> <li>Sets the SonarQube database connection URL</li> </ul> <h3>Configure System Script</h3> <p>Finally, you create the script to configure the system for SonarQube</p> <p>Similarly, follow these steps to create the configure system script:</p> <img src="/_astro/configure-system.CC7jecTr.jpg" alt="Screenshot of setting up the database" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Configure System Script</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on any error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span>echo</span><span> "Configuring system limits for SonarQube..."</span></span> <span><span>echo</span><span> -e</span><span> "sonarqube - nofile 65536\nsonarqube - nproc 4096"</span><span> |</span><span> sudo</span><span> tee</span><span> -a</span><span> {{limits_config}}</span></span> <span></span> <span><span>echo</span><span> "Updating system configuration..."</span></span> <span><span>echo</span><span> "vm.max_map_count = 262144"</span><span> |</span><span> sudo</span><span> tee</span><span> -a</span><span> {{sysctl_config}}</span></span> <span><span>sudo</span><span> sysctl</span><span> -p</span></span> <span></span> <span><span>echo</span><span> "Creating SonarQube systemd service..."</span></span> <span><span>cat</span><span> &lt;&lt;</span><span>EOF</span><span> |</span><span> sudo</span><span> tee</span><span> {{sonar_service}}</span></span> <span><span>[Unit]</span></span> <span><span>Description=SonarQube service</span></span> <span><span>After=syslog.target network.target</span></span> <span></span> <span><span>[Service]</span></span> <span><span>Type=forking</span></span> <span></span> 
<span><span>ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start</span></span> <span><span>ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop</span></span> <span></span> <span><span>User={{sonar_user}}</span></span> <span><span>Group={{sonar_user}}</span></span> <span><span>Restart=always</span></span> <span></span> <span><span>LimitNOFILE=65536</span></span> <span><span>LimitNPROC=4096</span></span> <span></span> <span><span>[Install]</span></span> <span><span>WantedBy=multi-user.target</span></span> <span><span>EOF</span></span> <span></span> <span><span>echo</span><span> "Reloading systemd daemon..."</span></span> <span><span>sudo</span><span> systemctl</span><span> daemon-reload</span></span> <span></span> <span><span>echo</span><span> "Starting and enabling SonarQube service..."</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> sonar</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> sonar</span></span> <span><span>sudo</span><span> systemctl</span><span> status</span><span> sonar</span><span> --no-pager</span></span> <span></span> <span><span>echo</span><span> "System configuration for SonarQube completed!"</span></span></code><span></span><span></span></pre> <p>This is what the <code>Configure System Script</code> does:</p> <ul> <li>Increases file and process limits for SonarQube to enhance performance</li> <li>Adjusts kernel memory settings for better performance</li> <li>Creates a systemd service file for managing SonarQube as a background service</li> <li>Reloads systemd to recognize the new SonarQube service</li> <li>Starts the SonarQube service and enables SonarQube to start automatically on boot</li> </ul> <h2>Create a Variable Group</h2> <p>Now, before running the scripts, you need to define values for the placeholders <code>{{sonar_user}}</code>, <code>{{db_user}}</code>, <code>{{db_name}}</code>, 
<code>{{db_pass}}</code>, <code>{{sonarqube_config}}</code>, <code>{{limits_config}}</code>, <code>{{sysctl_config}}</code>, <code>{{sonar_service}}</code>, and <code>{{db_postgres_pass}}</code> used in the scripts. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.CH0OHTb3.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>sonar_user</code>:</strong> Defines the username and group name for the SonarQube service, which is used to run and manage SonarQube securely</li> <li><strong><code>db_user</code>:</strong> This specifies the PostgreSQL database username for SonarQube</li> <li><strong><code>db_name</code>:</strong> This sets the name of the PostgreSQL database used by SonarQube</li> <li><strong><code>db_pass</code>:</strong> The password for the SonarQube database user</li> <li><strong><code>db_postgres_pass</code>:</strong> Specifies the password for the PostgreSQL <code>postgres</code> superuser account</li> <li><strong><code>sonarqube_config</code>:</strong> Points to the SonarQube configuration file where database and application settings are stored</li> <li><strong><code>limits_config</code>:</strong> This specifies the system limits configuration file that controls resource limits for users and processes</li> <li><strong><code>sysctl_config</code>:</strong> This represents the 
system configuration file that manages kernel parameters like memory and file descriptor limits</li> <li><strong><code>sonar_service</code>:</strong> This defines the systemd service file used to manage the SonarQube service as a background process</li> </ul> <p>Now that the variables are set up, you can run the scripts with CloudRay.</p> <h2>Running the Script with CloudRay</h2> <p>You can choose to run the scripts individually or execute them all at once using <a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>. Since there are multiple scripts, using CloudRay playlists will help automate the execution sequence and save time.</p> <p>Here are the steps to follow:</p> <ol> <li><strong>Navigate to “Script Playlists”:</strong> Click on the Scripts tab in the CloudRay interface</li> </ol> <img src="/_astro/script-playlist.CH4Vk0si.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Click “Add Script Playlist”:</strong> This initiates the creation of a new playlist</li> <li><strong>Provide a Name:</strong> Give your playlist a unique name (For example “SonarQube Deployment Automation and Management”)</li> <li><strong>Add Scripts in Order:</strong> Select and add the scripts sequentially</li> </ol> <img src="/_astro/create-script-playlist.DSDvxfIt.jpg" alt="Screenshot of creating script playlist" /> <ol> <li><strong>Save the Playlist:</strong> Click “create playlist” to store your new playlist.</li> </ol> <p>Once your script playlist is created, proceed with execution:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Provide the necessary details:</li> </ol> <img src="/_astro/run-playlist-script.CtdkZNox.jpg" alt="Screenshot of creating a new runlog" /> <ul> <li>Server: Select the server where SonarQube will be 
installed</li> <li>Script Playlist: Choose the playlist you created (For example “SonarQube Deployment Automation and Management”)</li> <li>Variable Group: Select the variable group you set up earlier</li> </ul> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution</li> </ol> <img src="/_astro/result-automation-scripts.Byk870TC.jpg" alt="Screenshot of the result of all the scripts from the script playlist" /> <p>Once the script runs successfully, your SonarQube will be fully deployed. You can now visit your SonarQube using your server’s public IP address on port <code>9000</code> (for example, <code>http://&lt;server-ip&gt;:9000</code>).</p> <img src="/_astro/browser-display.CKzka2q7.jpg" alt="Screenshot of the output on the browser" /> <p>Your SonarQube is now deployed and managed with CloudRay. That’s it! Happy deploying!</p> <h2>Troubleshooting</h2> <p>If you encounter issues during deployment, consider the following:</p> <ul> <li><strong>PostgreSQL Installation Fails:</strong> Verify repository configuration, GPG key, and <code>postgres</code> user permissions. Check logs with <code>journalctl -u postgresql</code></li> <li><strong>SonarQube Fails to Start:</strong> Check database credentials in <code>sonar.properties</code>, increase system limits, and ensure port 9000 is free. 
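Before digging into the logs, you can confirm these two prerequisites directly (a quick sketch; the kernel setting and port are the defaults used in this guide): <pre><code>#!/bin/bash
# Verify the kernel setting applied by the Configure System Script
# (equivalent to: sysctl vm.max_map_count; SonarQube needs at least 262144)
cat /proc/sys/vm/max_map_count

# Check whether anything is already listening on SonarQube's default port 9000
if ss -ltn | grep -q ':9000'; then
  echo "port 9000 is in use"
else
  echo "port 9000 is free"
fi</code></pre> 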
Use <code>sudo tail -f /opt/sonarqube/logs/sonar.log</code> to monitor logs</li> <li><strong>Web Interface Inaccessible:</strong> Open port <code>9000</code> on the firewall, confirm SonarQube is running, and verify the server’s IP or domain</li> </ul> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div> <h2>Related Guides</h2> <ul> <li><a href="/articles/ruby/deploy-delayed-jobs">Deploy Delayed Jobs</a></li> <li><a href="/articles/deploy-ruby-on-rails">Deploy Ruby on Rails</a></li> <li><a href="/articles/deploy-jenkins-with-docker-compose">Deploy Express Application</a></li> <li><a href="/articles/deploy-laravel">Deploy Laravel</a></li> <li><a href="/articles/deploy-static-website-from-github">Deploy Static Website from GitHub</a></li> <li><a href="/articles/deploy-nextjs-application">Deploy Next.js Application</a></li> <li><a href="/articles/deploy-phpmyadmin">How to Deploy phpMyAdmin</a></li> </ul>How to Deploy phpMyAdmin on Ubuntuhttps://cloudray.io/articles/deploy-phpmyadminhttps://cloudray.io/articles/deploy-phpmyadminLearn how to automate securing and deploying a hardened phpMyAdmin admin interface using reusable Bash scripts in CloudRay for MySQL or MariaDBWed, 19 Mar 2025 00:00:00 GMT<p>Installing and hardening phpMyAdmin can be a multi-step job which includes setting up packages, configuring web-server rules, enabling SSL, and tightening access controls.</p> <p>In this tutorial we will automate the whole workflow with <a href="https://app.cloudray.io/">CloudRay</a>, turning those steps into a repeatable Bash script. 
When you are done, you will have a secure phpMyAdmin instance ready for MySQL or MariaDB administration, and you can redeploy it the same way on any server.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#assumptions">Assumptions</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#setup-mysql-script">Setup MySQL Script</a></li> <li><a href="#install-phpmyadmin-script">Install phpMyAdmin Script</a></li> <li><a href="#secure-phpmyadmin-script">Secure phpMyAdmin Script</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-script-with-cloudray">Running the Script with CloudRay</a></li> <li><a href="#troubleshooting">Troubleshooting</a></li> <li><a href="#related-guides">Related Guides</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can adapt the scripts to fit your specific installation needs and environment.</p> </div> <h2>Assumptions</h2> <ul> <li>This guide assumes you’re using <strong>Rocky Linux 9</strong> as your server’s operating system. 
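You can confirm which release your server is running with: <pre><code>#!/bin/bash
# Print the distribution name and version from the standard os-release file
. /etc/os-release
echo "$NAME $VERSION_ID"</code></pre> 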
If you’re using a different version or a different distribution, adjust the commands accordingly</li> </ul> <h2>Create the Automation Script</h2> <p>To automate the installation of phpMyAdmin, you’ll need three Bash scripts:</p> <ol> <li><strong>Setup MySQL Script:</strong> Installs and configures MySQL server, creates a database, and sets up a user with appropriate privileges</li> <li><strong>Install phpMyAdmin Script:</strong> Downloads, installs, and configures phpMyAdmin to provide a web-based interface for managing MySQL databases</li> <li><strong>Secure phpMyAdmin Script:</strong> Implements security measures such as restricting access, setting up authentication, and configuring firewalls to protect the phpMyAdmin instance</li> </ol> <p>Let’s begin by setting up MySQL.</p> <h3>Setup MySQL Script</h3> <p>To create the Setup MySQL Script, you need to follow these steps:</p> <img src="/_astro/setup-script.tuD3Sjsl.jpg" alt="Screenshot of adding a new setup script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Setup MySQL Script</code>. 
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update system and install MySQL server</span></span> <span><span>sudo</span><span> dnf</span><span> update</span><span> -y</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> mysql-server</span><span> -y</span></span> <span></span> <span><span># Start and enable MySQL service</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> mysqld</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> mysqld</span></span> <span></span> <span><span># Create MySQL database and user</span></span> <span><span>sudo</span><span> mysql</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>CREATE DATABASE {{db_name}};</span></span> <span><span>CREATE USER '{{db_user}}'@'localhost' IDENTIFIED BY '{{db_password}}';</span></span> <span><span>GRANT ALL PRIVILEGES ON {{db_name}}.* TO '{{db_user}}'@'localhost';</span></span> <span><span>FLUSH PRIVILEGES;</span></span> <span><span>EOF</span></span> <span></span> <span><span>echo</span><span> "✅ MySQL server setup completed successfully!"</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what each command in the <code>Setup MySQL Script</code> does:</p> <ul> <li>Updates the system packages to the latest versions</li> <li>Installs the MySQL server package</li> <li>Starts MySQL immediately and ensures it starts on boot</li> <li>Creates a database and a user</li> <li>Grants all privileges on the database to the new user</li> <li>Flushes privileges to apply the changes</li> </ul> <div> <p>IMPORTANT</p> <p>If you’d like to learn more about managing, backing up, and optimizing your MySQL database, check out our comprehensive guide: <a 
href="/articles/deploy-mysql-server">How to Deploy &amp; Manage a MySQL Server Using CloudRay</a>.</p> </div> <h3>Install phpMyAdmin Script</h3> <p>Next, you need to create the installation script for phpMyAdmin. To do so, follow similar steps as the above:</p> <img src="/_astro/install-script.BeLnGaa2.jpg" alt="Screenshot of installing phpmyadmin" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Install phpMyAdmin Script</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Install phpMyAdmin and required PHP extensions</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> epel-release</span><span> -y</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> phpmyadmin</span><span> -y</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> php-mysqlnd</span><span> php-mbstring</span><span> php-json</span><span> php-xml</span><span> php-curl</span><span> php-zip</span><span> php-common</span><span> -y</span></span> <span></span> <span><span># Restart Apache</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> httpd</span></span> <span></span> <span><span># Install nano (for manual editing if needed)</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> nano</span><span> -y</span></span> <span></span> <span><span># Configure phpMyAdmin access</span></span> <span><span>sudo</span><span> sed</span><span> -i</span><span> 's/Require local/Require all granted/'</span><span> /etc/httpd/conf.d/phpMyAdmin.conf</span></span> <span></span> <span><span># Restart Apache to apply changes</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> httpd</span></span> <span></span> <span><span># Enable 
external MySQL connections via SELinux</span></span> <span><span>sudo</span><span> setsebool</span><span> -P</span><span> httpd_can_network_connect</span><span> on</span></span> <span></span> <span><span># Install and configure firewall</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> firewalld</span><span> -y</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> firewalld</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> firewalld</span></span> <span><span>sudo</span><span> firewall-cmd</span><span> --permanent</span><span> --add-service=http</span></span> <span><span>sudo</span><span> firewall-cmd</span><span> --permanent</span><span> --add-service=https</span></span> <span><span>sudo</span><span> firewall-cmd</span><span> --reload</span></span> <span></span> <span><span>echo</span><span> "✅ phpMyAdmin installation and configuration completed successfully!"</span></span></code><span></span><span></span></pre> <p>This is what the <code>Install phpMyAdmin Script</code> does:</p> <ul> <li>Enables access to extra packages (including phpMyAdmin) and necessary PHP extensions for phpMyAdmin</li> <li>Ensures the webserver is running and reloads configurations</li> <li>Updates the phpMyAdmin access configuration to allow external access (not just local)</li> <li>Allows Apache to connect to the MySQL database</li> <li>Opens HTTP (80) and HTTPS (443) ports for external access</li> </ul> <h3>Secure phpMyAdmin Script</h3> <p>To secure your phpMyAdmin instance, follow these steps:</p> <img src="/_astro/secure-script.CpJXjqFr.jpg" alt="Screenshot of script for securing phpmyadmin" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Secure phpMyAdmin Script</code>. 
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Install HTTP authentication tools</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> httpd-tools</span><span> -y</span></span> <span></span> <span><span># Create htpasswd file</span></span> <span><span>sudo</span><span> htpasswd</span><span> -bc</span><span> /etc/httpd/.htpasswd</span><span> {{auth_user}}</span><span> '{{auth_password}}'</span></span> <span></span> <span><span># Allow override for phpMyAdmin in Apache</span></span> <span><span>sudo</span><span> sed</span><span> -i</span><span> '/&lt;Directory \/usr\/share\/phpMyAdmin\/&gt;/,/&lt;\/Directory&gt;/ s/Require all granted/&amp;\n AllowOverride All/'</span><span> /etc/httpd/conf.d/phpMyAdmin.conf</span></span> <span></span> <span><span># Create .htaccess for phpMyAdmin security</span></span> <span><span>sudo</span><span> bash</span><span> -c</span><span> 'cat &lt;&lt;EOF &gt; /usr/share/phpMyAdmin/.htaccess</span></span> <span><span>AuthType Basic</span></span> <span><span>AuthName "Restricted Access to phpMyAdmin"</span></span> <span><span>AuthUserFile /etc/httpd/.htpasswd</span></span> <span><span>Require valid-user</span></span> <span><span>EOF'</span></span> <span></span> <span><span># Restart Apache to apply changes</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> httpd</span></span> <span></span> <span><span># Confirm Apache is running</span></span> <span><span>sudo</span><span> systemctl</span><span> status</span><span> httpd</span></span> <span></span> <span><span>echo</span><span> "✅ phpMyAdmin secured successfully!"</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what each command in the <code>Secure phpMyAdmin Script</code> does:</p> <ul> 
<li>Installs the <code>htpasswd</code> command for basic authentication</li> <li>Creates a new password file (<code>/etc/httpd/.htpasswd</code>)</li> <li>Enables user authentication via <code>.htaccess</code></li> <li>Sets up basic authentication using the credentials stored in <code>/etc/httpd/.htpasswd</code></li> <li>Restarts Apache and checks its status</li> </ul> <h2>Create a Variable Group</h2> <p>Now, before running the scripts, you need to define values for the placeholders <code>{{db_name}}</code>, <code>{{db_user}}</code>, <code>{{db_password}}</code>, <code>{{auth_user}}</code>, and <code>{{auth_password}}</code> used in the scripts. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>.</p> <p>This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.DMK06g4h.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>db_name</code>:</strong> This is the name of the database</li> <li><strong><code>db_user</code>:</strong> The name of the database user</li> <li><strong><code>db_password</code>:</strong> The password for the database user</li> <li><strong><code>auth_user</code>:</strong> The username for your basic authentication for phpMyAdmin</li> <li><strong><code>auth_password</code>:</strong> The password for your basic authentication for phpMyAdmin</li> </ul> <p>Now that the variables are set up, you can run the scripts with CloudRay.</p> <h2>Running the Script with 
CloudRay</h2> <p>Now that everything is set up, you can use CloudRay to automate deploying and securing phpMyAdmin.</p> <p>You can choose to run the scripts individually or execute them all at once using <a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>. Since there are multiple scripts, using CloudRay playlists will help automate the execution sequence and save time.</p> <p>Here are the steps to follow:</p> <ol> <li><strong>Navigate to “Script Playlists”:</strong> Click on the Scripts tab in the CloudRay interface</li> </ol> <img src="/_astro/script-playlist.DWH35jzv.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Click “Add Script Playlist”:</strong> This initiates the creation of a new playlist</li> <li><strong>Provide a Name:</strong> Give your playlist a unique name (For example “Automate Deployment and Management of phpMyAdmin”)</li> <li><strong>Add Scripts in Order:</strong> Select and add the scripts sequentially</li> </ol> <img src="/_astro/create-script-playlist.CF-ElF-A.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Save the Playlist:</strong> Click “create playlist” to store your new playlist.</li> </ol> <p>Once your script playlist is created, proceed with execution:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Provide the necessary details:</li> </ol> <img src="/_astro/run-playlist-script.t75HbUVw.jpg" alt="Screenshot of creating a new runlog" /> <ul> <li>Server: Select the server where phpMyAdmin will be installed</li> <li>Script Playlist: Choose the playlist you created (For example “Automate Deployment and Management of phpMyAdmin”)</li> <li>Variable Group: Select the variable group you set up earlier</li> </ul> <ol> <li><strong>Execute the Script</strong>: Click on 
<strong>Run Now</strong> to start the execution</li> </ol> <img src="/_astro/result-automation-scripts.DIBWqucL.jpg" alt="Screenshot of the result of all the scripts from the script playlist" /> <p>Your phpMyAdmin is now seamlessly deployed and managed with CloudRay. You can access it by visiting <code>http://&lt;server-ip&gt;/phpmyadmin</code>. That’s it! Happy deploying!</p> <h2>Troubleshooting</h2> <p>If you encounter issues during deployment, consider the following:</p> <ul> <li><strong>phpMyAdmin Web Interface Not Loading:</strong> Ensure Apache is running with <code>sudo systemctl status httpd</code> and restart it using <code>sudo systemctl restart httpd</code>. Also, verify that the firewall allows HTTP (port 80) and HTTPS (port 443) traffic.</li> <li><strong>MySQL Service Not Starting:</strong> Check the MySQL service status with <code>sudo systemctl status mysqld</code> and restart it using <code>sudo systemctl restart mysqld</code>. Ensure the service is enabled to start on boot with <code>sudo systemctl enable mysqld</code></li> <li><strong>Authentication Issues:</strong> If you cannot log in to phpMyAdmin, verify the <code>.htaccess</code> file in <code>/usr/share/phpMyAdmin/</code> is correctly configured and that the <code>/etc/httpd/.htpasswd</code> file contains the correct credentials</li> </ul> <p>If the issue persists, consult the <a href="https://www.phpmyadmin.net/docs/">phpMyAdmin Documentation</a> for further assistance.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div> <h2>Related Guides</h2> <ul> <li><a href="/articles/install-mongodb">Install MongoDB</a></li> <li><a href="/articles/deploy-mysql-server">Deploy MySQL Server</a></li> <li><a href="/articles/setting-up-postgres-database">Setting Up PostgreSQL</a></li> </ul>Deploy a Static Website Without a GitHub 
Repositoryhttps://cloudray.io/articles/deploy-static-websitehttps://cloudray.io/articles/deploy-static-websiteLearn how to automate the setting up and deployment of a simple static website without a GitHub repository using CloudRayWed, 12 Mar 2025 00:00:00 GMT<p><a href="https://app.cloudray.io/">CloudRay</a> provides an automation-friendly approach to deploying static websites on Ubuntu servers, making it a great choice for hosting static sites.</p> <p>In this guide, you will learn the process of deploying a simple static website (HTML/CSS) using CloudRay. We’ll create an automation Bash script to set up the website structure, configure Caddy as the web server, and deploy the website with a script. By the end, your website will be live on a custom domain with HTTPS support.</p> <div> <p>NOTE</p> <p>If you want to learn how to deploy a static website from GitHub using CloudRay, check out our guide on <a href="/articles/deploy-static-website-from-github">How to Deploy a Static Website From GitHub Using CloudRay</a>.</p> </div> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#assumptions">Assumptions</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#website-setup-script">Website Setup Script</a></li> <li><a href="#website-deployment-script">Website Deployment Script</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-script-with-cloudray">Running the Script with CloudRay</a> <ul> <li><a href="#run-the-setup-script">Run the setup script</a></li> <li><a href="#run-the-deployment-script">Run the Deployment script</a></li> </ul> </li> <li><a href="#troubleshooting">Troubleshooting</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started with your automation, make sure your target servers are connected to CloudRay. 
If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can modify these scripts based on your environment and deployment needs.</p> </div> <h2>Assumptions</h2> <ul> <li>This guide assumes you’re using <strong>Ubuntu 24.04 LTS</strong> as your server’s operating system. If you’re using a different version or a different distribution, adjust the commands accordingly.</li> <li>Your domain name is “mywebsite.com” (replace it with your actual domain)</li> </ul> <h2>Create the Automation Script</h2> <p>This guide is ideal for very small projects or quick prototypes. It involves writing HTML and CSS directly in a script.</p> <p>To simplify deployment, we will create two Bash scripts:</p> <ol> <li><strong>Website Setup Script</strong>: This script will create the website structure with HTML and CSS files</li> <li><strong>Website Deployment Script</strong>: This script will install Caddy as a web server and deploy the website</li> </ol> <p>Let’s begin with the simple website setup.</p> <h3>Website Setup Script</h3> <p>To create the website setup script, you need to follow these steps:</p> <img src="/_astro/website-setup.BlXMtXXn.jpg" alt="Screenshot of adding a new website setup script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Website Setup Script</code>. 
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Create website directory</span></span> <span><span>mkdir</span><span> my-static-website</span></span> <span><span>cd</span><span> my-static-website</span></span> <span></span> <span><span># Create index.html</span></span> <span><span>cat</span><span> &lt;&lt;</span><span>EOF</span><span> &gt;</span><span> index.html</span></span> <span><span>&lt;!DOCTYPE html&gt;</span></span> <span><span>&lt;html lang="en"&gt;</span></span> <span><span>&lt;head&gt;</span></span> <span><span> &lt;meta charset="UTF-8"&gt;</span></span> <span><span> &lt;meta name="viewport" content="width=device-width, initial-scale=1.0"&gt;</span></span> <span><span> &lt;title&gt;Greetings from CloudRay&lt;/title&gt;</span></span> <span><span> &lt;link rel="stylesheet" href="styles.css"&gt;</span></span> <span><span>&lt;/head&gt;</span></span> <span><span>&lt;body&gt;</span></span> <span><span> &lt;header&gt;</span></span> <span><span> &lt;h1&gt;Greetings from CloudRay!&lt;/h1&gt;</span></span> <span><span> &lt;/header&gt;</span></span> <span><span> &lt;main&gt;</span></span> <span><span> &lt;p&gt;This is a simple static website deployed on Ubuntu using CloudRay!&lt;/p&gt;</span></span> <span><span> &lt;/main&gt;</span></span> <span><span> &lt;footer&gt;</span></span> <span><span> &lt;p&gt;&amp;copy; 2025 My Static Website&lt;/p&gt;</span></span> <span><span> &lt;/footer&gt;</span></span> <span><span>&lt;/body&gt;</span></span> <span><span>&lt;/html&gt;</span></span> <span><span>EOF</span></span> <span></span> <span><span># Create styles.css</span></span> <span><span>cat</span><span> &lt;&lt;</span><span>EOF</span><span> &gt;</span><span> styles.css</span></span> <span><span>body {</span></span> <span><span> font-family: Arial, 
sans-serif;</span></span> <span><span> margin: 0;</span></span> <span><span> padding: 0;</span></span> <span><span> background-color: #f4f4f4;</span></span> <span><span> color: #333;</span></span> <span><span>}</span></span> <span></span> <span><span>header {</span></span> <span><span> background-color: #333;</span></span> <span><span> color: #fff;</span></span> <span><span> padding: 20px;</span></span> <span><span> text-align: center;</span></span> <span><span>}</span></span> <span></span> <span><span>main {</span></span> <span><span> padding: 20px;</span></span> <span><span> text-align: center;</span></span> <span><span>}</span></span> <span></span> <span><span>footer {</span></span> <span><span> background-color: #333;</span></span> <span><span> color: #fff;</span></span> <span><span> text-align: center;</span></span> <span><span> padding: 10px;</span></span> <span><span> position: fixed;</span></span> <span><span> bottom: 0;</span></span> <span><span> width: 100%;</span></span> <span><span>}</span></span> <span><span>EOF</span></span> <span></span> <span><span># Go back to home directory</span></span> <span><span>cd</span><span> ~</span></span> <span></span> <span></span> <span><span>echo</span><span> "Setup completed!!!"</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what each command in the <code>Website Setup Script</code> does:</p> <ul> <li>Creates a directory named <code>my-static-website</code> to store the website files</li> <li>Navigates into the newly created directory</li> <li>Generates an <code>index.html</code> file with basic HTML content</li> <li>Creates a <code>styles.css</code> file for styling</li> <li>Moves back to the home directory after setting up the files</li> </ul> <h3>Website Deployment Script</h3> <p>After setting up the simple website, you can deploy the website using Caddy.</p> <p>Similarly, follow these steps to create the deployment script:</p> <img src="/_astro/deploy-website-script.ZzVPfySx.jpg" 
alt="Screenshot of website deployment script" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Website Deployment Script</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># install caddy</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> debian-keyring</span><span> debian-archive-keyring</span><span> apt-transport-https</span></span> <span><span>curl</span><span> -1sLf</span><span> 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key'</span><span> |</span><span> sudo</span><span> gpg</span><span> --dearmor</span><span> -o</span><span> /usr/share/keyrings/caddy-stable-archive-keyring.gpg</span></span> <span><span>curl</span><span> -1sLf</span><span> 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt'</span><span> |</span><span> sudo</span><span> tee</span><span> /etc/apt/sources.list.d/caddy-stable.list</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> caddy</span></span> <span></span> <span><span># Creating the website directory</span></span> <span><span>mkdir</span><span> -p</span><span> /var/www</span></span> <span></span> <span><span># Copy files to website directory</span></span> <span><span>cp</span><span> -r</span><span> my-static-website</span><span> {{website_dir}}</span></span> <span></span> <span><span># Configure Caddyfile</span></span> <span><span>sudo</span><span> bash</span><span> -c</span><span> "cat &gt; {{caddyfile}}"</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>{{your_domain}} {</span></span> <span><span> root * {{website_dir}}</span></span> <span><span> file_server</span></span> <span><span>}</span></span> <span><span>EOF</span></span> 
<span></span> <span><span># Grant permission to the server</span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> www-data:www-data</span><span> {{website_dir}}</span></span> <span><span>sudo</span><span> chmod</span><span> -R</span><span> 755</span><span> {{website_dir}}</span></span> <span></span> <span><span># Reload Caddy</span></span> <span><span>sudo</span><span> systemctl</span><span> reload</span><span> caddy</span></span> <span></span> <span><span>echo</span><span> "Website deployed! Visit: http://{{your_domain}} or https://{{your_domain}}"</span></span></code><span></span><span></span></pre> <p>This is what the <code>Website Deployment Script</code> does:</p> <ul> <li>Installs the necessary dependencies for the Caddy web server, then installs Caddy itself</li> <li>Copies the website files from <code>my-static-website</code> to <code>/var/www/my-static-website</code></li> <li>Configures Caddy to serve the website using your domain name (e.g. mywebsite.com)</li> <li>Changes ownership of the website directory to www-data (the web server user)</li> <li>Sets permissions to allow public access to the files</li> <li>Reloads the Caddy server to apply the changes</li> </ul> <p>Next, you need to create a variable group and run these scripts using CloudRay.</p> <h2>Create a Variable Group</h2> <p>Before running the scripts, you need to define values for the placeholders <code>{{website_dir}}</code>, <code>{{caddyfile}}</code>, and <code>{{your_domain}}</code> used in the scripts. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>.
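To make the substitution concrete, the sketch below approximates locally what CloudRay does server-side, using <code>sed</code> with hypothetical values (the variable names and template line are stand-ins for illustration, not part of CloudRay's API):

```shell
#!/bin/bash
# Hypothetical values standing in for a CloudRay variable group
website_dir="/var/www/my-static-website"
your_domain="mywebsite.com"

# One line of the deployment script, placeholder intact
template='root * {{website_dir}}'

# Approximate the Liquid rendering step with sed; CloudRay performs
# this substitution automatically before the script reaches the server
rendered=$(printf '%s' "$template" | sed \
  -e "s|{{website_dir}}|$website_dir|g" \
  -e "s|{{your_domain}}|$your_domain|g")

echo "$rendered"   # → root * /var/www/my-static-website
```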
This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.DA57_ibA.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>website_dir</code>:</strong> This specifies the directory where the website files are located</li> <li><strong><code>caddyfile</code>:</strong> This specifies the path to the Caddy web server’s configuration file</li> <li><strong><code>your_domain</code>:</strong> This specifies your registered domain</li> <li><strong><code>your_repo_url</code>:</strong> The URL of your GitHub repository (for example, <a href="https://github.com/GeoSegun/html-css-website.git">https://github.com/GeoSegun/html-css-website.git</a>)</li> </ul> <p>With the variables set up, you can now run the scripts with CloudRay.</p> <h2>Running the Script with CloudRay</h2> <p>You can choose to run the scripts individually or execute them all at once using <a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>.
If you prefer to run them individually, follow these steps:</p> <h3>Run the setup script</h3> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>To run the <code>Website Setup Script</code>, follow these steps:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier.</li> <li>Script: Choose the “Website Setup Script”</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/run-setup-script.D9mJJ20l.jpg" alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/result-set-up-script.CYkiRDid.jpg" alt="Screenshot of the output of the set-up-script" /> <p>CloudRay will automatically connect to your server, run the <code>Website Setup Script</code>, and provide live logs to track the process. If any errors occur, you can review the logs to troubleshoot the issue.</p> <h3>Run the Deployment script</h3> <p>After successfully running the <code>Website Setup Script</code>, the next step is to execute the <code>Website Deployment Script</code> using CloudRay’s Runlogs. 
This script will configure and deploy your website on the server.</p> <p>To run the Deployment Script, follow these steps:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Provide the necessary details:</li> </ol> <ul> <li>Server: Select the same server used earlier.</li> <li>Script: Choose the “Website Deployment Script”.</li> <li>Variable Group: Select the variable group you created earlier.</li> </ul> <img src="/_astro/execute-deployment-script.BFiYxS0l.jpg" alt="Screenshot of running the Deployment script" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to deploy your application.</li> </ol> <img src="/_astro/result-deployment-script.DagGUfoi.jpg" alt="Screenshot of the output of the Deployment script" /> <p>Once the script runs successfully, your application will be fully deployed. You can now visit your application securely using your configured domain (<a href="https://yourdomain.com">https://yourdomain.com</a>).</p> <img src="/_astro/browser-display.DjYfWIf-.jpg" alt="Screenshot of the output on the browser" /> <p>Your simple website (HTML/CSS) is now securely deployed and managed with CloudRay. That’s it! 
Happy deploying!</p> <h2>Troubleshooting</h2> <p>If you encounter issues during deployment, consider the following:</p> <ul> <li>Ensure your domain is properly registered and that its A record points to your server’s IP address</li> <li>Verify that all necessary environment variables are correctly set in your CloudRay Variable Group</li> </ul> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div>Deploy a Static Website From a GitHub Repositoryhttps://cloudray.io/articles/deploy-static-website-from-githubhttps://cloudray.io/articles/deploy-static-website-from-githubLearn how to automate the setting up and deployment of a simple static website from a GitHub repository using CloudRayTue, 11 Mar 2025 00:00:00 GMT<p><a href="https://app.cloudray.io/">CloudRay</a> simplifies the process of deploying static websites directly from a GitHub repository. This method is ideal for larger projects or teams, as it allows you to maintain version control and collaborate seamlessly.
In this guide, you’ll learn how to clone a GitHub repository containing your website files, configure Caddy as the web server, and deploy your site with CloudRay’s automation-friendly tool.</p> <p>By the end, your static website will be live on a custom domain with automatic HTTPS support.</p> <div> <p>NOTE</p> <p>If you want to deploy a static website without using GitHub, follow our guide on <a href="/articles/deploy-static-website">How to Deploy a Static Website Without a GitHub Repository Using CloudRay</a>.</p> </div> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#assumptions">Assumptions</a></li> <li><a href="#create-a-static-website-deployment-script">Create a Static Website Deployment Script</a></li> <li><a href="#running-the-script-with-cloudray">Running the Script with CloudRay</a></li> <li><a href="#related-guides">Related Guides</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can modify these scripts based on your environment and deployment needs</p> </div> <h2>Assumptions</h2> <ul> <li>This guide assumes you’re using <strong>Ubuntu 24.04 LTS</strong> as your server’s operating system. If you’re using a different version or a different distribution, adjust the commands accordingly.</li> <li>Your domain name is “mywebsite.com” (replace it with your actual domain)</li> </ul> <h2>Create a Static Website Deployment Script</h2> <p>You need to create a script to deploy a static website from your GitHub repository. 
Follow these steps to achieve this:</p> <img src="/_astro/updated-deploy-website.DapQm-vx.jpg" alt="Screenshot of an updated deployment script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Static Website Deployment Script</code>. You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Install Caddy</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> debian-keyring</span><span> debian-archive-keyring</span><span> apt-transport-https</span></span> <span><span>curl</span><span> -1sLf</span><span> 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key'</span><span> |</span><span> sudo</span><span> gpg</span><span> --dearmor</span><span> -o</span><span> /usr/share/keyrings/caddy-stable-archive-keyring.gpg</span></span> <span><span>curl</span><span> -1sLf</span><span> 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt'</span><span> |</span><span> sudo</span><span> tee</span><span> /etc/apt/sources.list.d/caddy-stable.list</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> caddy</span></span> <span></span> <span><span># Create the website directory</span></span> <span><span>mkdir</span><span> -p</span><span> /var/www</span></span> <span></span> <span><span># Clone or update the GitHub repository</span></span> <span><span>if</span><span> [ </span><span>-d</span><span> "{{website_dir}}/.git"</span><span> ]; </span><span>then</span></span> <span><span> echo</span><span> "Updating existing repository..."</span></span> <span><span> cd</span><span> "{{website_dir}}"</span></span> <span><span> git</span><span> fetch</span><span> --all</span></span> 
<span><span> git</span><span> reset</span><span> --hard</span><span> origin/main</span></span> <span><span>else</span></span> <span><span> echo</span><span> "Cloning repository..."</span></span> <span><span> git</span><span> clone</span><span> "{{your_repo_url}}"</span><span> "{{website_dir}}"</span></span> <span><span>fi</span></span> <span></span> <span><span># Configure Caddyfile</span></span> <span><span>sudo</span><span> bash</span><span> -c</span><span> "cat &gt; {{caddyfile}}"</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>{{your_domain}} {</span></span> <span><span> root * {{website_dir}}</span></span> <span><span> file_server</span></span> <span><span>}</span></span> <span><span>EOF</span></span> <span></span> <span><span># Grant permission to the server</span></span> <span><span>sudo</span><span> chown</span><span> -R</span><span> www-data:www-data</span><span> "{{website_dir}}"</span></span> <span><span>sudo</span><span> chmod</span><span> -R</span><span> 755</span><span> "{{website_dir}}"</span></span> <span></span> <span><span># Reload Caddy</span></span> <span><span>sudo</span><span> systemctl</span><span> reload</span><span> caddy</span></span> <span></span> <span><span>echo</span><span> "Website deployed! Visit: http://{{your_domain}} or https://{{your_domain}}"</span></span></code><span></span><span></span></pre> <p>This updated script clones your GitHub repository (or pulls the latest changes if it already exists), configures Caddy to serve the website, and deploys it to your server. 
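The clone-or-update branch is what makes reruns safe. The sketch below exercises the same pattern against a throwaway local repository (no network needed; requires Git 2.28+ for <code>init -b</code>):

```shell
#!/bin/bash
# Exercise the clone-or-update pattern locally with a throwaway repository
set -e
tmp=$(mktemp -d)
origin="$tmp/origin"   # stands in for {{your_repo_url}}
site="$tmp/site"       # stands in for {{website_dir}}

# Create a stand-in "remote" repository with one commit
git init -q -b main "$origin"
git -C "$origin" -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "v1"

deploy() {
  if [ -d "$site/.git" ]; then
    git -C "$site" fetch --all -q
    git -C "$site" reset -q --hard origin/main
  else
    git clone -q "$origin" "$site"
  fi
}

deploy   # first run clones v1
git -C "$origin" -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "v2"
deploy   # second run fast-forwards the checkout to v2

git -C "$site" log -1 --format=%s   # → v2
```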
Each time you run the script, it ensures your website is synced with the latest changes from the <code>main</code> branch.</p> <h2>Running the Script with CloudRay</h2> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <div> <p>NOTE</p> <p>Ensure you use the same variables from the <a href="/articles/deploy-static-website">Deploy a Static Website Without a GitHub Repository Using CloudRay</a> guide.</p> </div> <p>You can choose to run the scripts individually or execute them all at once using <a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>. If you prefer to run them individually, follow these steps:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier.</li> <li>Script: Choose the “Static Website Deployment Script”.</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/run-deployment-github-script.BZwdAuUX.jpg" alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/updated-deployment-result.rZGqejps.jpg" alt="Screenshot of the output of the Github deployment script" /> <p>CloudRay will automatically connect to your server, run the <code>Static Website Deployment Script</code>, and provide live logs to track the process. If any errors occur, you can review the logs to troubleshoot the issue.</p> <p>Your static website hosted on GitHub is now successfully deployed on the server!
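As a quick command-line check, you can confirm the site answers with HTTP 200. The sketch below demonstrates the pattern against a throwaway local server; on a real deployment, point <code>curl</code> at your own domain instead:

```shell
#!/bin/bash
# Verify a static site responds with HTTP 200 before announcing success.
# Tested here against a throwaway local server standing in for Caddy.
set -e
dir=$(mktemp -d)
echo '<h1>Greetings from CloudRay!</h1>' > "$dir/index.html"

# Minimal stand-in file server on localhost (port 8123 is arbitrary)
python3 -m http.server 8123 --directory "$dir" >/dev/null 2>&1 &
server_pid=$!
sleep 1   # give the server a moment to bind

status=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8123/index.html)
kill "$server_pid"

echo "$status"   # → 200
```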
You can visit it by navigating to your configured domain (e.g., <a href="http://mywebsite.com">http://mywebsite.com</a> or <a href="https://mywebsite.com">https://mywebsite.com</a>).</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div> <h2>Related Guides</h2> <ul> <li><a href="/articles/ruby/deploy-delayed-jobs">Deploy Delayed Jobs</a></li> <li><a href="/articles/deploy-ruby-on-rails">Deploy Ruby on Rails</a></li> <li><a href="/articles/deploy-jenkins-with-docker-compose">Deploy Express Application</a></li> <li><a href="/articles/deploy-laravel">Deploy Laravel</a></li> <li><a href="/articles/deploy-static-website-from-github">Deploy Static Website from GitHub</a></li> <li><a href="/articles/deploy-nextjs-application">Deploy Next.js Application</a></li> <li><a href="/articles/deploy-phpmyadmin">How to Deploy phpMyAdmin</a></li> </ul>How to Monitor Remote Hosts with Nagioshttps://cloudray.io/articles/monitor-remote-hosts-with-nagioshttps://cloudray.io/articles/monitor-remote-hosts-with-nagiosLearn how to automate setup of Nagios and remote host monitoring using Bash scripts to track system load SSH status and process countsFri, 07 Mar 2025 00:00:00 GMT<div> <p>IMPORTANT</p> <p>Before implementation, make sure you have Nagios deployed on a separate server. However, if you want to automate the installation of Nagios, follow our guide on <a href="/articles/install-nagios">How to Automate the Installation of Nagios</a>.</p> </div> <p>After setting up Nagios, you can extend its capabilities by monitoring remote hosts. 
This guide covers how to use <a href="https://app.cloudray.io/">CloudRay</a> to automate configuring a remote Linux host and the Nagios server to track key metrics such as system load, active processes, and SSH status.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#assumptions">Assumptions</a></li> <li><a href="#setup-nrpe-on-remote-server">Setup NRPE on Remote Server</a></li> <li><a href="#configure-nagios-server-to-monitor-remote-server">Configure Nagios Server to Monitor Remote Server</a></li> <li><a href="#running-the-scripts-with-cloudray">Running the Scripts with CloudRay</a> <ul> <li><a href="#run-the-setup-nrpe-script">Run the Setup NRPE Script</a></li> <li><a href="#configure-nagios-script">Configure Nagios Script</a></li> </ul> </li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your servers.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can adapt the scripts to fit your specific installation needs and environment.</p> </div> <h2>Assumptions</h2> <ul> <li>This guide assumes Nagios is already deployed on <strong>Rocky Linux 9</strong> and that the remote server to monitor runs <strong>Ubuntu 24.04 LTS</strong>. If you’re using different versions or distributions, adjust the commands accordingly</li> </ul> <h2>Setup NRPE on Remote Server</h2> <p>To monitor a remote host, start by setting up NRPE on the <code>remote-server</code>.
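The heart of that setup is a one-line <code>sed</code> edit that whitelists the Nagios server in <code>/etc/nagios/nrpe.cfg</code>. The sketch below runs the same substitution against a scratch copy (203.0.113.10 is a hypothetical Nagios server IP; in the real script the value comes from your variable group):

```shell
#!/bin/bash
set -e
nagios_server_ip="203.0.113.10"   # hypothetical; supplied by your variable group

# Practise against a scratch copy instead of the real /etc/nagios/nrpe.cfg
cfg=$(mktemp)
echo "allowed_hosts=127.0.0.1" > "$cfg"

# The same substitution the setup script applies on the remote server (GNU sed)
sed -i "s/^allowed_hosts=.*/allowed_hosts=127.0.0.1,::1,$nagios_server_ip/" "$cfg"

cat "$cfg"   # → allowed_hosts=127.0.0.1,::1,203.0.113.10
```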
Create a new script in CloudRay:</p> <img src="/_astro/setup-script.Cb-ibeKP.jpg" alt="Screenshot of adding the setup script" /> <ol> <li>Go to Scripts → New Script</li> <li>Name it <code>Setup NRPE Script</code></li> <li>Add the following script:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update system packages</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span></span> <span><span># Install Nagios NRPE server and plugins</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> nagios-nrpe-server</span><span> nagios-plugins</span><span> nagios-plugins-basic</span><span> nagios-nrpe-plugin</span></span> <span></span> <span></span> <span><span>sudo</span><span> sed</span><span> -i</span><span> "s/^allowed_hosts=.*/allowed_hosts=127.0.0.1,::1,{{nagios_server_ip}}/"</span><span> /etc/nagios/nrpe.cfg</span></span> <span></span> <span><span># Start and enable the NRPE service</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> nagios-nrpe-server</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> nagios-nrpe-server</span></span> <span><span>sudo</span><span> systemctl</span><span> status</span><span> nagios-nrpe-server</span><span> --no-pager</span></span> <span></span> <span><span># Check UFW status and configure firewall rules</span></span> <span><span>sudo</span><span> ufw</span><span> status</span></span> <span><span>sudo</span><span> ufw</span><span> allow</span><span> ssh</span></span> <span><span>yes</span><span> |</span><span> sudo</span><span> ufw</span><span> enable</span></span> <span><span>sudo</span><span> ufw</span><span> allow</span><span> 5666/tcp</span></span> <span><span>sudo</span><span> ufw</span><span> reload</span></span> <span></span> 
<span><span>echo</span><span> "Nagios NRPE setup completed successfully!"</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what each command in the <code>Setup NRPE Script</code> does:</p> <ul> <li>Updates all installed packages</li> <li>Installs NRPE and the necessary Nagios plugins</li> <li>Configures NRPE to accept connections from the nagios-server IP address</li> <li>Starts and enables the NRPE service</li> <li>Enables UFW, allows SSH, and opens the NRPE port (5666/tcp)</li> </ul> <h2>Configure Nagios Server to Monitor Remote Server</h2> <p>After setting up NRPE on the remote server, you need to configure the Nagios server to monitor it.</p> <img src="/_astro/configure-script.BmOsgJBI.jpg" alt="Screenshot of adding the configure script" /> <p>You can follow similar steps as before:</p> <ol> <li>Go to Scripts → New Script</li> <li>Name it <code>Configure Nagios Script</code></li> <li>Add the following script:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Install NRPE plugin and nano</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> -y</span><span> nagios-plugins-nrpe</span><span> nano</span></span> <span></span> <span><span># Update commands.cfg to define check_nrpe command</span></span> <span><span>sudo</span><span> bash</span><span> -c</span><span> 'cat &lt;&lt;EOF &gt;&gt; /usr/local/nagios/etc/objects/commands.cfg</span></span> <span></span> <span><span>define command {</span></span> <span><span> command_name check_nrpe</span></span> <span><span> command_line /usr/lib64/nagios/plugins/check_nrpe -H \$HOSTADDRESS\$ -c \$ARG1\$</span></span> <span><span>}</span></span> <span><span>EOF'</span></span> <span></span> <span><span># Open firewall for NRPE port</span></span> <span><span>sudo</span><span> firewall-cmd</span><span> --add-port=5666/tcp</span><span> --permanent</span></span> <span><span>sudo</span><span> 
firewall-cmd</span><span> --reload</span></span> <span></span> <span><span># Update hosts.cfg with contact, host, and service definitions</span></span> <span><span>sudo</span><span> bash</span><span> -c</span><span> 'cat &lt;&lt;EOF &gt;&gt; /usr/local/nagios/etc/objects/hosts.cfg</span></span> <span></span> <span><span>define contact {</span></span> <span><span> use generic-contact</span></span> <span><span> contact_name {{nagios_user}}</span></span> <span><span> alias {{nagios_alias}}</span></span> <span><span> email {{nagios_email}}</span></span> <span><span>}</span></span> <span></span> <span><span>define host {</span></span> <span><span> use linux-server</span></span> <span><span> host_name nagios-example</span></span> <span><span> address {{remote_server_ip}}</span></span> <span><span> max_check_attempts 5</span></span> <span><span> check_period 24x7</span></span> <span><span> notification_interval 30</span></span> <span><span> notification_period 24x7</span></span> <span><span>}</span></span> <span></span> <span><span>define service {</span></span> <span><span> use generic-service</span></span> <span><span> host_name nagios-example</span></span> <span><span> service_description SSH</span></span> <span><span> check_command check_ssh</span></span> <span><span> max_check_attempts 3</span></span> <span><span> check_interval 2</span></span> <span><span> retry_interval 1</span></span> <span><span>}</span></span> <span></span> <span><span>define service {</span></span> <span><span> use generic-service</span></span> <span><span> host_name nagios-example</span></span> <span><span> service_description System Load</span></span> <span><span> check_command check_nrpe!check_load</span></span> <span><span> max_check_attempts 5</span></span> <span><span> check_interval 2</span></span> <span><span> retry_interval 1</span></span> <span><span>}</span></span> <span></span> <span><span>define service {</span></span> <span><span> use generic-service</span></span> <span><span> 
host_name nagios-example</span></span> <span><span> service_description Total Processes</span></span> <span><span> check_command check_nrpe!check_total_procs</span></span> <span><span> max_check_attempts 5</span></span> <span><span> check_interval 2</span></span> <span><span> retry_interval 1</span></span> <span><span>}</span></span> <span><span>EOF'</span></span> <span></span> <span><span># Add hosts.cfg reference in nagios.cfg</span></span> <span><span>sudo</span><span> bash</span><span> -c</span><span> 'echo "cfg_file=/usr/local/nagios/etc/objects/hosts.cfg" &gt;&gt; /usr/local/nagios/etc/nagios.cfg'</span></span> <span></span> <span><span># Restart Nagios to apply changes</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> nagios</span></span> <span></span> <span><span>echo</span><span> "Nagios server configuration for remote host completed successfully!"</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what each command in the <code>Configure Nagios Script</code> does:</p> <ul> <li>Installs the <code>nagios-plugins-nrpe</code> package (which includes the NRPE plugin for remote checks) and <code>nano</code> for editing</li> <li>Adds a <code>check_nrpe</code> command definition to the Nagios <code>commands.cfg</code> file to allow remote checks via NRPE</li> <li>Configures the system firewall to allow incoming TCP traffic on port 5666 (used by NRPE) and reloads firewall settings</li> <li>Updates the <code>hosts.cfg</code> configuration file</li> <li>Adds a reference to the <code>hosts.cfg</code> file in the main Nagios configuration (<code>nagios.cfg</code>) if not already present</li> <li>Restarts the Nagios service to apply all configuration changes</li> </ul> <h2>Running the Scripts with CloudRay</h2> <p>You can automate this entire process using CloudRay by executing the scripts on both the <code>nagios-server</code> and <code>remote-server</code>.</p> <div> <p>NOTE</p> <p>Ensure you use the 
same variables from the Nagios installation process. This ensures consistency across your Nagios monitoring setup. Follow our detailed guide on <a href="/articles/install-nagios">Automate the Installation of Nagios Using CloudRay</a> for reference.</p> </div> <p>You can choose to run the scripts individually or execute them all at once using <a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>. If you prefer to run them individually, follow these steps:</p> <h3>Run the Setup NRPE Script</h3> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>To run the <code>Setup NRPE Script</code>, follow these steps:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier.</li> <li>Script: Choose the “Setup NRPE Script”</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/run-setup-script.D-aH5lAK.jpg" alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/result-setup-script.B4ZhSfxq.jpg" alt="Screenshot of the output of the install k3s script" /> <p>CloudRay will automatically connect to your server, run the <code>Setup NRPE Script</code>, and provide live logs to track the process. 
If any errors occur, you can review the logs to troubleshoot the issue.</p> <h3>Configure Nagios Script</h3> <p>Next, you need to run the second script to configure the Nagios server to monitor the remote host.</p> <p>To run the Configure Nagios Script, follow similar steps:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Provide the necessary details:</li> </ol> <ul> <li>Server: Select the same server used earlier.</li> <li>Script: Choose the “Configure Nagios Script”.</li> <li>Variable Group: Select the variable group you created earlier.</li> </ul> <img src="/_astro/run-configure-script.DMsh-uTA.jpg" alt="Screenshot of running the configuration script" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to apply the configuration.</li> </ol> <img src="/_astro/result-configure-script.DCslpQDz.jpg" alt="Screenshot of the output of the configuration script" /> <p>After executing the scripts, you can check the Nagios web interface (<code>http://&lt;nagios-server-ip&gt;/nagios</code>) to confirm the new remote host is being monitored successfully.</p>How to Automate the Installation of Nagioshttps://cloudray.io/articles/install-nagioshttps://cloudray.io/articles/install-nagiosLearn how to automate Nagios Core installation on Rocky Linux 9 with reusable Bash scripts for consistent setup deployment and configurationThu, 06 Mar 2025 00:00:00 GMT<p>Automating the installation of Nagios on Rocky Linux 9 ensures consistency, reduces errors, and saves time when setting up monitoring for your environment. <a href="https://app.cloudray.io/">CloudRay</a> enables seamless setup, deployment, and management of Nagios.
By leveraging CloudRay, you can automate the setup, configuration, and deployment of Nagios.</p> <p>In this guide, you will learn how to automate the installation of Nagios Core on Rocky Linux 9 using CloudRay.</p> <div> <p>IMPORTANT</p> <p>This article focuses on installing Nagios Core on Rocky Linux 9 using CloudRay. If you’re looking to monitor remote hosts using NRPE, check out our companion guide: <a href="/articles/monitor-remote-hosts-with-nagios">How to Monitor Remote Hosts with Nagios Using CloudRay</a></p> </div> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#assumptions">Assumptions</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#setup-nagios-script">Setup Nagios Script</a></li> <li><a href="#deploy-nagios-script">Deploy Nagios Script</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-scripts-to-install-nagios-with-cloudray">Running the Scripts to Install Nagios with CloudRay</a></li> <li><a href="#troubleshooting">Troubleshooting</a></li> <li><a href="#related-guide">Related Guide</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started with your automation, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your servers.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can adapt the scripts to fit your specific installation needs and environment.</p> </div> <h2>Assumptions</h2> <ul> <li>This guide assumes you’re using <strong>Rocky Linux 9</strong> as your server’s operating system.
If you’re using a different version or a different distribution, adjust the commands accordingly.</li> </ul> <h2>Create the Automation Script</h2> <p>To automate the installation of Nagios, you’ll need two Bash scripts:</p> <ol> <li><strong>Setup Nagios Script:</strong> Installs the necessary dependencies and configures Nagios Core</li> <li><strong>Deploy Nagios Script:</strong> Sets up the web interface, enables Nagios, and configures the firewall</li> </ol> <p>Let’s begin with setting up Nagios.</p> <h3>Setup Nagios Script</h3> <p>To create the Setup Nagios Script, you need to follow these steps:</p> <img src="/_astro/setup-script.3_oOmN9R.jpg" alt="Screenshot of adding a new setup script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Setup Nagios Script</code>. You can give it any name of your choice.</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update system packages</span></span> <span><span>sudo</span><span> dnf</span><span> update</span><span> -y</span></span> <span></span> <span><span># Install required dependencies</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> tar</span><span> nano</span><span> gcc</span><span> glibc</span><span> glibc-common</span><span> wget</span><span> perl</span><span> net-snmp</span><span> openssl-devel</span><span> make</span><span> unzip</span><span> gd</span><span> gd-devel</span><span> epel-release</span><span> httpd</span><span> php</span><span> php-cli</span><span> php-common</span><span> php-gd</span><span> -y</span></span> <span></span> <span><span># Create a directory for Nagios and navigate to it</span></span> <span><span>cd</span><span> ~</span></span> <span><span>mkdir</span><span> nagios</span></span> <span><span>cd</span><span> 
nagios</span></span> <span></span> <span><span># Download Nagios Core</span></span> <span><span>wget</span><span> https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.5.9.tar.gz</span></span> <span></span> <span><span># Ensure tar is installed</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> tar</span><span> -y</span></span> <span></span> <span><span># Extract Nagios</span></span> <span><span>tar</span><span> -xvf</span><span> nagios-4.5.9.tar.gz</span></span> <span><span>cd</span><span> nagios-4.5.9</span></span> <span></span> <span><span># Configure and compile Nagios</span></span> <span><span>./configure</span></span> <span><span>make</span><span> all</span></span> <span></span> <span><span># Create Nagios user and groups</span></span> <span><span>sudo</span><span> make</span><span> install-groups-users</span></span> <span><span>sudo</span><span> usermod</span><span> -a</span><span> -G</span><span> nagios</span><span> apache</span></span> <span></span> <span><span># Install Nagios and its components</span></span> <span><span>sudo</span><span> make</span><span> install</span></span> <span><span>sudo</span><span> make</span><span> install-init</span></span> <span><span>sudo</span><span> make</span><span> install-commandmode</span></span> <span><span>sudo</span><span> make</span><span> install-config</span></span> <span><span>sudo</span><span> make</span><span> install-webconf</span></span> <span></span> <span><span># Verify Nagios configuration</span></span> <span><span>sudo</span><span> /usr/local/nagios/bin/nagios</span><span> -v</span><span> /usr/local/nagios/etc/nagios.cfg</span></span> <span></span> <span><span>echo</span><span> "Nagios core setup completed successfully!"</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what each command in the <code>Setup Nagios Script</code> does:</p> <ul> <li>Updates all installed packages to the latest version</li> <li>Installs essential packages 
like <code>gcc</code>, <code>nano</code>, <code>wget</code>, <code>perl</code>, <code>httpd</code>, and <code>php</code></li> <li>Creates and navigates to a directory for Nagios installation</li> <li>Downloads Nagios Core (version 4.5.9 in this case) and extracts the Nagios archive</li> <li>Configures and compiles Nagios</li> <li>Creates the <code>nagios</code> user and group and adds the <code>apache</code> user to the <code>nagios</code> group for web interface access</li> <li>Installs Nagios components, including the core configuration, init scripts, and web interface</li> </ul> <h3>Deploy Nagios Script</h3> <p>Next, you need to create the deployment script for Nagios. To do so, follow similar steps as the above:</p> <img src="/_astro/deploy-script.Da2A2sAJ.jpg" alt="Screenshot of deploying Nagios" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Deploy Nagios Script</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Create Nagios admin user</span></span> <span><span>sudo</span><span> htpasswd</span><span> -cb</span><span> /usr/local/nagios/etc/htpasswd.users</span><span> {{nagios_user}}</span><span> {{nagios_password}}</span></span> <span></span> <span><span># Set proper permissions</span></span> <span><span>sudo</span><span> chown</span><span> apache:nagios</span><span> /usr/local/nagios/etc/htpasswd.users</span></span> <span></span> <span><span># Start and enable Nagios</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> nagios</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> nagios</span></span> <span></span> <span><span># Confirm Nagios is running</span></span> <span><span>sudo</span><span> systemctl</span><span> status</span><span> nagios</span><span> --no-pager</span></span> 
<span></span> <span><span># Install and configure FirewallD</span></span> <span><span>sudo</span><span> dnf</span><span> install</span><span> firewalld</span><span> -y</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> firewalld</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> firewalld</span></span> <span></span> <span><span># Open HTTP port</span></span> <span><span>sudo</span><span> firewall-cmd</span><span> --add-service=http</span><span> --permanent</span></span> <span><span>sudo</span><span> firewall-cmd</span><span> --reload</span></span> <span></span> <span><span># Enable and restart services</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> httpd</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> nagios</span><span> httpd</span></span> <span></span> <span><span># Install Nagios plugins</span></span> <span><span>cd</span><span> ~/nagios/</span></span> <span><span>wget</span><span> https://nagios-plugins.org/download/nagios-plugins-2.4.11.tar.gz</span></span> <span><span>tar</span><span> -xvf</span><span> nagios-plugins-2.4.11.tar.gz</span></span> <span><span>cd</span><span> nagios-plugins-2.4.11</span></span> <span></span> <span><span>./configure</span></span> <span><span>make</span></span> <span><span>sudo</span><span> make</span><span> install</span></span> <span></span> <span><span># Final restart</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> nagios</span><span> httpd</span></span> <span></span> <span><span>echo</span><span> "Nagios deployment completed successfully!"</span></span></code><span></span><span></span></pre> <p>This is what the <code>Deploy Nagios Script</code> does:</p> <ul> <li>Creates a Nagios web admin user with the password from the variable group</li> <li>Ensures Apache and Nagios have the right permissions</li> <li>Starts, enables, and checks the status of the Nagios 
service</li> <li>Installs the firewall and opens port 80 for HTTP traffic</li> <li>Downloads and installs Nagios plugins for additional monitoring capabilities</li> <li>Restarts Nagios and the web server to apply changes</li> </ul> <h2>Create a Variable Group</h2> <p>Now, before running the scripts, you need to define values for the placeholders <code>{{nagios_user}}</code>, <code>{{nagios_password}}</code>, <code>{{nagios_alias}}</code>, <code>{{nagios_email}}</code>, <code>{{nagios_server_ip}}</code>, and <code>{{remote_server_ip}}</code> used in the scripts. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.CjBqynGf.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>nagios_user</code>:</strong> This is the Nagios username</li> <li><strong><code>nagios_password</code>:</strong> This is the Nagios password</li> <li><strong><code>nagios_alias</code>:</strong> This is the Nagios alias</li> <li><strong><code>nagios_email</code>:</strong> The email of the Nagios account</li> <li><strong><code>nagios_server_ip</code>:</strong> The IP Address of the Nagios Server</li> <li><strong><code>remote_server_ip</code>:</strong> The IP Address of the remote server to be monitored</li> </ul> <p>With the variables set up, you can proceed to run the scripts with CloudRay.</p> <h2>Running the Scripts to Install Nagios with CloudRay</h2> 
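<p>Before each run, CloudRay renders every script as a Liquid template, substituting the placeholders with the values from the selected variable group. As a rough local illustration of that substitution (the <code>sed</code> commands, file name, and sample values below are ours, not part of CloudRay), the rendering step behaves like this:</p>

```shell
# Illustrative only: emulate Liquid placeholder substitution with sed.
# "nagiosadmin" and "S3cretPass" are sample values standing in for the
# variable group entries; CloudRay performs this rendering automatically.
cat > setup-snippet.tpl <<'EOF'
sudo htpasswd -cb /usr/local/nagios/etc/htpasswd.users {{nagios_user}} {{nagios_password}}
EOF

sed -e 's/{{nagios_user}}/nagiosadmin/g' \
    -e 's/{{nagios_password}}/S3cretPass/g' setup-snippet.tpl
# Prints: sudo htpasswd -cb /usr/local/nagios/etc/htpasswd.users nagiosadmin S3cretPass
```

<p>Inside CloudRay you never run this step yourself; the point is that the variable names in the group must match the placeholders in the scripts exactly, or the placeholder text will reach the shell unrendered.</p>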
<p>Now that everything is set up, you can use CloudRay to automate the installation of Nagios.</p> <p>You can choose to run the scripts individually or execute them all at once using <a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>. Since there are multiple scripts, using CloudRay playlists will help automate the execution sequence and save time.</p> <p>Here are the steps to follow:</p> <ol> <li><strong>Navigate to “Script Playlists”:</strong> Click on the Scripts tab in the CloudRay interface</li> </ol> <img src="/_astro/script-playlist.DWH35jzv.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Click “Add Script Playlist”:</strong> This initiates the creation of a new playlist</li> <li><strong>Provide a Name:</strong> Give your playlist a unique name (For example “Nagios Deployment Automation and Management”)</li> <li><strong>Add Scripts in Order:</strong> Select and add the scripts sequentially</li> </ol> <img src="/_astro/create-script-playlist.CTkrR93n.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Save the Playlist:</strong> Click “create playlist” to store your new playlist.</li> </ol> <p>Once your script playlist is created, proceed with execution:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Provide the necessary details:</li> </ol> <img src="/_astro/run-playlist-script.D5cJtUlb.jpg" alt="Screenshot of creating a new runlog" /> <ul> <li>Server: Select the server where Nagios will be installed</li> <li>Script Playlist: Choose the playlist you created (For example “Nagios Deployment Automation and Management”)</li> <li>Variable Group: Select the variable group you set up earlier</li> </ul> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start 
the execution</li> </ol> <img src="/_astro/result-automation-scripts.BPrsHLBi.jpg" alt="Screenshot of the result of all the scripts from the script playlist" /> <p>Your Nagios instance is now deployed and managed with CloudRay. You can access it by visiting <code>http://&lt;nagios-server-ip&gt;/nagios</code>. That’s it! Happy deploying!</p> <h2>Troubleshooting</h2> <p>If you encounter issues during deployment, consider the following:</p> <ul> <li><strong>Nagios Web Interface Not Loading:</strong> Ensure Apache is running with <code>sudo systemctl status httpd</code> and restart it using <code>sudo systemctl restart httpd</code></li> <li><strong>Nagios Service Not Starting:</strong> Check the service status with <code>sudo systemctl status nagios</code> and restart it using <code>sudo systemctl restart nagios</code></li> </ul> <p>If the issue persists, consult the <a href="https://www.nagios.org/documentation/">Nagios Core Documentation</a> for further assistance.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div> <h2>Related Guide</h2> <ul> <li><a href="/articles/install-apache-airflow">Install Apache Airflow</a></li> <li><a href="/articles/install-k3s">Install K3s</a></li> <li><a href="/articles/install-cockpit">Install Cockpit</a></li> <li><a href="/articles/monitor-remote-hosts-with-nagios">Monitor Remote Host with Nagios</a></li> </ul>How to Deploy Jenkins with Docker Composehttps://cloudray.io/articles/deploy-jenkins-with-docker-composehttps://cloudray.io/articles/deploy-jenkins-with-docker-composeLearn how to automate Jenkins deployment with Docker Compose and Caddy reverse proxy on Ubuntu using Bash scripts for secure CI/CD setupWed, 05 Mar 2025 00:00:00 GMT<p><a href="https://app.cloudray.io/">CloudRay</a> provides a flexible automation framework to deploy and manage infrastructure and applications. 
Using CloudRay, you can automate the provisioning of essential services like Jenkins using Docker and Docker Compose, enabling seamless CI/CD pipeline setups. This guide walks through deploying Jenkins using Docker Compose and configuring Caddy as a reverse proxy for improved security and ease of access.</p> <p>By the end of this article, you will have a fully functional Jenkins instance running in Docker, exposed securely via Caddy, and automated through CloudRay.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#assumptions">Assumptions</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#install-docker-and-docker-compose">Install Docker and Docker Compose</a></li> <li><a href="#setup-docker-compose-for-jenkins">Setup Docker Compose for Jenkins</a></li> <li><a href="#install-and-configure-caddy">Install and Configure Caddy</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-script-with-cloudray">Running the Script with CloudRay</a></li> <li><a href="#troubleshooting">Troubleshooting</a></li> <li><a href="#related-guides">Related Guides</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can adapt the scripts to fit your specific deployment needs and environment.</p> </div> <h2>Assumptions</h2> <ul> <li>This guide assumes you’re using <strong>Ubuntu 24.04 LTS</strong> as your server’s operating system. 
If you’re using a different version or a different distribution, adjust the commands accordingly.</li> </ul> <h2>Create the Automation Script</h2> <p>To streamline the deployment and management processes, you’ll need three Bash scripts:</p> <ol> <li><strong>Install Docker and Docker Compose</strong>: This script installs Docker and Docker Compose, adds a new user for Docker management, and sets up Docker to run on startup</li> <li><strong>Set up Docker Compose for Jenkins</strong>: This script sets up a Docker Compose configuration for Jenkins and starts the service</li> <li><strong>Install and Configure Caddy</strong>: This script installs Caddy, sets up a reverse proxy for Jenkins, and ensures Caddy runs on system boot</li> </ol> <h3>Install Docker and Docker Compose</h3> <p>To create the install Docker and Docker Compose script, you need to follow these steps:</p> <img src="/_astro/installation-script.DrbZD2sG.jpg" alt="Screenshot of adding a new setup script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Install Docker and Docker Compose</code>. 
You can give it any name of your choice.</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Stop executing the script if any of the commands fail</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update package lists and install required dependencies</span></span> <span><span>sudo</span><span> apt-get</span><span> update</span></span> <span><span>sudo</span><span> apt-get</span><span> install</span><span> ca-certificates</span><span> curl</span><span> -y</span></span> <span></span> <span><span># Add Docker's official GPG key</span></span> <span><span>sudo</span><span> install</span><span> -m</span><span> 0755</span><span> -d</span><span> /etc/apt/keyrings</span></span> <span><span>sudo</span><span> curl</span><span> -fsSL</span><span> https://download.docker.com/linux/ubuntu/gpg</span><span> -o</span><span> /etc/apt/keyrings/docker.asc</span></span> <span><span>sudo</span><span> chmod</span><span> a+r</span><span> /etc/apt/keyrings/docker.asc</span></span> <span></span> <span><span># Add Docker repository to Apt sources</span></span> <span><span>echo</span><span> \</span></span> <span><span> "deb [arch=$(</span><span>dpkg</span><span> --print-architecture</span><span>) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu </span><span>\</span></span> <span><span> $(</span><span>.</span><span> /etc/os-release &amp;&amp; </span><span>echo</span><span> "${</span><span>UBUNTU_CODENAME</span><span>:-</span><span>$VERSION_CODENAME</span><span>}") stable"</span><span> |</span><span> \</span></span> <span><span> sudo</span><span> tee</span><span> /etc/apt/sources.list.d/docker.list</span><span> &gt;</span><span> /dev/null</span></span> <span></span> <span><span># Update package lists and install Docker packages</span></span> <span><span>sudo</span><span> apt-get</span><span> update</span></span> <span><span>sudo</span><span> apt-get</span><span> 
install</span><span> docker-ce</span><span> docker-ce-cli</span><span> containerd.io</span><span> docker-buildx-plugin</span><span> docker-compose-plugin</span><span> -y</span></span> <span></span> <span><span># Create a new user "user" with the password from the variable group</span></span> <span><span>sudo</span><span> adduser</span><span> --gecos</span><span> ""</span><span> --disabled-password</span><span> user</span></span> <span><span>echo</span><span> "user:{{user_password}}"</span><span> |</span><span> sudo</span><span> chpasswd</span></span> <span></span> <span><span># Add "user" to the docker group for Docker permissions</span></span> <span><span>sudo</span><span> usermod</span><span> -aG</span><span> docker</span><span> user</span></span> <span></span> <span><span># Restart and enable Docker service</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> docker</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> docker</span></span> <span></span> <span><span># Download Docker Compose binary</span></span> <span><span>mkdir</span><span> -p</span><span> ~/bin</span></span> <span><span>cd</span><span> ~/bin</span></span> <span><span>wget</span><span> https://github.com/docker/compose/releases/download/v2.28.1/docker-compose-linux-x86_64</span><span> -O</span><span> docker-compose</span></span> <span><span>chmod</span><span> +x</span><span> docker-compose</span></span> <span></span> <span><span># Add Docker Compose to PATH if not already present</span></span> <span><span>if</span><span> !</span><span> grep</span><span> -q</span><span> 'export PATH="${HOME}/bin:${PATH}"'</span><span> ~/.bashrc</span><span>; </span><span>then</span></span> <span><span> echo</span><span> 'export PATH="${HOME}/bin:${PATH}"'</span><span> &gt;&gt;</span><span> ~/.bashrc</span></span> <span><span> source</span><span> ~/.bashrc</span></span> <span><span>fi</span></span> <span></span> <span><span>echo</span><span> "Docker and Docker Compose installation 
completed successfully!"</span></span></code><span></span><span></span></pre> <p>Below is a breakdown of what each command in the <code>Install Docker and Docker Compose</code> does:</p> <ul> <li>Ensures the system has the required packages for Docker installation</li> <li>Allows secure installation from Docker’s official repository</li> <li>Installs the latest Docker engine, CLI tools, and Compose</li> <li>Creates a user named <code>user</code> with a password and grants it Docker access</li> <li>Enables and starts Docker to ensure it runs on system boot</li> <li>Downloads and configures Docker Compose for container orchestration</li> </ul> <h3>Setup Docker Compose for Jenkins</h3> <p>After installing Docker and Docker Compose, you can create a script that sets up a Docker Compose configuration for Jenkins and starts the service.</p> <p>Similarly, follow these steps to set up Docker Compose for Jenkins:</p> <img src="/_astro/setup-docker-jenkins.Crz_y0JS.jpg" alt="Screenshot of adding a new deploy script" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Setup Docker Compose for Jenkins</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Stop executing the script if any of the commands fail</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Create the Jenkins Docker Compose directory</span></span> <span><span>mkdir</span><span> -p</span><span> ~/jenkins-compose</span></span> <span><span>cd</span><span> ~/jenkins-compose</span></span> <span></span> <span><span># Create the docker-compose.yml file</span></span> <span><span>cat</span><span> &lt;&lt;</span><span>EOL</span><span> &gt;</span><span> docker-compose.yml</span></span> <span><span>version: '3.8'</span></span> <span><span>services:</span></span> <span><span> jenkins:</span></span> <span><span> container_name: jenkins</span></span> <span><span> restart: always</span></span> <span><span> 
image: jenkins/jenkins:lts</span></span> <span><span> ports:</span></span> <span><span> - 8080:8080</span></span> <span><span> volumes:</span></span> <span><span> - jenkins-home:/var/jenkins_home</span></span> <span></span> <span><span>volumes:</span></span> <span><span> jenkins-home:</span></span> <span><span>EOL</span></span> <span></span> <span><span># Enable and start Docker service</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> docker.service</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> docker.service</span></span> <span></span> <span><span># Start Jenkins using Docker Compose</span></span> <span><span>docker-compose</span><span> up</span><span> -d</span></span></code><span></span><span></span></pre> <p>This is what the <code>Setup Docker Compose for Jenkins</code> does:</p> <ul> <li>Sets up a dedicated directory for the Jenkins deployment</li> <li>Defines the Jenkins service with persistent storage</li> <li>Ensures Docker is running and Jenkins starts automatically</li> <li>Uses Docker Compose to run Jenkins in detached mode</li> </ul> <h3>Install and Configure Caddy</h3> <p>After setting up Docker Compose for Jenkins, you can secure the deployment using Caddy.</p> <p>To create the script, you need to follow these steps:</p> <img src="/_astro/deploy-jenkins-caddy.D-jzC0x-.jpg" alt="Screenshot of adding a new setup script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Install and Configure Caddy</code></li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Stop executing the script if any of the commands fail</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Install prerequisites for Caddy</span></span> <span><span>sudo</span><span> apt</span><span> 
install</span><span> -y</span><span> debian-keyring</span><span> debian-archive-keyring</span><span> apt-transport-https</span></span> <span></span> <span><span># Add Caddy's official GPG key and repository</span></span> <span><span>curl</span><span> -1sLf</span><span> 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key'</span><span> |</span><span> sudo</span><span> gpg</span><span> --dearmor</span><span> -o</span><span> /usr/share/keyrings/caddy-stable-archive-keyring.gpg</span></span> <span><span>curl</span><span> -1sLf</span><span> 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt'</span><span> |</span><span> sudo</span><span> tee</span><span> /etc/apt/sources.list.d/caddy-stable.list</span></span> <span></span> <span><span># Update package lists and install Caddy</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> caddy</span><span> -y</span></span> <span></span> <span><span># Set up the Caddyfile to reverse proxy to Jenkins</span></span> <span><span>sudo</span><span> bash</span><span> -c</span><span> 'cat &lt;&lt;EOL &gt; {{caddyfile}}</span></span> <span><span>{{your_domain}} {</span></span> <span><span> reverse_proxy localhost:8080</span></span> <span><span>}</span></span> <span><span>EOL'</span></span> <span></span> <span><span># Validate and reload Caddy configuration</span></span> <span><span>sudo</span><span> caddy</span><span> validate</span><span> --config</span><span> {{caddyfile}}</span></span> <span><span>sudo</span><span> systemctl</span><span> reload</span><span> caddy</span></span> <span><span>sudo</span><span> systemctl</span><span> status</span><span> caddy</span></span></code><span></span><span></span></pre> <p>Below is a breakdown of what each command in the <code>Install and Configure Caddy</code> does:</p> <ul> <li>Downloads and installs the Caddy web server</li> <li>Ensures the installation is secure and from an official 
source</li> <li>Creates a reverse proxy configuration to expose Jenkins externally</li> <li>Checks for syntax errors and reloads the Caddy service</li> </ul> <h2>Create a Variable Group</h2> <p>Before running the scripts, you need to define values for the placeholders <code>{{user_password}}</code>, <code>{{caddyfile}}</code>, and <code>{{your_domain}}</code> used in the scripts. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.ChbnP-5n.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>your_domain</code>:</strong> This specifies your registered domain</li> <li><strong><code>caddyfile</code>:</strong> This specifies the path to the Caddy web server’s configuration file</li> <li><strong><code>user_password</code>:</strong> The password for the <code>user</code> account created during the Docker installation</li> </ul> <h2>Running the Script with CloudRay</h2> <p>You can choose to run the scripts individually or execute them all at once using <a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>. 
Since there are multiple scripts, using CloudRay playlists will help automate the execution sequence and save time.</p> <p>Here are the steps to follow:</p> <ol> <li><strong>Navigate to “Script Playlists”:</strong> Click on the Scripts tab in the CloudRay interface</li> </ol> <img src="/_astro/script-playlist.DWH35jzv.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Click “Add Script Playlist”:</strong> This initiates the creation of a new playlist</li> <li><strong>Provide a Name:</strong> Give your playlist a unique name (For example “Jenkins Deployment Automation and Management”)</li> <li><strong>Add Scripts in Order:</strong> Select and add the scripts sequentially</li> </ol> <img src="/_astro/create-script-playlist.BwaNTGl5.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Save the Playlist:</strong> Click “create playlist” to store your new playlist.</li> </ol> <p>Once your script playlist is created, proceed with execution:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Provide the necessary details:</li> </ol> <img src="/_astro/run-playlist-script.pXCUpZDJ.jpg" alt="Screenshot of creating a new runlog" /> <ul> <li>Server: Select the server where Jenkins will be installed</li> <li>Script Playlist: Choose the playlist you created (For example “Jenkins Deployment Automation and Management”)</li> <li>Variable Group: Select the variable group you set up earlier</li> </ul> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution</li> </ol> <img src="/_astro/result-automation-scripts.CWPsPKB1.jpg" alt="Screenshot of the result of all the script from the script playlist" /> <p>Your Jenkins is now successfully deployed and managed by CloudRay. That’s it! 
Happy deploying!</p> <p>You can use the command (<code>docker exec -it jenkins cat /var/jenkins_home/secrets/initialAdminPassword</code>) to retrieve the Administrator password.</p> <h2>Troubleshooting</h2> <p>If you encounter issues during deployment, consider the following:</p> <ul> <li><strong>Jenkins service not starting:</strong> Ensure Docker is running by checking <code>sudo systemctl status docker</code> and restart if necessary</li> <li><strong>Permission issues:</strong> Ensure the <code>user</code> has Docker permissions by verifying membership in the <code>docker</code> group with <code>groups user</code></li> <li><strong>Caddy reverse proxy not working:</strong> Verify the Caddyfile configuration and ensure your domain points to the server’s IP address</li> <li><strong>Unable to access Jenkins UI:</strong> Check if ports 8080 (Jenkins) and 80/443 (Caddy) are open in your firewall</li> <li><strong>Caddy service failing:</strong> Validate the Caddyfile with <code>sudo caddy validate --config /path/to/Caddyfile</code> and check logs with <code>sudo journalctl -u caddy</code></li> </ul> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div> <h2>Related Guides</h2> <ul> <li><a href="/articles/deploy-ruby-on-rails">Deploy Ruby on Rails</a></li> <li><a href="/articles/deploy-express">Deploy Express Application</a></li> <li><a href="/articles/deploy-laravel">Deploy Laravel</a></li> <li><a href="/articles/deploy-sonarqube">Deploy SonarQube</a></li> <li><a href="/articles/deploy-static-website-from-github">Deploy Static Website from GitHub</a></li> <li><a href="/articles/deploy-nextjs-application">Deploy Next.js Application</a></li> </ul>How to Automate the Installation of Apache Airflowhttps://cloudray.io/articles/install-apache-airflowhttps://cloudray.io/articles/install-apache-airflowLearn how to automate installation and configuration of Apache Airflow on Ubuntu 
using Bash scripts to install PostgreSQL, set up Caddy, and run AirflowWed, 26 Feb 2025 00:00:00 GMT<p><a href="https://app.cloudray.io/">CloudRay</a>, a cloud automation platform, allows you to automate the installation and configuration of Apache Airflow using Bash scripts, ensuring a seamless and repeatable deployment process.</p> <p>In this guide, we will walk through the steps to install and configure Apache Airflow on an Ubuntu 24.04 server using CloudRay. This guide covers:</p> <ul> <li>Writing <a href="/articles/script-automation-guide">automation script</a> for installing Airflow, configuring it, and setting up a reverse proxy with Caddy</li> <li>Running the scripts on a remote server using CloudRay</li> </ul> <p>By the end of this article, you will have a fully functional Apache Airflow setup, accessible through a web UI, and ready to run your workflows.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#assumptions">Assumptions</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#creating-installation-and-setup-script">Creating Installation and Setup Script</a></li> <li><a href="#apache-airflow-configuration-script">Apache Airflow Configuration Script</a></li> <li><a href="#start-apache-service-script">Start Apache Service Script</a></li> <li><a href="#install-and-configure-caddy-script">Install and Configure Caddy Script</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-script-with-cloudray">Running the Script with CloudRay</a></li> <li><a href="#troubleshooting">Troubleshooting</a></li> <li><a href="#related-guide">Related Guide</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started with your automation, make sure your target servers are connected to CloudRay. 
If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your servers.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can modify these scripts based on your environment and deployment needs.</p> </div> <h2>Assumptions</h2> <ul> <li>This guide assumes you’re using <strong>Ubuntu 24.04 LTS</strong> as your server’s operating system. If you’re using a different version or a different distribution, adjust the commands accordingly.</li> </ul> <h2>Create the Automation Script</h2> <p>To automate the installation of Apache Airflow, you will use four Bash scripts:</p> <ol> <li><strong>Installation and Setup Script</strong>: This script will install required dependencies, set up the PostgreSQL database, and install Apache Airflow</li> <li><strong>Airflow Configuration Script</strong>: This script will configure Airflow’s database connection and initialize it</li> <li><strong>Start Airflow Script</strong>: This script will create an admin user and start the Airflow services</li> <li><strong>Install and Configure Caddy Script</strong>: This script will install and configure Caddy as a reverse proxy</li> </ol> <p>Let’s begin with the installation and configuration of Apache Airflow.</p> <h3>Creating Installation and Setup Script</h3> <p>To create the installation and setup script, you need to follow these steps:</p> <img src="/_astro/installation-and-setup-script.D5al2rAp.jpg" alt="Screenshot of adding a new installation and setup script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Install and setup Apache Airflow</code>. 
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit immediately if a command exits with a non-zero status</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update system packages</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span></span> <span><span># Install required packages</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> python3</span><span> python3-venv</span><span> python3-dev</span><span> gcc</span><span> libpq-dev</span><span> postgresql</span><span> postgresql-contrib</span></span> <span></span> <span><span># Start and enable PostgreSQL</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> postgresql</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> postgresql</span></span> <span></span> <span><span># Create Airflow database and user</span></span> <span><span>sudo</span><span> -u</span><span> postgres</span><span> psql</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>CREATE USER {{db_user}} PASSWORD '{{db_pass}}';</span></span> <span><span>CREATE DATABASE {{db_name}};</span></span> <span><span>GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO {{db_user}};</span></span> <span><span>ALTER DATABASE {{db_name}} OWNER TO {{db_user}};</span></span> <span><span>GRANT ALL ON SCHEMA public TO {{db_user}};</span></span> <span><span>EOF</span></span> <span></span> <span><span># Setup Python virtual environment</span></span> <span><span>python3</span><span> -m</span><span> venv</span><span> airflow_env</span></span> <span><span>source</span><span> ~/airflow_env/bin/activate</span></span> <span></span> <span><span># Install Airflow and PostgreSQL connector</span></span> <span><span>pip</span><span> install</span><span> --upgrade</span><span> 
pip</span></span> <span><span>pip</span><span> install</span><span> psycopg2-binary</span><span> apache-airflow[postgres]</span></span> <span></span> <span><span>echo</span><span> "Installation and setup completed"</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what each command in the <code>Install and setup Apache Airflow</code> does:</p> <ul> <li>Updates system packages</li> <li>Installs Python and PostgreSQL dependencies, then starts and enables PostgreSQL</li> <li>Creates an Airflow database and user</li> <li>Sets up a Python virtual environment</li> <li>Installs Apache Airflow and PostgreSQL connector</li> </ul> <p>If you’re looking to back up or manage your Airflow database effectively, check out our <a href="https://cloudray.io/articles/setting-up-postgres-database">PostgreSQL guide</a>.</p> <h3>Apache Airflow Configuration Script</h3> <p>After the installation and setup of Airflow, you need to configure its settings.</p> <p>Similarly, follow these steps to create the configuration script:</p> <img src="/_astro/configure-airflow-script.B9Z9vH15.jpg" alt="Screenshot of Configuring Airflow" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Configure Airflow Script</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Activate virtual environment</span></span> <span><span>source</span><span> ~/airflow_env/bin/activate</span></span> <span></span> <span><span># Initialize Airflow DB</span></span> <span><span>airflow</span><span> db</span><span> init</span></span> <span></span> <span><span>sed</span><span> -i</span><span> 's/executor = SequentialExecutor/executor = LocalExecutor/'</span><span> {{airflow_cfg}}</span></span> <span><span>sed</span><span> -i</span><span> 's|sql_alchemy_conn = sqlite:////root/airflow/airflow.db|sql_alchemy_conn = 
postgresql+psycopg2://{{db_user}}:{{db_pass}}@localhost/{{db_name}}|'</span><span> {{airflow_cfg}}</span></span> <span></span> <span><span># Reinitialize Airflow DB</span></span> <span><span>airflow</span><span> db</span><span> init</span></span> <span></span> <span><span>echo</span><span> "Airflow configured successfully"</span></span></code><span></span><span></span></pre> <p>This is what the <code>Configure Airflow Script</code> does:</p> <ul> <li>Activates the Python virtual environment</li> <li>Initializes the Airflow database</li> <li>Modifies the Airflow configuration file to use LocalExecutor instead of the default SequentialExecutor</li> <li>Updates the database connection to use PostgreSQL</li> </ul> <h3>Start Apache Service Script</h3> <p>Moving forward, you need to start Airflow and create an admin user. Here, you follow similar steps as the previous ones to create the start script:</p> <img src="/_astro/start-apache-service-script.BoWaB8pl.jpg" alt="Screenshot of starting apache service" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Start Apache Service Script</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Activate virtual environment</span></span> <span><span>source</span><span> ~/airflow_env/bin/activate</span></span> <span></span> <span><span># Create an Airflow admin user</span></span> <span><span>airflow</span><span> users</span><span> create</span><span> \</span></span> <span><span> --username</span><span> {{username}}</span><span> \</span></span> <span><span> --password</span><span> {{password}}</span><span> \</span></span> <span><span> --firstname</span><span> {{firstname}}</span><span> \</span></span> <span><span> --lastname</span><span> {{lastname}}</span><span> \</span></span> <span><span> --role</span><span> Admin</span><span> \</span></span> <span><span> 
--email</span><span> {{email}}</span></span> <span></span> <span><span># Start Airflow services in the background</span></span> <span><span>nohup</span><span> airflow</span><span> webserver</span><span> -p</span><span> 8080</span><span> &gt;</span><span> webserver.log</span><span> 2&gt;&amp;1</span><span> &amp;</span></span> <span><span>nohup</span><span> airflow</span><span> scheduler</span><span> &gt;</span><span> scheduler.log</span><span> 2&gt;&amp;1</span><span> &amp;</span></span> <span></span> <span><span>echo</span><span> "Airflow started successfully."</span></span></code><span></span><span></span></pre> <p>This is what the <code>Start Apache Service Script</code> does:</p> <ul> <li>Activates the virtual environment</li> <li>Creates an admin user for Airflow</li> <li>Starts the Airflow webserver and scheduler in the background</li> </ul> <h3>Install and Configure Caddy Script</h3> <p>After starting the Airflow service, you can configure Caddy as a reverse proxy and secure the domain. 
This ensures Airflow is served on ports 80/443 with HTTPS enabled.</p> <p>Similarly, follow these steps to create the installation and configuration script:</p> <img src="/_astro/install-and-configure-caddy.C_rSp0-O.jpg" alt="Screenshot of installing and configuring Caddy" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Install and Configure Caddy Script</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Install Caddy</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> debian-keyring</span><span> debian-archive-keyring</span><span> apt-transport-https</span></span> <span><span>curl</span><span> -1sLf</span><span> 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key'</span><span> |</span><span> sudo</span><span> gpg</span><span> --dearmor</span><span> -o</span><span> /usr/share/keyrings/caddy-stable-archive-keyring.gpg</span></span> <span><span>curl</span><span> -1sLf</span><span> 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt'</span><span> |</span><span> sudo</span><span> tee</span><span> /etc/apt/sources.list.d/caddy-stable.list</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> caddy</span></span> <span></span> <span><span># Configure Caddy</span></span> <span><span>echo</span><span> "airflow.example.com {</span></span> <span><span> reverse_proxy localhost:8080</span></span> <span><span>}"</span><span> |</span><span> sudo</span><span> tee</span><span> /etc/caddy/Caddyfile</span></span> <span></span> <span><span># Validate and start Caddy</span></span> <span><span>sudo</span><span> caddy</span><span> validate</span><span> --config</span><span> /etc/caddy/Caddyfile</span></span> <span><span>sudo</span><span> systemctl</span><span> 
start</span><span> caddy</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> caddy</span></span> <span></span> <span><span>echo</span><span> "Caddy installed and configured. You can access Airflow at http://airflow.example.com or https://airflow.example.com"</span></span></code><span></span><span></span></pre> <p>This is what the <code>Install and Configure Caddy Script</code> does:</p> <ul> <li>Installs Caddy from the official repository</li> <li>Configures it as a reverse proxy for Airflow</li> <li>Starts and enables the Caddy service</li> </ul> <h2>Create a Variable Group</h2> <p>Now, before running the scripts, you need to define values for the placeholders <code>{{db_name}}</code>, <code>{{db_user}}</code>, <code>{{db_pass}}</code>, <code>{{airflow_cfg}}</code>, <code>{{username}}</code>, <code>{{password}}</code>, <code>{{firstname}}</code>, <code>{{lastname}}</code> and <code>{{email}}</code> used in the scripts. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. 
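As a quick illustration of what this rendering does — the variable values below are made up for the example, and CloudRay performs the substitution for you before the script ever reaches the server — a placeholder such as <code>{{db_user}}</code> is replaced with the value defined in the variable group:

```shell
# Illustration only: CloudRay renders Liquid placeholders itself.
# A script line containing placeholders...
template='psql -U {{db_user}} -d {{db_name}}'

# ...is rendered with the variable group's values (example values shown)
# before the script is executed on the server:
rendered=$(printf '%s' "$template" | sed -e 's/{{db_user}}/airflow/' -e 's/{{db_name}}/airflow_db/')

echo "$rendered"
```

Only the rendered script runs on the target server, which is why the same script can be reused across environments simply by switching variable groups.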
This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.DW7FWMML.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>airflow_cfg</code>:</strong> This is the file path to Airflow’s configuration file, which contains settings for the Airflow environment</li> <li><strong><code>db_name</code>:</strong> This is the name of the database used to store Airflow’s metadata and operational data</li> <li><strong><code>db_user</code>:</strong> The username for accessing the Airflow database</li> <li><strong><code>db_pass</code>:</strong> The password for the database user to authenticate and access the Airflow database</li> <li><strong><code>username</code>:</strong> This is the username for the default admin account in the Airflow web interface</li> <li><strong><code>password</code>:</strong> This is the password for the default admin account to log in to the Airflow web interface</li> <li><strong><code>firstname</code>:</strong> This is the first name associated with the default admin account in Airflow</li> <li><strong><code>lastname</code>:</strong> The last name associated with the default admin account in Airflow</li> <li><strong><code>email</code>:</strong> The email address associated with the default admin account, used for notifications and alerts</li> </ul> <p>With the variables set up, proceed to run the scripts with CloudRay.</p> <h2>Running the Script with CloudRay</h2> <p>You can choose to run the scripts 
individually or execute them all at once using <a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>. Since there are multiple scripts, using CloudRay playlists will help automate the execution sequence and save time.</p> <p>Here are the steps to follow:</p> <ol> <li><strong>Navigate to “Script Playlists”:</strong> Click on the Scripts tab in the CloudRay interface</li> </ol> <img src="/_astro/script-playlist.CVHog8Rg.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Click “Add Script Playlist”:</strong> This initiates the creation of a new playlist</li> <li><strong>Provide a Name:</strong> Give your playlist a unique name (For example “Automate Installation and Configuration of Airflow”)</li> <li><strong>Add Scripts in Order:</strong> Select and add the scripts sequentially</li> </ol> <img src="/_astro/create-script-playlist.CtpDq8P_.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Save the Playlist:</strong> Click “create playlist” to store your new playlist.</li> </ol> <p>Once your script playlist is created, proceed with execution:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Provide the necessary details:</li> </ol> <img src="/_astro/run-playlist-script.CI_k48fW.jpg" alt="Screenshot of creating a new runlog" /> <ul> <li>Server: Select the server where you need Apache Airflow to be installed</li> <li>Script Playlist: Choose the playlist you created (For example “Automate Installation and Configuration of Airflow”)</li> <li>Variable Group: Select the variable group you set up earlier</li> </ul> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution</li> </ol> <img src="/_astro/result-automation-scripts.C09S55gB.jpg" alt="Screenshot of the 
result of all the scripts from the script playlist" /> <p>Your Apache Airflow is now installed, set up, and managed with CloudRay. That’s it! Happy deploying!</p> <h2>Troubleshooting</h2> <p>If you encounter issues during deployment, consider the following:</p> <ul> <li>Ensure your domain is properly registered and that its A record points to your server’s IP address.</li> <li>Verify that all necessary environment variables are correctly set in your CloudRay Variable Group</li> </ul> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div> <h2>Related Guide</h2> <ul> <li><a href="/articles/install-k3s">Install K3s</a></li> <li><a href="/articles/install-cockpit">Install Cockpit</a></li> <li><a href="/articles/install-nagios">Install Nagios</a></li> <li><a href="/articles/monitor-remote-hosts-with-nagios">Monitor Remote Host with Nagios</a></li> </ul>Automate the Installation of K3shttps://cloudray.io/articles/install-k3shttps://cloudray.io/articles/install-k3sLearn how to automate installing and configuring a K3s Kubernetes cluster on Ubuntu 24.04 with Bash scripts for fast, consistent deploymentSat, 22 Feb 2025 00:00:00 GMT<p>Setting up a Kubernetes cluster manually can be repetitive and error-prone. 
This guide walks you through automating the installation of K3s, configuring the cluster, and deploying tools like Helm and the NVIDIA GPU Operator, all using Bash scripts executed through <a href="https://app.cloudray.io/">CloudRay</a>.</p> <p>Whether you’re building a lightweight production cluster or a test environment, the process becomes faster, more consistent, and easier to manage.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#assumptions">Assumptions</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#install-and-configure-cluster-script">Install and Configure Cluster Script</a></li> <li><a href="#install-helm-and-deploy-nvidia-script">Install Helm and Deploy NVIDIA Script</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-script-with-cloudray">Running the Script with CloudRay</a> <ul> <li><a href="#run-the-install-cluster-script">Run the Install cluster script</a></li> <li><a href="#run-the-install-and-deploy-nvidia-script">Run the Install and Deploy NVIDIA script</a></li> </ul> </li> <li><a href="#troubleshooting">Troubleshooting</a></li> <li><a href="#related-guide">Related Guide</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started with your automation, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your servers.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can adapt the scripts to fit your specific installation needs and environment.</p> </div> <h2>Assumptions</h2> <ul> <li>This guide assumes you’re using <strong>Ubuntu 24.04 LTS</strong> as your server’s operating system. 
If you’re using a different version or a different distribution, adjust the commands accordingly</li> <li>You intend to deploy K3s as a single-node cluster. If using a multi-node setup, additional steps such as node joining must be performed</li> <li>Your server has NVIDIA drivers installed if you plan to use GPU workloads</li> </ul> <h2>Create the Automation Script</h2> <p>To automate the installation, you’ll need two Bash scripts:</p> <ol> <li><strong>Install and Configure Cluster Script</strong>: This script will install K3s and configure the environment</li> <li><strong>Install Helm and Deploy NVIDIA Script</strong>: This script will install Helm and deploy the NVIDIA GPU Operator</li> </ol> <p>Let’s begin with the installation and configuration of the cluster.</p> <h3>Install and Configure Cluster Script</h3> <p>To create the Install and Configure Cluster Script, you need to follow these steps:</p> <img src="/_astro/installation-script.Dua9c4AY.jpg" alt="Screenshot of adding a new installation and configuration script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Install and Configure Cluster Script</code>. 
You can give it any name of your choice</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Install K3s</span></span> <span><span>echo</span><span> "Installing K3s..."</span></span> <span><span>curl</span><span> -sfL</span><span> https://get.k3s.io</span><span> |</span><span> sh</span><span> -</span></span> <span></span> <span><span># Create kubeconfig directory</span></span> <span><span>mkdir</span><span> -p</span><span> {{kubeconfig_dir}}</span></span> <span></span> <span><span># Symlink K3s config</span></span> <span><span>ln</span><span> -sf</span><span> /etc/rancher/k3s/k3s.yaml</span><span> {{kubeconfig_dir}}/config</span></span> <span></span> <span><span># Set permissions</span></span> <span><span>sudo</span><span> chmod</span><span> 755</span><span> {{kubeconfig_dir}}/config</span></span> <span></span> <span><span># Allow necessary ports through UFW</span></span> <span><span>echo</span><span> "Configuring firewall rules..."</span></span> <span><span>sudo</span><span> ufw</span><span> allow</span><span> 6443/tcp</span></span> <span><span>sudo</span><span> ufw</span><span> allow</span><span> 30000:32767/tcp</span></span> <span><span>sudo</span><span> ufw</span><span> allow</span><span> 30000:32767/udp</span></span> <span></span> <span><span># Enable K3s service</span></span> <span><span>echo</span><span> "Enabling K3s service..."</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> k3s</span></span> <span><span>sudo</span><span> systemctl</span><span> status</span><span> k3s</span><span> --no-pager</span></span> <span></span> <span><span>echo</span><span> "K3s installation completed!"</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what each command in the <code>Install and Configure Cluster Script</code> does:</p> <ul> <li>Installs 
K3s using the official installation script</li> <li>Creates and configures the Kubernetes configuration directory</li> <li>Opens required ports in the firewall</li> <li>Enables and starts the K3s service</li> </ul> <h3>Install Helm and Deploy NVIDIA Script</h3> <p>If you plan to run GPU-accelerated workloads, you can install Helm and deploy the NVIDIA GPU Operator.</p> <p>Similarly, follow these steps to deploy NVIDIA:</p> <img src="/_astro/install-deploy-nvidia.CckVktSf.jpg" alt="Screenshot of deploying Nvidia" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Install Helm and Deploy NVIDIA Script</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit on error</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Install Helm</span></span> <span><span>echo</span><span> "Installing Helm..."</span></span> <span><span>curl</span><span> https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3</span><span> |</span><span> bash</span></span> <span></span> <span><span># Add NVIDIA Helm repository</span></span> <span><span>echo</span><span> "Adding NVIDIA repository..."</span></span> <span><span>helm</span><span> repo</span><span> add</span><span> nvidia</span><span> https://helm.ngc.nvidia.com/nvidia</span></span> <span><span>helm</span><span> repo</span><span> update</span></span> <span></span> <span><span># Export KUBECONFIG</span></span> <span><span>export</span><span> KUBECONFIG</span><span>=</span><span>{{kubeconfig_dir}}/config</span></span> <span></span> <span><span># Install GPU Operator</span></span> <span><span>echo</span><span> "Deploying NVIDIA GPU Operator..."</span></span> <span><span>helm</span><span> install</span><span> --wait</span><span> gpu-operator</span><span> nvidia/gpu-operator</span><span> --create-namespace</span><span> -n</span><span> gpu-operator</span><span> --set</span><span> 
driver.enabled=</span><span>false</span></span> <span></span> <span><span>echo</span><span> "GPU Operator installation completed!"</span></span></code><span></span><span></span></pre> <p>This is what the <code>Install Helm and Deploy NVIDIA Script</code> does:</p> <ul> <li>Installs Helm, a package manager for Kubernetes</li> <li>Adds the NVIDIA Helm repository and updates it</li> <li>Deploys the NVIDIA GPU Operator to enable GPU workloads in K3s</li> </ul> <h2>Create a Variable Group</h2> <p>Now, before running the scripts, you need to define a value for the placeholder <code>{{kubeconfig_dir}}</code> used in the scripts. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.CMWPC-s-.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>kubeconfig_dir</code>:</strong> This is the path where Kubernetes configuration files are stored</li> </ul> <p>With the variable set up, proceed to run the scripts with CloudRay.</p> <h2>Running the Script with CloudRay</h2> <p>You can choose to run the scripts individually or execute them all at once using <a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>. 
If you prefer to run them individually, follow these steps:</p> <h3>Run the Install cluster script</h3> <p>CloudRay uses Runlogs to execute scripts on your servers while providing real-time logs of the execution process.</p> <p>To run the <code>Install and Configure Cluster Script</code>, follow these steps:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Fill in the required details:</li> </ol> <ul> <li>Server: Select the server you added earlier.</li> <li>Script: Choose the “Install and Configure Cluster Script”</li> <li>Variable Group (optional): Select the variable group you created earlier.</li> </ul> <img src="/_astro/run-install-k3s-script.DRpVMvuq.jpg" alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/result-install-k3s.Iq8wAvcV.jpg" alt="Screenshot of the output of the install k3s script" /> <p>CloudRay will automatically connect to your server, run the <code>Install and Configure Cluster Script</code>, and provide live logs to track the process. 
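Once the runlog completes, a quick sanity check on the server confirms the cluster is healthy. The sketch below assumes K3s’s default kubeconfig location (which the install script symlinks into <code>{{kubeconfig_dir}}/config</code>); the <code>kubectl</code> checks are shown as comments since they only make sense on the K3s server itself:

```shell
# Point kubectl at the kubeconfig K3s writes by default
KUBECONFIG=/etc/rancher/k3s/k3s.yaml
export KUBECONFIG

# Sanity checks to run on the K3s server once installation finishes:
#   kubectl get nodes      # the single node should report STATUS "Ready"
#   kubectl get pods -A    # core pods (coredns, traefik, ...) should be Running
echo "KUBECONFIG set to $KUBECONFIG"
```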
If any errors occur, you can review the logs to troubleshoot the issue.</p> <h3>Run the Install and Deploy NVIDIA script</h3> <p>If you wish to run GPU-accelerated workloads, you can run the Install Helm and Deploy NVIDIA Script.</p> <p>To run the Install Helm and Deploy NVIDIA Script, follow similar steps:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Provide the necessary details:</li> </ol> <ul> <li>Server: Select the same server used earlier.</li> <li>Script: Choose the “Install Helm and Deploy NVIDIA Script”.</li> <li>Variable Group: Select the variable group you created earlier.</li> </ul> <img src="/_astro/result-install-k3s.Iq8wAvcV.jpg" alt="Screenshot of running the install and deploy nvidia script" /> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution.</li> </ol> <img src="/_astro/result-install-k3s.Iq8wAvcV.jpg" alt="Screenshot of the output of the Deployment script" /> <p>Your lightweight Kubernetes cluster with K3s is now deployed and ready for GPU-accelerated workloads with the NVIDIA GPU Operator!</p> <h2>Troubleshooting</h2> <p>If you encounter issues during deployment, consider the following:</p> <ul> <li><strong>K3s Service Not Running:</strong> Check service status with <code>sudo systemctl status k3s</code> and restart with <code>sudo systemctl restart k3s</code></li> <li><strong>Firewall Issues:</strong> Verify that required ports (6443, 30000-32767) are open using <code>sudo ufw status</code></li> <li><strong>NVIDIA GPU Operator Deployment Issues:</strong> Check logs with <code>kubectl logs -n gpu-operator -l app=nvidia-gpu-operator</code></li> </ul> <p>If you still experience issues, refer to the <a href="https://docs.k3s.io/installation">official K3s Guide</a> and <a 
href="https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/index.html">NVIDIA GPU Operator Documentation</a> for more details.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div> <h2>Related Guide</h2> <ul> <li><a href="/articles/install-apache-airflow">Install Apache Airflow</a></li> <li><a href="/articles/install-cockpit">Install Cockpit</a></li> <li><a href="/articles/install-nagios">Install Nagios</a></li> <li><a href="/articles/monitor-remote-hosts-with-nagios">Monitor Remote Host with Nagios</a></li> </ul>How to Deploy & Manage a MySQL Serverhttps://cloudray.io/articles/deploy-mysql-serverhttps://cloudray.io/articles/deploy-mysql-serverLearn how to automate the deployment process and management of MySQL Server using CloudRayThu, 20 Feb 2025 00:00:00 GMT<p>In this article, you will learn how to deploy and manage MySQL server on Ubuntu 24.04 using <a href="https://app.cloudray.io/">CloudRay</a>, an automation tool that simplifies server management. 
This guide will cover setting up MySQL, configuring it for remote access, creating a database with a user, implementing automated backups, and executing these tasks using CloudRay.</p> <p>The deployment strategy involves:</p> <ul> <li>The installation and configuration of MySQL Server</li> <li>Creating a database, user, and sample table</li> <li>Automating database backups</li> </ul> <p>By the end of this article, you will have a fully functional MySQL server deployed on Ubuntu 24.04, managed seamlessly through CloudRay automation.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#assumptions">Assumptions</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#installation-and-configuration-script">Installation and Configuration Script</a></li> <li><a href="#database-setup-script">Database Setup Script</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-script-with-cloudray">Running the Script with CloudRay</a></li> <li><a href="#scheduling-mysql-database-backup-with-cloudrays-schedules-optional">Scheduling MySQL Database Backup with CloudRay’s Schedules (Optional)</a></li> <li><a href="#related-guides">Related Guides</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can adapt the scripts to fit your specific installation needs and environment.</p> </div> <h2>Assumptions</h2> <ul> <li>This guide assumes you’re using <strong>Ubuntu 24.04 LTS</strong> as your server’s operating system.
If you’re using a different version or a different distribution, adjust the commands accordingly.</li> </ul> <h2>Create the Automation Script</h2> <p>To streamline the deployment and management processes, you’ll need three Bash scripts:</p> <ol> <li><strong>Installation and Configuration Script</strong>: This script will install MySQL, secure it, and configure it for remote access.</li> <li><strong>Database Setup Script</strong>: This script will create a MySQL database, a user, and a sample table.</li> <li><strong>Backup Script</strong>: This script will automate backups for the MySQL database</li> </ol> <h3>Installation and Configuration Script</h3> <p>To create the installation and configuration script, you need to follow these steps:</p> <img src="/_astro/installation-script.BMUkXuA_.jpg" alt="Screenshot of adding a new setup script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Install and Configure MySQL Server</code>.
You can give it any name of your choice.</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Stop executing the script if any of the commands fail</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update system package lists</span></span> <span><span>sudo</span><span> apt</span><span> update</span></span> <span></span> <span><span># Install MySQL Server</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> mysql-server</span></span> <span></span> <span><span># Secure MySQL Installation</span></span> <span><span>printf</span><span> "y\n2\ny\ny\ny\ny\n"</span><span> |</span><span> sudo</span><span> mysql_secure_installation</span></span> <span></span> <span><span># Modify or add port configuration</span></span> <span><span>sudo</span><span> sed</span><span> -i</span><span> "s/^port\s*=.*/port = 5231/"</span><span> {{mysql_conf}}</span></span> <span></span> <span><span># Check if bind-address exists, modify it; otherwise, add it</span></span> <span><span>if</span><span> grep</span><span> -q</span><span> "^bind-address"</span><span> {{mysql_conf}}</span><span>; </span><span>then</span></span> <span><span> sudo</span><span> sed</span><span> -i</span><span> "s/^bind-address\s*=.*/bind-address = 0.0.0.0/"</span><span> {{mysql_conf}}</span></span> <span><span>else</span></span> <span><span> echo</span><span> "bind-address = 0.0.0.0"</span><span> |</span><span> sudo</span><span> tee</span><span> -a</span><span> {{mysql_conf}}</span></span> <span><span>fi</span></span> <span></span> <span><span># Restart MySQL to apply changes</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> mysql</span></span> <span></span> <span><span>echo</span><span> "MySQL configuration updated successfully!"</span></span></code><span></span><span></span></pre> <p>Below is a breakdown of what each command in the 
<code>Install and Configure MySQL Server</code> does:</p> <ul> <li>Updates the system’s package list to ensure that the latest versions of software packages are installed</li> <li>Installs the latest version of MySQL Server</li> <li>Secures MySQL by setting a strong password, removing anonymous users, disallowing remote root login, and deleting test databases</li> <li>Changes the default MySQL port to <code>5231</code> and enables remote connections by setting <code>bind-address = 0.0.0.0</code></li> <li>Restarts MySQL to apply the changes</li> </ul> <h3>Database Setup Script</h3> <p>After the installation of MySQL, you need to create a MySQL database, a dedicated user with privileges, and a table if necessary.</p> <p>Similarly, follow these steps to set up the database:</p> <img src="/_astro/setup-database-script.BS8O0kRH.jpg" alt="Screenshot of adding the database setup script" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Setup MySQL Database</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Enter MySQL and execute commands</span></span> <span><span>sudo</span><span> mysql</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>CREATE DATABASE IF NOT EXISTS {{db_name}};</span></span> <span><span>CREATE USER IF NOT EXISTS '{{db_user}}'@'%' IDENTIFIED BY '{{db_pass}}';</span></span> <span><span>GRANT ALL PRIVILEGES ON {{db_name}}.* TO '{{db_user}}'@'%' WITH GRANT OPTION;</span></span> <span><span>FLUSH PRIVILEGES;</span></span> <span></span> <span><span>USE {{db_name}};</span></span> <span><span>CREATE TABLE IF NOT EXISTS students (</span></span> <span><span> id INT AUTO_INCREMENT PRIMARY KEY,</span></span> <span><span> name VARCHAR(100) NOT NULL,</span></span> <span><span> age INT NOT NULL,</span></span> <span><span> department VARCHAR(100) NOT NULL</span></span>
<span><span>);</span></span> <span></span> <span><span>EOF</span></span> <span></span> <span><span>echo</span><span> "Database {{db_name}}, user {{db_user}}, and students table created successfully!"</span></span></code><span></span><span></span></pre> <p>This is what the <code>Setup MySQL Database</code> does:</p> <ul> <li>Creates the database if it doesn’t exist</li> <li>Creates a MySQL user with a specified password</li> <li>Grants full privileges to the user on the database</li> <li>Reloads privileges to apply changes</li> <li>Creates the students table with columns for id, name, age, and department (you can adjust this based on your need)</li> </ul> <h2>Create a Variable Group</h2> <p>Before running the scripts, you need to define values for the placeholders <code>{{mysql_conf}}</code>, <code>{{db_name}}</code>, <code>{{db_user}}</code>, <code>{{db_pass}}</code>, and <code>{{backup_dir}}</code> used in the scripts. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>.
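To make that substitution concrete, here is a rough, self-contained sketch in plain Bash of the effect (this is not CloudRay’s actual Liquid engine, and the configuration path is only an example value; your variable group supplies the real one):

```shell
#!/bin/bash
# Sketch of what Liquid-style placeholder substitution produces.
# The template line is taken from the installation script above.
template='sudo sed -i "s/^port\s*=.*/port = 5231/" {{mysql_conf}}'

# Example value for the mysql_conf variable (adjust to your system).
mysql_conf='/etc/mysql/mysql.conf.d/mysqld.cnf'

# Replace the {{mysql_conf}} placeholder with the variable's value.
rendered=${template//"{{mysql_conf}}"/$mysql_conf}
echo "$rendered"
```

By the time a Runlog executes on your server, every `{{...}}` placeholder has already been rendered this way, so the script the server runs contains only concrete values.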
This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.CWAt7W6n.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>mysql_conf</code>:</strong> The file path of the MySQL configuration file</li> <li><strong><code>db_name</code>:</strong> The name of the database</li> <li><strong><code>db_user</code>:</strong> The name of the database user</li> <li><strong><code>db_pass</code>:</strong> The password for the database user</li> <li><strong><code>backup_dir</code>:</strong> The directory where MySQL database backups will be stored</li> </ul> <h2>Running the Script with CloudRay</h2> <p>You can choose to run the scripts individually or execute them all at once using <a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>.
Since there are multiple scripts, using CloudRay playlists will help automate the execution sequence and save time.</p> <p>Here are the steps to follow:</p> <ol> <li><strong>Navigate to “Script Playlists”:</strong> Click on the Scripts tab in the CloudRay interface</li> </ol> <img src="/_astro/script-playlist.DWH35jzv.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Click “Add Script Playlist”:</strong> This initiates the creation of a new playlist</li> <li><strong>Provide a Name:</strong> Give your playlist a unique name (For example “MySQL Deployment Automation and Management”)</li> <li><strong>Add Scripts in Order:</strong> Select and add the scripts sequentially</li> </ol> <img src="/_astro/create-script-playlist.DiCtVsf0.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Save the Playlist:</strong> Click “create playlist” to store your new playlist.</li> </ol> <p>Once your script playlist is created, proceed with execution:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Provide the necessary details:</li> </ol> <img src="/_astro/run-playlist-script.CAjrX-Ff.jpg" alt="Screenshot of creating a new runlog" /> <ul> <li>Server: Select the server where MySQL will be installed</li> <li>Script Playlist: Choose the playlist you created (For example “MySQL Deployment Automation and Management”)</li> <li>Variable Group: Select the variable group you set up earlier</li> </ul> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution</li> </ol> <img src="/_astro/result-automation-scripts.B6dUE2T1.jpg" alt="Screenshot of the result of all the script from the script playlist" /> <p>Your MySQL Server is now seamlessly deployed and managed with CloudRay. That’s it! 
Happy deploying!</p> <h2>Scheduling MySQL Database Backup with CloudRay’s Schedules (Optional)</h2> <p>CloudRay also offers <a href="https://cloudray.io/docs/schedules">Schedules</a>, allowing you to execute scripts automatically at specific intervals or times. This feature is useful for tasks such as automating database backups.</p> <p>To automate MySQL database backups with CloudRay, you first need to create a backup script that the scheduler will execute.</p> <p>You can follow similar steps as the previous ones to create the backup script:</p> <img src="/_astro/database-backup-script.DSui8U35.jpg" alt="Screenshot of adding the database backup script" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Backup MySQL Database</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span>BACKUP_FILE</span><span>=</span><span>"{{backup_dir}}/{{db_name}}_$(</span><span>date</span><span> +%F_%H-%M-%S).sql"</span></span> <span></span> <span><span># Ensure backup directory exists</span></span> <span><span>mkdir</span><span> -p</span><span> {{backup_dir}}</span></span> <span></span> <span><span># Perform database backup</span></span> <span><span>sudo</span><span> mysqldump</span><span> -u</span><span> root</span><span> {{db_name}}</span><span> &gt;</span><span> "$BACKUP_FILE"</span></span> <span></span> <span><span># Confirm backup success</span></span> <span><span>if</span><span> [ </span><span>$?</span><span> -eq</span><span> 0</span><span> ]; </span><span>then</span></span> <span><span> echo</span><span> "Backup successful: </span><span>$BACKUP_FILE</span><span>"</span></span> <span><span>else</span></span> <span><span> echo</span><span> "Backup failed!"</span></span> <span><span> exit</span><span> 1</span></span> <span><span>fi</span></span></code><span></span><span></span></pre> <p>This is what the <code>Backup MySQL
Database</code> does:</p> <ul> <li>Ensures the backup directory exists</li> <li>Backs up the database to a timestamped file</li> <li><strong>Checks if the backup was successful:</strong> If successful, it prints the backup path; otherwise, it reports failure and exits with an error code.</li> </ul> <p>For example, if you want to back up your database on the first day of every month at 12:00 AM, you can configure a CloudRay schedule to handle this automatically.</p> <p>Here are the steps to achieve this:</p> <ol> <li><strong>Navigate to Schedules:</strong> In your CloudRay dashboard, go to the “Schedules” tab.</li> </ol> <img src="/_astro/locating-schedule.3eGh4hx5.jpg" alt="Screenshot of the location of Schedules in CloudRay's Interface" /> <ol> <li><strong>Click “Add Schedule”:</strong> Start creating a new schedule.</li> </ol> <img src="/_astro/Setup-schedules.DvHXH1hV.jpg" alt="Screenshot of setting up a schedule in CloudRay" /> <ol> <li><strong>Submit Schedule:</strong> Click “Submit” to activate your new schedule.</li> </ol> <img src="/_astro/schedules-enabled.jTf4cc5l.jpg" alt="Screenshot of the location of enabled schedule" /> <p>CloudRay will automatically execute the backup script at the scheduled time, ensuring that your database is regularly backed up without manual intervention.</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div> <h2>Related Guides</h2> <ul> <li><a href="/articles/install-mongodb">Install MongoDB</a></li> <li><a href="/articles/deploy-phpmyadmin">Automate PHPMyAdmin Deployment</a></li> <li><a href="/articles/setting-up-postgres-database">Setting Up PostgreSQL</a></li> </ul>Setting Up a PostgreSQL Database on a Remote Serverhttps://cloudray.io/articles/setting-up-postgres-databasehttps://cloudray.io/articles/setting-up-postgres-databaseLearn how to automate PostgreSQL installation, configuration, database creation, and backups
on a remote Ubuntu server using reusable Bash scriptsMon, 17 Feb 2025 00:00:00 GMT<p>Setting up a PostgreSQL database on a remote server can be time-consuming, but automation tools like <a href="https://app.cloudray.io/">CloudRay</a> simplify the process. CloudRay allows you to automate server configurations, setup, and database management through scripting.</p> <p>In this guide, we will walk you through setting up a PostgreSQL database on a remote Ubuntu server using CloudRay. You will learn how to add the server to CloudRay, automate PostgreSQL installation and configuration, create a database and user, and schedule backups.</p> <p>By the end of this article, you will have a fully functional PostgreSQL instance that is automated and easily manageable through CloudRay.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#assumptions">Assumptions</a></li> <li><a href="#create-the-automation-script">Create the Automation Script</a> <ul> <li><a href="#installation-and-configuration-script">Installation and Configuration Script</a></li> <li><a href="#database-setup-script">Database Setup Script</a></li> </ul> </li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#running-the-script-with-cloudray">Running the Script with CloudRay</a></li> <li><a href="#scheduling-postgresql-database-backup-with-cloudrays-schedules-optional">Scheduling PostgreSQL Database Backup with CloudRay’s Schedules (Optional)</a></li> <li><a href="#related-guides">Related Guides</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation.
You can adapt the scripts to fit your specific installation needs and environment.</p> </div> <h2>Assumptions</h2> <ul> <li>This guide assumes you’re using <strong>Ubuntu 24.04 LTS</strong> as your server’s operating system. If you’re using a different version or a different distribution, adjust the commands accordingly.</li> </ul> <h2>Create the Automation Script</h2> <p>To streamline the setup and management processes, you’ll need three Bash scripts:</p> <ol> <li><strong>Installation and Configuration Script</strong>: This script will install PostgreSQL and modify its configuration files accordingly</li> <li><strong>Database Setup Script</strong>: This script will create a PostgreSQL database, a user, and a sample table</li> <li><strong>Backup Script</strong>: This script will automate backups for the PostgreSQL database</li> </ol> <p>Let’s begin with the installation and configuration of PostgreSQL.</p> <h3>Installation and Configuration Script</h3> <p>To create the installation and configuration script, you need to follow these steps:</p> <img src="/_astro/installation-script.BJc9SVFs.jpg" alt="Screenshot of adding a new setup script" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Install and Configure PostgreSQL Database</code>.
You can give it any name of your choice.</li> <li>Copy this code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span># Exit immediately if a command exits with a non-zero status</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Update package lists</span></span> <span><span>sudo</span><span> apt</span><span> update</span><span> -y</span></span> <span></span> <span><span># Install PostgreSQL without interactive prompts</span></span> <span><span>sudo</span><span> apt</span><span> install</span><span> -y</span><span> postgresql</span><span> postgresql-contrib</span></span> <span></span> <span><span># Start and enable PostgreSQL service</span></span> <span><span>sudo</span><span> systemctl</span><span> start</span><span> postgresql</span></span> <span><span>sudo</span><span> systemctl</span><span> enable</span><span> postgresql</span></span> <span></span> <span><span># Modify postgresql.conf to allow external connections</span></span> <span><span>sudo</span><span> sed</span><span> -i</span><span> "s/^#listen_addresses = 'localhost'/listen_addresses = '*' /"</span><span> {{postgres_conf}}</span></span> <span></span> <span><span># Enable logging for PostgreSQL</span></span> <span><span>echo</span><span> "logging_collector = on"</span><span> |</span><span> sudo</span><span> tee</span><span> -a</span><span> {{postgres_conf}}</span></span> <span><span>echo</span><span> "log_directory = '/var/log/postgresql'"</span><span> |</span><span> sudo</span><span> tee</span><span> -a</span><span> {{postgres_conf}}</span></span> <span><span>echo</span><span> "log_filename = 'postgresql-%Y-%m-%d.log'"</span><span> |</span><span> sudo</span><span> tee</span><span> -a</span><span> {{postgres_conf}}</span></span> <span><span>echo</span><span> "log_rotation_age = 1d"</span><span> |</span><span> sudo</span><span> tee</span><span> -a</span><span> {{postgres_conf}}</span></span> <span><span>echo</span><span> 
"log_rotation_size = 10MB"</span><span> |</span><span> sudo</span><span> tee</span><span> -a</span><span> {{postgres_conf}}</span></span> <span></span> <span><span># Modify pg_hba.conf to allow remote access with password authentication</span></span> <span><span>echo</span><span> "host all all 0.0.0.0/0 md5"</span><span> |</span><span> sudo</span><span> tee</span><span> -a</span><span> {{pg_hba_conf}}</span></span> <span><span>echo</span><span> "host all all ::/0 md5"</span><span> |</span><span> sudo</span><span> tee</span><span> -a</span><span> {{pg_hba_conf}}</span></span> <span></span> <span><span># Reload PostgreSQL to apply authentication changes</span></span> <span><span>sudo</span><span> systemctl</span><span> reload</span><span> postgresql</span></span> <span></span> <span><span># Restart PostgreSQL to apply changes</span></span> <span><span>sudo</span><span> systemctl</span><span> restart</span><span> postgresql</span></span> <span></span> <span><span>echo</span><span> "PostgreSQL installed and configured successfully."</span></span></code><span></span><span></span></pre> <p>Here is a breakdown of what each command in the <code>Install and Configure PostgreSQL Database</code> does:</p> <ul> <li>Updates package lists to ensure we install the latest versions</li> <li>Installs the latest version of PostgreSQL and additional components</li> <li>Starts and enables PostgreSQL to ensure it runs on system startup</li> <li>Modifies <code>postgresql.conf</code> to allow external connections</li> <li>Enables logging for PostgreSQL to store logs in <code>/var/log/postgresql</code> and rotate them daily or when they exceed 10MB</li> <li>Modifies <code>pg_hba.conf</code> to enforce password authentication for all remote connections (both IPv4 and IPv6)</li> <li>Reloads PostgreSQL to apply authentication changes without restarting the database</li> <li>Restarts PostgreSQL to apply changes</li> </ul> <h3>Database Setup Script</h3> <p>After the installation and 
configuration of PostgreSQL, you can set up the database by creating a PostgreSQL database, a dedicated user with privileges, and a table if necessary.</p> <p>Similarly, follow these steps to set up the database:</p> <img src="/_astro/setup-database-script.B-iu_mRo.jpg" alt="Screenshot of setting up the database" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Setup Postgres Database</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Create the user and database</span></span> <span><span>sudo</span><span> -u</span><span> postgres</span><span> psql</span><span> &lt;&lt;</span><span>EOF</span></span> <span><span>CREATE USER {{db_user}} PASSWORD '{{db_pass}}';</span></span> <span><span>CREATE DATABASE {{db_name}};</span></span> <span><span>GRANT ALL PRIVILEGES ON DATABASE {{db_name}} TO {{db_user}};</span></span> <span><span>ALTER DATABASE {{db_name}} OWNER TO {{db_user}};</span></span> <span><span>\c {{db_name}}</span></span> <span><span>GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO {{db_user}};</span></span> <span><span>GRANT ALL ON SCHEMA public TO {{db_user}};</span></span> <span><span>CREATE TABLE UserGrades (</span></span> <span><span> Name VARCHAR(100),</span></span> <span><span> Grade VARCHAR(50)</span></span> <span><span>);</span></span> <span><span>EOF</span></span> <span></span> <span><span>echo</span><span> "PostgreSQL database {{db_name}} with user {{db_user}} created successfully."</span></span></code><span></span><span></span></pre> <p>This is what the <code>Setup Postgres Database</code> does:</p> <ul> <li>Switches to the <code>postgres</code> user and runs SQL commands</li> <li>Creates a new PostgreSQL user</li> <li>Creates a new database</li> <li>Grants full access to the database</li> <li>Assigns ownership of the database to the user</li> <li>Connects to the new database with <code>\c</code> so the remaining statements run against it</li> <li>Grants table permissions</li> <li>Creates a sample table</li>
</ul> <h2>Create a Variable Group</h2> <p>Now, before running the scripts, you need to define values for the placeholders <code>{{postgres_conf}}</code>, <code>{{pg_hba_conf}}</code>, <code>{{db_name}}</code>, <code>{{db_user}}</code>, and <code>{{db_pass}}</code> used in the scripts. CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. This allows you to use variables dynamically across different servers.</p> <img src="/_astro/variables.B1Ghzl2s.jpg" alt="Screenshot of adding a new variable group" /> <p>To ensure that these values are automatically substituted when the script runs, follow these steps to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>postgres_conf</code>:</strong> The file path of the PostgreSQL configuration file</li> <li><strong><code>pg_hba_conf</code>:</strong> The file path of the PostgreSQL Host-Based Authentication (HBA) configuration file</li> <li><strong><code>db_name</code>:</strong> The name of the database</li> <li><strong><code>db_user</code>:</strong> The name of the database user</li> <li><strong><code>db_pass</code>:</strong> The password for the database user</li> </ul> <p>With the variables set up, proceed to run the scripts with CloudRay.</p> <h2>Running the Script with CloudRay</h2> <p>You can choose to run the scripts individually or execute them all at once using <a href="https://cloudray.io/docs/script-playlists">CloudRay’s Script Playlists</a>.
Since there are multiple scripts, using CloudRay playlists will help automate the execution sequence and save time.</p> <p>Here are the steps to follow:</p> <ol> <li><strong>Navigate to “Script Playlists”:</strong> Click on the Scripts tab in the CloudRay interface</li> </ol> <img src="/_astro/script-playlist.CGdUSO3y.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Click “Add Script Playlist”:</strong> This initiates the creation of a new playlist</li> <li><strong>Provide a Name:</strong> Give your playlist a unique name (for example, “PostgreSQL Setup Automation and Management”)</li> <li><strong>Add Scripts in Order:</strong> Select and add the scripts sequentially</li> </ol> <img src="/_astro/create-script-playlist.tbvF9tvC.jpg" alt="Locate the script playlist in CloudRay interface" /> <ol> <li><strong>Save the Playlist:</strong> Click “create playlist” to store your new playlist.</li> </ol> <p>Once your script playlist is created, proceed with execution:</p> <ol> <li><strong>Navigate to Runlogs</strong>: In your CloudRay project, go to the Runlogs section in the top menu.</li> <li><strong>Create a New Runlog</strong>: Click on New Runlog.</li> <li><strong>Configure the Runlog</strong>: Provide the necessary details:</li> </ol> <img src="/_astro/run-playlist-script.BlyEElkP.jpg" alt="Screenshot of creating a new runlog" /> <ul> <li>Server: Select the server where you need PostgreSQL to be installed</li> <li>Script Playlist: Choose the playlist you created (for example, “PostgreSQL Setup Automation and Management”)</li> <li>Variable Group: Select the variable group you set up earlier</li> </ul> <ol> <li><strong>Execute the Script</strong>: Click on <strong>Run Now</strong> to start the execution</li> </ol> <img src="/_astro/result-automation-scripts.BlrUeqRH.jpg" alt="Screenshot of the result of all the scripts from the script playlist" /> <p>Your PostgreSQL is now seamlessly set up and managed with CloudRay. That’s it!
Happy deploying!</p> <h2>Scheduling PostgreSQL Database Backup with CloudRay’s Schedules (Optional)</h2> <p>CloudRay also offers <a href="https://cloudray.io/docs/schedules">Schedules</a>, allowing you to execute scripts automatically at specific intervals or times. This feature is useful for tasks such as automating database backups.</p> <p>To automate PostgreSQL database backups with CloudRay, you first need to create a backup script that the scheduler will execute.</p> <p>You can follow similar steps as the previous ones to create the backup script:</p> <img src="/_astro/database-backup-script.B6_B9Pdh.jpg" alt="Screenshot of backing up the database" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Backup Postgres Database</code></li> <li>Add code:</li> </ol> <pre><code><span><span>#!/bin/bash</span></span> <span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span>BACKUP_FILE</span><span>=</span><span>"{{db_name}}_$(</span><span>date</span><span> +%F_%H-%M-%S).sqlc"</span></span> <span></span> <span><span># Create .pgpass file for authentication</span></span> <span><span>PGPASS_FILE</span><span>=</span><span>"</span><span>$HOME</span><span>/.pgpass"</span></span> <span><span>echo</span><span> "localhost:5432:{{db_name}}:{{db_user}}:{{db_pass}}"</span><span> &gt;</span><span> "</span><span>$PGPASS_FILE</span><span>"</span></span> <span><span>chmod</span><span> 600</span><span> "</span><span>$PGPASS_FILE</span><span>"</span></span> <span></span> <span><span>echo</span><span> "Performing database backup..."</span></span> <span></span> <span><span># Perform database backup</span></span> <span><span>pg_dump</span><span> -h</span><span> localhost</span><span> -U</span><span> "{{db_user}}"</span><span> -d</span><span> "{{db_name}}"</span><span> -p</span><span> 5432</span><span> -F</span><span> c</span><span> -f</span><span> "</span><span>$BACKUP_FILE</span><span>"</span></span> <span></span> 
<span><span># Confirm backup success</span></span> <span><span>if</span><span> [ </span><span>$?</span><span> -eq</span><span> 0</span><span> ]; </span><span>then</span></span> <span><span> echo</span><span> "Backup successful: </span><span>$BACKUP_FILE</span><span>"</span></span> <span><span>else</span></span> <span><span> echo</span><span> "Backup failed!"</span></span> <span><span> exit</span><span> 1</span></span> <span><span>fi</span></span></code><span></span><span></span></pre> <p>This is what the <code>Backup Postgres Database</code> does:</p> <ul> <li>Defines the path for PostgreSQL authentication</li> <li>Stores database credentials</li> <li>Secures the credentials file</li> <li>Creates a backup of the database in compressed format</li> <li><strong>Checks if the backup was successful:</strong> If successful, it prints the backup path; otherwise, it reports failure and exits with an error code.</li> </ul> <p>For example, if you want to back up your database on the first day of every month at 12:00 AM, you can configure a CloudRay schedule to handle this automatically.</p> <p>Here are the steps to achieve this:</p> <ol> <li><strong>Navigate to Schedules:</strong> In your CloudRay dashboard, go to the “Schedules” tab.</li> </ol> <img src="/_astro/locating-schedule.ZeGZeXGN.jpg" alt="Screenshot of the location of Schedules in CloudRay's Interface" /> <ol> <li><strong>Click “Add Schedule”:</strong> Start creating a new schedule.</li> </ol> <img src="/_astro/Setup-schedules.CIkQleyA.jpg" alt="Screenshot of setting up a schedule in CloudRay" /> <ol> <li><strong>Submit Schedule:</strong> Click “Submit” to activate your new schedule.</li> </ol> <img src="/_astro/schedules-enabled.C5vQhCYh.jpg" alt="Screenshot of the location of enabled schedule" /> <p>CloudRay will automatically execute the backup script at the scheduled time, ensuring that your database is regularly backed up without manual intervention.</p> <div> <div> <a
href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div> <h2>Related Guides</h2> <ul> <li><a href="/articles/install-mongodb">Install MongoDB</a></li> <li><a href="/articles/deploy-mysql-server">Deploy MySQL Server</a></li> <li><a href="/articles/deploy-phpmyadmin">Automate phpMyAdmin Deployment</a></li> </ul>Deploy Delayed Jobs with Bash scriptshttps://cloudray.io/articles/ruby/deploy-delayed-jobshttps://cloudray.io/articles/ruby/deploy-delayed-jobsDeploy Delayed Job for your Ruby on Rails app using Bash scripts and CloudRay—step-by-step guidance that complements the main Rails deployment guide.Sat, 18 Jan 2025 00:00:00 GMT<p>This guide provides a step-by-step approach to deploying Delayed Jobs for your Ruby on Rails application using Bash scripts and CloudRay. It is intended as a supplement to the main <a href="/articles/deploy-ruby-on-rails">Deploy Ruby on Rails app with Bash scripts</a> guide.</p> <h2>Prerequisites</h2> <p>This guide assumes you have followed the main Rails deployment guide and have already:</p> <ul> <li>Set up a CloudRay account.</li> <li>Added your server to CloudRay.</li> <li>Created the <code>server-setup</code> and <code>deploy-rails-app</code> scripts.</li> <li>Created a variable group with the necessary variables.</li> </ul> <h2>Update the setup script</h2> <p>You’ll need to update the <code>server-setup</code> script to include a systemd service for Delayed Jobs. 
This service will ensure that Delayed Jobs runs in the background and restarts automatically if it fails.</p> <ol> <li><strong>Open the <code>server-setup</code> script:</strong> In your CloudRay project, navigate to “Scripts” and edit the <code>server-setup</code> script.</li> <li><strong>Add the following code at the end of the script:</strong></li> </ol> <pre><code><span><span># Install a systemd service to run Delayed Job</span></span> <span><span>cat</span><span> &lt;&lt;</span><span>'EOT'</span><span> &gt;</span><span> /etc/systemd/system/{{app_name}}-delayed-job.service</span></span> <span><span>[Unit]</span></span> <span><span>Description={{app_name}}-delayed-job</span></span> <span><span>After=network.target</span></span> <span></span> <span><span>[Service]</span></span> <span><span>Type=simple</span></span> <span><span>User=deploy</span></span> <span><span>EnvironmentFile=/etc/environment</span></span> <span><span>WorkingDirectory=/srv/{{app_name}}</span></span> <span><span>ExecStart=/bin/bash -c "PATH=/home/deploy/.asdf/bin:/home/deploy/.asdf/shims:$PATH asdf exec bundle exec bin/delayed_job start -p {{app_name}} -i 0 --queues=mailers,default --pool=5"</span></span> <span><span>Restart=always</span></span> <span><span>StandardOutput=file:/srv/{{app_name}}/log/delayed_job-stdout.log</span></span> <span><span>StandardError=file:/srv/{{app_name}}/log/delayed_job-stderr.log</span></span> <span></span> <span><span>[Install]</span></span> <span><span>WantedBy=multi-user.target</span></span> <span><span>EOT</span></span></code><span></span><span></span></pre> <h2>Modify the deployment script</h2> <p>You’ll also need to modify the <code>deploy-rails-app</code> script to restart the Delayed Job service after each deployment. 
This ensures that Delayed Job picks up the latest code changes.</p> <ol> <li><strong>Open the <code>deploy-rails-app</code> script:</strong> In your CloudRay project, navigate to “Scripts” and edit the <code>deploy-rails-app</code> script.</li> <li><strong>Add the following line after the <code>systemctl restart {{app_name}}-rails-server.service</code> line:</strong></li> </ol> <pre><code><span><span>systemctl</span><span> restart</span><span> {{app_name}}-delayed-job.service</span></span></code><span></span><span></span></pre> <h2>Run the updated scripts</h2> <p>Now that you’ve updated the <code>server-setup</code> script, you need to run it again to apply the changes. After that, you can run the <code>deploy-rails-app</code> script to deploy your Rails application and start Delayed Job.</p> <p>Follow these steps:</p> <ol> <li><strong>Run the <code>server-setup</code> script:</strong> Create a new Runlog for the <code>server-setup</code> script, just like you did in the main Rails deployment guide. This will create the systemd service for Delayed Job.</li> <li><strong>Run the <code>deploy-rails-app</code> script:</strong> Once the server setup is complete, create a new Runlog for the <code>deploy-rails-app</code> script. This will deploy your Rails application and start the Delayed Job worker.</li> </ol> <p>That’s it! Your Delayed Jobs worker is now set up and will be automatically deployed with your Rails application.</p>How to create a GitHub Access Token for deploymenthttps://cloudray.io/articles/create-github-access-tokenhttps://cloudray.io/articles/create-github-access-tokenLearn how to create a GitHub fine-grained Personal Access Token to securely access your repositories during deployment.Fri, 17 Jan 2025 00:00:00 GMT<p>This guide explains how to create a GitHub fine-grained Personal Access Token that you can use for deployments. 
This token allows CloudRay to securely clone your repositories during the deployment process.</p> <h2>Steps to create a token</h2> <ol> <li> <p><strong>Go to GitHub Settings:</strong></p> <ul> <li>Log in to your GitHub account</li> <li>Click your profile picture in the top right</li> <li>Select “Settings”</li> </ul> </li> <li> <p><strong>Navigate to Developer Settings:</strong></p> <ul> <li>Scroll to the bottom of the left sidebar</li> <li>Click “Developer settings”</li> </ul> </li> <li> <p><strong>Access Personal Access Tokens:</strong></p> <ul> <li>Select “Fine-grained tokens”</li> <li>Click “Generate new token”</li> </ul> </li> <li> <p><strong>Configure the token:</strong></p> <ul> <li>Name: Give your token a descriptive name (e.g., “CloudRay Deployments”)</li> <li>Expiration: Choose an expiration date (recommended: 1 year)</li> <li>Repository access: Select “Only select repositories” and choose your deployment repositories</li> <li>Permissions: <ul> <li>Repository permissions: <ul> <li>“Contents”: Read-only (needed for cloning)</li> <li>“Metadata”: Read-only (required by default)</li> </ul> </li> </ul> </li> </ul> </li> <li> <p><strong>Generate and copy the token:</strong></p> <ul> <li>Click “Generate token”</li> <li><strong>Important:</strong> Copy the token immediately. 
GitHub will only show it once.</li> </ul> </li> </ol> <h2>Using the token</h2> <p>When setting up your CloudRay variable group for deployments:</p> <ol> <li>Create a variable named <code>github_access_token</code></li> <li>Paste your GitHub token as the value</li> </ol> <p>The token will be used in the format: <code>https://{token}@github.com/{username}/{repo}</code> when cloning repositories.</p> <h2>Organization requirements</h2> <p>If your repository belongs to an organisation:</p> <ol> <li>The organisation must allow fine-grained tokens</li> <li>An organisation owner needs to enable fine-grained tokens: <ul> <li>Go to Organization Settings &gt; Third-party Access &gt; Personal Access Tokens</li> <li>Click “Set organisation token policy”</li> <li>Enable “Allow fine-grained personal access tokens”</li> </ul> </li> </ol> <h2>Security considerations</h2> <ul> <li>Never share your token or commit it to your repository</li> <li>Use the minimum required permissions</li> <li>Regularly rotate your tokens by creating new ones and deleting old ones</li> <li>If a token is compromised, revoke it immediately in your GitHub settings</li> </ul> <h2>Token expiration</h2> <p>When your token expires:</p> <ol> <li>Create a new token following the steps above</li> <li>Update the <code>github_access_token</code> variable in your CloudRay variable group</li> <li>The old token will automatically become invalid after expiration</li> </ol> <p>That’s it! Your GitHub fine-grained Access Token is now ready to use with CloudRay deployment scripts.</p>Deploy Express & Node on Ubuntu 24.04https://cloudray.io/articles/deploy-expresshttps://cloudray.io/articles/deploy-expressLearn how to automate express app deployment on Ubuntu using bash scripts to streamline setup deployment and updates in your CloudRay dashboardSat, 04 Jan 2025 00:00:00 GMT<p>Deploy your Express.js application seamlessly with CloudRay. 
This guide walks you through setting up and automating the deployment process, ensuring your app is always up-to-date.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#assumptions">Assumptions</a></li> <li><a href="#create-the-setup-script">Create the setup script</a></li> <li><a href="#create-the-deployment-script">Create the Deployment Script</a></li> <li><a href="#create-a-variable-group">Create a Variable Group</a></li> <li><a href="#run-the-setup-script">Run the setup script</a></li> <li><a href="#run-the-deployment-script">Run the deployment script</a></li> <li><a href="#related-guides">Related Guides</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your target servers are connected to CloudRay. If you haven’t done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can adapt the scripts to fit your specific deployment needs and environment.</p> </div> <h2>Assumptions</h2> <ul> <li>This guide assumes that the database is hosted on a separate server. Installing and configuring the database on the same server as the Express application is not within the scope of this document. You can refer to other articles for instructions on setting up a database server.</li> <li>This guide assumes you’re using <strong>Ubuntu 24.04 LTS</strong> as your server’s operating system. 
If you’re using a different version or a different distribution, adjust the commands accordingly.</li> </ul> <p>Once your server is added, proceed to the next step.</p> <h2>Create the setup script</h2> <img src="/_astro/new-setup-script.BZPj1SA7.png" alt="Screenshot of adding a new setup script" /> <p>To streamline the deployment process, you’ll need two Bash scripts:</p> <ol> <li><strong>Setup Script</strong>: You’ll run this once when setting up a new server to install dependencies and configure services.</li> <li><strong>Deployment Script</strong>: You can run this anytime you want to update your application with new code.</li> </ol> <p>In this section, we’ll create the <strong>Setup Script</strong>.</p> <p>The setup script prepares your server by:</p> <ul> <li>Creating a deploy user that runs the Node server using pm2 as a non-privileged service</li> <li>Installing Caddy web server for automatic HTTPS</li> <li>Setting up asdf to install Node.js</li> <li>Creating a systemd service to start pm2 at boot</li> <li>Setting up the repository for the first time</li> </ul> <p>To create the setup script:</p> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>Setup Express Server</code></li> <li>Copy this code:</li> </ol> <pre><code><span><span># Stop executing the script if any of the commands fail</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Create a deploy user if it doesn't exist</span></span> <span><span># This is the user that will run the Express app</span></span> <span><span>if</span><span> [ </span><span>!</span><span> -d</span><span> /home/deploy ]; </span><span>then</span></span> <span><span> echo</span><span> "Creating deploy user"</span></span> <span><span> adduser</span><span> --disabled-password</span><span> --gecos</span><span> ""</span><span> deploy</span></span> <span><span>fi</span></span> <span></span> <span><span># We'll deploy the 
Express app to /srv</span></span> <span><span># Let's ensure the deploy user has the right permissions</span></span> <span><span>echo</span><span> "Setting up /srv directory"</span></span> <span><span>chown</span><span> deploy:deploy</span><span> -R</span><span> /srv</span></span> <span></span> <span><span># Install the essential packages</span></span> <span><span>echo</span><span> "Installing essential packages"</span></span> <span><span>export</span><span> DEBIAN_FRONTEND</span><span>=</span><span>noninteractive</span></span> <span><span>apt-get</span><span> update</span></span> <span><span>apt-get</span><span> install</span><span> -y</span><span> git</span><span> caddy</span></span> <span></span> <span><span>echo</span><span> "Configuring Caddy to serve the Express app"</span></span> <span><span>cat</span><span> &gt;</span><span> /etc/caddy/Caddyfile</span><span> &lt;&lt;</span><span>'EOT'</span></span> <span><span>{{app_domain}} {</span></span> <span><span> reverse_proxy localhost:{{express_port}}</span></span> <span><span>}</span></span> <span><span>EOT</span></span> <span></span> <span><span># We'll install asdf to manage node version</span></span> <span><span>su</span><span> -l</span><span> deploy</span><span> &lt;&lt;</span><span>'EOT'</span></span> <span><span> set -e</span></span> <span><span> if [ ! -d "$HOME/.asdf" ]; then</span></span> <span><span> echo "Installing asdf"</span></span> <span><span> git clone https://github.com/asdf-vm/asdf.git ~/.asdf</span></span> <span><span> echo "source $HOME/.asdf/asdf.sh" &gt;&gt; ~/.bashrc</span></span> <span><span> else</span></span> <span><span> echo "Updating asdf"</span></span> <span><span> cd ~/.asdf</span></span> <span><span> git pull</span></span> <span><span> fi</span></span> <span></span> <span><span> source $HOME/.asdf/asdf.sh</span></span> <span></span> <span><span> if ! 
grep -q "legacy_version_file" ~/.asdfrc; then</span></span> <span><span> echo "legacy_version_file = yes" &gt;&gt; ~/.asdfrc</span></span> <span><span> fi</span></span> <span><span> asdf plugin add nodejs</span></span> <span><span> echo "Installing Node.js version {{node_version}}"</span></span> <span><span> asdf install nodejs {{node_version}}</span></span> <span><span> asdf reshim</span></span> <span><span> asdf global nodejs {{node_version}}</span></span> <span><span> npm install -g pm2</span></span> <span><span> echo "Finished installing Node.js"</span></span> <span><span>EOT</span></span> <span></span> <span><span>echo</span><span> "Creating the pm2 startup service"</span></span> <span><span>export</span><span> ASDF_DIR</span><span>=</span><span>/home/deploy/.asdf</span></span> <span><span>export</span><span> ASDF_DATA_DIR</span><span>=</span><span>/home/deploy/.asdf</span></span> <span><span>source</span><span> /home/deploy/.asdf/asdf.sh</span></span> <span><span>asdf</span><span> shell</span><span> nodejs</span><span> {{node_version}}</span></span> <span><span>pm2</span><span> startup</span><span> systemd</span><span> -u</span><span> deploy</span><span> --hp</span><span> /home/deploy</span></span> <span></span> <span><span>echo</span><span> "Setting up the repository for the first time"</span></span> <span><span>su</span><span> -l</span><span> deploy</span><span> &lt;&lt;</span><span>'EOT'</span></span> <span><span> set -e</span></span> <span><span> source "$HOME/.asdf/asdf.sh"</span></span> <span><span> mkdir -p /srv/{{app_name}}</span></span> <span><span> cd /srv/{{app_name}}</span></span> <span><span> if [ ! 
-d "/srv/{{app_name}}/.git" ]; then</span></span> <span><span> git clone https://{{github_access_token}}@github.com/{{github_repo_name}} .</span></span> <span><span> fi</span></span> <span><span> npm install</span></span> <span><span> pm2 stop {{app_name}} || true</span></span> <span><span> env PATH="$(asdf where nodejs)/bin:$PATH" pm2 start npm --name {{app_name}} -- start</span></span> <span><span> pm2 save</span></span> <span><span>EOT</span></span> <span></span> <span><span>systemctl</span><span> restart</span><span> caddy.service</span></span> <span></span> <span><span>echo</span><span> "Done."</span></span></code><span></span><span></span></pre> <div> <p>TIP</p> <p>If you find this script too long to manage, you can break it into smaller scripts and use <a href="/docs/script-playlists">Script Playlists</a> to run them together.</p> </div> <h2>Create the Deployment Script</h2> <img src="/_astro/new-deploy-script.B8gogpLh.png" alt="Screenshot of adding a new deploy script" /> <p>You’ll use this script whenever you want to update your application. The script:</p> <ul> <li>Pulls latest code from your Git repository’s main branch</li> <li>Installs app dependencies</li> <li>Restarts the Node server using pm2</li> </ul> <p>To create the deployment script:</p> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>Deploy Express App</code></li> <li>Add code:</li> </ol> <pre><code><span><span>set</span><span> -e</span></span> <span></span> <span><span>su</span><span> -l</span><span> deploy</span><span> &lt;&lt;</span><span>'EOT'</span></span> <span><span>set -e</span></span> <span><span>. 
"$HOME/.asdf/asdf.sh"</span></span> <span><span>cd /srv/{{app_name}}</span></span> <span><span>git fetch --all</span></span> <span><span>git reset --hard origin/main</span></span> <span><span>npm install</span></span> <span><span>pm2 restart {{app_name}}</span></span> <span><span>EOT</span></span> <span></span> <span><span>echo</span><span> "🚀 Deployed {{app_name}} at https://{{app_domain}}"</span></span></code><span></span><span></span></pre> <h2>Create a Variable Group</h2> <img src="/_astro/new-variable-group.Dtr3HvNX.png" alt="Screenshot of adding a new variable group" /> <p>Our scripts use variables like <code>{{app_name}}</code>, <code>{{app_domain}}</code>, and <code>{{node_version}}</code> because CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. This allows you to use placeholders in your scripts, making them dynamic and reusable across different servers.</p> <p>To provide values for these variables, you’ll need to create a <a href="https://cloudray.io/docs/variable-groups">variable group</a>. Here’s how:</p> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”.</li> <li><strong>Create a new Variable Group:</strong> Click on “New Variable Group”.</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>app_name</code>:</strong> Your application’s name (use underscores instead of spaces), e.g., <code>my_app</code></li> <li><strong><code>app_domain</code>:</strong> Your application’s domain name, e.g., <code>myapp.example.com</code></li> <li><strong><code>node_version</code>:</strong> The version of Node.js you want to install. 
E.g., <code>20.0.0</code></li> <li><strong><code>express_port</code>:</strong> The port your Express application listens on (e.g., <code>3000</code>)</li> <li><strong><code>github_access_token</code>:</strong> Your GitHub personal access token for cloning the repository.</li> <li><strong><code>github_repo_name</code>:</strong> Your GitHub repository in the format <code>username/repository</code>, e.g., <code>yourusername/your-repo</code>.</li> </ul> <h2>Run the setup script</h2> <img src="/_astro/new-setup-runlog.DVgfShht.png" alt="Screenshot of creating a new runlog" /> <img src="/_astro/setup-runlog-output.CTEN3tC8.png" alt="Screenshot of setup runlog" /> <p>To run the <code>Setup Express Server</code> script, you’ll use a Runlog in CloudRay. A Runlog allows you to execute scripts on your servers and provides detailed logs of the execution process.</p> <p>Here’s how to create and run a Runlog:</p> <ol> <li><strong>Navigate to Runlogs:</strong> In your CloudRay project, go to “Runlogs” in the top menu.</li> <li><strong>Create a new Runlog:</strong> Click on “New Runlog”.</li> <li><strong>Fill in the form:</strong></li> </ol> <ul> <li><strong>Server:</strong> Select the server you added earlier.</li> <li><strong>Script:</strong> Choose the <code>Setup Express Server</code> script.</li> <li><strong>Variable Group:</strong> Select the variable group you created earlier.</li> </ul> <ol> <li><strong>Run the script:</strong> Click on “Run” to execute the script on your server.</li> </ol> <p>CloudRay will connect to your server, run the <code>Setup Express Server</code> script, and show you the live output as the script executes.</p> <h2>Run the deployment script</h2> <img src="/_astro/deploy-runlog-output.Dt6WP1VY.png" alt="Screenshot of deploy runlog" /> <p>You can run the <code>Deploy Express App</code> script whenever you want to deploy a new version of your application. 
Here’s how to create a Runlog for the deployment script:</p> <ol> <li><strong>Navigate to Runlogs:</strong> In your CloudRay project, go to “Runlogs” in the top menu.</li> <li><strong>Create a new Runlog:</strong> Click on “New Runlog”.</li> <li><strong>Fill in the form:</strong></li> </ol> <ul> <li><strong>Server:</strong> Select the server you added earlier.</li> <li><strong>Script:</strong> Choose the <code>Deploy Express App</code> script.</li> <li><strong>Variable Group:</strong> Select the variable group you created earlier.</li> </ul> <ol> <li><strong>Run the script:</strong> Click on “Run” to execute the script on your server.</li> </ol> <p>CloudRay will connect to your server, run the <code>Deploy Express App</code> script, and show you the live output as the script executes.</p> <p>That’s it! Happy deploying!</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div> <h2>Related Guides</h2> <ul> <li><a href="/articles/deploy-ruby-on-rails">Deploy Ruby on Rails</a></li> <li><a href="/articles/deploy-jenkins-with-docker-compose">Deploy Jenkins with Docker Compose</a></li> <li><a href="/articles/deploy-laravel">Deploy Laravel</a></li> <li><a href="/articles/deploy-sonarqube">Deploy SonarQube</a></li> <li><a href="/articles/deploy-static-website-from-github">Deploy Static Website from GitHub</a></li> <li><a href="/articles/deploy-nextjs-application">Deploy Next.js Application</a></li> </ul>Deploy Ruby on Rails app on Ubuntu 24.04https://cloudray.io/articles/deploy-ruby-on-railshttps://cloudray.io/articles/deploy-ruby-on-railsUse CloudRay to automate Ruby on Rails deployment with Bash scripts. Learn how to create a schedule to deploy your Ruby on Rails app automatically.Thu, 02 Jan 2025 00:00:00 GMT<p>This guide walks through automating Rails deployments with <a href="https://app.cloudray.io/">CloudRay</a>. 
You will configure Puma, Caddy and ASDF to create a production environment where updates deploy with a single command, handling code updates, dependencies, assets and migrations automatically.</p> <p>The setup runs under a dedicated <code>deploy</code> user with systemd for process management, while Caddy provides HTTPS and static file serving. All infrastructure is defined in reusable Bash scripts, making your deployment process consistent and version-controlled.</p> <p>This guide focuses on deploying Rails applications with Puma. For deploying background workers (e.g., Delayed Jobs), refer to our <a href="/articles/ruby/deploy-delayed-jobs">Deploy Delayed Jobs guide</a>.</p> <h2>Contents</h2> <ul> <li><a href="#adding-servers-to-cloudray">Adding Servers to CloudRay</a></li> <li><a href="#assumptions">Assumptions</a></li> <li><a href="#create-the-setup-script">Create the setup script</a></li> <li><a href="#create-the-deployment-script">Create the deployment script</a></li> <li><a href="#create-a-variable-group">Create a variable group</a></li> <li><a href="#run-the-setup-script">Run the setup script</a></li> <li><a href="#run-the-deployment-script">Run the deployment script</a></li> <li><a href="#troubleshooting">Troubleshooting</a> <ul> <li><a href="#running-out-of-memory-when-installing-ruby">Running out of memory when installing Ruby</a></li> </ul> </li> <li><a href="#related-guides">Related Guides</a></li> </ul> <h2>Adding Servers to CloudRay</h2> <p>Before getting started, make sure your target servers are connected to CloudRay. If you have not done this yet, follow our <a href="https://cloudray.io/docs/servers">servers docs</a> to add and manage your server.</p> <div> <p>NOTE</p> <p>This guide uses Bash scripts, providing a high degree of customisation. You can adapt the scripts to fit your specific deployment needs and environment.</p> </div> <h2>Assumptions</h2> <ul> <li>This guide assumes that the database is hosted on a separate server. 
Installing and configuring the database on the same server as the Rails application is not within the scope of this document. For database setup, see our <a href="/articles/setting-up-postgres-database">PostgreSQL</a>, <a href="/articles/deploy-mysql-server">MySQL</a>, or <a href="https://cloudray.io/articles/install-mongodb">MongoDB</a> guides.</li> <li>This guide assumes you’re using <strong>Ubuntu 24.04 LTS</strong> as your server’s operating system. If you’re using a different version or a different distribution, adjust the commands accordingly.</li> </ul> <h2>Create the setup script</h2> <p>To streamline the deployment process, you’ll need two Bash scripts:</p> <ol> <li><strong>Setup Script</strong>: You’ll run this once when setting up a new server to install dependencies and configure services.</li> <li><strong>Deployment Script</strong>: You can run this anytime you want to update your application with new code.</li> </ol> <p>In this section, we’ll create the <strong>Setup Script</strong>.</p> <p>The setup script prepares your server by:</p> <ul> <li>Creating a deploy user that runs Rails server as a non-privileged service</li> <li>Installing Caddy web server for automatic HTTPS</li> <li>Setting up asdf to install Ruby and Node.js</li> <li>Creating a systemd service to manage the Rails server</li> <li>Setting up the repository for the first time</li> </ul> <p>To create the setup script:</p> <img src="/_astro/setup-script.BthucFaC.jpg" alt="Screenshot of adding a new setup script for Rails application" /> <ol> <li>Go to <strong>Scripts</strong> in your CloudRay project</li> <li>Click <strong>New Script</strong></li> <li>Name: <code>setup-rails-server</code></li> <li>Copy this code:</li> </ol> <pre><code><span><span># Stop executing the script if any of the commands fail</span></span> <span><span>set</span><span> -e</span></span> <span></span> <span><span># Create a deploy user if it doesn't exist</span></span> <span><span># This is the user that will run the 
Rails application</span></span> <span><span>if</span><span> [ </span><span>!</span><span> -d</span><span> /home/deploy ]; </span><span>then</span></span> <span><span> echo</span><span> "Creating deploy user"</span></span> <span><span> adduser</span><span> --disabled-password</span><span> --gecos</span><span> ""</span><span> deploy</span></span> <span><span>fi</span></span> <span></span> <span><span># We'll deploy the Rails app to /srv</span></span> <span><span># let's ensure the deploy user has the right permissions</span></span> <span><span>echo</span><span> "Setting up /srv directory"</span></span> <span><span>chown</span><span> deploy:deploy</span><span> -R</span><span> /srv</span></span> <span></span> <span><span># Update the server</span></span> <span><span>echo</span><span> "Updating the server"</span></span> <span><span>export</span><span> DEBIAN_FRONTEND</span><span>=</span><span>noninteractive</span></span> <span><span>apt-get</span><span> update</span></span> <span><span>apt-get</span><span> dist-upgrade</span><span> -y</span></span> <span></span> <span><span>echo</span><span> "Installing essential packages"</span></span> <span><span>apt-get</span><span> install</span><span> -y</span><span> git</span><span> caddy</span></span> <span></span> <span><span># Additional packages for Nokogiri</span></span> <span><span>echo</span><span> "Installing additional packages for Ruby &amp; Nokogiri"</span></span> <span><span>apt-get</span><span> install</span><span> -y</span><span> build-essential</span><span> patch</span><span> zlib1g-dev</span><span> liblzma-dev</span><span> libyaml-dev</span><span> libreadline-dev</span><span> libffi-dev</span></span> <span></span> <span><span># Remove these lines if you don't need MySQL libraries for the mysql gem</span></span> <span><span>echo</span><span> "Installing packages for mysql2"</span></span> <span><span>apt-get</span><span> install</span><span> -y</span><span> libmysqlclient-dev</span></span> <span></span> <span><span># 
Remove these lines if you don't need PostgreSQL libraries for the pg gem</span></span> <span><span>echo</span><span> "Installing packages for pg"</span></span> <span><span>apt-get</span><span> install</span><span> -y</span><span> libpq-dev</span></span> <span></span> <span><span>echo</span><span> "Configuring Caddy to serve the Rails application"</span></span> <span><span>cat</span><span> &gt;</span><span> /etc/caddy/Caddyfile</span><span> &lt;&lt;</span><span>'EOT'</span></span> <span><span>{{app_domain}} {</span></span> <span><span> root * /srv/{{app_name}}/public</span></span> <span><span> encode zstd gzip</span></span> <span><span> file_server</span></span> <span><span> @notStatic not file</span></span> <span><span> reverse_proxy @notStatic unix//srv/{{app_name}}/tmp/puma.sock</span></span> <span><span>}</span></span> <span><span>EOT</span></span> <span></span> <span><span>su</span><span> -l</span><span> deploy</span><span> &lt;&lt;</span><span>'EOT'</span></span> <span><span> set -e</span></span> <span><span> if [ ! -d "$HOME/.asdf" ]; then</span></span> <span><span> echo "Installing asdf"</span></span> <span><span> git clone https://github.com/asdf-vm/asdf.git ~/.asdf</span></span> <span><span> echo "source $HOME/.asdf/asdf.sh" &gt;&gt; ~/.bashrc</span></span> <span><span> else</span></span> <span><span> echo "Updating asdf"</span></span> <span><span> cd ~/.asdf</span></span> <span><span> git pull</span></span> <span><span> fi</span></span> <span></span> <span><span> source $HOME/.asdf/asdf.sh</span></span> <span></span> <span><span> if ! 
grep -q "legacy_version_file" ~/.asdfrc; then</span></span> <span><span> echo "legacy_version_file = yes" &gt;&gt; ~/.asdfrc</span></span> <span><span> fi</span></span> <span><span> asdf plugin add ruby</span></span> <span><span> asdf plugin add nodejs</span></span> <span><span> echo "Installing Ruby version {{ruby_version}}"</span></span> <span><span> asdf install ruby {{ruby_version}}</span></span> <span><span> echo "Installing Node.js version {{node_version}}"</span></span> <span><span> asdf install nodejs {{node_version}}</span></span> <span><span> asdf reshim</span></span> <span><span> echo "Finished installing Ruby and Node.js"</span></span> <span><span>EOT</span></span> <span></span> <span><span>echo</span><span> "Creating the {{app_name}}-rails-server service"</span></span> <span><span>cat</span><span> &lt;&lt;</span><span>'EOT'</span><span> &gt;</span><span> /etc/systemd/system/{{app_name}}-rails-server.service</span></span> <span><span>[Unit]</span></span> <span><span>Description={{app_name}}-rails</span></span> <span><span>After=network.target</span></span> <span></span> <span><span>[Service]</span></span> <span><span>Type=simple</span></span> <span><span>User=deploy</span></span> <span><span>Environment=RAILS_ENV=production</span></span> <span><span>EnvironmentFile=/etc/environment</span></span> <span><span>WorkingDirectory=/srv/{{app_name}}</span></span> <span><span>ExecStart=/bin/bash -lc "source $HOME/.asdf/asdf.sh &amp;&amp; rails server -b /srv/{{app_name}}/tmp/puma.sock"</span></span> <span><span>Restart=always</span></span> <span></span> <span><span>[Install]</span></span> <span><span>WantedBy=multi-user.target</span></span> <span><span>EOT</span></span> <span></span> <span><span>echo</span><span> "Enabling the {{app_name}}-rails-server service"</span></span> <span><span>systemctl</span><span> enable</span><span> {{app_name}}-rails-server.service</span></span> <span></span> <span><span>echo</span><span> "Setting up the repository for the first 
time"</span></span> <span><span>su</span><span> -l</span><span> deploy</span><span> &lt;&lt;</span><span>'EOT'</span></span> <span><span> set -e</span></span> <span><span> source "$HOME/.asdf/asdf.sh"</span></span> <span><span> mkdir -p /srv/{{app_name}}</span></span> <span><span> cd /srv/{{app_name}}</span></span> <span><span> if [ ! -d "/srv/{{app_name}}/.git" ]; then</span></span> <span><span> git clone https://{{github_access_token}}@github.com/{{github_repo_name}} .</span></span> <span><span> fi</span></span> <span><span>EOT</span></span> <span></span> <span><span>echo</span><span> "Done."</span></span></code><span></span><span></span></pre> <div> <p>TIP</p> <p>If you find this script too long to manage, you can break it into smaller scripts and use <a href="/docs/script-playlists">Script Playlists</a> to run them together.</p> </div> <h2>Create the deployment script</h2> <p>You’ll use this script whenever you want to update your application. Here is what the script does:</p> <ul> <li>Pulls latest code from your Git repository’s main branch</li> <li>Updates Ruby dependencies and installs new gems</li> <li>Precompiles JavaScript/CSS assets for production</li> <li>Runs any pending database migrations safely</li> <li>Restarts the Rails server</li> </ul> <p>To create the deployment script:</p> <img src="/_astro/deploy-script.DmvhVKn0.jpg" alt="Screenshot of deploying Rails application" /> <ol> <li>Go to <strong>Scripts</strong> &gt; <strong>New Script</strong></li> <li>Name: <code>deploy-rails-app</code></li> <li>Add code:</li> </ol> <pre><code><span><span>set</span><span> -e</span></span> <span></span> <span><span>su</span><span> -l</span><span> deploy</span><span> &lt;&lt;</span><span>'EOT'</span></span> <span><span>set -e</span></span> <span><span>. 
"$HOME/.asdf/asdf.sh"</span></span> <span><span>cd /srv/{{app_name}}</span></span> <span><span>git fetch --all</span></span> <span><span>git reset --hard origin/main</span></span> <span><span>bundle</span></span> <span><span>echo "{{rails_master_key}}" &gt; config/master.key</span></span> <span><span>RAILS_ENV=production bundle exec rails assets:precompile</span></span> <span><span>RAILS_ENV=production bundle exec rails db:migrate</span></span> <span><span>EOT</span></span> <span></span> <span><span>echo</span><span> "Restarting processes"</span></span> <span><span>systemctl</span><span> restart</span><span> {{app_name}}-rails-server.service</span></span> <span></span> <span><span>echo</span><span> "🚀 Deployed {{app_name}} at {{app_domain}}"</span></span></code><span></span><span></span></pre> <h2>Create a variable group</h2> <p>Our scripts use variables like <code>{{app_name}}</code>, <code>{{app_domain}}</code>, <code>{{ruby_version}}</code>, and <code>{{node_version}}</code> because CloudRay processes all scripts as <a href="https://shopify.github.io/liquid/">Liquid templates</a>. This allows you to use placeholders in your scripts, making them dynamic and reusable across different servers.</p> <p>To provide values for these variables, you’ll need to create a variable group. 
Here’s how:</p> <img src="/_astro/variables.CJabdYrG.jpg" alt="Screenshot of adding a new variable group" /> <ol> <li><strong>Navigate to Variable Groups:</strong> In your CloudRay project, go to “Scripts” in the top menu and click on “Variable Groups”</li> <li><strong>Create a new Variable Group:</strong> Click on “New Variable Group”</li> <li><strong>Add the following variables:</strong></li> </ol> <ul> <li><strong><code>app_name</code>:</strong> Your application’s name (use underscores instead of spaces), e.g., <code>my_app</code></li> <li><strong><code>app_domain</code>:</strong> Your application’s domain name, e.g., <code>myapp.example.com</code></li> <li><strong><code>ruby_version</code>:</strong> The version of Ruby you want to install, e.g., <code>3.4.3</code></li> <li><strong><code>node_version</code>:</strong> The version of Node.js you want to install, e.g., <code>23.10.0</code></li> <li><strong><code>rails_master_key</code>:</strong> Your Rails master key from <code>config/master.key</code>.</li> <li><strong><code>github_access_token</code>:</strong> Your GitHub personal access token for cloning the repository.</li> <li><strong><code>github_repo_name</code>:</strong> Your GitHub repository in the format <code>username/repository</code>, e.g., <code>yourusername/your-repo</code>.</li> </ul> <h2>Run the setup script</h2> <p>To run the <code>setup-rails-server</code> script, you’ll use a Runlog in CloudRay. 
A Runlog allows you to execute scripts on your servers and provides detailed logs of the execution process.</p> <p>Here’s how to create and run a Runlog:</p> <ol> <li><strong>Navigate to Runlogs:</strong> In your CloudRay project, go to “Runlogs” in the top menu.</li> <li><strong>Create a new Runlog:</strong> Click on “New Runlog”.</li> <li><strong>Fill in the form:</strong></li> </ol> <ul> <li><strong>Server:</strong> Select the server you added earlier.</li> <li><strong>Script:</strong> Choose the <code>setup-rails-server</code> script.</li> <li><strong>Variable Group:</strong> Select the variable group you created earlier.</li> </ul> <img src="/_astro/run-setup-script.BzKCKisM.jpg" alt="Screenshot of creating a new runlog" /> <ol> <li><strong>Run the script:</strong> Click on “Run” to execute the script on your server.</li> </ol> <img src="/_astro/result-setup-script.dAPfOPoz.jpg" alt="Screenshot of the output of the setup Rails script" /> <p>CloudRay will connect to your server, run the <code>setup-rails-server</code> script, and show you the live output as the script executes.</p> <h2>Run the deployment script</h2> <p>You can run the <code>deploy-rails-app</code> script whenever you want to deploy a new version of your application. 
Here’s how to create a Runlog for the deployment script:</p> <ol> <li><strong>Navigate to Runlogs:</strong> In your CloudRay project, go to “Runlogs” in the top menu.</li> <li><strong>Create a new Runlog:</strong> Click on “New Runlog”.</li> <li><strong>Fill in the form:</strong></li> </ol> <ul> <li><strong>Server:</strong> Select the server you added earlier.</li> <li><strong>Script:</strong> Choose the <code>deploy-rails-app</code> script.</li> <li><strong>Variable Group:</strong> Select the variable group you created earlier.</li> </ul> <img src="/_astro/run-deploy-script.BW92uLis.jpg" alt="Screenshot of running the deploy Rails script" /> <ol> <li><strong>Run the script:</strong> Click on “Run” to execute the script on your server.</li> </ol> <img src="/_astro/result-deploy-script.BlJXerbF.jpg" alt="Screenshot of the output of the deployment script" /> <p>CloudRay will connect to your server, run the <code>deploy-rails-app</code> script, and show you the live output as the script executes.</p> <h2>Troubleshooting</h2> <h3>Running out of memory when installing Ruby</h3> <p>If you encounter an “out of memory” error during the Ruby installation using <code>asdf install ruby</code>, it’s likely because the <code>/tmp</code> directory is mounted in RAM with limited space. The build process requires more space than is available in the RAM-mounted <code>/tmp</code>.</p> <p>To resolve this issue, you can disable the RAM mount for <code>/tmp</code> so it uses disk storage instead.</p> <p>Create a new script with the following code and run it on your server:</p> <pre><code><span><span>set</span><span> -e</span></span> <span><span>echo</span><span> "Disable mounting /tmp in RAM otherwise ruby-build will run out of memory"</span></span> <span><span>systemctl</span><span> mask</span><span> tmp.mount</span></span> <span><span>reboot</span></span></code><span></span><span></span></pre> <p>That’s it! 
Happy deploying!</p> <div> <div> <a href="https://app.cloudray.io/f/auth/sign-up"><span></span><span> <span> Get Started with CloudRay </span> </span></a> </div> </div> <h2>Related Guides</h2> <ul> <li><a href="/articles/ruby/deploy-delayed-jobs">Deploy Delayed Jobs</a></li> <li><a href="/articles/deploy-jenkins-with-docker-compose">Deploy Jenkins with Docker Compose</a></li> <li><a href="/articles/deploy-sonarqube">Deploy SonarQube</a></li> <li><a href="/articles/deploy-static-website-from-github">Deploy Static Website from GitHub</a></li> <li><a href="/articles/deploy-nextjs-application">Deploy Next.js Application</a></li> <li><a href="/articles/deploy-phpmyadmin">How to Deploy phpMyAdmin</a></li> <li><a href="/articles/deploy-laravel">Deploy Laravel</a></li> </ul>
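<p>One shell detail worth understanding in the scripts above is the quoted heredoc delimiter (<code>&lt;&lt;'EOT'</code>). Quoting the delimiter stops the outer root shell from expanding variables, so text like <code>$HOME</code> reaches the inner <code>su -l deploy</code> shell (and generated config files) untouched, and the <code>{{app_name}}</code>-style placeholders are left intact for CloudRay’s Liquid templating to fill in. A minimal standalone sketch of the difference (not part of the CloudRay scripts):</p>

```shell
name="world"

# Unquoted delimiter: the current shell expands $name before the text is used
unquoted=$(cat <<EOT
hello $name
EOT
)

# Quoted delimiter: the text passes through literally; $name survives unexpanded
quoted=$(cat <<'EOT'
hello $name
EOT
)

echo "$unquoted"   # hello world
echo "$quoted"     # hello $name
```

<p>This is why the setup script can safely mix shell variables, systemd unit contents, and Liquid placeholders inside one heredoc: nothing is expanded until the script actually runs on the target server.</p>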