dbWatch https://www.dbwatch.com/ Database monitoring and management from dbWatch Wed, 11 Feb 2026 13:01:55 +0000 en-US hourly 1 https://wordpress.org/?v=6.9.4 https://www.dbwatch.com/wp-content/uploads/2022/03/cropped-favicon-1-32x32.webp dbWatch https://www.dbwatch.com/ 32 32 Database Decommissioning Checklist for SQL Server https://www.dbwatch.com/blog/database-decommissioning-checklist/ Tue, 27 Jan 2026 08:32:01 +0000 https://www.dbwatch.com/?p=22190 A practical SQL Server database decommissioning checklist that shows how to safely find and retire unused databases.

The post Database Decommissioning Checklist for SQL Server appeared first on dbWatch.

Setting up databases on SQL Server is fast and easy. Someone has a great idea, and database DB-42 is made. Eventually, DB-42 fades out of use. Then, like hundreds of other databases, it needs decommissioning. However, database decommissioning can be like trying to untangle several knots of dependencies during a fire drill.

Let’s start with an example: ten years ago, I helped a client decommission several of their databases. We isolated a set of seemingly unused databases. It was impossible to find owners, so we made many manual usage checks. One of them hadn’t been touched for eight months. We disconnected it and made a backup. Four months later, we heard from a confused scientist asking, “Where is my research?” Once a year, she logged into that database and added the compiled yearly data. We promptly restored the database.

That’s the problem with database decommissioning: It’s easy to prove that a database is used, but surprisingly hard to prove it’s unused. Manual checks can miss seasonal access, background jobs, and service accounts.

We decided there must be a better way to track database activity. So, we built a monitoring job called the Databases Not In Use Collector.

While the name’s descriptive, it’s a mouthful. For this article, we’ll call it the Usage Tracking Job. It records all database activity in detail to answer that vital question: Has anyone actually used this database?

Below is a practical database decommissioning checklist for SQL Server: how to prove non-usage with evidence, take a restorable final backup, run an offline grace period, and retire the database without nasty surprises.


1. Define the Database Decommission Decision

You don’t want to be guessing later, so start by looking at the types of databases you have and deciding the best course of action for each: migrate, consolidate, retire, or archive. Then identify who owns the data, if possible.

If you don’t know who owns the data, don’t panic. It can be that the database was assigned to someone who has left the company or moved to a different department. In cases where there’s no official owner, you simply have to retire or archive unused databases.

Whether or not you know the owner, you’ll need to track your actions. Before moving ahead, open a change request with the database name, instance, environment, target date, and the rollback plan.

2. Identify Unused Databases in SQL Server

The goal here is to ensure that a once-a-year user, like the scientist in the intro, doesn’t show up after the database is gone. Manually, this involves checking:

  • Database memory change
  • New logins
  • Temp table changes
  • Read and write changes

To do this without a tool like the Usage Tracking Job, you’d have to check these points every 10 minutes for a year. We’re joking; that’s not physically possible. You could write your own monitoring job, or just make an educated guess, take a backup, and take the database offline.
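If you want a rough manual starting point anyway, SQL Server’s index usage DMV records the last read and write per database. This is only a sketch, not proof of non-use: the counters reset on every instance restart, so they can never cover a full business cycle on their own.

```sql
-- Last recorded read/write per database since the last instance restart.
-- Caveat: sys.dm_db_index_usage_stats resets on restart, so this can
-- never prove a full year of non-use by itself.
SELECT DB_NAME(database_id)  AS database_name,
       MAX(last_user_seek)   AS last_seek,
       MAX(last_user_scan)   AS last_scan,
       MAX(last_user_lookup) AS last_lookup,
       MAX(last_user_update) AS last_update
FROM sys.dm_db_index_usage_stats
WHERE database_id > 4               -- skip system databases
GROUP BY database_id
ORDER BY database_name;
```

If you go this route, run it periodically and archive the results, since a single snapshot after a restart tells you very little.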

For dbWatch users, simply go to the Managed Jobs View and turn on the Usage Tracking Job, as it’s not enabled by default. Then run it for the period you decided in Step 1. Review the Usage Tracking Job report after six to 12 months and confirm there was no activity. In our experience (anecdotal), after six months of no activity, fewer than one in ten databases need to be restored; after 12 months, fewer than one in 100. You decide how much risk you’d like to take.

How to turn on the Databases Not In Use Collector.

You might want to take an ‘evidence snapshot’ of the report to show that there is no activity and add it to the change request (RFC).

Consolidating Databases as Part of a Decommissioning Project

In large SQL Server estates, decommissioning is also one of the steps that makes consolidation possible. Before you move workloads onto fewer servers, you remove the dead weight of unused databases and forgotten dependencies.

The Usage Tracking Job turns consolidation from ‘lift and shift everything’ into triage. You decommission the unused databases, then migrate and consolidate what’s actually in use, with evidence to back up your choices.

An example of dbUseCollector in a test environment.

3. Get Database Decommission Approval: Ownership Retention and Governance

In a perfect world, you could send an email to the entire company, and all database owners would let you know the names of their databases and what’s on them.

Haha. Many non-tech employees just access an application and don’t realize there’s a database behind it. Others forget they asked for a database to be made. And don’t forget the people who have left the company – it’s highly unlikely ownership of the databases they worked with has been reassigned.

If you do find an owner, gather this key information:

  • Confirm what they know about the databases
    • Annual/seasonal processes
    • Regulatory reporting/audits/month-end or year-end routines
    • External users, vendors, or integration
  • Agree on the retirement decision and sign-off conditions
    • How long the database must show ‘no use’ before action
    • How long to keep offline but restorable
  • Decide retention and archive requirements
    • Retention period for backup storage
    • Storage location and access control
    • Encryption requirements
    • Who authorizes restore requests
    • What final deletion means after the retention period

Keep in mind, if you actually find an owner it’s unlikely that you’ll need to delete the database. When someone remembers it, they are usually using it.

4. Discover Dependencies to Catch the Silent Consumers

Even if you don’t find owners or logins, you can still have background usage taking place.

  • Search for SQL statements referring to that database
  • Search for SQL Agent jobs such as:
    • SSIS or ETL pipelines, scheduled tasks, data loads, report subscriptions
    • Scripts written by ex-employees or consultants that are still running
  • Identify external users, vendors or integrated endpoints
  • Check cross-database dependencies such as linked servers
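Two of these checks can be scripted directly. A hedged sketch, with DB_42 standing in for your candidate database:

```sql
-- SQL Agent job steps whose command text mentions the candidate database.
SELECT j.name AS job_name, s.step_name, s.command
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobsteps AS s ON s.job_id = j.job_id
WHERE s.command LIKE N'%DB_42%';

-- Linked servers that could carry cross-database traffic.
SELECT name, data_source
FROM sys.servers
WHERE is_linked = 1;
```

Remember that a text search only finds jobs that name the database literally; dynamic SQL and external scripts still need a manual look.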

As an alternative, the Usage Tracking Job tracks connections that make changes. If a connection doesn’t make changes, you’ll see that someone connected without taking action; the job doesn’t trace whether they are reading data. In theory, it would be possible to track everything, but a job with that level of detail would use too many resources on your system.

We’ll end this section with a bit of humor. A DBA we spoke to for this story cheekily said, “You can just back up and take your whole system offline and restore it as people scream. It’ll be 100% downtime, but you’ll be 100% sure that only used systems are restored.”

5. Make Final Backup for Decommissioning a Database: Verify and Restore Proof

Now that the first four steps are complete, it’s time for action.

  • Take the system offline
  • Take a final full backup, plus required logs if company policy requires them, plus all user configuration, privileges and access
  • Validate backup integrity, using the organization’s standard
  • Perform a restore validation
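In T-SQL, the backup-and-verify part of this list looks roughly like the following. All names and paths are placeholders, and note one ordering detail: SQL Server cannot back up a database that is already offline, so in practice you restrict access first and take it offline last.

```sql
-- All names and paths are placeholders. SQL Server cannot back up an
-- OFFLINE database, so restrict access first and go offline last.
ALTER DATABASE [DB_42] SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;

-- Copy-only full backup with checksums, leaving any backup chain intact.
BACKUP DATABASE [DB_42]
TO DISK = N'\\backupshare\decom\DB_42_final.bak'
WITH COPY_ONLY, CHECKSUM, INIT;

-- Confirm the backup file is readable and internally consistent.
RESTORE VERIFYONLY
FROM DISK = N'\\backupshare\decom\DB_42_final.bak'
WITH CHECKSUM;

-- Start the grace period: offline, but quickly restorable.
ALTER DATABASE [DB_42] SET OFFLINE WITH ROLLBACK IMMEDIATE;
```

A full restore onto a scratch instance is still a stronger validation than VERIFYONLY, which only checks that the backup media is readable.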

Finally, get out the change record you started in Step 1 and record: backup location, encryption keys and/or certificates, restore notes, and how to restore it.

6. Keep Databases Offline for the Grace Period

Now apply the wait time determined in the stakeholder conversation (or a reasonable amount of time if there is no owner), and watch for access attempts or complaints.

If a request is made, bring things back online. Record who needed it, why, and any further action needed.
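One hedged way to watch for access attempts is to scan the SQL Server error log for the database name. Message text varies by version, and some failed access attempts only surface client-side, so treat this as a supplement to user reports rather than proof:

```sql
-- Search the current SQL Server error log (argument 0) for mentions
-- of the candidate database. 'DB_42' is a placeholder name.
EXEC sys.xp_readerrorlog 0, 1, N'DB_42';
```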


7. Remove the Decommissioned Database

You’ve waited for the agreed time, and there’s been no complaint or action. Now it’s time to drop the database and remove all remaining references. If your company keeps inventory or documentation, update it.
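The drop itself is a single statement. One known gotcha is worth calling out ([DB_42] is a placeholder):

```sql
-- Dropping a database while it is OFFLINE removes it from the instance
-- but leaves the .mdf/.ldf files on disk, so bring it online first
-- (or clean up the files afterwards).
ALTER DATABASE [DB_42] SET ONLINE;
DROP DATABASE [DB_42];
```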

Finally, put your final backup into storage for the agreed retention period. Schedule an alert for when the period ends, and then delete the backups using the agreed secure process. The last step is to record evidence of the destruction and link it to the original change record.

Database Decommissioning Checklist Recap

If you take only one thing from this checklist, make it this: create and use that change record. It gives you proof that the database was unused over a full business cycle, attached to a single auditable place.

When you can show who accessed what and when (or that nobody did), the database decommission conversation stops being guesswork and becomes a controlled change.

See how dbWatch helps with decommissioning

Schedule a demo and see the steps.

Copy to a Test Environment From Production https://www.dbwatch.com/blog/copy-test-environment/ Mon, 26 Jan 2026 13:28:47 +0000 https://www.dbwatch.com/?p=22201 Automate the cloning of production SQL Server databases to test through a controlled backup to shared storage and restore workflow. Schedule repeatable refresh jobs, keep environments current for patch and release testing.

The post Copy to a Test Environment From Production appeared first on dbWatch.

Everyone knows that it’s important to have an updated version of production for your test environment. However, actually making sure that you have an updated copy for the test environment is another story. It’s easy to overlook stale test data until it hurts. A query that tested perfectly can be released and cause rollbacks that nobody saw coming.

Keeping a test environment fresh usually isn’t difficult, but it can eat time because it involves recurring manual work. When it’s done across multiple targets, with deadlines and interruptions, it becomes a weekly drag.

This post covers a common and proven approach to moving production to test via backup-and-restore cloning, and explains how dbWatch Control Center’s dbCopy automates the process. We’ll start with practical use cases, then follow with a tech-light explanation.

Three Use Cases for dbCopy

With dbCopy you can clone one database to many targets, many databases to one target, or one-to-one, depending on how your environments are set up. It automates a controlled backup-to-shared-storage and restore workflow, so cloning becomes scheduled, repeatable, and trackable across environments.

1. Up-to-date Copy of Test Environment for Safer Patch and Release Testing

For most DBA teams, refreshing test from production is a recurring time sink. The work can eat into weekends if the DBA is expected to deliver a fresh copy of Friday’s production as the test environment by 8 am Monday morning.

The pain usually looks like this:

  • Long-running manual workflows, often outside of normal hours
  • Avoidable mistakes caused by small variations in paths, naming or permissions
  • Learning about a failure long after something went wrong
  • Dev and test stalled while the environment catches up

That’s why many teams want the refresh to be predictable, trackable and automated instead of a weekly fire.

Automate Cloning to Test Environment

dbWatch Control Center’s dbCopy is built to deploy production to test through a controlled backup-to-shared-storage and restore workflow.

In practice, dbCopy lets you:

  • Define a source (production) and one or more targets (test/development)
  • Run the refresh as a repeatable job instead of a manual routine
  • Schedule refreshes so teams start the week (or day) with a current environment

dbCopy is part of the Automated Maintenance Package in dbWatch Control Center. Once it’s configured, it turns a slow and repetitive DBA task into a job that runs consistently, saving DBAs hours of work.

2. Be Ready for Disaster Recovery with Automated Standby Clone

Disaster Recovery (DR) routines aren’t always static or permanent. They tend to drift: new databases get added, restore paths change, storage fills up, someone changes the backup location, and the “standby” slowly becomes useless.

Some DBA teams use dbCopy to keep an up-to-date standby clone of production databases on a separate SQL Server system (or environment) using a repeatable backup-and-restore refresh. You can run it on whatever cadence fits your recovery objectives, so the standby stays current without a manual refresh becoming another weekly task.

3. Validate Restore Ability as an Early Warning Signal

Most DBA teams lose sleep over restores, not backups. Restore failures often show up at the worst time, and the root causes are usually boring-but-deadly, ranging from missing permissions to broken access to shared storage or insufficient disk space.

Because dbCopy uses a backup-and-restore workflow as part of cloning, it can function as a continuous restore test. If a scheduled clone fails at the restore step, that failure is an early warning that something in your restore path is broken—or getting worse—before you discover it during an incident.

In practice, this helps surface problems like:

  • Backups not readable from shared storage (share/network/access drift)
  • Restore failing due to disk capacity, path changes, or file placement issues
  • Permissions or service accounts drifting on the target host
  • Restore duration increasing unexpectedly (RTO risk creeping up)
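If you want quick manual probes for these failure modes, a few standard commands cover most of them (the backup path is a placeholder):

```sql
-- Is the backup file reachable and readable from this instance?
RESTORE HEADERONLY
FROM DISK = N'\\shared\refresh\Sales_refresh.bak';

-- What files does it contain, and where would they need to land?
RESTORE FILELISTONLY
FROM DISK = N'\\shared\refresh\Sales_refresh.bak';

-- Free space per drive on the target host (undocumented but widely used).
EXEC sys.xp_fixeddrives;
```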

This validation exercises the restore process often enough that you find issues while they’re still routine fixes.

Note on Sensitive Data in Test Environments

A cloned test environment contains production data. If sensitive information is in production, the test environment must be treated as sensitive too. Data masking or removal is not handled by dbCopy and needs to be done after cloning if required.

What dbCopy Means for your Team

If you routinely copy test environment data from production, the work is rarely “hard”, but it is repetitive and time-consuming, especially when it has to happen on a regular schedule or across many SQL Server databases. dbCopy turns production-to-test deployment into a controlled backup-to-shared-storage and restore workflow, so cloning becomes a repeatable job instead of a weekly manual task.

Tired of refreshing the test environment?

Book a demo and see how dbCopy can automate your production-to-test cloning.

Disclaimer: This section repeats the information above, explained in a less technical manner.

Tech-light section: Why Copying to a Test Environment Matters

Copying production data into a test environment sounds like a niche technical task, but it affects how safely and quickly a company can ship changes. When teams deploy to a test environment, the test database needs to reflect production closely enough that the results are meaningful. If test data is outdated or unrealistic, problems show up late, often right before go-live, when fixes are expensive and disruptive.

Copy-to-test routines tend to hurt in the same predictable ways: they repeat, they take time, and the organization becomes dependent on one or two people to make “test ready” happen. Even with scripts, the task still needs consistency and attention, especially when it runs weekly or across many databases.

dbCopy addresses that by turning production-to-test deployment into a controlled, repeatable job in dbWatch Control Center (under Maintenance for SQL Server). The goal is to automate the process.

The Value of Copying to the Test Environment

Technical teams need an up-to-date copy in the test environment so they know whether new application versions and patches work. With dbCopy the process is automated: technical teams can rely on their test results and DBAs save time.

For the business, the benefit is simpler: fewer delays, fewer last-minute rollbacks, and less operational risk tied to change. For DBAs, cloning production to test and development can easily eat up three to six hours of manual work. When that work is manual and frequent, it becomes a bottleneck. When it’s automated, teams start the week with an environment that’s ready for testing without needing someone to carve out half a day to get it done.

dbCopy also clones using a backup-to-shared-storage and restore approach. That has a side benefit: if restores start failing during cloning, it can act as an early warning that restore problems may exist more broadly, which is valuable long before an actual incident forces a restore under pressure.

Make copying a test environment reliable, scheduled, and easy.

Request a demo today.

Clarification about dbCopy

dbCopy copies from production into the test environment. dbWatch does not currently support deploying from development to live production. Also, you need to provide your own data masking; if production data is sensitive, the test environment must be handled accordingly, and any masking or data removal happens afterwards.

Q4 2025 Release Notes: Key Improvements https://www.dbwatch.com/blog/summary-2025-release-notes/ Thu, 18 Dec 2025 11:55:08 +0000 https://www.dbwatch.com/?p=22080 We're wrapping up the year with one final, large release. Discover the highlights in the blog below.

The post Q4 2025 Release Notes: Key Improvements appeared first on dbWatch.

Welcome to our Q4 release.

We know release notes aren’t page-turners. So, here’s the short version with the most useful new features and fixes. Careful readers might also spot a small bonus hidden between the lines.

You’ll find the exhaustive list of fixes on the Wiki Release Notes page. 

Improved Management with Instance Offline View

In previous versions of dbWatch there was little information about lost instances. While you could clearly see the connection was broken, you couldn’t see details about what happened before the connection was lost.

It made it hard to answer questions like:

  • What jobs were running when we lost contact with this instance?
  • What was the last known status of those jobs?
  • When did we last successfully collect data from this instance?
  • Do we have any indication of why the connection was lost?

That’s why we’re introducing the Instance Offline View. It presents the last state that the dbWatch Server collected from the instance before it went offline. It gives you a snapshot to work from when you can’t access the information stored on the instance.

The new offline instance view.

Why it matters for DBAs and operations teams

Offline incidents no longer mean a complete blackout. Instance Offline View helps you understand impact and troubleshoot faster once the instance is back online.

With Instance Offline View you can

  • Click on an offline instance and see the last known list of jobs.
  • View the last recorded status and state of those jobs.
  • See the timestamp for when those jobs last ran.
  • Get relevant connection errors.

Read: All the release notes for Instance Offline View.

Policy Based Deployment Provides Centralized dbWatch Deployment

If you have dbWatch installed on several clients and servers, you previously had to sign into each machine individually and run the install or upgrade. Policy Based Deployment lets Windows users prepare everything centrally and then deploy dbWatch upgrades to all your hosts.

With Policy Based Deployment on Windows you can

  • Deploy dbWatch to new servers and clients from a central point
  • Define and target specific sets of machines for install or upgrade
  • Upgrade existing dbWatch servers in one operation

Why it matters for DBAs and operations teams

Upgrading and scaling should be easy. Policy Based Deployment gives you a smoother and more flexible way to install and upgrade dbWatch software, so there’s less manual work. Plus, it makes life easier for DBAs in very large environments.

Linux users: We provide the Linux versions in repositories for either Ubuntu or Red Hat, so it’s part of the normal upgrade deployment for your server.

Read: Complete release notes, including Policy Based Deployment.

Secure remote access with Cloud Router

While not new in this release, dbWatch Cloud Router is worth a quick reminder if you manage databases across multiple networks or customer environments. Cloud Router gives you secure, encrypted, outbound-only connections between dbWatch installations in different network environments, so you don’t have to open inbound firewall rules or maintain a tangle of VPNs.

With Cloud Router you can:

  • Replace per-customer VPN setups with a single, scalable access model
  • Keep customer environments isolated in separate networks while still managing them centrally in dbWatch Control Center
  • Use outbound connections from each site, reducing exposure from inbound firewall openings
  • Log and audit access to database servers for security and compliance reviews

Cloud Router is already available for dbWatch Control Center. If you’re an MSP or manage many separate security zones, it can simplify access, reduce VPN-related risk, and give you a clearer overview of all environments.

Read more: Cloud Router feature page.

dbWatch MSP story: How a Managed Service Provider for Databases Found a Secure Management Method  

Small and Satisfying Improvement

Sometimes the best improvements are the quiet ones. This release adds a safer way to handle one-off configuration jobs without accidentally turning them into recurring tasks.

Safer One-time Jobs – Manual Scheduler

Some configuration jobs should never run on a schedule. With Manual Scheduler, you can create one-time jobs such as migration or clean-up tasks and be sure they only run when you explicitly trigger them.

Get a dbWatch Sticker

Dear reader, you’ve made it this far. As a reward we’re offering a Keep Calm and Query On sticker to the first 10 people to drop us an email. Write sticker please in the subject line and a business mailing address in the body.

For GDPR: we will not be using this information for any advertising, and all information will be deleted after sending. Due to the holidays and slow mail, stickers may arrive at the end of January.

Better Microsoft SQL Performance First View

In this release, we have made a series of improvements to Microsoft SQL performance analysis in dbWatch Control Center. dbWatch users with large workloads asked to see more information about SQL statements, faster.

Use a new SQL Event Collector job for deeper tracing

In addition to the existing SQL statistics job that collects performance data every five minutes, there is a new SQL event collector job. This job uses a different capture method and records long running statements together with user, host, session and application information. You can enable it on selected databases when you need extra detail about who is running a statement and who is affected by poor performance.
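The collector’s internals aren’t documented here, but conceptually it is close to an Extended Events session that captures long-running statements together with who ran them. A hedged sketch of such a session, with an illustrative 5-second threshold rather than the job’s actual configuration:

```sql
-- Capture statements running longer than 5 seconds, with user, host,
-- session and application context. The duration filter is in microseconds.
CREATE EVENT SESSION [long_statements] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.username,
            sqlserver.client_hostname,
            sqlserver.session_id,
            sqlserver.client_app_name)
    WHERE duration > 5000000
)
ADD TARGET package0.event_file (SET filename = N'long_statements');

ALTER EVENT SESSION [long_statements] ON SERVER STATE = START;
```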

Learn the details about how Event Collector works.

See statement text faster with Session Statistics

The main list of resource demanding SQL statements now shows the actual statement text directly on the line. You can move the mouse over the last column to see the full statement, instead of opening each row one by one.

Learn more about the details of Session Statistics.

Parameters and configuration of Session Statistics.

Understand where a statement comes from

The view now gives clearer information about which database each statement belongs to, and includes improvements to filtering in the main table. This makes it easier to work through long lists of statements. 


Work with missing index recommendations in a readable way

Missing index details are no longer shown as raw XML. When you click the warning icon, the information is presented in a formatted table that shows the table, the columns involved, and the recommendation. From the same menu, you can choose to create the suggested index directly. 

The new view of missing indexes.
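For comparison, the raw material for such recommendations lives in SQL Server’s missing-index DMVs; a typical query over them looks like this (we’re not claiming this is exactly what dbWatch runs):

```sql
-- Top missing-index suggestions, ranked by estimated impact.
SELECT TOP (10)
       d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
  ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
  ON s.group_handle = g.index_group_handle
ORDER BY s.avg_user_impact * s.user_seeks DESC;
```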

Handle very large numbers of statements more efficiently

Under the hood, we have reworked how the data is joined and retrieved, so the SQL performance view handles hundreds of thousands of statements more efficiently than before.

Filter field available in management views

We have added a database filter field to the management views, making it easier to quickly narrow down long lists of SQL Statements. 

The new filter field.

Why it matters for DBAs and performance teams

These changes make the SQL performance tools easier to use when you have large workloads. You can see what each query is doing, understand its context, act on missing index recommendations, and work more efficiently.

Read more in the release notes

Keep control of Brent Ozar maintenance scripts across your farm

Many SQL Server teams use Brent Ozar’s free maintenance procedures alongside dbWatch. The procedures work well, but in larger environments it is hard to see where they are installed, which versions you have, and whether anything has been duplicated.

This release adds a helper job in dbWatch that gives you an overview of Brent Ozar modules across your instances.

With the new helper job, you can

  • Scan your instances and databases to see where Brent Ozar’s maintenance procedures are installed
  • See which modules are in use, such as backup, index maintenance and related procedures, and on which instances they run
  • Detect duplicates when the procedures are installed in more than one database on the same instance and get a warning
  • Check the versions that are installed and see the code from within dbWatch, with an option to drop procedures if needed
  • Use Brent Ozar’s scripts together with dbWatch maintenance, while keeping a clear overview so they do not conflict in your environment
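A rough manual equivalent, run per database, is to scan for the procedures by name. The `sp_Blitz%` pattern below matches Brent Ozar’s First Responder Kit naming; adjust it for whichever modules you deploy:

```sql
-- User stored procedures matching the First Responder Kit naming
-- pattern, with creation and last-modified dates to spot version drift.
SELECT name, create_date, modify_date
FROM sys.procedures
WHERE name LIKE N'sp[_]Blitz%'
ORDER BY name;
```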

Why it matters for DBAs: If you already rely on Brent Ozar’s maintenance scripts, dbWatch now helps you keep track of where they run and which versions you have. That makes it easier to govern these procedures in larger environments and to use them alongside dbWatch without losing control.

Read more in the release notes. 

Easier database copy between SQL Server instances

The maintenance module in dbWatch Control Center now includes a new DB Copy feature for Microsoft SQL Server. It consists of two new jobs plus management and farm views. It’s available for customers with the Automated Maintenance Module.

With DB Copy, you can:

  • Copy databases between SQL Server instances using dbWatch scheduled jobs
  • Manage cross instance copies from the maintenance module
  • Have an overview of the status of what’s happening when and where

Why it matters for DBAs and operations teams

Setting up and maintaining database copies between instances is often a manual, error-prone job, especially when you want it to run regularly. DB Copy lets dbWatch handle the backup, copy and restore as part of your normal maintenance setup, so you get a repeatable automated process instead of custom scripts.

Read more about it in the full release notes.

What’s Next

Coming up on the 2026 dbWatch roadmap, we have work planned on several jobs and some modules, as well as your requests as the year progresses.

Some highlights that are expected to launch

  • Teams/Slack integration
  • Compliance module PostgreSQL and Oracle
  • Integrate, schedule and get status feedback on your own scripts
  • OS monitoring

Finally, we need to end with a thank you. Thank you to all the customers who pointed out problems and helped us see where new features can save everyone a ton of time.

Features and Fixes

Book a demo and walk through the updates.

Eliminate Database Alert to Fix Time  https://www.dbwatch.com/blog/database-alert-to-fix-time/ Thu, 04 Dec 2025 11:56:12 +0000 https://www.dbwatch.com/?p=21983 How dbWatch Control Center speeds up database alert to fix time by creating a direct path from monitoring to action, cutting out VPN hops and extra tools.

The post Eliminate Database Alert to Fix Time  appeared first on dbWatch.

It’s a tale of workflow interruption: the time between the database alert and the fix.

Every Database Administrator (DBA) has experienced it. You open your inbox on Monday morning and find an alert notice. Unfortunately, this isn’t one of the several systems you constantly need to log into; it’s a database you’ve monitored for years without issues. Until today.

For many DBAs there’s a delay between noting the issue and fixing it. And it’s not the troubleshooting that usually eats up the time, it’s the journey from seeing the problem to being in the right place to do something about it.

Challenges of Logging into Database Management

Often logging in involves a chain of steps: VPN => remote desktop jump station => server => start the management tool (if it’s installed).

Then it’s time to hope that your password is still valid. An invalid password triggers another chain of steps.

If you’re an MSP, the chain is likely longer, starting with looking up credentials for the customer’s system.

By the time you’re finally ready to act, ten to thirty minutes may have passed. On a busy day, you might only have time to fix two to four issues, simply because each alert requires a long detour before fixing can start.

Workflow with dbWatch

In dbWatch Control Center, monitoring and management live in the same place, which keeps the database alert to fix time short. From the monitoring view, you’ll see a warning on an instance. Click the instance to enter management on that exact system. In the first menu you’ll see the alert, with the same warning you saw in monitoring.

From there you can:

  • Fix the underlying issue directly on that instance
  • Rerun the job to confirm that the warning is cleared
  • Open a report if you need more statistics about the problem

If you need to run commands, you can jump from the instance straight into a worksheet for that system and run SQL there. You stay inside the same tool the whole time. The interface is structured to feel familiar, following patterns DBAs know from vendor tools. This lowers the learning curve and makes it easier to navigate across platforms.

Result

The alert is the same. The difference is how long it takes from the database alert to the fix. By removing the chain of VPN, jump station, server and local tools, dbWatch cuts away the overhead between monitoring and management. DBAs spend more of their day fixing issues instead of logging in and hunting for the right system. For MSPs managing many environments, that shift can be the difference between clearing a handful of alerts and clearing most of what arrived that day.

Faster Workflows

Try dbWatch Control Center and experience a one-click workflow from alert to fix.

Cross-platform Database Software https://www.dbwatch.com/blog/cross-platfrom-database-monitoring-the-best/ Mon, 03 Nov 2025 13:06:20 +0000 https://www.dbwatch.com/?p=21785 Software for cross-platform database monitoring gives teams full visibility across mixed estates.

The post Cross-platform Database Software appeared first on dbWatch.

]]>

Early on, most companies run a single database platform, so investing in cross-platform database software feels unnecessary. As systems grow and age, environments diversify. Mergers and new applications introduce additional platforms, which makes cross-platform database software essential for monitoring and management.

For example, a hospital DBA doesn’t have much choice when it comes to database platforms. The X-ray machine may come with Microsoft SQL Server, while the MRI machine runs PostgreSQL.

A company’s DBAs may have a preferred database platform, but when the company acquires another, suddenly an Oracle database is added. Let’s not dig into issues at universities – you can simply imagine the legacy combinations that occur there.

Whatever the situation, the result is similar: a new platform and tool are added. And suddenly the DBA faces a new query language and a new set of pains.

In this blog, you’ll find the journey of two dbWatch customers, a large MSP and a smaller distributor, and see how they apply cross-platform monitoring in their everyday lives. 

There’s a reason why these customers are anonymous. When names aren’t used, real underlying issues can be discussed with candor, DBA to DBA. They’ve disclosed the real state of their business before they started cross-platform database monitoring, and how their work has changed since adopting a monitoring tool.

A graphic visualization of what clean cross-platform database monitoring looks like.

Large Managed Service Provider Needs Cross-Platform Database Software 

Our first anonymous example is a large MSP. We’ll call them Safely Managed Databases. The challenges they’ve experienced as an MSP closely parallel those of enterprise DBA teams. Safely Managed Databases has a diverse portfolio of clients across five different platforms: Oracle, Microsoft SQL Server, PostgreSQL, MySQL, and Sybase ASE, spread across 11 different locations.

There were three main issues for Safely Managed Databases: VPN problems, scaling pain, and a lack of overview for their access control. 

VPN Failure

Safely Managed Databases uses VPNs to connect to its customers. Unfortunately, clients used the same VPN software and shared the same address ranges, causing conflicting network address space. They had to use jump stations to separate the client systems, and not all VPNs could be connected simultaneously.

Because of all the network fragmentation, the different VPNs and jump stations, they lacked an overview of all their clients.

Incident System Failed to Scale

When they installed a new database, the DBA added their own monitoring script. Soon the alerting system turned into a fragmented mess because each script had a different threshold and unique warnings and alerts. With each alert, a new ticket was sent to the DBA responsible.

Most days, an alert appeared every 20 seconds, adding up to over 4,000 daily alerts. There was never time to dive into the issue and discover if it was a critical alert. The ticket could be for something crucial, like a blocked database, or something irrelevant, like an unimportant script that had never worked but triggered alerts every 10 minutes.

Soon alert fatigue led to all alerts being ignored. They only found problems when systems crashed, or noticed blocked queries when someone complained. When they did know about a problem, they had to log on to the correct platform, remembering to log out of the conflicting VPNs.

Access Control

Role-Based Access Control (RBAC) was, quite honestly, a mess. It was easy to grant access, but difficult to track who had it. Removing someone’s access involved resetting database passwords, a painful process that always locked out someone or some script that should have had access. 

In addition, each platform had its own roles, naming, and permission granularity. This resulted in many orphaned accounts after off-boarding, and there was no single view of who could do what.

On top of that, there wasn’t any overarching auditing to track who made which changes when. They had an Excel sheet that was forgotten half the time.

Cross Platform Database Monitoring for Managed Service Providers (Or Enterprises)

When Safely Managed Databases decided to look for a tool, they knew the most critical factor for them was the cross-platform database monitoring capability. They needed to see everything in one place. They also wanted a better, more secure solution than VPNs for connecting to their clients. 

Replace VPNs with Cloud Router for Secure Connections and Access Control

One of the reasons that Safely Managed Databases chose dbWatch was their Cloud Router add-on. This allowed them to replace VPNs with a secure one-way connection from their environments to the Cloud Router. All the clients are connected at the same time, and they each have their own security bubble. 

Instead of handing out passwords, the DBAs can generate a dbWatch account and control what type of access is granted and where the account holder has access. 

Preventing Alert Fatigue with Cross-platform Templates

Each morning used to start with an inbox stuffed with alerts, but now the team only hears about true issues.  When they set up dbWatch, they used templates with built-in across-the-board adjustments, so the alerts were consistent across all their database instances. The standardized alert thresholds filter out the non-essential problems. 

With the thresholds in place, they fine-tuned the templates. For example, they’ve adjusted the timing. On the weekends, critical alarms are sent to a weekend on-call email account. The non-critical alarms, like warnings on development systems, are waiting in the inbox on Monday morning. Non-DBA issues, like disk alarms, are sent directly to the person responsible, saving everyone time. 
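The routing rules described above boil down to a simple decision table. As a sketch (the inbox names, categories, and rules here are illustrative placeholders, not dbWatch template syntax):

```python
def route_alert(severity: str, category: str, is_weekend: bool) -> str:
    """Pick a destination inbox for an alert, mirroring the rules above.
    All inbox names are made-up placeholders."""
    if category == "disk":
        # Non-DBA issues go straight to the person responsible.
        return "infra-team"
    if is_weekend:
        # Critical alarms page the weekend on-call; the rest wait for Monday.
        return "weekend-oncall" if severity == "critical" else "monday-inbox"
    return "dba-inbox"

print(route_alert("critical", "database", is_weekend=True))   # weekend-oncall
print(route_alert("warning", "database", is_weekend=True))    # monday-inbox
print(route_alert("critical", "disk", is_weekend=False))      # infra-team
```

The point is less the code than the principle: once routing is an explicit, shared rule instead of per-script behavior, every alert lands with the right person at the right time.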

Having a clear cross-platform monitoring overview has helped them move into a more proactive workflow. They can see upcoming issues, prioritize them, and schedule when to fix them before it’s a ‘red alert’. Now the DBAs are on top of the workload instead of crushed under it.

Cross-platform Monitoring Provides Team Resiliency 

Prior to dbWatch, they couldn’t help each other directly with cross-platform problems. Now they can work in each other’s areas without issues. They have safer handoffs and coverage when someone is out.

Regional Auto Parts Distributor with 200 Percent Platform Fragmentation

Now let’s move on to a smaller company, a regional auto-parts distributor. This too is an anonymous company, speaking off the record about real problems. We’ll call them Custom Auto Parts. The reason they needed cross-platform capabilities is mostly platform fragmentation.

A man working in an auto-parts warehouse, showing how cross-platform database monitoring works for regional suppliers.

Lack of overview

When they started 20 years ago, they had an Oracle database. A decade later, they added a Microsoft SQL Server database, and five years ago they added PostgreSQL. Working with three platforms is a challenge for the sole DBA. While the work needed is the same, the commands, like ‘kill session’, aren’t.
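For example, here is what ‘kill session’ looks like on each of their three platforms (the session identifiers below are placeholders):

```sql
-- Oracle: needs the SID and serial# from v$session
ALTER SYSTEM KILL SESSION '123,45678';

-- Microsoft SQL Server: takes the session ID (SPID)
KILL 123;

-- PostgreSQL: a function call with the backend process ID
SELECT pg_terminate_backend(12345);
```

Same task, three dialects, which is exactly the overhead a sole DBA pays for every platform added.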

The DBA did his best to manually check three platforms daily. However, when he put out a fire or was on holiday, checks were skipped.

While Custom Auto Parts is a small company, platform fragmentation has increased by 200%, and the headache from the organizational database fragmentation is proportionally just as large.

Time to resolution

Resolution took a long time because of fragmentation. From the website, they could see that data wasn’t syncing. But because data was pushed in multiple directions, it was difficult to find the root cause when data didn’t arrive. Sometimes when a query didn’t work, the DBA had to look through each database to find the root of the problem.

Cross-platform Database Monitoring Benefits for Smaller Companies

The new tool was deployed in one day. It brings all platforms into one view so the sole DBA is not switching between consoles. Defined checks run consistently, even when work is hectic or someone is away. Custom Auto Parts is now seeing issues sooner and acting with more confidence.

Cross-platform Visibility when Monitoring

In a small but diverse environment, it’s tough for one DBA to learn many platforms and keep them updated while working in different places with different tools.

Where before they had to run checks manually, now everything is visible in a single pane of glass. What used to take five hours a week is now just a glance at the screen.

Time to Resolution, Resolved

The team implemented a customized monitoring job to alert when data synchronization failed. Now they can quickly locate the issue and where it occurred.
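Their exact job is environment-specific, but the core of such a synchronization check can be sketched as follows (the target names and lag threshold are illustrative assumptions, not their actual configuration):

```python
from datetime import datetime, timedelta

def find_lagging_targets(last_synced: dict, now: datetime,
                         max_lag: timedelta = timedelta(minutes=15)) -> list:
    """Return names of sync targets whose last successful sync is older
    than the allowed lag -- these are the ones worth alerting on."""
    return sorted(name for name, ts in last_synced.items()
                  if now - ts > max_lag)

now = datetime(2025, 11, 3, 12, 0)
lagging = find_lagging_targets({
    "oracle_erp": now - timedelta(minutes=2),
    "mssql_webshop": now - timedelta(hours=1),
    "postgres_reporting": now - timedelta(minutes=5),
}, now)
print(lagging)  # ['mssql_webshop']
```

A custom monitoring job wraps this kind of comparison and raises an alert only for the lagging targets, instead of the DBA hunting through each database by hand.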

An artist's idea of a train station with cross platform database software as destinations.

What Changed with Cross Platform Database Monitoring

Safely Managed Databases and Custom Auto Parts were able to replace scattered tools with one console for alerts, history, and reporting. The result was a faster, more predictable operation.

A single console reduces operational risk and total effort while improving stakeholder confidence.

Lower risk

  • Fewer blind spots
  • Consistent coverage during leave
  • Clear audit trails

Lower costs

  • Fewer overlapping tools
  • Shorter report prep
  • Accessibility

Monitoring Made Easy

Find out how cross-platform monitoring can make your life easier.

FAQ Cross-platform Questions

What is cross-platform database monitoring?

  • It is the ability to see and manage database health, alerts, and trends for many platforms, clouds, and sites in one console with consistent workflows.

How does operational database fragmentation affect teams?

  • It slows investigations, increases missed checks, and raises tool costs. A unified console restores visibility and makes daily work consistent.

How does database estate management help leadership?

  • It provides a single source of truth for inventory, health, and trends across systems and locations. Reports are consistent and ready for audits.

The post Cross-platform Database Software appeared first on dbWatch.

]]>
Top 5 SolarWinds Alternatives for Database Monitoring https://www.dbwatch.com/blog/solarwinds-alternatives/ Thu, 02 Oct 2025 08:19:35 +0000 https://www.dbwatch.com/?p=21553 Explore five strong SolarWinds alternatives and find the software that best fits what your company needs.

The post Top 5 SolarWinds Alternatives for Database Monitoring appeared first on dbWatch.

]]>

Currently, many organizations are looking for SolarWinds alternatives to improve database monitoring and management. Some may be evaluating alternatives due to costs; others may have scaling issues or other challenges. The five SolarWinds alternatives reviewed here are all good, and each has areas where it excels.

For each tool,  you’ll get an overview of how it supports database administrators (DBAs) in their daily work.

If you’d like to get straight to the point, just click on the alternative you’d like to find out more about:

    1. dbWatch Control Center
    2. Quest Foglight
    3. Redgate
    4. IDERA
    5. Datadog

The evaluation criteria

This comparison reflects publicly available information as of October 2025. Products change frequently; if you spot an inaccuracy, please let us know and we will correct it.

For each alternative tool we’re looking at the six factors listed below. At the end of the article, you’ll find a table to use as a quick guide.

1. Database Monitoring

Monitoring from a single console eliminates repeated sign-ins across instances, detects issues before users feel them, and provides history to help you spot patterns. Remember to look at cross-platform compatibility. The best monitoring tools work regardless of your environment’s fragmentation.

2. Database Management

Managing from a single console reduces repeated sign-ins, makes changes consistent, and speeds remediation when issues appear. Look for cross-platform administration with bulk actions, role-based access, guardrails like previews and approvals, and the ability to run common maintenance on demand.

3. Database Scalability

Growth is constant. Tools should let you deploy quickly and immediately monitor thousands of databases. Make sure to check both technical scalability and pricing so costs remain predictable as you expand.

4. Task Automation

Automations handle routine checks and responses the same way every time. This reduces the DBA workload and shortens time to resolve.

5. Database Security and Compliance

Security and compliance features help you control who can see what, prove that controls are working, and collect evidence for audits. Even if your team is not regulated today, these basics protect the estate.

6. Database Reporting

Reporting turns metrics into decisions. Scheduled reports and shareable dashboards keep operations, management, and auditors aligned without extra manual work.

These six factors are key when comparing SolarWinds alternatives so you can determine which best supports your environment.

SolarWinds Today

Before diving into the alternatives to SolarWinds, here’s a quick reminder of what the platform offers today.

SolarWinds gives database teams three main paths:

Database Performance Analyzer (DPA) and SQL Sentry

These are query-centric tools with drill-downs for waits, blocking and deadlocks, plus historical context and annotations.

SolarWinds Observability

Observability monitors self-hosted and managed databases, supporting MongoDB, MySQL, PostgreSQL, Redis, and SQL Server.


1. dbWatch Control Center

dbWatch Control Center stands out among SolarWinds alternatives due to its scalability and broad cross-platform coverage. It provides  database monitoring and management across any location and many platforms, giving DBAs one console for daily work.

The add-on packages offer capabilities like deeper performance analysis and security and compliance for policy checks, access control, and audit-friendly reporting. The Cloud Router add-on provides secure, governed access to environments without opening broad network paths.

The software is installed on-premises. For cloud-only environments, dbWatch can run on a Linux or Windows machine hosted in the cloud.

Best for teams with mixed environments that want one console for monitoring and hands on management. It scales well, with the possibility for custom jobs.  For very large environments, Farm View dashboards provide full views of single metrics. Multi-site organizations and managed service providers can get secure remote access through the Cloud Router add-on.

Free trial: 90 days

Pricing is clear on the website, with a pricing calculator showing the complete environment costs.

Monitoring

Centralized monitoring across many platforms (listed below) offering health dashboards, alerts, and estate views. Group and filter by tags, sites, or farms to find issues quickly, then drill into instances and databases for details. Add custom monitoring jobs to track business specific conditions alongside standard checks.

Example of how dbwatch monitors databases

Cross-platform

Designed for environments that span on-prem and cloud. Supports Oracle, Microsoft SQL Server, PostgreSQL, MySQL, MariaDB, MongoDB, and Sybase ASE. Cloud platforms include Microsoft Azure, Amazon RDS, and Amazon EC2.

Control Center has a single console and helps provide consistent workflows, so teams do not need separate tools per platform.

Performance metrics

The SQL Performance Package provides diagnostics for query behavior and resource use. You can review past performance, find the queries behind waits or blocking, track users to see which queries they’ve run, and enable tracing of access to databases. It’s also possible to drill down into historic data.

Automated Maintenance

dbWatch also offers an automated maintenance package. Found in the monitoring console, it includes a maintenance library that supports DBAs by automating more of their work, with jobs such as backups and other routine tasks.

Management

In dbWatch, DBAs can switch between finding the issue in the monitoring view and fixing it in the management view. It offers one place for managing all the daily DBA tasks.

It can also be very useful for cross-platform management: when an Oracle DBA is off work, a DBA who knows MS SQL Server can step in and manage their databases.

Scalability

Built for large environments with ten thousand or more instances, dbWatch offers multi-site filtering and workflows for repetitive tasks. Dashboards include ‘farm views,’ which give insight into how one metric behaves across the whole environment.

Compliance

The Security and Compliance package helps teams align with internal policies and external standards. It helps your databases reach and keep compliance standards such as ISO and SOC.

Reporting

Built-in and customizable reports for the whole environment, instances, and workflows. Schedule distributions and use filters to produce reader-friendly summaries for operations and management. Dashboards can be shared so stakeholders see the same view of health and trends.

2. Quest Foglight

Quest Foglight is a strong option for teams evaluating alternatives to SolarWinds, particularly those needing enterprise-level observability. The platform delivers database and hybrid-infrastructure monitoring, with Foglight for Databases covering on-premises environments.

For cloud deployments, Foglight Cloud provides monitoring as a managed SaaS service, while Foglight Evolve focuses on proactive performance management, resource optimization, and cloud cost modeling.

In August 2025 they launched a new capability: Query Explorer. This allows for easy searching, filtering, and analyzing SQL queries within the monitored environments.

Best for teams managing diverse database environments. They use Foglight to get a unified view across database platforms, find root causes for any anomalies, and uncover query-level insights that optimize performance, reduce risk, and prevent downtime.

Free trial: Foglight for Databases offers a 30-day free trial; there’s also a hosted Foglight Cloud Virtual Lab to try preconfigured environments.

Pricing: For on prem, they offer licenses by term or subscription. Cloud purchases are based on the quantity of databases being monitored.

Monitoring

Foglight for Databases provides a single web console with real-time and historical views, baselines, and performance investigation. It ships with out-of-the-box deployment, alert management, and templates that allow for threshold adjustments, and also includes AI Alarms for AI-assisted alert remediation. Coverage extends across both on-premises and cloud deployments.

Cross-platform

Current supported targets include Oracle, SQL Server, MySQL, PostgreSQL, DB2, SAP ASE, SAP Hana, MongoDB (incl. Atlas), MariaDB, Cassandra, Redis, Percona and cloud databases such as Amazon RDS/Redshift/Aurora, Azure SQL DB/Managed Instance, Google AlloyDB, and Snowflake. All these platforms are monitored from one console across on-prem and cloud.

Performance & diagnostics

Query Insights ranks high-impact statements across the environment and links to Performance Investigator for deeper analysis. Performance Investigator provides detailed query level analytics for Oracle, SQL Server, PostgreSQL, MySQL, and Azure SQL Database.

Observe by user, host, database, and statement. You can focus the analysis on a resource such as IO or locking to filter the view, then drill down into the insights. To support the analysis, there’s a long-term PI repository: two years on-prem and one year in the cloud.

Key capabilities include wait/event analysis, platform-specific drilldowns, change tracking, and comparison reporting. SQL workload tools (e.g., Query Insights) help pinpoint costly statements; historical lock analysis supports concurrency troubleshooting.

Management

Foglight is primarily observability, not a backup/DDL/change tool. That said, it provides alarm templates and Actions (email, command, script) you can bind to rules, plus a REST API and ServiceNow integration for ticketing/closures, meaning you can trigger external automation when conditions are met.

Other Quest software offers management options: Toad products provide database management automation, the Quest erwin Data Management Platform provides Data Modeling, Data Cataloging, Data Lineage, Data Observability, and Data Quality capabilities, and LiteSpeed for SQL Server provides backups.

Scalability

Foglight scales via federation, with HA supported. Depending on the vendor setup, 10,000+ databases are supported. The Foglight team says the architecture has no hard-coded scalability limit. The software supports high availability – vendor-managed in the cloud, self-managed if hosted.

Automated Maintenance

Foglight focuses on monitoring and diagnostics with alerting, templates, and ITSM/webhook integrations; it can trigger external runbooks but does not execute routine DBA maintenance natively. Comparable ‘automated management’ capabilities in the Quest stack are provided by separate tools (e.g., LiteSpeed, Toad).

In the Foglight Cloud option, the maintenance is provided.

Compliance

Foglight supports compliance-related work by maintaining an audit log of administrative changes, and enforcing access controls with role-based permissions and SAML SSO. Deployments can also be run in FIPS-compliant mode, and alarms can integrate with ServiceNow to provide traceable evidence of incident response.

While Foglight helps with data handling and auditing, it is not a dedicated compliance enforcement tool. For SQL Server environments that require detailed activity auditing and compliance reporting, Quest’s Change Auditor for SQL Server is often used alongside Foglight.

Reporting

Foglight’s reporting centers on dashboards and report templates: you can build custom report templates (often by cloning a dashboard view), run/schedule them, and share the output with others; these capabilities exist in Foglight Cloud and on-prem editions.

White-label branding isn’t provided as a turnkey feature, but on-prem customers can design their own reports and include their logo; in Foglight Cloud you can clone built-in reports/dashboards and adjust them within the platform’s safety constraints.  


3. Redgate

Redgate offers an end-to-end database DevOps portfolio with SQL Server tools such as SQL Compare, SQL Monitor, and SQL Toolbelt. The platform has been expanding into broader database coverage with Flyway for migrations and cross-database change management.

Redgate also provides free tools such as SQL Search and Flyway Community, giving teams lightweight options alongside enterprise subscriptions.

Best for teams that want a SQL Server-centric DevOps toolchain, but with growing support for PostgreSQL, MySQL, and Oracle through Flyway and other newer Redgate offerings.

Free trial: Most products include a 14-day free trial; some free community editions are also available.

Pricing: Annual subscriptions per user, with tiered pricing for small teams and custom quotes required for larger deployments.

Monitoring

Redgate Monitor provides a single web console with performance and activity monitoring, alerting, and diagnostic drilldowns (e.g., waits, blocking chains, deadlocks, top queries, plan-level details). You can also mark deployments and other events on the timeline (via built-in annotations or API/PowerShell) to correlate changes with behavior.

Cross-platform

Redgate calls this hybrid monitoring for hybrid estates. Redgate Monitor supports SQL Server and PostgreSQL broadly, and (as of 2025) has added support for Oracle, MySQL, and MongoDB. Monitoring spans on-prem and cloud (e.g., Azure SQL, Azure Managed Instance, Amazon RDS, EC2, GCE). The back-end database can be either SQL Server or PostgreSQL with TimescaleDB. The software is typically self-hosted.

Performance metrics

Redgate Monitor collects metrics and provides drilldowns for query execution statistics, long-running queries, blocking chains, and plan-level details, with estate overviews for trends and outliers. The documentation groups this under performance and activity monitoring and performance diagnostics.

Management

Redgate’s management tools focus on daily DBA administration. It includes backup and recovery, multi-server scripting, documenting and visualizing dependencies, index maintenance, and classifying data across the estate.

Software you may need

  • SQL Backup Pro for backup/restore
  • SQL Multi Script to run scripts across many servers.
  • SQL Scripts Manager for the curated script library
  • SQL Doc and SQL Dependency Tracker for documentation and visualization
  • SQL Index Manager for index maintenance
  • SQL Data Catalog to classify and label sensitive data

Note: Redgate Monitor doesn’t perform database changes. Its REST API is read-only for data access. The PowerShell API is for Monitor configuration and annotations, not schema or data deployment.

Scalability

For growing estates, Redgate documents two scaling patterns:

Split components (Web Server, Base Monitor service, and repository database) onto separate machines as counts rise (often beyond ~50 servers).

Add additional Base Monitors when the load gets high; “200+ servers” per Base Monitor is the point where a second becomes advisable. Monitor can roll up multiple Base Monitors into one UI. (They also cite scenarios like geo/DMZ boundaries.)

For environment provisioning at scale, Redgate Clone and SQL Provision let you create lightweight, masked clones for many dev/test databases quickly; Flyway (migration-based) and SQL Compare (state-based) handle scalable change delivery when DBAs also own deployments.

Automation

Redgate Flyway provides CI and CD automation for database change management across multiple platforms.

Compliance

Compliance coverage is built around classifying sensitive data and controlling how copies are created for non-production use.

  • SQL Data Catalog helps teams identify and track sensitive data.
  • Data Masker applies policy-driven masking rules before data moves downstream.
  • SQL Provision is designed to bring cloning and masking together, so development and testing environments are created in a governed way.

Reporting

Redgate Monitor creates reports, customized with tiles, scheduled, and emailed as PDFs. According to the docs, built-in reports are available for SQL Server targets. For broader customization or fully branded outputs, teams commonly use the read-only REST API to pull data into their own BI tooling; scheduling is done in the UI, and the REST API is for data retrieval rather than creating reports.


4. IDERA

IDERA offers a SQL Server focused toolset with monitoring, diagnostics, and adjacent admin utilities. This section highlights the pieces DBAs use most and how they fit day to day work.

Best for: SQL Server-centric estates (with some MySQL/MariaDB) that want deep SQL Server monitoring and diagnostics, optional wait/trace-driven tuning, and packaged tools for auditing, security checks, backup, jobs, and index maintenance.

Free trial: Free trials are available (SQL Diagnostic Manager advertises a 14-day trial; other products have trial pages as well).

Pricing: Generally quote-based.

Monitoring

SQL Diagnostic Manager (SQL Server) provides performance/availability monitoring, alerting, and diagnostics (desktop + web console), with history/baselines, lock/blocks/deadlock insight, AG awareness, and reporting (including SSRS deploy). SQL Diagnostic Manager for MySQL is agentless with 600+ monitors/advisors and RDS support.

Cross-platform

Coverage is strongest for SQL Server and MySQL/MariaDB. SQL DM supports SQL Server on Windows/Linux and can run/monitor in cloud VMs; the MySQL edition covers on-prem and Amazon RDS for MySQL/MariaDB. For additional engines (Oracle/Db2/Sybase), IDERA lists Precise software for multi-platform database performance monitoring separately.

Performance metrics

Out of the box: real-time and historical metrics, configurable baselines/thresholds, waits/blocks/deadlocks, Top SQL, query plan views, and workload analytics. Optional add-ons: SQL Workload Analysis (granular wait-state and top SQL analysis) and SQL Query Tuner (visual tuning, batch tuning, recommendations).

Management

IDERA leans observability + adjacent admin tools rather than schema change. For operations you’ll typically pair: SQL Safe Backup (policy-based backup/restore, compression, instant restore), SQL Enterprise Job Manager (centralized Agent job monitoring/management), SQL Defrag Manager (index maintenance), SQL Inventory Manager (discovery/ownership/alerts), and SQL Doctor (prescriptive tuning advice).

Scalability

Self-hosted architecture with a repository and consoles; designed to span hundreds of instances via agentless collection, baselines, and alert noise controls. Cloud-VM deployments are supported for hybrid estates; MySQL edition is agentless for fleet coverage.

Automation

Automation is event-driven: alert Action Responses (e.g., send as event/ServiceNow), scripts, and PowerShell can trigger workflows; backup, job, and defrag tools are policy/schedule driven. (IDERA monitoring itself doesn’t deploy schema/data changes.)

Compliance

SQL Compliance Manager provides policy-based auditing, alerting, and reporting for SQL Server activity and sensitive columns (framework mappings called out include PCI DSS, GDPR, HIPAA, SOX, etc.). SQL Secure analyzes effective permissions, flags risky settings, and generates audit-friendly reports/recommendations. These tools support compliance processes; they don’t certify compliance on their own.

Reporting

SQL DM supports built-in reports (and SSRS deployment) for health/performance; other tools include scheduled and templated reports (e.g., compliance/security summaries). Teams needing custom/branded outputs commonly export to SSRS or external BI.


5. Datadog

Datadog is strongest when you need one place to connect query behavior with application and host signals. It complements DBA toolchains rather than replacing them.

Best for teams that want a SaaS observability platform that puts database visibility alongside apps, hosts, and logs in one place.

Free trial: 14-day free trial for new accounts

Pricing: Public, usage-based pricing by product.

Monitoring

Database Monitoring (DBM) is read-only and surfaces historical Query Metrics and per-statement Query Samples (with explain plans), plus host/managed-service metrics. It highlights long-running and blocking queries and ships out-of-the-box dashboards for managed services like Amazon RDS and Azure SQL. You can add Deployment Tracking markers to correlate changes with behavior.

Cross-platform

DBM supports self-hosted and managed PostgreSQL, MySQL, SQL Server, Oracle, MongoDB, and Amazon DocumentDB, and integrates with cloud databases such as Amazon RDS/Aurora, Google Cloud SQL, and Azure SQL (incl. Managed Instance). Datadog is delivered as SaaS with an Agent you deploy close to each database.

Performance monitoring & diagnostics

Datadog correlates traces, metrics, logs, RUM, and database telemetry to localize latency, errors, and resource usage across services. Read-only diagnostics span APM and Continuous Profiler, with SLOs and alerting to track performance over time.

Management

Datadog is observability, not database lifecycle management—no native backups, index jobs, or user/patch management. Operationally, teams use Monitors/Alerts, Infrastructure Monitoring, Logs, Notebooks, and Incident Management; actions can be triggered externally via Workflow Automation.

Scalability

Control plane scale is handled by Datadog’s hosted backend. Collection scales via Agents (VMs, containers, Kubernetes). For large estates, govern growth with log indexing/exclusion filters, quota/usage controls, and documented API rate-limit increases; place agents near databases and use tags for roll-ups.

Compliance

Datadog doesn’t ship database policy packs or enforcement.

Reporting

Dashboards can be scheduled as high-density PDF reports (email/Slack), publicly shared via links, and Notebooks support narrative, data-driven write-ups. Scheduled CSV reports for Logs are available for periodic extracts.

Customization

Dashboards are highly customizable: widgets, template variables (with saved views), and JSON editing per widget/dashboard. You can export data via APIs or build Notebooks for bespoke views; the docs don’t advertise full white-label branding removal.

The Right SolarWinds Alternative for You

Choosing the right database monitoring platform depends on your environment, scale, and compliance needs. The five options covered here each bring strengths that make them worthy SolarWinds alternatives. Find the one that best suits your organization’s needs. 

To help you, the table below gives a quick side-by-side comparison of the top SolarWinds alternatives for database monitoring.

An infographic comparing the top SolarWinds alternatives for database monitoring.

See a SolarWinds Alternative

Book a demo with dbWatch

The post Top 5 SolarWinds Alternatives for Database Monitoring appeared first on dbWatch.

]]>
How a Managed Service Provider for Databases Found a Secure Management Method https://www.dbwatch.com/blog/secure-database-management-msp/ Thu, 14 Aug 2025 13:33:08 +0000 https://www.dbwatch.com/?p=19169 A Nordic-based Managed Service Provider for Databases added dbWatch Cloud Router to replace complex, unreliable VPNs with secure, scalable access.

The post How a Managed Service Provider for Databases Found a Secure Management Method appeared first on dbWatch.

]]>

Any Managed Service Provider who has tried to scale their database services knows the hard truth: at some point, adding another client behind a VPN router creates more work than the client is paying for.

A Managed Service Provider in the Nordics solved their scaling issue with Cloud Router from dbWatch. Cloud Router is secure, solved their VPN headaches, and put all their clients into a single pane of glass, giving them a total overview of every customer while keeping them logged into each customer's environment. As a result, they can solve problems quickly.

Key Advantages of dbWatch

  • Scalability for additional clients. Bring all your databases into the same system without needing multiple VPN solutions. Pricing also scales if you join the MSP Partner program.
  • Better security. VPNs provide a pathway for hackers to enter your system and your customers' systems. Cloud Router creates secure connections, and dbWatch Control Center keeps a historic tracking log, so you see everything that happens on database servers.
  • Secure Outbound Connectivity uses encrypted, outbound-only communication to remove VPN vulnerabilities and ensure safe, persistent access without relying on customer-side configurations.
  • Faster Setup and Maintenance simplifies onboarding with a lightweight agent, reducing setup time from hours to minutes and cutting down on support overhead across all customers.

Limitations of VPNs as a Managed Service Provider 

From the beginning of their database monitoring work, the MSP installed and configured VPN routers to monitor and interact with their customers' environments. Each customer had a unique setup and needed a technical person on their side to manage the VPN.

In the beginning, the VPNs worked well. The chief DBA at the MSP remembers, "It was just fire and forget." But it didn't scale. At around five customers, the VPNs started to overlap, with the same address space, forcing the MSP to turn VPN connections on and off. In addition, some customers only wanted the VPNs open for a set window of time.

Soon the setup became too complex to manage. Having certain VPNs open at the same time started to cause software conflicts due to incompatibilities between some of the encryption technologies. Multiple VPNs had to be opened, monitored, and closed daily.

Then there was the cyber-security risk. Because both sides are connected to the VPN and no one can easily see if there's an attack, the risk of using VPNs is doubled. While the MSP kept their customers in a separate network to contain an attack, few of their customers used the same precaution, nor did the customers' suppliers.

When VPN keys and log-ins timed out, the client contact person was needed to re-open the connection. If the contact person was out of office, the process was time-consuming.

The Chief DBA acknowledges that other MSPs might connect differently with their clients. "You can get connect websites and other options. However, we're focused only on monitoring databases and database servers, and the VPN is what we needed."

Sometimes the hardware box would simply fail. Maintaining the box and its connections took too much time. The VPNs also required a technical contact person on the customer side, who had to set up the VPN initially and then remain on call for when the connection failed. The time the MSP spent managing VPNs cut into their profit margin.

dbWatch is so much easier than VPNs, we never want to go back.

A New Solution to an Old Problem 

The MSP began using the Cloud Router package in 2022, participating as beta testers before its official release. Cloud Router replaces site-to-site VPNs with an outbound-only, encrypted connection initiated by a lightweight agent. It’s a flexible installation, with the ability to be on-site at the MSP, at the customer or in the cloud. This Nordic MSP manages databases located at the customer’s site. The approach eliminates the need for exposed inbound access or static IP allowlists, reducing both complexity and risk. 

The deployment process is standardized: the MSP installs the dbWatch server locally and configures the connection to the Cloud Router hosted by the MSP. The MSP then connects to the Cloud Router to see all their different customers.

The new setup enabled the MSP’s DBA team to view all customer environments from a single control center, with full visibility into each database server’s health, performance, and alerts. Instead of managing dozens of overlapping VPN tunnels, they now use an interface where they can switch instantly between monitoring and management modes, without re-authenticating or disrupting workflows. 

The transition also resolved several operational problems. Access is persistent and no longer depends on time-sensitive VPN keys. If a connection drops, it's logged and restored automatically, without requiring action from the customer side. Because the communication is outbound and encrypted end-to-end, the security surface is minimized and monitored, making the setup both safer and easier to audit.

An illustration of how cloud router works.

The graphic above shows the concept of dbWatch Cloud Router.

The Benefits of Using Cloud Router 

The MSP was hooked on Cloud Router before the beta testing finished. “It was so much easier than VPNs, we never want to go back.” Here are some of the many benefits they gain: 

Scalable Access 

Now one person can easily manage 30 customers. No longer does the DBA team spend time logging in and out of VPNs each week to ensure the connections work. Nor do they have those ‘Murphy’s Law’ moments where a customer’s system is suddenly down and the VPN key has decided not to work. 

Improved Security and Reliability 

With the outbound-only connection, there’s no need to worry about each customer’s IT hygiene. In addition, there’s no problem with VPN keys.  

Time Savings 

Now that there aren’t multiple VPNs to track, one DBA can handle more clients. In addition, the team can solve issues on first detection without waiting for access or customer contacts.  

Safe, Secure Connections

Learn how your company can benefit from Cloud Router.

The post How a Managed Service Provider for Databases Found a Secure Management Method appeared first on dbWatch.

]]>
Custom Database Monitoring Jobs https://www.dbwatch.com/blog/custom-database-monitoring/ Mon, 04 Aug 2025 16:57:36 +0000 https://www.dbwatch.com/?p=19151 Tailor database monitoring jobs to your business needs and stop issues before they cause damage.

The post Custom Database Monitoring Jobs appeared first on dbWatch.

]]>

Undoubtedly, your organization has issues that are specific to its business set-up. Some of these problems and issues are tied to databases.  

If your database is part of the problem, it can also be part of the solution. You can use custom database monitoring jobs that detect early indicators and trigger alerts before full-scale issues develop. These monitoring jobs improve database reliability while saving time on reactive fixes.  

How Does Out-of-the-box Monitoring Work? 

Customized monitoring makes more sense once you're familiar with the default monitoring jobs. dbWatch ships with a set of generic monitoring jobs that cover the laundry list of checks nearly everyone needs. That list grows yearly as customers tell us about a monitoring job they need. If it is likely an issue for others, we create the job and push it into the main product.

dbWatch also allows customized jobs, so you can tailor detection to alert on your own red flags. Usually, these jobs are so individualized that it doesn't make sense to include them in the product: installing them would cause errors because the relevant data doesn't exist in most environments.

How Does Customized Database Monitoring Work?

Using customized monitoring means starting with a query or pattern that reflects a real problem. Then, you build a job that alerts you when that query or pattern happens again. dbWatch has customized monitoring because each business needs to be able to develop database monitoring jobs for their business case.  
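In essence, a custom monitoring job runs a query that encodes a known problem pattern and alerts on its result. Here is a minimal sketch of that idea in Python; the function, the stale-rows check, and the 24-hour threshold are hypothetical illustrations, not dbWatch's actual job framework:

```python
# Minimal sketch of a custom monitoring job: run a check that encodes a
# known problem pattern and alert when it matches. fetch_count stands in
# for a real query, e.g. SELECT COUNT(*) ... WHERE updated < now - 24h.

def check_stale_rows(fetch_count, max_age_hours=24, alert=print):
    """Alert when the hypothetical 'rows untouched for too long' query
    returns a non-zero count; return whether an alert was raised."""
    stale = fetch_count(max_age_hours)
    if stale > 0:
        alert(f"ALERT: {stale} rows untouched for over {max_age_hours}h")
        return True
    return False
```

The same shape works for any condition you can express as a query: the job runs on a schedule, evaluates the result against a threshold, and routes an alert to whoever owns the problem.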

The problems these monitoring jobs fix don't always look like database issues. Below are several industry-specific use cases where customized monitoring jobs have saved companies time and money.

Saving Thousands of Dollars by Alerting when Syncing Fails  

In our home base of Norway, many ferry companies transport people over fjords. They often sell tickets on the boats, using hand-held systems containing credit card information. 

Several of these companies use dbWatch to monitor and manage their databases. One of our technicians, Per, visited them on-site for a day to set up 50 databases and service their systems.  

Over lunch, they mentioned to Per that they had a problem with their hand-held systems. Theoretically, each hand-held system should sync every time the ferry docked, using the docking station's WiFi. In reality, a system sometimes stopped syncing but kept adding new credit card information, overwriting the older information as the memory card filled. In the high season, this could quickly add up to losses of USD 5,000 a day per machine.

Over a second cup of coffee, Per got the details and then offered to fix the problem with a customized monitoring job. The job checks each hand-held system daily, noting if the system has connected with the WiFi once in the last 24 hours. When a system fails to connect, the person responsible for hand-held systems receives an email alert and can act to ensure the data is saved.  
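The core of that job can be sketched in a few lines. This is a hypothetical Python illustration of the check described above (the device IDs, data model, and 24-hour window are assumptions), not the actual dbWatch job:

```python
from datetime import datetime, timedelta

# Sketch of the daily sync check: flag any hand-held terminal whose
# last recorded sync is more than 24 hours old. Each flagged device
# would trigger an email to the person responsible for the terminals.

def find_unsynced(last_sync_by_device, now, max_gap=timedelta(hours=24)):
    """Return the IDs of devices that have not synced within max_gap."""
    return sorted(dev for dev, ts in last_sync_by_device.items()
                  if now - ts > max_gap)
```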

Example of a ferry where a terminal is used to buy and sell tickets.

Broadcaster Ensures the Correct TV Program is Published 

A national broadcaster in Europe needed to ensure that their systems pushed out the information for schedules and programs on TV. Sometimes, the system locked up and didn’t push correctly, old data was published, or incorrect information appeared. Because the system was developed in-house, they needed a unique way of ensuring everything would update automatically. 

Using dbWatch, they implemented a monitoring job that alerts them when the database doesn't push the TV program correctly. The job checks both ends to ensure the data has been pushed and has arrived. When either end has a problem, an alert is texted to the DBA on duty, who can correct it so someone who tunes in for the news doesn't find a documentary about underwater weaving instead.

A Large SaaS Alerted When Key Job Fails to Fire 

A large SaaS company has over 1,000 database servers to monitor and manage. dbWatch wrote a customized monitoring job for them: the company needed to know when a specific repeating job within their software failed to fire on their clients' databases.

When the job doesn’t fire, the customized monitoring job deploys to make it fire. If the job doesn’t fire, the DBAs are alerted and can implement a manual start.  

Identifying Problems that Normal Monitoring Can’t Catch 

Many organizations have operational issues that are highly specific to their business case. Often, these issues don’t appear as traditional database issues, which means the standard out-of-the-box jobs miss them.

If you are developing your own application, you can also create monitoring jobs that fit that application, using the dbWatch framework. This makes it possible to deploy monitoring alongside your application so any problems are caught early and addressed before they affect users.

When left without monitoring, these issues can cause serious disruptions, financial loss, or damage to a company’s reputation. This is where dbWatch can make a difference. You can create custom monitoring jobs tailored to your specific business set-up. Instead of relying only on predefined checks, you can define the exact query that signals a problem. Then, you’ll be notified when certain conditions occur, and you’ll be able to stop the problem before it starts.  

Webinar: Customized Monitoring

Learn how to customize your monitoring jobs with dbWatch.

The post Custom Database Monitoring Jobs appeared first on dbWatch.

]]>
How to Build Support for Buying Compliance Database Tools https://www.dbwatch.com/blog/how-to-build-support-for-buying-compliance-database-tools/ Wed, 07 May 2025 10:16:49 +0000 https://www.dbwatch.com/?p=17200 Learn how to secure stakeholder buy-in. Get practical examples to help you show value to leadership that align with their priorities like audit readiness, operational efficiency, and continuous security.

The post How to Build Support for Buying Compliance Database Tools appeared first on dbWatch.

]]>

Achieving Compliance in Databases is More Than a Checklist

Usually compliance starts with a policy or a checklist, but the reality is messier than a simple list. Security standards are broad, environments vary, and requirements change. Often, you’ll manage more than one system, up to hundreds of database instances, and each has its quirks.

How to Make the Business Case for Database Compliance Tools

While you know that a good tool will help you reach and maintain compliance, you're likely not in charge of the budget. That means you'll need to take off the DBA hat and put on the sales hat: you'll have to convince leadership why you need the tool.

If you’re anything like the DBAs we have working at dbWatch, there’s a reason you aren’t in sales. So we’ve put together ways you can frame your conversation with decision-makers to help them understand the value of a tool.

A visualization of how dbWatch helps DBAs reach database compliance.

Time Is Money and Manual Work Doesn’t Scale

Manual checks take tons of time. Leadership may not see this cost until something breaks or a review goes badly. Your goal is to help them see the price from the beginning. Here’s an idea of how you can do your own cost calculation to show them how much manual compliance checks cost in terms of your time.
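As a hypothetical illustration of that calculation, here is the kind of back-of-the-envelope arithmetic you can show leadership. Every figure below is an assumption; replace them with your own numbers:

```python
# Back-of-the-envelope yearly cost of manual compliance checks.
# All inputs are illustrative placeholders, not measured values.

def manual_check_cost(instances, minutes_per_check, checks_per_year,
                      hourly_rate):
    """Return (hours per year, cost per year) of checking every
    instance by hand."""
    hours = instances * minutes_per_check * checks_per_year / 60
    return hours, hours * hourly_rate

# Example assumptions: 50 instances, 30 minutes per check,
# quarterly checks, $80/hour.
hours, cost = manual_check_cost(instances=50, minutes_per_check=30,
                                checks_per_year=4, hourly_rate=80)
# 50 * 30 min * 4 checks = 100 hours/year, i.e. $8,000/year at $80/h
```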

The Cost of Staying Compliant is Monitoring and Tracking

Compliance never lasts. Maybe someone creates an account for a consultant and forgets to remove it. Perhaps a script reverts a configuration. These little changes happen constantly, and logically, if you’re not watching, you are none the wiser. Without tracking, you have no hope of maintaining compliance. You need a monitoring tool that can track these changes and alert you.

Understand Stakeholder Priorities for Compliance Databases

When asking for a compliance tool, it’s tempting to talk about how much work it would save or how frustrating the current process is. But that’s not what decision-makers care about. They’re thinking about risk, budget, database auditing, and operational reputation. They want to know:

  • What happens if we don’t do this?
  • What will it cost if we fail?
  • Can this be handled with what we already have?
Stakeholder Concern | How You Can Frame It
Audit risk | A single misconfiguration can cause a failed audit. A tool helps us catch that early.
Team capacity | We can either spend 100+ hours rechecking work manually, or track it automatically.
Scaling the environment | Compliance across five databases is doable. Across 50 or 500? That needs structure.
Proof and accountability | We don't just need to be compliant; we need to prove it consistently and defensibly.
Business alignment | This isn't about a checklist. It's about reducing risk, protecting the business, and building client trust.

Conclusion: Build a Case That Lasts Beyond One Audit

The work of compliance doesn’t end when the report is submitted. It continues in the background. If your team manages this manually, you’re always one change away from a setback. A tool like dbWatch gives you control. It turns compliance into a continuous, trackable process. It helps your team scale without sacrificing quality and provides leadership the confidence that security is carefully managed.

If you’re ready to make the case for better tooling, focus on what your stakeholders care about: risk reduction, operational efficiency, and proof that your environment is secure. With the right database compliance solution, you’re proactively managing risk system-wide. dbWatch helps you deliver on all three.

Reach and Maintain Security and Compliance

Discover the dbWatch Use Case for security and compliance.

The post How to Build Support for Buying Compliance Database Tools appeared first on dbWatch.

]]>
Why Embriq Added dbWatch to Their Tech Stack https://www.dbwatch.com/blog/why-dbwatch/ Fri, 02 May 2025 07:33:18 +0000 https://www.dbwatch.com/?p=17171 Embriq added dbWatch to their tech stack for cost-effective, proactive database monitoring, efficient patch management, and faster issue resolution, helping DBAs like Morten Gausen maintain reliable and streamlined operations.

The post Why Embriq Added dbWatch to Their Tech Stack appeared first on dbWatch.

]]>

Embriq is a Norwegian tech company and a leading provider of advanced IT operations, software, industrial IoT technology, and consulting services. The company operates in both Norway and Sweden, serving customers across the Nordic region.

Embriq assists its customers in simplifying complexity, ensuring their mission-critical operations, and optimizing their competitiveness.

Key Advantages of dbWatch

  • Proactive Database Monitoring provides a clear status overview of servers and an intuitive color-coded system to highlight urgent issues.
  • Patch Management and Reporting tracks patch updates, noting which servers need updates and providing URLs for new patches.
  • Efficient Issue Resolution helps identify, troubleshoot, and resolve database problems like blocking quickly, before they cause significant damage.

Especially for companies without a DBA tool, dbWatch is an easy decision because it doesn't come with the high cost of other solutions.

Database administrators (DBAs) working with managed service providers (MSPs) need the right tools to support their efficiency and reliability. When Morten Gausen joined Embriq as a Senior SQL DBA, he had one specific request for his new employer's tech stack: he wanted dbWatch. For Morten, dbWatch was an essential part of his daily workflow, and he pushed for the tool to be purchased as part of the DBA role.

The decision didn’t require a debate or lengthy approval process. Morten made a comparison of the capability and price of database monitoring systems available for his managers. They quickly saw that dbWatch was very cost-effective and could add value to their MSP, making it a clear choice.

Each workday, Morten begins with dbWatch. “I go into dbWatch and see the status of the servers,” Morten explains. “Then I check the emails that have come in overnight to get a history of any issues.” dbWatch’s intuitive color-coded system simplifies this process and supports proactive work. Red clearly denotes any urgent problems, like problematic servers, alarms, or potential failures. “It makes troubleshooting faster and more efficient. I can see which servers are okay and which need attention.”

Beyond the monitoring function, dbWatch also plays a key role in tracking patches and generating reports. The dbWatch patch management feature notes the servers that need to be updated and provides the URLs of the new patches. DBAs can also set custom delays to ensure patches are stable before deployment.

Learn more about patch management in dbWatch in our Wiki.

dbWatch makes troubleshooting faster and more efficient. I can see which servers are okay and which need attention.

Why dbWatch Belongs in Your Tech Stack

Various database monitoring tools are on the market today, but for Morten, dbWatch stands out for its affordability, support, and ease of use. “The product’s price sets it apart,” Morten said. “Especially for companies without a DBA tool, dbWatch is an easy decision because it doesn’t come with the high cost of other solutions.”

Without a dedicated tool, DBA teams often build their own tools and have manual routines for monitoring and reporting – reinventing the wheel at every step. With dbWatch, Morten and his team avoid wasting time and have a tool that works out of the box. "It's very easy to deploy," Morten said. "It's simple to add servers; you don't need a large system to run it. You just need to open the correct ports between what you're monitoring and the monitoring server."

Real World Examples

With dbWatch constantly monitoring, Morten and his team can work proactively and address issues before they become critical. In one case, dbWatch flagged a database corruption issue that would have gone unnoticed until it had shut down the database. Morten recalls, “When we looked into it, we could use dbWatch to see what was happening; we didn’t even have to go into the server logs.” Because they identified the issue early, Morten and his team could restore the database from the correct backup point, avoiding downtime.

Blocking issues are another common challenge that dbWatch helps manage. One morning, Morten found a severe overnight blocking incident caused by a failed maintenance task. “When I checked dbWatch, I saw a huge issue. I went into the dbWatch GUI, located where the block was happening, and could click on the blocking session to stop it.” Because Morten had real-time insights, he could immediately act on the issue rather than contacting someone else to release the block.

dbWatch is an integral part of Morten’s database management strategy. It enables him to:

  • Detect and resolve issues proactively before they impact operations.
  • Track and manage patch updates efficiently.
  • Quickly identify and address database locks and blocking issues.

Manage Databases Efficiently

See how dbWatch works to enhance your database management.

The post Why Embriq Added dbWatch to Their Tech Stack appeared first on dbWatch.

]]>