AI Snake-Oil Salesmen: How to Spot Them https://dbakevlar.com/2026/03/ai-snake-oil-salesmen-how-to-spot-them/ Tue, 17 Mar 2026 00:00:32 +0000 Outside of my work at Redgate, I serve as an AI Advisor to organizations and individual investors. It’s a role I genuinely enjoy and it gives me the opportunity to help drive clear direction around AI projects, policies, and governance. But one increasingly frustrating pattern has emerged from this work over the past year, and I think it’s worth talking about: the rise of what I’ve started calling the AI snake-oil salesman.

Whenever a technology disrupts an industry as dramatically as AI has disrupted tech, the profit opportunity attracts not just innovators but also opportunists, con artists, and even charlatans peddling fake products all dressed up in impressive language and AI branding. I’ve encountered them on advisory projects and in investment reviews, and I’ve gotten good at recognizing the patterns. These red flags aren’t secret, and I truly believe anyone can learn to spot them, so I figured I’d save the technical community some due diligence and share what I’ve collected.

The Red Flags

  1. They have a solution without a problem. Or worse, they’re actively shopping for a problem their product can pretend to solve.
  2. Their website contradicts their pitch. They’ll tell you AI is at the core of everything they do, but their web presence was built with basic traditional tools and shows no evidence of the sophisticated infrastructure they’re describing. The contradiction is the tell, with no React code in sight.
  3. All their content reads like it was written by AI. Lots of buzzwords, impressive-sounding phrases, and absolutely nothing substantive about how their AI actually functions or what’s under the hood.
  4. In person, they speak in magic, not mechanics. Whether they’re talking about RAG, machine learning, or generative AI, it’s all buzzwords and hand-waving. Ask them about workflows, architecture, security, or frameworks and watch the conversation go sideways.
  5. They ask what your problems are before telling you what their product does. This is a classic manipulation pattern. They’re reverse-engineering their pitch in real time to match whatever pain you just described.
  6. They target non-technical investors and unpaid technical talent. They’re looking for people with money who don’t know enough to ask hard questions, and developers willing to work for equity in something that doesn’t yet exist.
  7. They hide behind intellectual property. There’s a difference between protecting trade secrets and being unable to explain what you’re building. If someone can’t describe their product’s purpose clearly, without giving anything proprietary away, then that’s not caution, but a gap.

What Legitimate AI Work Actually Looks Like

For contrast: credible AI practitioners can explain their architecture in plain language. They know what problem they’re solving and why AI is the right tool for it and not just the trendy one. They welcome technical scrutiny rather than deflecting it. And they don’t need your investment before they can tell you what they’re building.

A Note on Hope and Reality

Every time I encounter one of these, I genuinely hope I’m wrong. I’d love to hear a year later that they found their footing and built something real. But so far, without exception, every AI snake-oil salesman I’ve flagged is still searching for traction, or worse, still searching for a problem, and often it’s well over a year later that I’m seeing this.

Do your due diligence. Ask hard questions. And if someone can’t answer them, that’s your answer.

The Bosses Who Make a Difference https://dbakevlar.com/2026/03/the-bosses-who-make-a-difference/ Mon, 16 Mar 2026 01:03:36 +0000 There’s a piece of advice I give at least once to every person I mentor:

“Your job is only as good as your boss.”

Most people agree with me, and then there are those who know it’s the only fact that matters.

This is the first in a series of posts about the managers who shaped me. Not the cautionary tales; we’ve all collected enough of those. These are the ones who got it right, often without recognition, and for some of them, their influence didn’t stop just because I stopped reporting to them.

Knowing How You Work

Before I talk about the first individual, a little context helps.

I’m wired to fill gaps. I walk into a role, scan for what’s missing, broken, or quietly given up on, and I go fix it. A year into a job, I’m rarely in the role I was hired for, because something more necessary has surfaced and I’ve convinced management it’s worth letting me do that instead. For the right manager, that’s a gift. For the wrong one, it’s a headache.

I’m a master at reading patterns, sometimes uncomfortably early, and that includes understanding the patterns I bring into the workplace. Knowing this about yourself matters, because it shapes everything, including which environments let you thrive, which ones grind you down, and which bosses will bring out your best.

Steve Ridley was one of the best, and he’s the boss I’m going to start this series with.

eToys, KB Toys, and a DBA Odd Couple

For just over two years, I worked at KB Toys, supporting their Oracle data warehouse and SQL Server systems. My colleague Terry Riddle handled the OLTP Oracle side and avoided the other systems as much as humanly possible. We were oil and water: Terry was quiet, methodical, content to stay in his lane; I was constantly stretching, redesigning, tuning, taking on more than was probably reasonable. Terry had been at the company for eight years to my two, and yet we balanced each other perfectly, and Steve knew that would happen when he hired me away from my previous employer. I’d take on the world and Terry would catch the details I blew past, while I’d speak up in rooms and manage the chaos Terry preferred to avoid. Pages at night wouldn’t bother Terry, while I had little patience for anything but zero downtime and wanted my sleep. We were each other’s yin and yang, creating an incredible database environment harmony that served the organization well.

Steve gave us both the space to operate that way. He didn’t try to flatten us into the same shape and that gift to a team is rarer than it sounds.

The Conversation I Didn’t See Coming

It was early December 2008. The real estate crash was rippling through retail. I could see the numbers deteriorating, since I was the one who moved the end-of-year sales data to compressed partitions; I could see the problems as clearly as accounting could. But I believed in the company and hoped we could recover.

Steve called me into his office. He asked if I’d been following the company’s outlook, to which I said yes. He asked if I was looking for a new job, but I got emotional and told him I was in it for the long haul. He stopped me mid-sentence. Calmly and directly, he said: “Kellyn, you NEED to start looking for a new job today.”

That moment, in which Steve chose honesty over comfort, at personal cost to himself, is one of the clearest examples of genuine leadership I’ve ever experienced. It would have been easier to say nothing. He said the hard thing anyway, because it was what I needed.

The company folded three weeks later.

The Part That Proves the Point

You might think the story ended there, but after KB Toys closed, most of us stayed connected through the early days of social media. We kept communicating online and meeting for happy hours, coordinated through Facebook and Twitter. Terry didn’t, so most of us lost touch with him after the company dissolved.

Almost a year later, I found myself suddenly out of work after a consulting gig went sideways (documented in the article, “The high cost of accepting a low salary”), when I got a call from a company interested in interviewing me. I walked in, sat down in the conference room, and noticed a name written across the whiteboard in several places: Terry.

Before I could process it, Terry Riddle walked in.

What had happened was that Terry called Steve to let him know he was looking. Steve, still keeping tabs on both of us, saw via social media that I was available and sent Terry my resume. Two people who used to work for him, reconnected and rehired through a network he quietly maintained. Terry and I worked together at this new company for three more years. Until recent years, I had always thought the company had come across my resume on their own and that it was a fluke that Terry just happened to work there.

Steve had no obligation to make the recommendation or send over my resume. We weren’t his employees anymore, but he did it anyway.

What Good Bosses Actually Do

Good bosses don’t just create good working environments. They build something that outlasts the org chart.

They know you well enough to tell you what you don’t want to hear. They give you room to work in the way you are most productive. And when you’ve moved on, even when there’s nothing in it for them, they’re still in your corner.

That’s the bar, and Steve Ridley is one of the bosses who cleared it.

This is the first of many posts highlighting when bosses get it right. Let me know if it resonates, and share your own story in the comments if you have one.

PostgreSQL with Windows and The Code Page Trap No One Warned You About https://dbakevlar.com/2026/02/postgresql-with-windows-and-the-code-page-trap-no-one-warned-you-about/ Tue, 17 Feb 2026 19:29:11 +0000

Running PostgreSQL on Windows feels deceptively simple for anyone with a Windows laptop who just wants a local database to test or demo on. Just a few clicks and I’ve installed it, started the service, opened psql, and I’m up and running.
Except… not quite.
Because if you’ve ever seen this message:

Console code page set to 1252 for psql compatibility

You’ve stepped into one of the more subtle, frustrating challenges of running PostgreSQL on Windows. So, let’s talk about why this happens, what it means, and why it matters more than you might think…

PostgreSQL Is UTF-8 and Windows Is Not

For most, if not all, database professionals, UTF-8 is immediately understood. It’s a character encoding standard that represents text as bytes so computers can store and transmit it. Because computers store data as bytes, text must be converted into numbers. Early systems used ASCII, which supported 128 characters and covered basic English letters, numbers, and punctuation. ASCII worked, until the rest of the world showed up. It wasn’t able to handle:

  • Accented characters
  • Non-Latin alphabets
  • Asian languages
  • Emoji
  • Mathematical symbols.

Due to this, different regions invented different encodings (Windows-1252, ISO-8859, etc.), and that created a bit of chaos. The same byte sequence could mean different characters depending on the system. Then Unicode came onto the scene.

Unicode

Unicode is a universal character set that assigns every character in every language a unique number, i.e. a code point. UTF-8 is the encoding that stores those code points as bytes; it’s backward compatible with ASCII and can represent every Unicode character. Most European characters use 2 bytes, while Asian characters commonly use 3 bytes and emoji use 4 bytes. This allowed UTF-8 to become dominant very quickly, as it doesn’t break ASCII and works everywhere. Estimates say over 95% of the web uses UTF-8 today.
For PostgreSQL, the database platform defaults to UTF-8, ensuring support for modern applications that expect it.

It’s almost unheard of to have data in a database that doesn’t require Unicode.
The Windows console, meanwhile? It’s living in 1998.
By default, Windows uses a legacy code page, most likely 1252 (Western European Windows).

That means:

  • Special characters may render incorrectly
  • Unicode characters may fail to insert correctly
  • Output may appear corrupted
  • Scripts may behave inconsistently

When psql starts, it often forces the console code page, reporting:

Console code page set to 1252 for psql compatibility

That message is telling you:

“We’re downgrading your console encoding to something Windows understands.”

And that’s where things start breaking, and did break for me, as I built out my pipelines and workflows with my local PostgreSQL databases.

The Real Symptoms

This usually shows up in subtle ways:

  • Smart quotes become garbage.
  • International names don’t round-trip correctly.
  • JSON containing Unicode characters fails validation.
  • PowerShell scripts behave differently than expected.
  • Migration files containing Unicode blow up.

Even worse? The error messages returned can be convoluted and not pertain to the original failure.
You insert data and then regret it immediately upon querying it in psql.
Now you’re debugging “Postgres encoding issues” that aren’t Postgres issues at all. It’s often more complicated than first suspected, too.
There are three layers at play:

  1. Database encoding (usually UTF8)
  2. Client encoding (SHOW client_encoding;)
  3. Windows console code page

On Windows, the console code page dictates how characters are interpreted and rendered. If your console is using 1252, but your database is UTF8, you’re creating a mismatch between what PostgreSQL expects and what your terminal can handle.
PostgreSQL isn’t broken, it’s just that Windows is speaking a different language.
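If you want to check all three layers at once, here’s a minimal sketch run from psql on Windows (the \! meta-command shells out to the console, which is how you can see the code page from inside psql):

-- Layer 1: database (server) encoding
SELECT datname, pg_encoding_to_char(encoding) AS server_encoding
FROM pg_database;

-- Layer 2: client encoding for this session
SHOW client_encoding;

-- Layer 3: the Windows console code page (shells out to cmd)
\! chcp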

Forcing UTF-8

The fix is straightforward, but the layers at which it needs to be addressed are rarely documented clearly.
Before launching psql, set your console to UTF-8:

chcp 65001

65001 is UTF-8.

If you’re using PowerShell, you may also want:

$OutputEncoding = [Console]::OutputEncoding = [System.Text.UTF8Encoding]::new()

Then launch psql and run:

SHOW client_encoding;

You want to see:

UTF8

And most importantly, your Unicode characters will behave correctly.
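If you want proof rather than faith, a minimal round-trip test will show it; the table name here is just something I made up for the sketch:

-- Create a throwaway table and insert multibyte strings
CREATE TEMP TABLE encoding_test (s text);
INSERT INTO encoding_test VALUES ('café'), ('日本語'), ('🚀');

-- With UTF-8 end to end, every row displays intact and
-- octet_length exceeds length for the multibyte values
SELECT s, length(s), octet_length(s) FROM encoding_test;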

The PowerShell with PostgreSQL Fix

If you’re automating PostgreSQL (Flyway, scripts, CI pipelines, AI-based SQL review tools), then you know the drill, and encoding mismatches become even more painful.
You may see:

  • Migration files failing validation
  • JSON output misformatted
  • AI review pipelines choking on non-ASCII characters
  • Environment variables not persisting correctly

And the root cause? The console was still using 1252 and kicking your butt.
When you see:

Console code page set to 1252 for psql compatibility

That’s your signal to take control of encoding before things get weird.


I had to ensure that when I performed Flyway migrations, I was encoding correctly as well. It’s just a small change, but it was essential to make sure there wasn’t any incompatibility.

flyway `
  "-configFiles=$($global:FLYWAY_CONF)" `
  "-url=jdbc:postgresql://$($global:PG_HOST):$($global:PG_PORT)/$($global:PG_DEMO_DB)?charSet=UTF8" `
  "-locations=filesystem:$($global:SQL_MIGRATIONS_DIR)" `
  migrate

Why This Matters

Ten years ago, this wasn’t as visible.
Now we have:

  • Multilingual applications
  • AI-generated SQL
  • JSON everywhere
  • Emoji in product names
  • Distributed teams across the globe

UTF-8 isn’t optional anymore.

I’m going to say this and may offend a few, but it’s important to remember this is how it runs, not an insult to Windows. It’s about a mismatch. I feel the same way about running Oracle on Windows: it’s not a Windows problem, but a matter of how Oracle supports Windows. So, when running PostgreSQL in a non-UTF-8 console, remember that it’s like running a modern data platform through a fax machine. A Windows PC is not a server, but when in Rome, become Roman.

A Better Long-Term Approach

If you run PostgreSQL frequently on Windows:

  • Set your terminal profile default to UTF-8.
  • Use Windows Terminal instead of legacy cmd.
  • Configure PowerShell to default to UTF-8.
  • Validate encoding in your CI pipeline.
  • Explicitly check client encoding.

You can even bake this into your scripts:

# Force UTF-8 for this PowerShell session
chcp 65001 | Out-Null
[Console]::InputEncoding  = [System.Text.Encoding]::UTF8
[Console]::OutputEncoding = [System.Text.Encoding]::UTF8
$OutputEncoding           = [System.Text.Encoding]::UTF8

Do it once. Put it into your global settings for any pipelines and any database login profile, and avoid hours of debugging.
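For the CI validation bullet above, even a simple psql gate helps. A sketch, assuming your pipeline runs psql and fails on unexpected output:

-- Fail the build if the target database or the session isn't UTF8
SELECT current_database(),
       pg_encoding_to_char(encoding) AS server_encoding
FROM pg_database
WHERE datname = current_database();

SHOW client_encoding;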

The Final

PostgreSQL on Windows works, but it’s not native and it’s not seamless.  Encoding is one of the areas where the cracks show first.
If your data platform includes PostgreSQL and Windows automation, then encoding is not a minor detail, but foundational.

And if you see:

Console code page set to 1252 for psql compatibility

Don’t ignore it, as it’s your early warning system.

SQL Identity and Plan Fingerprints https://dbakevlar.com/2026/02/sql-identity-and-plan-fingerprints/ Tue, 10 Feb 2026 00:29:49 +0000 I know, I’m back down this rabbit hole again…

As I’ve stated in other blog posts, I’m fascinated by the similarities and differences between database platforms, and when it comes to performance data, no two platforms are more alike, while being completely different, than Oracle and PostgreSQL. No matter how you view those differences, technical or philosophical, it’s worth researching regularly to see how close to and how far from each other they remain as the platforms evolve.

The truth is, when you’re troubleshooting performance, building baselines, or explaining why the same query behaves differently today, you eventually end up asking a deceptively simple question:

“How does the database identify this query and its execution plan?”

Both Oracle and PostgreSQL answer this question, but they do it in very different ways, reflecting fundamentally different design philosophies around optimization, observability, and stability. As I dive into this rabbit hole once again, I’m going to reflect on how Oracle’s SQL_ID differs from PostgreSQL’s query_id, and how two terms that sound so similar (PLAN_HASH_VALUE and query_hash) can be generated so differently, as well as misinterpreted. I’m guilty of it myself, so it’s a good place to spend some time.

The Core Concept: Query Text vs. Execution Plan

At a high level, every relational database needs to track two things:

  1. The query itself, i.e. what is the logical intent.
  2. How it was executed, i.e. the physical plan that was used.

Oracle treats these as separate first-class identities. I was trained to call the plan proposed by the optimizer an estimate and the actual plan used the execution plan. I view them as two different entities in the database, where one is a theory and the other is reality.
PostgreSQL treats the query as primary, with the plan as largely ephemeral, which is difficult to reconcile with my Oracle view of the world.

I think the most interesting thing is that this difference drives everything that follows in this post.

Oracle: SQL_ID and PLAN_HASH_VALUE

The SQL_ID is the Logical SQL Identity

A SQL_ID is Oracle’s fingerprint for a SQL statement’s normalized text. The SQL_ID remains constant for a query across databases, database versions, and even cloud or engineered systems. This identifier offers tracking across the entire Oracle ecosystem.

Key properties to understand about a SQL_ID:

  • It’s generated from a canonicalized version of the SQL text
  • Literal values are normalized
  • Whitespace and formatting differences are ignored
  • Same logical SQL results in the same SQL_ID

Take the following query:

SELECT * FROM orders WHERE customer_id = :1;

No matter what value is passed, the SQL_ID remains the same in each database it’s executed in.
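As a quick way to see the fingerprint for yourself after running the statement, pull it from the shared pool (the LIKE filter is just whatever identifies your statement):

select sql_id, plan_hash_value, executions
from v$sql
where sql_text like 'SELECT * FROM orders WHERE customer_id%';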

You’ll see and can track this same SQL_ID in Oracle’s management views, performance tools and reporting, including:

  • V$SQL
  • V$SQLAREA
  • V$ACTIVE_SESSION_HISTORY
  • DBA_HIST*
  • AWR reports
  • Oracle Enterprise Manager

Oracle assumes SQL identity matters long-term and will retain it for that purpose.

The Execution Plan Fingerprint

To identify the plan that’s used with a specific SQL_ID, we require the PLAN_HASH_VALUE.  The PLAN_HASH_VALUE represents the shape of the execution plan.

The PLAN_HASH_VALUE is calculated from:

  • The operations in the plan
  • The join order
  • Access paths, including any scans, hashes, sorts, etc.
  • Plan structure, but not cost or cardinality

If the plan shape changes, the hash changes, even if the SQL text does not.  This means that you can have a single SQL_ID with many PLAN_HASH_VALUEs attached to it.

SELECT SQL_ID, PLAN_HASH_VALUE FROM V$SQL
WHERE SQL_ID = '8g1k2m3n4p';

SQL_ID         PLAN_HASH_VALUE
------------- ----------------
8g1k2m3n4p     1234567890
8g1k2m3n4p     9876543210

This isn’t an accident, but an intentional way to track plan and performance changes in an Oracle database over time. Yes, Oracle expects multiple plans per SQL: not just that plans will evolve over time, but that active plan management will occur with features such as baselines, SQL Plan Management, profiles, etc.
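As a small, hedged sketch of that active management, reusing the hypothetical SQL_ID and plan hash from above, pinning a known-good plan as a SQL plan baseline looks roughly like this:

-- Load the current good plan for one SQL_ID as a SQL plan baseline
declare
  v_loaded pls_integer;
begin
  v_loaded := dbms_spm.load_plans_from_cursor_cache(
                sql_id          => '8g1k2m3n4p',
                plan_hash_value => 1234567890);
  dbms_output.put_line(v_loaded || ' plan(s) loaded');
end;
/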

How Oracle Presents This Data

Oracle makes this relationship very explicit and, with its extensive features around performance data, offers a wide range of ways the SQL_ID and PLAN_HASH_VALUE can be used to provide performance analysis.

Concept               Oracle Artifact
Query identity        SQL_ID
Plan identity         PLAN_HASH_VALUE
History               AWR
Runtime visibility    ASH
Plan control          SQL Plan Management

Oracle assumes DBAs will want to know about plan data and any changes that happen over time.

PostgreSQL: query_id and query_hash

PostgreSQL approaches this from almost the opposite direction. Some of this comes from the very history of the product in the hands of developers, and the rest from the notion that with PostgreSQL, if you need something, you simply build it.

A Logical Query Fingerprint

The query_id does appear in the following:

  • pg_stat_statements
  • auto_explain
  • log_min_duration_statement

It is a hash of the normalized parse tree, not the raw SQL text, which is different from how Oracle has approached the SQL_ID.

Key properties of the query_id are:

  • Literals are normalized
  • Whitespace is irrelevant
  • Same logical query results in the same query_id
  • It requires compute_query_id = on

So for the following statement, one very similar to our earlier one in Oracle:

SELECT * FROM orders WHERE customer_id = $1;

We end up with a query_id, but with differences showing up at the plan level.
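To see it in practice, here’s a minimal sketch with pg_stat_statements; it assumes the extension is available, listed in shared_preload_libraries, and that compute_query_id is enabled (column names here match recent versions):

-- One-time setup in the database
create extension if not exists pg_stat_statements;

-- The normalized text and its query_id, heaviest statements first
select queryid, calls, mean_exec_time, query
from pg_stat_statements
order by total_exec_time desc
limit 5;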

Not All Query Hash Data is Alike

PostgreSQL, unlike Oracle, does not have a stable, persistent equivalent to PLAN_HASH_VALUE.

Instead, there’s a plan generated at execution time. Unless you use prepared statements, the plan may not persist, whereas in Oracle it’s far less common for statements to be missing from the repository (although it does happen for some short-running statements). Plans in PostgreSQL are also not treated as a durable identity.

As PostgreSQL becomes more widely adopted, and managed by DBAs who came from other database platforms, some extensions and tooling expose more information, such as a derived plan hash, often calculated from EXPLAIN output and typically used for comparison, not governance.

This is a crucial distinction and something I have to remind myself of often:

PostgreSQL does not assume execution plans are long-lived artifacts.

How PostgreSQL Presents This Data

PostgreSQL focuses more on aggregate behavior vs. plan lineage:

Concept               PostgreSQL Artifact
Query identity        query_id
Plan identity         Ephemeral / derived
History               pg_stat_statements
Runtime visibility    auto_explain, logs
Plan control          Limited, as hints are non-core

The philosophy is to fix the query, fix the schema, and let the optimizer do its job.

Side-by-Side Comparison

As reminders are always good, I thought I would build out a side-by-side comparison, as it goes a long way toward driving the important points home.

Feature                     Oracle             PostgreSQL
Query identifier            SQL_ID             query_id
Normalization               Yes                Yes
Plan identifier             PLAN_HASH_VALUE    Not native
Multiple plans per query    Expected           Rarely tracked
Plan persistence            Strong             Weak
Built-in plan governance    Yes, with SPM      No
Typical tuning approach     Plan-focused       Query/schema-focused

Why This Matters in Real Life

Common Oracle to PostgreSQL Migration Pitfall

I still have to catch myself when I’m in a PostgreSQL database and I start down the rabbit hole of:

“Where is the plan hash history?”

The honest answer is that PostgreSQL doesn’t think that way, nor should it out of the box. How many of us install Oracle with all the bells and whistles and then wonder how much we’re going to have to maintain just to run a simple data repository, in an age that may not always need a deep well of database features?

If you need Oracle-style plan tracking in PostgreSQL, you’re going to need to build it, but it’s not difficult. It’s just not automated out of the box, nor does it carry the big price tag that Oracle does. The recipe (see the sketch after this list):

  • Capture EXPLAIN (ANALYZE, BUFFERS)
  • Hash plan output
  • Store snapshots yourself
  • Correlate to the query_id when needed
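A minimal sketch of that recipe is below. The table name and captured statement are hypothetical, and a real version would normalize the plan JSON (strip costs and row estimates) before hashing:

-- Somewhere to keep plan snapshots
create table if not exists plan_history (
  captured_at  timestamptz default now(),
  query_id     bigint,
  plan_hash    text,
  plan_json    jsonb
);

-- Capture the current plan shape of a statement and store its hash
do $$
declare
  v_plan jsonb;
begin
  execute 'explain (format json) select * from orders where customer_id = 42'
    into v_plan;
  insert into plan_history (query_id, plan_hash, plan_json)
  values (
    null,                -- join to pg_stat_statements.queryid when correlating
    md5(v_plan::text),   -- naive shape hash; normalize the JSON first in practice
    v_plan
  );
end $$;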

As I’ve said before, this is not an oversight, but a design choice.

Simplicity is Power with PostgreSQL’s Model

PostgreSQL may seem to be in its infancy when compared against Oracle’s 40 years of investment, but the truth is, it excels when:

  • Query patterns are simpler
  • Schema design is strong
  • Plan instability is rare
  • Developers own performance fixes

It avoids the operational overhead of plan governance, and with it, the cost of maintaining that overhead.

Oracle’s Model is Robust

Oracle is unmatched when viewed through a microscope, especially from the POV of someone with deep Oracle experience. That someone is going to expect:

  • Fine-grained details when workloads are highly dynamic
  • Expansive options for SQL when it can be easily rewritten
  • Plan regressions to be treated as unacceptable
  • Performance stability as a business requirement

In Summary

Oracle and PostgreSQL solve the same problem from opposite ends. While Oracle treats SQL and plans as long-lived, managed assets, PostgreSQL, at least currently, treats plans as implementation details. Neither approach is “better”, per se, but assuming they are equivalent leads to bad tuning decisions, broken monitoring, and painful migrations.

Understanding SQL_ID vs query_id, and why PLAN_HASH_VALUE has no true peer in PostgreSQL, is one of the most important mental shifts engineers must make when working across both platforms.

2025 Year In Review, The Community- Part 2 https://dbakevlar.com/2025/12/2025-year-in-review-the-community-part-2/ Wed, 31 Dec 2025 16:06:08 +0000 So to finish up my look back at 2025, I’ll dig into what I did outside of speaking and writing in my role as Advocate at Redgate.

Advocacy, Mentorship, and Community Engagement

Beyond speaking and writing, 2025 reaffirmed the importance of advocacy and mentorship. I continued to support professionals navigating career transitions, offered guidance on neurodiversity and sustainability in tech, and contributed to inclusive, community-led events and initiatives reflected in my broader online presence and speaking engagements. I’m still a co-organizer for the Data Platform DEI User Group in the Microsoft community and am ramping up the PDX WIT Meetup in Portland, all while maintaining 1-on-1 mentoring relationships with a few individuals.

I don’t commonly call out those I mentor, to protect their privacy, but I do want to say something about all the hard work each person has done. This quieter, relational work doesn’t always show up in headlines, but it remains a deeply rewarding and meaningful part of the year’s impact.

I want to give a heartfelt shout-out to the mentees I am endlessly proud of: Heidi, Vitalija, Maria, Tracy, Amy, and Libby. Each of you impresses me beyond measure with your never-give-up attitude, often supporting one another, and your intentional focus on both self-care and career growth. You show up every day carrying far more than what your job descriptions ask of you, and you do it with grit, generosity, and grace. It’s an honor to mentor you, and just as meaningful to call you my friends.

I also want to take a moment to thank those who have stood by me during some of the more challenging moments this year: Tracy, Heidi, Miranda, Kimberly, Marsha, David, Buck, and Pat. I know I have a tendency to operate like Rambo, an army of one, but your support, steadiness, and willingness to show up have meant more to me than I can adequately express. You reminded me that strength doesn’t have to be solitary, and your presence and care truly mean the world to me.

I also want to extend my thanks to the people who make the work at Redgate itself such a joy. Kirsty Roper, thank you for being an incredible manager, as your trust, support, and leadership make it possible for me to do my best work. Grant Fritchey and Steve Jones, you are phenomenal humans as well as fellow advocates, and I’m grateful every day to work alongside you. I say farewell to some wonderful peers on my team, like Louis Davidson and Ryan Booz, while sending a warm welcome to Pat Wright as he joins the team as our newest advocate. I also welcome our new team members Louise, Fergus, and Tony; your energy and perspectives are already making us better. We’re stronger together, and I’m genuinely excited for what we’ll accomplish as we head into 2026.

What’s My Day Job?

Now, there are a few out there who know me well enough to know that the above is NOT all that I do and that I NEED A LOT to keep me busy. If I get bored, I get in trouble. Due to this, I work throughout the week with the marketing and product teams, but even more so with engineering at Redgate. I’m an engineer at heart and it’s just really difficult to beat that out of me. I am very proud to keep the “Oracle sky” flying high at Redgate. In the last year, we’ve launched a number of products and enhancements for Oracle in Redgate products.

Redgate Monitor for Oracle launched on August 29, 2025.  It was a huge undertaking for the engineering team and I want to recognize the work that was done by the Polymon team who also worked with me on the initial monitoring work for MongoDB and MySQL.  I had 3 decades to take on multiplatform and this team took it on in ONE YEAR!

I made AI look at GitHub and count how many releases included enhancements for Redgate Flyway for Oracle (I’m pretty sure ChatGPT was swearing at me under its AI breath), as I see changes coming through all the time, but it’s important that I don’t confuse work with an actual release that included Oracle changes. In 2025:

  • Flyway Desktop had 11 releases.
  • Flyway CLI had 7 releases.
  • Oracle Schema Compare also had 2 major releases

Changes included dependency handling, native connectors, advanced encoding, partition management, comparison, and drift management. Most of these enhancements were due to the Iron Horse team in engineering and the Flyway product team, and I really appreciate their collaboration and effort.

AI Advisor

You didn’t think everything I’ve covered here represents the full scope of my work, did you? In addition to my role at Redgate, I continue to allocate time each week to support my previous company, Silk, as a consultant, and I’ve also had the opportunity this year to advise 28 organizations as an AI Advisor. This work spans industries, data platforms, and maturity levels, and it keeps me grounded in the practical realities of how AI is actually being adopted, or struggling to be adopted, inside real production environments.

What makes this advisory work especially meaningful is its technical depth and intentionality. I work with organizations to define clear, outcome-driven goals for AI, assess the readiness of their data, platforms, and teams, and design governance and policy frameworks before models ever enter the picture. Establishing guardrails around data quality, security, lineage, risk, and accountability early on is not optional; it’s foundational. Done well, it allows AI initiatives to scale responsibly and deliver value without eroding trust or introducing hidden operational risk. Helping teams build that foundation is both intellectually rewarding and, increasingly, necessary.

GitHub

As part of my speaking and advisement, I’m also writing a lot more code than I have in years. Where I was doing more architecture and white papers in previous roles, now I’m back to building the actual solutions at the code level. The code is most often in *SQL, Python, PowerShell, Bash, and whatever else is thrown at me. I’m working on Postgres, Oracle, SQL Server, and multiple AI and cloud vendors, too.

I made 69 contributions just to my own repositories this last year. The pace increased in the last six months as I started to invest in my own code, not just the company’s (which isn’t counted in those totals).

Looking Back

Reflecting on 2025 reinforced some timeless truths:

  • Technical fundamentals still matter, even as AI and abstraction layers rise
  • AI needs intentional guardrails if it’s going to help instead of harm
  • Multi-platform realities are now a baseline expectation for data professionals

Across keynotes, deep technical sessions, articles, podcasts, and mentorship, the year was about intentional momentum. I chose to build on decades of experience while staying grounded in what engineers and teams actually need today. I have no doubt I’ll hit publish on this second part of the year in review and remember people and areas I should have focused on, but I’ll stop here and cross my fingers I covered what I should have.

As I look toward 2026 and beyond, I remain focused on practical AI integration, thoughtful multi-platform strategy, and sharing lessons that help the community solve hard problems with confidence and clarity. 

Happy Holidays and Happy New Year!

 

2025 Year in Review: Advocacy at Redgate, Part I https://dbakevlar.com/2025/12/2025-year-in-review-advocacy-at-redgate-part-i/ Mon, 29 Dec 2025 18:33:30 +0000 It’s been a crazy 2025, where I focused on depth over noise, sharing insights from long-term experience and real-world problems. After digging into the post, I ended up breaking it up into two parts to give proper credit to all that happened in 2025. You’re also going to find that rather than chasing shiny AI trends, my work this year centered on helping technologists navigate AI complexity with clarity, especially where data protection was concerned.

I delivered keynotes, spoke at technical conferences, published writing, and engaged with the community, and all the while the central thread was practical impact: work that engineers can use, teams can adopt, and leaders can trust.

Setting a Direction

This year I delivered 11 keynote-level talks at both technical conferences and universities. These weren’t just thought leadership, but explorations of why choices matter and how technical professionals can succeed in uncertainty.

My favorite keynotes this year:

The themes that carried through these presentations included:

  • The evolving role of database professionals in hybrid and cloud environments
  • How organizations can integrate AI responsibly and strategically
  • The cultural and governance implications of data democratization
  • Practical leadership in technology transformation

Rather than simply reporting on trends, these talks emphasized decision-making frameworks, helping audiences understand the deeper forces shaping our industry and how to act on them.

Deep Practical Engagement at Technical Conferences

In 2025, I was honored to speak at numerous technical conferences, sharing hands-on guidance and actionable takeaways. These presentations did cover some introductory material, but more often went deep to help professionals solve real challenges in databases, DevOps, AI, and cloud infrastructure.

One highlight that may surprise folks was “PostgreSQL’s Rise to Power: Why the Open-Source Giant is Dominating the Database Landscape” at FOSSY 2025 in Portland, Oregon. This open-source event session examined the practical trends and architectural underpinnings driving PostgreSQL’s adoption across organizations of all sizes. There was a high number of younger attendees, and when a 25-year-old came up to excitedly speak to me about VIM, I was over the MOON!

I spoke at numerous events in the Microsoft, Oracle, DevOps, open-source, and AI communities this last year:

  • Zero to Understanding: Oracle for the SQL Server Expert
  • Why the Command Line is Still King in the DevOps World
  • Leading Through Transformation and the Impact of AI to Organizations
  • Guard Rails of Data Democratization with AI in Today’s World
  • DevOps in the Age of AI: Human Powered Evolution
    …and more: 14 new technical sessions for 2025, not counting keynote content. Each of my technical sessions focused on practical lessons from the trenches of modern data engineering.

These demos and talks reflected my goal of tackling real issues that everyone is facing in tech today, not just the latest buzzwords.

Technical Writing with Staying Power

I ended up on the cover of Financial IT magazine, which surprised me as much as anyone else, as no one had let me know beforehand that I was going to be headlined!

Writing remained a core part of my work at Redgate in 2025. Instead of short takes or trend pieces, I prioritized long-form, reference-quality content that database professionals, whether new or experienced, could return to.

Several Redgate/Simple Talk articles published this year included:

I also wrote a total of 35 posts this year on DBAKevlar. I was really thrilled to be able to contribute to my own blog again; I started it back in 2008, which means there’s 18 years of investment demonstrating my own technical journey.

Community Through Podcasts

In addition to writing, I continued my involvement with the Simple Talk podcast, where we unpack technology adoption, career experiences, industry challenges, and emerging topics like AI governance and data security.

I also participated in Simple Talk’s “State of the Database Landscape 2025” podcast alongside Louis, Steve, and Grant, discussing trends in security, AI adoption, and database professional development. In the first half of 2025, there were so many that I wondered if all I was going to do was podcasts and keynotes for the year.

But…there’s more

As I’m breaking this up into two parts, in the next post I’ll get into the community, mentoring, and advisory work that was part of my 2025, so stay tuned!

Appreciating AI Collaboration as a Neurospicy Tech Girl https://dbakevlar.com/2025/12/appreciating-ai-collaboration-as-a-neurospicy-tech-girl/ Mon, 22 Dec 2025 18:04:34 +0000 Collaboration has always been one of the hardest parts of my career, and not because people are incapable or unwilling, but more often due to conditions that make good collaboration difficult for those challenged with neurodiversity.

Remote work, distributed teams, time zones, conflicting priorities, and overloaded calendars, along with personality mismatches, unconscious bias, power dynamics, and differing communication styles, all get in the way. Even when everyone has good intent, meaningful collaboration can feel fragile, exhausting, or inaccessible…especially in deeply technical roles where thinking time matters as much as meeting time.

Over the years, I’ve learned that the biggest barrier to productivity isn’t lack of skill or motivation; more often it’s friction. And for me, friction often shows up as a hesitation I’ve learned over time.
Should I ask this question? Are they going to roll their eyes at me and think the answer is obvious? Will someone get annoyed? Will I be misunderstood?

This is where generative AI, no matter if it’s ChatGPT, Claude, Perplexity, etc., has become something I didn’t expect but deeply appreciate: my go-to collaborator.

A Judgment-Free Space to Think Out Loud

When I’m working through a technical problem and there’s no one available, or no one safe to collaborate with, AI gives me a place to think out loud. I can do a brain dump with the best of the neurospicy crowd, but for the neurotypical on the receiving end, it can be overwhelming to face a brain akin to 25 McDonald’s drive-thrus spitting out orders all at once. Many AI tools have already recognized the benefit of AI for brain-dump solutions, so I wasn’t surprised the need is there.

The thing is, I can ask AI questions without worrying about:

  • Eye-rolling or frustration
  • Attention span loss from the listener I’m hoping will match my focus
  • Being told to slow down, or being asked questions that don’t relate to the topic I’m hyper-focused on
  • Someone interpreting curiosity as incompetence, which often results in…ahem…mansplaining to a brain that’s already running at 500mph

There’s no pushback rooted in ego, no performative expertise, no subtle power struggle, but just clean interaction.

That doesn’t mean AI is always correct, and that’s okay, because honestly, no one is perfect. 

Challenging Without Consequences

One of the most valuable aspects of working with AI is the ability to challenge it freely.

If something feels wrong, incomplete, biased, or poorly reasoned, I can say so directly. I don’t have to soften my language, manage anyone’s feelings or defensive reactions, or tip-toe around anyone’s ego.

When challenged, AI does one of three useful things:

  1. Re-evaluates the response and corrects it
  2. Explains the limits of its data or certainty
  3. Helps me verify claims with sources or alternative approaches

That is all. There’s no huffiness, no threat response, and no time wasted navigating interpersonal tension (and anyone who knows me KNOWS how much I abhor having my time wasted). In human collaboration, those moments can derail my entire hyper-focused brain, and yet with AI, I can simply move forward at the accelerated pace I’m comfortable with.

Neurospicy Productivity Matters

I will always admit to my neurospicy-ness. I am aware that I can come off as blunt at times (most of the time the general public also realizes it’s because I’m a “say what I mean” girl and I’m a woman, but that’s the world, and how the world views you must be accepted). This means I move quickly, ask questions directly, and care deeply about accuracy and outcomes. It has sometimes been an issue in my career, not because I’m wrong, but because communication styles don’t always align.

AI doesn’t require me to mask, it doesn’t require me to slow my thinking to protect someone else’s comfort, and it doesn’t interpret clarity as aggression.

That alone removes a massive amount of friction for me and makes my day better. Like I said, friction and frustration are the enemies of productivity.

Removing Procrastination and Anxiety

With that said, procrastination and frustration are my personal worst enemies. Not laziness, but hesitation. The pause before asking a question. The mental load of preparing for a reaction, which is sometimes justified and other times just my brain reacting. The energy spent deciding whether something is “worth” bringing up adds up to a surprising amount of wasted time for me.

With AI, that hesitation disappears and I simply approach the problem. I’m never anxious about the interaction, and I don’t worry about whether I’m taking up someone’s time or asking the “wrong” question. I just engage, iterate, refine, and move forward. That freedom has made me more productive, more assertive, and more consistent in getting real work done, including deep technical problem-solving where momentum matters.

Collaboration Isn’t Just Human or Only AI

I want to be clear: AI does not replace human collaboration. It doesn’t replace lived experience, creativity, empathy, or shared ownership. The best work still happens with people and I love working with other HUMANS.

All I’m saying here is that AI does fill a critical gap:

  • When no one is available
  • When collaboration feels unsafe or costly
  • When I need to move fast without social overhead

It’s a collaborator that’s always present, never threatened, and willing to be challenged. I’m willing to admit out loud that, for those of us who thrive on clarity, iteration, and momentum, that matters more than we want to admit in today’s age, where bosses are looking for any way to replace human beings.

Generative AI hasn’t made me less collaborative.
It’s made me more productive and more confident by removing barriers that never needed to exist in the first place.

I’m willing to say that out loud.

Vector Search in Oracle Database 26ai https://dbakevlar.com/2025/12/vector-search-in-oracle-database-26ai/ Fri, 19 Dec 2025 20:54:42 +0000 The use case: Help Me Learn My New Car

As much as folks are lamenting the new cloud version of Oracle support, I have other issues on my mind, like trying to acclimate to my first electric vehicle, even though I already owned a gas-powered Mini Cooper.

As I was looking for a good use case to test the new vector search in Oracle 26ai, it occurred to me that I could use something better than a keyword search when navigating the manual for my new car.

What if I used the manual, which is public information and could easily be made searchable, as my use case?

Imagine you run a support portal (internal or customer-facing) with lots of short “support notes”:

  • “My MINI won’t charge with a public CCS fast charger…”
  • “The app shows charging error after an OTA update…”
  • “My EV plug latch is stuck…”

You discover quickly that keyword search often fails because people describe the same problem in different words. Vector search solves this by comparing meaning, not exact terms.  I’ve been facing something similar as I try to come up to speed with my car, so I decided to see if AI could help me out.

So in this blog post, I will:

  1. store the user manual and its embedding vectors in a table
  2. create an HNSW vector index for fast nearest-neighbor search
  3. run a query that returns the most semantically similar sections of the manual, so I can stop hunting for the info and, often, failing

Oracle 26ai is a strong fit because it lets you do this inside the database with a native VECTOR type, SQL functions, and built-in vector indexing options.

Create Objects, Load Data, Etc.

Assumptions

  • You already have embeddings (from your app / model pipeline), or
  • You’ll generate embeddings in-database via DBMS_VECTOR.UTL_TO_EMBEDDING using a model you’ve loaded (optional section below)

Dimension note: pick a dimension that matches your embedding model (e.g., 768, 1024, 1536, etc.).

Table to store notes and vectors

1) Drop & create a table for support notes + embeddings

begin
  execute immediate 'drop table support_notes purge';
exception
  when others then
    if sqlcode != -942 then raise; end if;
end;
/

create table support_notes (
  note_id     number generated by default as identity primary key,
  created_at  timestamp default systimestamp not null,
  title       varchar2(200) not null,
  body        clob not null,
  -- VECTOR(d) is the native vector datatype (d = embedding dimension)
  embedding   vector(1536) not null
);

Oracle’s native VECTOR datatype is designed for storing embeddings directly in tables, so this was surprisingly easy.

Insert sample data (I have two options)

Option A: Insert embeddings you already have

Below I show the “shape” of the insert. Replace the […] with real embedding values from your model pipeline.

Make sure to insert precomputed embeddings (example shape; replace with real vectors) when using this yourself:

insert into support_notes (title, body, embedding) values (
  'Fast charging fails at public station',
  'CCS fast charger starts then stops after 5-10 seconds. Happens on multiple stations.',
  vector('[0.0123, -0.0456, 0.0789, ...]')  -- 1536 floats
);

insert into support_notes (title, body, embedding) values (
  'Charging error after software update',
  'After an OTA update, the car reports a charging error intermittently, especially on Level 2.',
  vector('[0.0011, -0.0234, 0.0567, ...]')
);

commit;

Option B (optional): generate embeddings in the database.

Oracle 26ai supports generating embeddings via DBMS_VECTOR.UTL_TO_EMBEDDING (using JSON params and an in-database model you’ve loaded).

This assumes you already loaded an ONNX embedding model named 'doc_model'.

var params clob
exec :params := '{"provider":"database","model":"doc_model"}';

insert into support_notes (title, body, embedding)
values (
  'EV plug latch stuck',
  'The charging connector latch won''t release unless I unlock twice.',
  dbms_vector.utl_to_embedding('The charging connector latch won''t release unless I unlock twice.', json(:params))
);

commit;

Create the vector index (HNSW)

For fast similarity search, we’ll create an HNSW vector index (an in-memory neighbor graph):

create vector index support_notes_hnsw_idx
on support_notes (embedding)
organization inmemory neighbor graph
distance cosine
with target accuracy 90;

I discovered how to do this pattern for HNSW vector indexes in Oracle AI Vector Search in the following blog.

Run A Top-K Similar Vector Search

You provide a query vector and ask for the nearest neighbors by cosine distance. The following query finds the five most similar notes in the manual for an input vector. Replace :qvec with your query embedding (a VECTOR(1536)); in a lot of apps, you compute the embedding in the app tier and then bind it.

select
 note_id,
 title,
 vector_distance(embedding, :qvec, cosine) as dist
from support_notes
order by dist
fetch first 5 rows only;

VECTOR_DISTANCE is one of the core vector SQL functions used for similarity search, so it makes this easy, too.
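If you’d rather not compute the embedding in the app tier, a hedged variation is to generate the query embedding in-database as well; this sketch assumes the same in-database 'doc_model' from Option B:

select note_id, title,
       vector_distance(
         embedding,
         dbms_vector.utl_to_embedding(
           'charging stops after a few seconds at a fast charger',
           json('{"provider":"database","model":"doc_model"}')),
         cosine) as dist
from support_notes
order by dist
fetch first 5 rows only;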

Why use Oracle for vector search?

  • One platform for OLTP + AI search: keep your transactional data, security model, and vector similarity search in the same database instead of bolting on a separate vector store. Oracle positions AI Vector Search specifically around native VECTOR storage + in-database generation options.
  • First-class indexing choices: Oracle supports multiple vector index types (including HNSW and others) and documents when to use them.
  • In-database embedding support: if you want, you can load an ONNX embedding model and generate embeddings with DBMS_VECTOR.
  • Hybrid search path: if you later want “keywords + meaning”, Oracle documents hybrid vector indexing that combines text and vector search in one approach.

Below is a single, run-it-in-SQLcl demo script for Oracle Database 26ai that does hybrid search (keyword + vector) over a realistic “2024 MINI Cooper SE user manual” use case.

  • Dimension: VECTOR(1536)
  • Distance: COSINE
  • Hybrid: Oracle Text (keyword) + Vector index (semantic)
  • Embeddings: generated in-db via DBMS_VECTOR.UTL_TO_EMBEDDING (you’ll plug in your model/provider)
-- mini_manual_hybrid_demo.sql
-- Hybrid search demo for Oracle 26ai: Oracle Text + Vector Search (COSINE, 1536 dims)
-- Use case: Chunked 2024 MINI Cooper SE manual content (simplify manual navigation)

set echo on
set feedback on
set pagesize 200
set linesize 200
set serveroutput on
whenever sqlerror exit sql.sqlcode

prompt =========================================================
prompt 0) Preconditions
prompt - You need an embedding model accessible to DBMS_VECTOR
prompt - Update the PARAMS JSON below to match your environment
prompt =========================================================

--------------------------------------------------------------------------------
-- 1) Drop old objects (safe rerun)
--------------------------------------------------------------------------------
begin
  execute immediate 'drop table mini_manual_chunks purge';
exception when others then
  if sqlcode != -942 then raise; end if;
end;
/

begin
  execute immediate 'drop sequence mini_chunk_src_seq';
exception when others then
  if sqlcode != -2289 then raise; end if;
end;
/

--------------------------------------------------------------------------------
-- 2) Create table: manual chunks + metadata + embedding
--------------------------------------------------------------------------------
prompt =========================================================
prompt 1) Create storage table for chunked manual text + vectors
prompt =========================================================

create table mini_manual_chunks (
  chunk_id        number generated by default as identity primary key,
  source_id       number not null,
  doc_title       varchar2(200) not null,
  section_path    varchar2(500) not null,  -- e.g. "Charging > troubleshooting"
  page_hint       varchar2(50),
  chunk_text      clob not null,
  -- dimension is fixed at 1536 (match your embedding model)
  embedding       vector(1536) not null,
  created_at      timestamp default systimestamp not null
);

create sequence mini_chunk_src_seq start with 1000 increment by 1;

--------------------------------------------------------------------------------
-- 3) Helper: embedding params + convenience procedure to insert chunks
--------------------------------------------------------------------------------
prompt =========================================================
prompt 2) Setup embedding params + insert helper
prompt =========================================================

-- IMPORTANT: Update this JSON for environment.
--  - {"provider":"database","model":"MINI_MANUAL_EMBED_1536"}
--  - {"provider":"oci","credential_name":"OCI_CRED","endpoint":"...","model":"..."}
--  - {"provider":"openai","credential_name":"...","model":"text-embedding-3-large"}
--
-- Keep the dimension consistent with VECTOR(1536).
var EMBED_PARAMS clob
begin
  :EMBED_PARAMS := '{
    "provider": "database",
    "model": "MINI_MANUAL_EMBED_1536"
  }';
end;
/

create or replace procedure add_manual_chunk(
  p_source_id    in number,
  p_doc_title    in varchar2,
  p_section_path in varchar2,
  p_page_hint    in varchar2,
  p_chunk_text   in clob
) as
  v_emb    vector(1536);
  -- A stored procedure can't reference the SQLcl bind :EMBED_PARAMS,
  -- so keep a local copy of the same JSON (must match the params above)
  c_params constant clob := '{"provider":"database","model":"MINI_MANUAL_EMBED_1536"}';
begin
  -- Generate embedding in-database
  v_emb := dbms_vector.utl_to_embedding(p_chunk_text, json(c_params));

  insert into mini_manual_chunks(source_id, doc_title, section_path, page_hint, chunk_text, embedding)
  values (p_source_id, p_doc_title, p_section_path, p_page_hint, p_chunk_text, v_emb);
end;
/
show errors

--------------------------------------------------------------------------------
-- 4) Insert realistic sample manual chunks (MINI Cooper SE focused)
--    The ones I chose are short, “chunkable” blocks like you'd create after parsing the PDF.
--------------------------------------------------------------------------------
prompt =========================================================
prompt 3) Insert realistic MINI Cooper SE manual-style sample chunks
prompt =========================================================

declare
  v_src number := mini_chunk_src_seq.nextval;
begin
  add_manual_chunk(
    v_src,
    '2024 MINI Cooper SE Owner''s Manual',
    'Charging > Overview',
    'p. 110',
    q'[
Charging your vehicle: You can charge using AC charging (Level 1/Level 2) or DC fast charging when equipped and when supported by the charging station.
Always follow the instructions on the charging station and ensure the connector is fully seated before leaving the vehicle.
If charging does not start, verify the vehicle is unlocked (if required), the connector is latched, and the station is authorized.
]'
  );

  add_manual_chunk(
    v_src,
    '2024 MINI Cooper SE Owner''s Manual',
    'Charging > AC charging > Tips',
    'p. 114',
    q'[
AC charging: Use a compatible charging cable and confirm the outlet or EVSE is operating properly.
If the charging cable has status indicators, check for fault lights. Avoid using extension cords unless explicitly permitted by the EVSE manufacturer.
Charging may be reduced or interrupted to protect the battery under extreme temperatures.
]'
  );

  add_manual_chunk(
    v_src,
    '2024 MINI Cooper SE Owner''s Manual',
    'Charging > DC fast charging > Troubleshooting',
    'p. 120',
    q'[
DC fast charging: If the charging session starts and then stops, try re-initiating the session and confirm the connector is fully engaged.
Some stations require re-authorization if the session is interrupted.
If repeated attempts fail, try a different stall or station. Inspect the connector for debris and ensure the charge port area is clear.
]'
  );

  add_manual_chunk(
    v_src,
    '2024 MINI Cooper SE Owner''s Manual',
    'Charging > Charge port & connector',
    'p. 112',
    q'[
Charge port: Keep the charge port and connector clean and dry. Do not insert objects into the charge port.
If the connector does not release, confirm the vehicle is unlocked and follow the recommended release procedure.
Do not force the connector; damage may occur.
]'
  );

  add_manual_chunk(
    v_src,
    '2024 MINI Cooper SE Owner''s Manual',
    'Driver assistance > Lighting > Headlights',
    'p. 68',
    q'[
Automatic headlight control: When set to the default position, the vehicle controls exterior lights based on ambient light conditions.
Daytime running lights may be active during daylight. In low-light conditions, the system turns on headlights and tail lamps automatically.
Driver remains responsible for using appropriate lighting in poor visibility.
]'
  );

  add_manual_chunk(
    v_src,
    '2024 MINI Cooper SE Owner''s Manual',
    'Infotainment > Mobile app > Charging status',
    'p. 142',
    q'[
Charging status in the mobile app: The app may show charging state, estimated completion time, and notifications.
If the app shows an error but the vehicle is charging, refresh the status or verify connectivity.
For persistent issues, confirm the vehicle services are active and the phone has network access.
]'
  );

  commit;
end;
/
prompt Insert complete.

--------------------------------------------------------------------------------
-- 5) Create HYBRID indexes:
--    a) Oracle Text index on chunk_text for keyword search
--    b) Vector index (HNSW) on embedding for semantic search using cosine distance
--------------------------------------------------------------------------------
prompt =========================================================
prompt 4) Create indexes for HYBRID search (Text + Vector)
prompt =========================================================

-- Oracle Text (keyword / BM25-style scoring with CONTAINS)
-- Note: CTXSYS must be installed/configured 
create index mini_manual_text_idx
on mini_manual_chunks(chunk_text)
indextype is ctxsys.context;

-- Vector index (HNSW) for cosine similarity
create vector index mini_manual_vec_idx
on mini_manual_chunks(embedding)
organization inmemory neighbor graph
distance cosine
with target accuracy 90;

--------------------------------------------------------------------------------
-- 6) Queries
--------------------------------------------------------------------------------
prompt =========================================================
prompt 5) DEMO: Pure keyword search (Oracle Text)
prompt =========================================================

-- Keyword query example: "connector does not release"
-- Score is available via SCORE(1)
select
  chunk_id,
  section_path,
  page_hint,
  score(1) as text_score,
  substr(chunk_text, 1, 160) as preview
from mini_manual_chunks
where contains(chunk_text, 'connector AND release', 1) > 0
order by text_score desc
fetch first 5 rows only;

prompt =========================================================
prompt 6) DEMO: Pure vector similarity search (semantic)
prompt =========================================================

-- Create a query embedding from a user question (in-db)
var Q_TEXT clob
var QVEC  vector(1536)

begin
  :Q_TEXT := 'My fast charging starts then stops after a few seconds. What should I check?';
  :QVEC   := dbms_vector.utl_to_embedding(:Q_TEXT, json(:EMBED_PARAMS));
end;
/

select
  chunk_id,
  section_path,
  page_hint,
  vector_distance(embedding, :QVEC, cosine) as dist,
  substr(chunk_text, 1, 160) as preview
from mini_manual_chunks
order by dist
fetch first 5 rows only;

prompt =========================================================
prompt 7) DEMO: HYBRID search (keyword + vector)
prompt =========================================================

-- Hybrid strategy:
--  1) Use keyword query to enforce topical relevance (CONTAINS)
--  2) Use vector distance to order by semantic similarity within that set

var KQ varchar2(4000)
begin
  :KQ := 'charging AND (fast OR DC OR station)';
end;
/

select
  chunk_id,
  section_path,
  page_hint,
  score(1) as text_score,
  vector_distance(embedding, :QVEC, cosine) as dist,
  substr(chunk_text, 1, 160) as preview
from mini_manual_chunks
where contains(chunk_text, :KQ, 1) > 0
order by dist
fetch first 5 rows only;
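
--------------------------------------------------------------------------------
-- Optional variation (a sketch, not part of the original demo): instead of
-- filtering by keywords and ordering by distance alone, blend the normalized
-- Text score and the vector similarity into one ranked list. The 0.3/0.7
-- weights are arbitrary starting points to tune for your data.
--------------------------------------------------------------------------------
select
  chunk_id,
  section_path,
  page_hint,
  -- score(1) ranges 0-100, so normalize it; cosine distance is flipped to a similarity
  (0.3 * (score(1) / 100)) + (0.7 * (1 - vector_distance(embedding, :QVEC, cosine))) as hybrid_score,
  substr(chunk_text, 1, 160) as preview
from mini_manual_chunks
where contains(chunk_text, :KQ, 1) > 0
order by hybrid_score desc
fetch first 5 rows only;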

prompt =========================================================
prompt 8) DEMO: “Simplify the manual” output (a friendly answer-shaped result)
prompt =========================================================

-- In a real app you’d pass the top chunk(s) to your summarizer / RAG layer.
-- For demo, we return the best chunk as “what to do next”.

select
  'Suggested section: ' || section_path || ' (' || page_hint || ')' as suggestion,
  chunk_text as recommended_text
from (
  select section_path, page_hint, chunk_text,
         vector_distance(embedding, :QVEC, cosine) as dist
  from mini_manual_chunks
  where contains(chunk_text, :KQ, 1) > 0
  order by dist
)
fetch first 1 row only;

prompt =========================================================
prompt DONE.
prompt Next step: replace sample chunks with your parsed manual chunks.
prompt =========================================================

It took me a while to get this working the way I wanted. I had to review several examples and push some of the code through AI, and I’m still fiddling with it, but it’s a start toward making it easier to search the manual with plain text instead of exact keywords.

So, Does this Solve My “Simplify the Whole Manual” Goal?

I’m parsing the manual PDF into chunks (roughly 200–800 tokens each), each with:

  • section_path (table of contents path)
  • page_hint (page or location marker)
  • chunk_text (the actual chunk)

From there, the pipeline is:

  1. Generate embeddings (in the database, like the script, or in your app pipeline)
  2. Hybrid search:
    • Oracle Text: catches exact terms, model numbers, “DRL”, “CCS”, “release procedure”
    • Vector search: catches meaning (“starts then stops”, “interrupts after a few seconds”, “session fails”)
  3. The output:
    • Retrieve the top 3–10 chunks
    • Summarize into “steps to try” + a link to the section/page

Example Query and Result Set

SQL> select
2 'Suggested section: ' || section_path || ' (' || page_hint || ')' as suggestion,
3 chunk_text as recommended_text
4 from (
5 select section_path, page_hint, chunk_text,
6 vector_distance(embedding, :QVEC, cosine) as dist
7 from mini_manual_chunks
8 where contains(chunk_text, :KQ, 1) > 0
9 order by dist
10 )
11 fetch first 1 row only;
SUGGESTION
Suggested section: Charging > DC fast charging > Troubleshooting (p. 120)


RECOMMENDED_TEXT

DC fast charging: If the charging session starts and then stops, try re-initiating the session and confirm the connector is fully engaged. Some stations require re-authorization if the session is interrupted. If repeated attempts fail, try a different stall or station. Inspect the connector for debris and ensure the charge port area is clear.

Why Oracle for this specific use case of manual simplification?

  • You get one system that can store: chunk text, metadata, vectors, and run both keyword + semantic search in SQL (less glue, fewer moving parts).
  • You can keep security and access control in the database (who can search which manuals / editions) if there’s proprietary information that shouldn’t be accessible to the public.
  • You can run this close to your data and integrate it with your existing app patterns (APIs, SQL, PL/SQL), and scale it with indexes designed for the job (Oracle Text + HNSW).

I’ll revisit this as I keep coming up to speed with my new car.  I really knew my 2015 Mini Cooper JCW all the way down to the bolts and I’d like to understand my new 2024 Mini Cooper SE Resolute as well.  I’m hoping this will be a way to speed up that knowledge transfer.

]]>
Extension Management in PostgreSQL for New DBAs https://dbakevlar.com/2025/12/extension-management-in-postgresql-for-new-dbas/ Thu, 11 Dec 2025 20:30:58 +0000 https://dbakevlar.com/?p=10558 PostgreSQL’s true power doesn’t just come from its rock-solid relational engine; it comes from the fact that Postgres can grow with you. Extensions allow you to bolt on new capabilities, enhance performance, integrate external tools, and transform the database into something far more powerful than its default installation, which is something I’m really learning to love.

From pg_stat_statements to pgvector, logical decoding plugins, job schedulers, and custom procedural languages, Postgres extensions behave like feature packs you can enable at the database level. That also means DBAs must know how to inspect, maintain, and manage them just as carefully as any schema or server configuration.

This post walks through the essentials of checking available extensions, verifying versions, installing them safely, and quickly assessing what’s loaded in your environment.

Viewing All Available Extensions

To see every extension your current PostgreSQL instance supports — including those not yet installed — query the pg_available_extensions view.

SELECT * FROM pg_available_extensions;

This gives you:

  • Extension name
  • Default version shipped with your Postgres build
  • Installed version (if any)
  • Description/documentation comment

This view is your first stop when validating what’s possible on a given cluster, especially if you’ve upgraded PostgreSQL or added new packages.

Checking Installed Extensions & Version Status

To view only those extensions currently installed (and whether they match the default version or need an upgrade), use:

SELECT name,
 default_version,
 installed_version,
 comment
FROM pg_available_extensions
WHERE installed_version IS NOT NULL
ORDER BY name;

Example output:

        name        | default_version | installed_version |                                comment
--------------------+-----------------+-------------------+-------------------------------------------------------------------------
 pg_stat_statements | 1.11            | 1.11              | track planning and execution statistics of all SQL statements executed
 pgagent            | 4.2             | 4.2               | A PostgreSQL job scheduler
 plpgsql            | 1.0             | 1.0               | PL/pgSQL procedural language
(3 rows)

In this case, the installed extensions:

  • pg_stat_statements
  • pgagent
  • plpgsql

…are all running their default versions, which makes comparison against documentation or best practice guides incredibly easy.

If you’re troubleshooting performance, managing security posture, or preparing for a Postgres upgrade, confirming versions here is critical.
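
A quick drift check after an upgrade (a small sketch built on the same view):

SELECT name,
       installed_version,
       default_version
FROM pg_available_extensions
WHERE installed_version IS NOT NULL
  AND installed_version IS DISTINCT FROM default_version;

Any rows returned are extensions whose installed version lags behind what your packages now ship.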

Creating an Extension

Installing an extension is as straightforward as it gets:

CREATE EXTENSION <extension name>;

This is executed at the database level, not the cluster level, so remember: you choose which databases receive which capabilities. It also means extensions can be installed incorrectly (wrong DB, wrong owner, wrong schema), so always double-check before deploying production changes.
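
For example, a slightly more defensive install pattern (a sketch; the extension name and schema here are just placeholders):

-- Install only if missing, and control where its objects land
CREATE EXTENSION IF NOT EXISTS pg_stat_statements SCHEMA public;

-- Later, if pg_available_extensions shows a newer default_version
ALTER EXTENSION pg_stat_statements UPDATE;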

Verifying Extension Installation

After installation, confirm what’s loaded with:

SELECT oid, extname FROM pg_extension;

Example:

  oid  |      extname
-------+--------------------
 15101 | plpgsql
 16389 | pgagent
 44351 | pg_stat_statements
(3 rows)

Any extension successfully created appears here. If it doesn’t, the create operation didn’t complete, or privileges prevented installation.
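
To also confirm where each extension landed and who owns it, here’s a sketch joining the catalogs:

SELECT e.extname,
       e.extversion,
       n.nspname AS schema,
       r.rolname AS owner
FROM pg_extension e
JOIN pg_namespace n ON n.oid = e.extnamespace
JOIN pg_roles     r ON r.oid = e.extowner;

This catches the “wrong schema, wrong owner” installs mentioned earlier before they become production surprises.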

Faster: Using \dx in psql

I am learning to love psql’s backslash shortcuts, so if you’re working in the psql command line, the quickest view is:

postgres=# \dx
                                     List of installed extensions
        Name        | Version |   Schema   |                         Description
--------------------+---------+------------+-------------------------------------------------------------------------
 pg_stat_statements | 1.11    | public     | track planning and execution statistics of all SQL statements executed
 pgagent            | 4.2     | pgagent    | A PostgreSQL job scheduler
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
(3 rows)

\dx is essentially a shortcut to the same information as the catalog query, and it’s just much faster to type.
For day-to-day DBA work, this becomes your go-to readability tool.

Why It Matters

Postgres extensions enable capability… but they also introduce risk if unmanaged:

  • Version drift after upgrades can cause unexpected behavior.
  • Security considerations vary per extension (schemas, grants, shared libraries).
  • Operational impact: extensions like pg_stat_statements add overhead if misconfigured.
  • Dependency management is real for things like PostGIS or logical decoding plugins.

A smart DBA treats extensions like packages in Linux or modules in Oracle or SQL Server:
You track them. You audit them. You maintain them.

Final Thoughts

Extension management is one of the features that makes PostgreSQL amazingly flexible, but flexibility comes with responsibility. By learning how to inspect, validate, and maintain extensions properly, DBAs can ensure they’re running secure, performant, and consistent database environments.

If you’d like a follow-up article on any of the following, let me know:

  • How extensions load into shared_preload_libraries
  • Best practices for extension upgrades
  • The must-have extensions for DBAs
  • Schema placement and permission considerations
  • Logical decoding, monitoring, or pgvector deep dives

Postgres is a playground, and it’s fun to play and learn what it’s all about!

]]>
Multiplatform Reality Checks: pg_stat_wal vs. Oracle Redo Metrics https://dbakevlar.com/2025/11/multiplatform-reality-checks-pg_stat_wal-vs-oracle-redo-metrics/ Tue, 25 Nov 2025 22:52:36 +0000 https://dbakevlar.com/?p=10537 For anyone who has spent years tuning Oracle redo, the first time you look at PostgreSQL’s pg_stat_wal view may feel a bit underwhelming. Everything works, but the instrumentation isn’t the same, and you suddenly realize how much Oracle has spoiled you with its advanced and expensive features.

As I’ve been working deeper with PostgreSQL, I keep getting questions about how its WAL (Write-Ahead Logging) data compares to Oracle’s redo performance metrics. Let’s break it down in a way that makes sense for people who’ve been living in the Oracle world for years.

PostgreSQL and What pg_stat_wal Actually Gives You

PostgreSQL introduced pg_stat_wal relatively recently (in version 14), and it provides solid but high-level visibility into WAL activity. You’ll see metrics like:

  • Total WAL bytes written
  • Rate of WAL generation
  • Full-page-write counts
  • Number of writes and number of fsyncs
  • Estimated sync times
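
Pulling it is a one-liner (a sketch; column names are per the PostgreSQL 14–16 catalog, and some of these moved in later releases):

SELECT wal_records,
       wal_fpi AS full_page_images,
       wal_bytes,
       wal_buffers_full,
       wal_write,
       wal_sync,
       wal_write_time,
       wal_sync_time,
       stats_reset
FROM pg_stat_wal;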

This is great for understanding the following:

  • How “chatty” the system is
  • When a workload causes WAL spikes
  • Whether storage performance is keeping up
  • How to scale archive and replica throughput

No one is going to confuse it with the deep-dive instrumentation Oracle veterans are used to, though. pg_stat_wal gives you throughput and timing, not internal behavior.

There’s no visibility into:

  • Contention for internal WAL structures
  • Per-session redo production
  • Per-wait-event breakdowns
  • Background writer pressure
  • Commit latency causality
  • Checkpoint-induced stall details

It’s helpful, but it’s nowhere near the comprehensive instrumentation that Oracle spent decades building.

Oracle Redo Has Decades of Instrumentation and It Shows

Oracle’s redo logging system is one of the most heavily instrumented parts of the database engine. An Oracle DBA with years under their belt knows many of these views inside and out:

  • V$LOG, V$LOGFILE, V$LOG_HISTORY
  • V$SYSSTAT, V$SESSTAT, V$MYSTAT
  • V$SYSTEM_EVENT, V$SESSION_EVENT
  • V$EVENT_HISTOGRAM
  • V$LATCH for redo copy and redo allocation
  • V$ARCHIVE_DEST for archiver throughput

*The above views are not part of the Automatic Workload Repository (AWR) and do not require a management pack; they’re included in the Standard and Enterprise Edition license.
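
Even the simplest of these goes further than pg_stat_wal. A quick instance-wide pull (a sketch using standard v$sysstat statistic names):

SELECT name, value
FROM   v$sysstat
WHERE  name IN ('redo size', 'redo entries', 'redo writes', 'redo synch time');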

A query that looks at waits on redo log switches might look like the following:

WITH log_window AS (
   -- Focus on the last 30 minutes of log switches
   SELECT MIN(first_time) AS start_time,
          MAX(first_time) AS end_time
   FROM   v$log_history
   WHERE  first_time >= SYSDATE - INTERVAL '30' MINUTE
),
sess_redo AS (
   -- Per-session redo stats
   SELECT st.sid,
          MAX(CASE WHEN sn.name = 'redo size'    THEN st.value END) AS redo_size_bytes,
          MAX(CASE WHEN sn.name = 'redo entries' THEN st.value END) AS redo_entries
   FROM   v$sesstat  st
   JOIN   v$statname sn
          ON sn.statistic# = st.statistic#
   WHERE  sn.name IN ('redo size','redo entries')
   GROUP BY st.sid
),
sess_waits AS (
   -- Sessions waiting on log-switch-related events
   SELECT se.sid,
          se.event,
          se.total_waits,
          se.time_waited_micro
   FROM   v$session_event se
   WHERE  se.event IN (
            'log file switch (archive incomplete)',
            'log file switch (checkpoint incomplete)',
            'log file switch completion'
          )
)
SELECT lh.thread#,
      lh.sequence#,
      lh.first_time,
      sw.sid,
      sw.event,
      sw.total_waits,
      ROUND(sw.time_waited_micro / 1e6, 3) AS time_waited_sec,
      ROUND(sr.redo_size_bytes / 1024 / 1024, 1) AS redo_mb,
      sr.redo_entries
FROM   v$log_history lh
      CROSS JOIN log_window lw
      JOIN   sess_waits sw
             ON lh.first_time BETWEEN lw.start_time AND lw.end_time
      JOIN   sess_redo  sr
             ON sr.sid = sw.sid
WHERE  lh.first_time BETWEEN lw.start_time AND lw.end_time
ORDER BY lh.first_time,
        time_waited_sec DESC
/

Yes, the query is a little more involved and an output example might look like this:

THREAD#  SEQUENCE#  FIRST_TIME          SID  EVENT                                    TOTAL_WAITS  TIME_WAITED_SEC  REDO_MB  REDO_ENTRIES
-------  ---------  ------------------  ---  ---------------------------------------  -----------  ---------------  -------  ------------
      1      18234  25-NOV-25 10:02:11  321  log file switch (checkpoint incomplete)           42           18.247    512.3       1245890
      1      18234  25-NOV-25 10:02:11  417  log file switch (archive incomplete)              15            6.931    305.7        845221
      1      18234  25-NOV-25 10:02:11  512  log file switch completion                        11            3.404     92.4        201887

Oracle exposes things PostgreSQL simply doesn’t track:

  • LGWR wait events and detailed IO timings
  • Redo buffer contention (allocation/copy latch pressure)
  • Why commits stall (latch, IO, CPU, archive lag, wakeups)
  • Per-session redo generation
  • Archiver/standby delays at per-destination granularity

And of course, the wait-event model gives you the kind of nuanced insight that Oracle DBAs take for granted:

“Is the commit slow because of IO, CPU, log buffer space, archiver pressure, or LGWR scheduling?”

PostgreSQL can tell you that commits slowed down. Oracle can tell you the why, and precisely where the hurt is happening.

Side-by-Side Comparison

WAL/Redo Data Exposure

Capability                 | PostgreSQL (pg_stat_wal) | Oracle Redo Metrics
---------------------------+--------------------------+------------------------------
WAL/redo bytes generated   | Yes                      | Yes
Number of writes/syncs     | Yes                      | Yes
Full-page-write counts     | Yes                      | N/A
Commit latency details     | Very limited             | Deep and granular
Redo buffer contention     | Not exposed              | Fully exposed
Process-level waits        | None                     | Extensive LGWR / latch waits
Log switch history         | Limited                  | Rich historical tracking
Archiver performance       | Basic                    | Detailed per destination
Per-session redo stats     | No                       | Yes
Tuning depth               | Basic                    | Advanced

Why the Gap Exists: Different Architectures, Different Priorities

I really want to add: the contrast isn’t because one database is “better”; it’s because the logging pipelines are designed differently, and Oracle’s investment spans decades.

PostgreSQL WAL Architecture

  • Group commit model
  • Background WAL writer, background checkpointer
  • No single LGWR process
  • Lighter, simpler write pipeline
  • Less risk of centralized contention

Oracle Redo Architecture

  • Central LGWR process
  • Separate log buffer
  • Redo allocation/copy latches
  • Highly tuned for extreme OLTP workloads
  • Three decades of instrumentation

Oracle essentially built a redo subsystem that’s self-diagnosing. PostgreSQL’s is efficient and resilient, but not nearly as chatty.  This has significant positives and negatives for both, but it’s important to understand what those are.

So What Does This Mean for Multiplatform DBAs?

If you’re coming from Oracle, here’s the honest truth:

pg_stat_wal will feel sparse, and sometimes frustrating.

You can still tune PostgreSQL WAL effectively, but you will just rely more on inference than on the kind of hard wait event data Oracle hands you on a silver platter.

In PostgreSQL, you’ll focus more on:

  • Checkpoint behavior
  • WAL file recycling
  • fsync timing
  • Storage throughput
  • Replica lag
  • Workload patterns
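
Checkpoint behavior, for instance, is visible in the cumulative stats (a sketch; in PostgreSQL 17+ these checkpoint columns moved to pg_stat_checkpointer):

SELECT checkpoints_timed,
       checkpoints_req,
       checkpoint_write_time,
       checkpoint_sync_time,
       buffers_backend_fsync
FROM pg_stat_bgwriter;

A high ratio of requested to timed checkpoints usually points at max_wal_size being too small for the workload.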

And you’ll use pg_stat_wal as one piece of the picture—not a complete story.

In Summary

Both databases have robust, reliable logging systems. Oracle’s redo visibility is legendary and unmatched. PostgreSQL is getting better every release, but WAL instrumentation today is still a “just enough” model compared to Oracle’s “instrument everything” philosophy.

For Oracle DBAs crossing over, the key is adapting to the PostgreSQL mindset.  You don’t get all the internals, but you do get enough to guide you, and the rest comes from understanding the system more intuitively under the covers.

]]>
Gearing Up for PASS Summit: Keynotes, Redgate Fun, and the Magic AI-Ball https://dbakevlar.com/2025/11/gearing-up-for-pass-summit-keynotes-redgate-fun-and-the-magic-ai-ball/ Thu, 13 Nov 2025 21:35:20 +0000 https://dbakevlar.com/?p=10531 It’s that time of year again when data professionals from every corner of the world are gathering for PASS Data Community Summit and I’m already exhausted…and thrilled to go to Seattle. Between the keynotes, the sessions, and reconnecting with my Summit family, this week always feels like the ultimate data event that I must power through!

I start the long week with SQL Saturday Oregon, the weekend before PASS Summit, as I’ll be the keynote speaker at this event and then many of us will head off to Seattle on the SQL Train on Sunday to begin the week of learning and shenanigans.  As we get ready to storm Seattle for another year of insights, inspiration, and a few too many hours each day, here’s a little look at what I’ve got planned, what you can expect, and a cheeky side project I’ve brought to life for this year’s event.

The Keynote Kickoff

PASS Summit keynotes have always set the tone for the week. All I’m going to say is that they let us drive the theme, the content and the scripts this year, so be forewarned:  It’s going to be quite the event!  Lots of demos, lots of humor and I hope it resonates with everyone.
I’ll be doing the keynote with Steve Jones and Laura Copeland this year, and it’s going to have everything that previous attendees have asked us for in a keynote, so bring your caffeine and your curiosity; you’ll need both.

My Sessions & The Collectible Minifig

I’ll be presenting four sessions this year outside of the keynote, focused on Oracle for the SQL Server professional, understanding multiplatform workloads and tools, and an AI panel.  These sessions will teach how to build and manage database environments that scale for modern teams, no matter the platform or your specialty.  I’ll be presenting three of them with one of my favorite people, Rene Antunez.
If you’re in my sessions, please connect with me or just come say hi! And yes, I’ll have my limited-edition Kellyn LEGO minifigs on hand. The previous year’s minifigs have become something of a collector’s item at events, and I can’t wait to hand out the one of me this year.

The Magic AI-Ball

Now for something fun and yet, educational. I’ve brought back the Magic 8 Ball, but made it AI-powered.
This year, I’ve published a PowerShell script called Magic8 Ball, available now on my GitHub.

This isn’t your average chatbot, but a sample script that offers insight into how you can start small and isolated with AI to:

  1. Become comfortable working with an LLM using just PowerShell for the initial script.
  2. Run Ollama with an LLM on your laptop, isolated and offline, so there’s no concern about unintentionally uploading PII to the world.
  3. Skip the tokens and costly GPUs, since it runs locally and offline with an open-source model (mine runs on a 16GB laptop…seriously).

The script is a playful offline simulation that uses a local large language model (LLM) running through Ollama (available for Windows 11). It’s powered by one of my favorite models, Qwen 2.5, and runs completely isolated and offline, which is perfect for demos, labs, or those who just like their AI self-contained.  Take the code and make changes to the prompt to perform different tasks, or build out new code as you get more comfortable with the setup.

Here’s what you’ll need:

  • Ollama installed locally
  • Qwen or any Ollama-compatible LLM (runs offline once downloaded in the current script)
  • Windows 11 environment

The script acts as a cheeky, snarky Magic 8-Ball, while answering your burning questions with AI-infused wit. Think of it as “Outlook in the cloud… not so good” meets “As I see it, AI will fix it.”

Redgate Wednesday Night Event and Ask the AI-Ball!

On Wednesday night, Redgate is hosting an invite-only event where the Magic AI-Ball will make its public debut!
You’ll be able to ask it questions live, see its best sarcastic comebacks, and even compete to win one of 20 red Magic 8 Balls we’ll be giving away to the best-voted questions of the night.

Bring your imagination (and your sense of humor), the snarkier, the better.

Summit Is More Than Sessions

Beyond the slides and scripts, PASS Summit is about reconnecting with our people. The hallway conversations, the late-night debates about Copilot vs. ChatGPT or SQL Server vs. Postgres (ok, not so much on the second one, but we’re getting there), and the moments that remind us why we love this community.

I’ll be hanging around with friends, technical peers, coworkers and community advocates throughout the week. If you see me, please do come say hi, share your favorite tech fail story, and maybe ask the Magic AI-Ball a question or two if you’re at the Wednesday event.

See You in Seattle!

Pack your laptops, bring your curiosity, and get ready to learn, laugh, and maybe even get a little snark from a Magic AI-Ball along the way. Follow the fun on LinkedIn and other social media platforms with the hashtags #PASSDataSummit, #Redgate, and #DBAKevlar.

]]>
McDonald’s Drive-Thrus and OOM Kills https://dbakevlar.com/2025/10/mcdonalds-drive-thrus-and-oom-kills/ Fri, 24 Oct 2025 19:51:20 +0000 https://dbakevlar.com/?p=10523 We organizers just wrapped up the Data Platform DEI Neurodiversity Day, which I helped host while battling a pretty major illness. During two sessions especially, one by James Reeves and another by Itzel Yagual, I had a bit of an epiphany about how my brain works and I thought I’d share.

I’ve long known that I’m wired differently as a neurodiverse individual. I give the term multitasking a run for its money, as a professional once described my brain as having 25 McDonald’s drive-thrus, all taking orders at once. My daily goal, as someone with AuDHD, is to manage that traffic and to keep from burning out, which this professional also referred to as “avoiding wasting energy on too many Happy Meals”, AKA distractions.

When I’m on stage speaking, I’m in full control with one clear lane open and smooth traffic flow and only a few distractions. But in loud or chaotic environments like exhibitor halls or noisy restaurants, every drive-thru lights up, taking in random noise that offers no real value. That sensory overload drains me fast.

One of those “drive-thrus” runs a soundtrack in my head, 24X7.  It’s a real playlist (currently on Spotify) that’s been looping for years, new songs coming in from time to time, old ones removed, but I can’t tell you how surprised I was to find out not everyone hears music in their head 24X7. It plays whether or not I have headphones on, even in my dreams. When I do have headphones on, this playlist is on my phone and is a great controlled distraction that helps me focus. The gist is, my brain just doesn’t need a music player to have the music playing. If I take my headphones off, my brain is very happy to just keep playing the songs in my head. But this week, I became aware of something I’d missed when I was ill in the past.

As I fought off a fever while hosting the event, I noticed an eerie silence. My mental playlist, which I’d never noticed not playing, had gone quiet. During the Neurodiversity Day sessions, it hit me that my brain had done what a Linux server does when resources are low: it killed a background process. My internal “OOM kill” (Out of Memory) had taken out the playlist channel. When I finally woke at 4 a.m., I knew I was recovering because “99 Problems” by Hugo was blasting in my head: my playlist was back! The question is: in the past, when I was quite ill, did this happen then, too, and I simply missed that it was missing?

Needless to say, I think it’s time to retire the McDonald’s analogy. My brain is more like a high-CPU Linux server running multiple LLMs and databases, where nonessential processes are shut down when resources are over-allocated.

And as for large, noisy social events? I’ll be referring to those as ransomware attacks from now on.

]]>
We Can do Better: Why Representation and Respect Still Matter for Women in Tech https://dbakevlar.com/2025/10/we-can-do-better-why-representation-and-respect-still-matter-for-women-in-tech/ Mon, 20 Oct 2025 19:14:59 +0000 https://dbakevlar.com/?p=10520 I have the greatest respect for those that organize and create user group tech events. It’s an incredible challenge, especially having to compete with today’s huge, heavily funded, cloud vendor events, and due to this, one of the biggest challenges in running a tech event is driving attendance. The formula is familiar: great speakers, strong technical sessions, networking opportunities, and engaging social events. But somewhere along the way, as technology has advanced, one outdated tactic has lingered: using the presence of young women to attract a male-dominated audience.

For women in tech, this is not a trivial issue, but can be quite personal. I don’t know a single woman in our field who hasn’t experienced an uncomfortable interaction with a male peer at some point in her career. Often, the awkwardness stems from an over-abundance of alcohol, a misunderstanding or cultural difference, but the impact is the same: we are reminded that some still view us through a lens other than “technical professional.”

From early in my career, I only wanted to be recognized purely for my expertise. My husband, who had been my first mentor eight years before we became partners, understood the challenges I faced as a woman in a male-dominated space. It was also difficult not to notice how quickly perceptions changed once I became “Tim Gorman’s wife.”  The inappropriate interactions stopped almost entirely, not because I had earned respect, but because men didn’t want to offend another man. That realization has stayed with me, and I know I’m privileged to have my husband in some of the technical communities I’m part of, because it does provide me a buffer from some of the inappropriate behaviors.

At a recent event in Croatia, I was reminded how much work remains to be done. Eleven years earlier, I had spoken at the same event and was struck by how the women working there were dressed.  During the day, they helped at the event, dressed mostly in short skirts and very high heels, while the men were in T-shirts and jeans, which made it clear to everyone the event catered to the male audience.  During the nightly parties, nightclub attire was donned by these same women, and with heavy alcohol consumption, the cultural and gender divide became deeper.  At the 2025 event, I was heartened to see progress: the women were dressed more like the men, in matching event shirts, and although the shirts were fitted, the event was showing that inclusivity and professionalism can coexist. But the stares, the whispered comments, and the subtle reminders that these women were there to attract male attendees at the after parties remained.

After the conference, a social media post to promote the event’s anniversary featured these women posed like beauty pageant contestants rather than the speakers or the user group. I saw the picture and felt the air go out of me, but in that moment I was at a loss for how to respond. How can we still be here, in 2025, reducing women to visual props in an industry built on innovation? From history, I was aware it wasn’t an oversight, and I also understood there’s a cultural difference that must be considered. I had to remind myself that of all the communities I’m involved with, this is the one where I needed to be the most patient on the topic of women in tech, but I wanted to believe we’d moved beyond this. I was disheartened to see comments and likes from respected peers, including women, showing how challenging it was and how many viewed it as simply easier to turn a blind eye.

Experience has shown we cannot expect women to thrive in tech if we continue to have events, even unintentionally, perpetuate the message that our presence is ornamental. Representation isn’t just about having women on stage or in attendance; it’s about creating an environment where they are seen, heard, and respected for their skills, not their appearance.

When a male peer finally spoke up under that same post and pointedly asked, “Are these women actually in tech?” it meant more to me than he probably realized. The fact is, when men advocate for inclusion, it helps shift the culture. Many men still only recognize bias when it’s pointed out by a male peer. That’s why allies matter, both men and women allies.

So yes, I could stay silent, but I am speaking up. Not to criticize, but to educate as someone who sees how vital women in tech can be when we are treated as technical peers and not ornamental. I’m speaking up because silence makes me complicit, and complicity keeps the wheel turning where women feel unwelcome, unseen, or unsafe at the very events meant to advance everyone’s careers. We should do better. We can do better. Because the sustainability of women in tech depends not on token gestures, but on a shared commitment from everyone in the community to ensure that respect and representation are never optional.

]]>
The Power of Self-Care: My European Adventures Between Tech Events https://dbakevlar.com/2025/10/the-power-of-self-care-my-european-adventures-between-tech-events/ Sat, 18 Oct 2025 15:24:23 +0000 https://dbakevlar.com/?p=10513 Recharging, Exploring, and Sharing Knowledge So My Kevlar Stays Intact

The world is always moving fast for me, so it’s easy to overlook the importance of self-care. We often find ourselves caught up in work, deadlines, and the constant drive for productivity. However, taking the time to step away, unplug, and recharge is vital. Not only for my mental and physical well-being but also for my creativity and professional growth. Recently, I had the incredible opportunity to spend a couple of weeks traveling through Europe, exploring the vibrant cities of Amsterdam, Berlin, Prague, Vienna, Trieste, and finally, the charming coastal town of Rovinj.

My Route, 1/2 by Train, 1/2 by Car, through Europe

This journey was more than a vacation; it was a deliberate act of self-care, a chance to destress, reclaim some brain cells, and re-energize for the year ahead. I had already been accepted to speak at two events, Redgate Summit in Amsterdam and HROUG, an Oracle event in Croatia, and while considering how to take some time off, I realized I could simply stay in Europe between the two events and turn the gap into a considerable vacation.

The Importance of Self-Care and Taking Vacation

Self-care isn’t just about bubble baths, hobbies, or time with friends; it can also be about recognizing when you need a break and allowing yourself the space to enjoy it. Taking a vacation, especially one that immerses you in new cultures and experiences, offers a powerful reset. In each of the cities I visited, from wandering the canals of Amsterdam to soaking up the museums of Berlin to a fantastic “botel” on the river in Prague, I found new perspectives and renewed inspiration. I was on my own for much of this trip and simply enjoyed museums at my own pace, much of the time with noise-canceling headphones on, and it brought me immense peace. These moments reminded me that stepping away from the routine is not a luxury, but a necessity for sustained creativity and professional fulfillment.

Destressing and Reclaiming Brain Cells

Travel has a unique way of clearing the mind and invigorating the spirit. Whether it was sipping coffee in a café in Prague or getting lost in a museum in Amsterdam, I felt the stress of the previous months melt away. These experiences allowed me to recharge, return with more energy, and approach tech (and life!) with a fresh outlook. For anyone feeling stretched thin or running on empty, I can’t emphasize enough how restorative a change of scenery can be.

Refreshed, Which Means Upcoming Keynotes

After a few weeks on vacation and with renewed energy, I’m thrilled to share that I have three keynotes coming up the last two months of the year:

Each of these events represents an exciting opportunity to connect, learn, and give back to the technical communities the incredible journey this year has been as a multiplatform and AI advocate with Redgate.

SQL Saturday Oregon

SQL Saturday Oregon is a vibrant gathering of data professionals, offering a welcoming space for learning and networking in the Portland/Southern Washington area. I’m honored to be delivering a keynote here for the 4th year in a row, sharing insights and encouraging others to invest in both their technical skills and personal well-being, all with an AI spin as we all pivot to our new reality.

PASS Summit

The PASS Summit stands as one of the premier events for data professionals worldwide. It brings together experts, enthusiasts, and innovators to share knowledge and explore the latest in the Microsoft data platform. The keynote will include the awesome Steve Jones and Laura Copeland.  We’ll be introduced by the incredible Grant Fritchey, and the keynote theme should resonate deeply with the attendees on a truly geeky level: we’re doing a lot of demos, a lot of coding and automation with AI, so be ready!

UKOUG Super Sunday

Rounding out my keynote engagements is UKOUG Super Sunday on November 30th, a staple for Oracle professionals. This event offers a fantastic opportunity to connect with the UK Oracle community, discuss emerging trends, and inspire one another to grow and evolve within the field.

If three keynotes didn’t seem like enough, I will also be speaking at TechBash 2025 in the Poconos at the end of the month (not a keynote!), and I’m one of the co-organizers for the Microsoft Data Platform DEI Neurodiversity Day on Oct. 23rd. Join me at Neurodiversity Day 2025 if you’re interested at: https://meetu.ps/e/PlccY/1YkfQ/i

The Value of Technical Communities: Microsoft and Oracle

Reflecting on my experiences, both as a traveler and a speaker, I’m reminded of the incredible value that technical communities bring. Whether it’s Microsoft, Oracle, DevOps or open source, these communities foster a culture of support, innovation, and shared growth. They provide a space where professionals can exchange ideas, solve problems, and celebrate achievements together.

Being part of these communities has enriched my career in countless ways. I’ve gained knowledge, built lasting friendships, and found inspiration in the collective wisdom of peers from around the globe. As I prepare to contribute to these upcoming events, I’m excited to give back, share what I’ve learned, and encourage others to make the most of these invaluable networks.

Take Care To Be Present

Self-care and community are not mutually exclusive, but deeply intertwined. By taking care of ourselves, we show up as our best selves for our communities. By engaging with others, we find motivation and purpose that fuels our personal and professional journeys. As I look forward to the year ahead, I’m grateful for the time spent exploring Europe, the opportunity to connect with fellow professionals at upcoming events, and the vibrant communities that continue to inspire and support us all.

]]>
That Code Review with Ollama and Qwen 2.5 https://dbakevlar.com/2025/09/that-code-review-with-ollama-and-qwen-2-5/ Fri, 19 Sep 2025 20:18:06 +0000 https://dbakevlar.com/?p=10505 I just posted on how to use a PowerShell script in a flexible manner to perform different tasks with generative AI locally with Ollama, but I realized I hadn’t gone into just how cool and simple it was to run this on my laptop, along with the beauty of the script output.

Pulling from the Demo

For the demo scenario, the following exists:

  • A Postgres 17.5 database, source and target for a migration.
  • Four migration scripts requiring code review for the following:
    • Syntax
    • Row Level Security issues inside the database (RLS referenced in the prompt)
    • Missing index options
  • Flyway CLI is configured and in use for the databases and will be used as the orchestrator.

There is a series of scripts and the parameters/variables are set beforehand for the postgres user password, Flyway path, database connection info, etc.

The ai-review.ps1 script, called as part of the parent script, is written as follows:

param(
[string]$HostName,
[int]$Port,
[string]$Database,
[string]$User,
[string]$Password
)

# AI-assisted review using Ollama 

$ollama = Get-Command ollama -ErrorAction SilentlyContinue
if (-not $ollama) {
Write-Host "Ollama not found. Skipping AI review." -ForegroundColor DarkYellow
exit 0
}

$model = $env:AI_REVIEW_MODEL
if (-not $model) { $model = "qwen2.5-coder" } # choose any local model you have

# Gather SQL diffs (new files)

$files = Get-ChildItem ../sql -Filter "V*__*.sql" | Sort-Object Name
$summary = @()
foreach($f in $files){
$content = Get-Content $f.FullName -Raw
# Wrap each file in a fenced markdown block for the prompt
$summary += "### File: $($f.Name)`n" + '```sql' + "`n$content`n" + '```'
}
$prompt = @"
You are an expert database reviewer. Analyze these PostgreSQL migration files for:

- security issues (e.g., missing RLS, overly broad grants)-peformance pitfalls (missing indexes, risky scans) -rollback/reversibility notes -correctness and deployment risks

Respond with concise bullet points and a PASS/REVIEW flag per file.
$($summary -join "nn")
"@

Write-Host "Running AI review with model '$model'…" -ForegroundColor Cyan
$result = & ollama run $model $prompt
$result | Set-Content -Path ./ai_review_report.md
Write-Host "AI review written to ./ai_review_report.md" -ForegroundColor Green

If you read through the script: once the parameters are set, it performs a number of checks to verify that Ollama and the LLM can be found, as well as the DDL files.  It then tells Ollama, as part of the AI prompt, that it is an “expert database reviewer” and that, as part of this Postgres DDL review, it will check for the list I discussed in the first section of this post.  It then writes the results to a report named “ai_review_report.md”.

The End-to-End

What does the execution of the parent script look like in its full end-to-end run?

As this is part of a larger scripting scenario, notice it tests connectivity, performs the code review, outputs the report, verifies integrity, runs the Flyway migrations, performs some smoke tests and then looks at some top queries on the Postgres cluster after testing the new code that’s been released.  

AI Code Review Report

The report, which is the goal of ai-review.ps1, is the most important part, as it’s the focus of this post:

Ollama, running the Qwen 2.5 LLM, used the prompt posed to it and created a summary report with information on each of the four scripts, checking security, performance, rollback/reversibility, and correctness, then granted a FAIL, REVIEW, or PASS status flag to each script.  If any of the scripts had received a FAIL status, the migration would have exited and rolled back as part of the migration requirements.

In Summary

AI can do a lot of things, but rarely do we ask if it should, or explain how it solves a problem that plagues us every day.  Code reviews are a demand on many developers’ and DBAs’ time that can be offloaded to AI quite easily, and in a way that doesn’t require a lot of GPU power or tokens when done with a lightweight setup such as Ollama and a local LLM.

]]>
Embedding Local AI into DataOps with Powershell https://dbakevlar.com/2025/09/embedding-local-ai-into-dataops-with-powershell/ Fri, 19 Sep 2025 19:33:20 +0000 https://dbakevlar.com/?p=10501 When working with database migrations, automation is the key to speed and safety. Redgate Flyway already helps us structure and deploy migrations consistently, but what if we could add AI-powered checks directly into our existing pipelines, and do so in a safe manner as well?  I’ve been promoting this as part of my open-source and Postgres event sessions for the last few weeks and I’d like to discuss it further.  Recently, Ollama added the option of a Windows 11 installation.  No GPU or tokens required (although having GPUs won’t hurt); I ran timed tests and found it to be a solid choice for simple generative AI work when needed locally, and I even added optimization options for folks who are interested.

Now armed with a local AI and LLM, I began combining PowerShell scripts with it to create command-line solutions for tasks that incorporated what I was working with (link to the code below), including Flyway CLI and Postgres. With this post, I’d like to narrow in on how a single, flexible PowerShell pattern can drive code reviews, performance checks, conflict detection, and even release documentation as part of projects, as this is how I often view the code that I write.

I am going to ask for one thing though: no one judge my PowerShell code too harshly, that’s an area I’m still a novice.

Prerequisites

Before jumping in, let’s discuss what you’ll need to get this to work:

  • Flyway installed and configured with your database.
  • Ollama installed locally (e.g., the ollama service running, with a qwen2.5 LLM downloaded locally as well).
  • PowerShell 7.x (though most examples will work in Windows PowerShell 5.1).
  • A local database of your preferred flavor.  For my example here, I have a Postgres 17.5 database running on my PC.

Base Script: AI-Powered Code Review

I’ve pulled out the code from the overall script, but here’s the minimal script that reviews a SQL migration file.  You can add or subtract from the AI prompt to have the review do more or less, as you require:

param(
   [string]$SqlFile
)

# Load migration script contents
$sqlContent = Get-Content $SqlFile -Raw
# Run AI review with Ollama
$review = & ollama run qwen2.5 "Review this SQL migration for syntax issues, style problems, and possible risks: $sqlContent"

Write-Host "=== AI Review for $SqlFile ==="
Write-Output $review

Run it with:
.\ai-review.ps1 -SqlFile ".\migrations\V1__create_table.sql"

You’ll get natural-language feedback from the local model, flagging potential issues before running flyway migrate.  Manual review is expected at this stage, but you could add a parser that looks for any file flagged REVIEW or FAIL and stops the migration if one is found in the code review.

Extending the Script for Other Tasks

Because the script is prompt-driven, you only need to change the prompt text to perform a new kind of check. Let’s walk through some examples as we build this out to do more.  This is what I really loved about this script.  You see how an AI prompt is issued as part of the PowerShell script, so why not take that further and perform other tasks?

1. Performance Impact Check

param(
    [string]$SqlFile
)

$sqlContent = Get-Content $SqlFile -Raw

$review = & ollama run qwen2.5 "Analyze the following SQL migration for potential performance issues on large data sets. Suggest indexing, partitioning, or query rewrites if needed: $sqlContent"

Write-Host "=== Performance Analysis for $SqlFile ==="
Write-Output $review

This helps you catch long-running changes (e.g., unindexed ALTER TABLE … ADD COLUMN on massive tables) early.  Depending on the platform, you could look for potential lock escalation problems in SQL Server or TX waits or TEMP usage in Oracle, too.

2. Conflict Detection with Existing Schema

param(
    [string]$SqlFile,
    [string]$SchemaDump
)

$sqlContent = Get-Content $SqlFile -Raw
$schemaContent = Get-Content $SchemaDump -Raw

$review = & ollama run qwen2.5 "Check if the following migration conflicts with the existing schema. Highlight duplicate indexes, conflicting constraints, or dropped columns still in use. Migration: $sqlContent Existing Schema: $schemaContent"

Write-Host "=== Conflict Check for $SqlFile ==="
Write-Output $review

Pair this with a schema export (pg_dump -s for Postgres, a DataPump (expdp) export of the metadata for Oracle, sqlpackage for SQL Server) to give the model context.  The script appears simple, but it can be expanded to perform platform-specific tasks, creating a common framework for AI actions across platforms, and because the AI stays local, it offers a more secure starting point for an organization’s AI testing.

3. Standards / Best Practices Validation

param(
    [string]$SqlFile
)

$sqlContent = Get-Content $SqlFile -Raw

$review = & ollama run qwen2.5 "Check this SQL migration against best practices: use of transactions, consistent naming conventions, avoiding SELECT *, correct use of primary keys, etc. Return specific issues and recommendations. Script: $sqlContent"

Write-Host "=== Standards Check for $SqlFile ==="
Write-Output $review

This enforces team or company-level guidelines automatically.  I don’t know a DBA out there who wouldn’t find this useful and couldn’t build on it to help developers help themselves and create recommended practices.  It also offers us a way to identify whether AI solutions provide value, without requiring a heavy investment of time or money.

4. Automatic Documentation Generation

param(
    [string]$SqlFile
)

$sqlContent = Get-Content $SqlFile -Raw

$review = & ollama run qwen2.5 "Summarize the following SQL migration in plain English for release notes. Include the affected tables, columns, and purpose of the change: $sqlContent"

Write-Host "=== Release Note for $SqlFile ==="
Write-Output $review

Now your migration can generate its own release note entry, which is one less tedious, manual task that most of us dislike performing.

Orchestrating with Flyway

As the script these examples are based on was part of a larger demonstration script suite I wrote, I can’t help but mention that these were all steps to make deployments more robust.  Any of the scripting options in this post could be integrated as scripts into a larger Flyway-driven pipeline. Here’s an example run.ps1 orchestrator (although not as dynamic as the end product I’m aiming for):

param(
    [string]$Stage = "All",
    [string]$MigrationsPath = ".\migrations"
)

$migrationFiles = Get-ChildItem $MigrationsPath -Filter *.sql

foreach ($file in $migrationFiles) {
    if ($Stage -eq "All" -or $Stage -eq "Review") {
        .\ai-review.ps1 -SqlFile $file.FullName
    }
    if ($Stage -eq "All" -or $Stage -eq "Performance") {
        .\ai-performance.ps1 -SqlFile $file.FullName
    }
    if ($Stage -eq "All" -or $Stage -eq "Conflicts") {
        .\ai-conflicts.ps1 -SqlFile $file.FullName -SchemaDump ".\schema_dump.sql"
    }
    if ($Stage -eq "All" -or $Stage -eq "Docs") {
        .\ai-docs.ps1 -SqlFile $file.FullName
    }
}

if ($Stage -eq "All" -or $Stage -eq "Migrate") {
    flyway migrate
}

Once your Flyway project is created, this allows you to call the following from the command line and simply let it do the automation with Flyway as the orchestrator:

.\run.ps1 -Stage Review

.\run.ps1 -Stage All

Depending on if you’re running a specific check or all, you can run different AI-driven checks as part of your release cycle.

Why Local AI Fits This Workflow

Running Ollama locally avoids the biggest barriers to adopting AI in database workflows:

  • Security: SQL scripts and schema never leave your environment.
  • Control: You choose the model (Qwen, Llama, Mistral, etc.) and tune prompts without API quotas or vendor lock-in. 
  • Performance:  Depending on the resources available and requirements, you can build out or simplify as needed.

This makes it a practical way to embed AI into CI/CD pipelines where sensitive schema changes are in play.

Worth it

By wrapping Ollama calls inside PowerShell, you unlock a reusable, modular pattern for some of the most common needs we’d otherwise throw expensive tools or manual hours at:

  • Code review – catch mistakes before they run.
  • Performance checks – anticipate slowdowns.
  • Conflict detection – spot schema collisions.
  • Best practices validation – enforce standards.
  • Documentation generation – automate release notes.

Each task is simply a different prompt. Combined with Flyway, this becomes a flexible foundation for AI-augmented DevOps pipelines that are secure, local, and adaptable. If you’d like to download the original script suite demo and cheat sheet to get started, feel free to go over to my GitHub zip file and download it.  Have fun with it, as I think AI should be fun…and I prefer when it’s fun AND SAFE.

]]>
Career Lessons from Mentoring https://dbakevlar.com/2025/08/career-lessons-from-mentoring/ Fri, 08 Aug 2025 20:43:59 +0000 https://dbakevlar.com/?p=10495 Why I Move On, and Why You Might Need To

I mentor between three and five individuals at any given time. Many are women, though I mentor men as well. These conversations are always insightful, hopefully for them, not just me, and over time, some clear patterns have emerged. Today, I want to share a few of the tried and true rules I’ve learned and shared as part of these valuable interactions.

Why I Leave Roles (and Why You Might Consider It Sooner)

Here’s the truth: I have a very low tolerance for leadership foolishness.

People sometimes ask, half-jokingly and half-seriously, why I’ve changed jobs more often than others in my field. The real reason? I learned early on that my job is only as good as my boss. If leadership starts making foolish decisions, whether around people management or company vision, I begin to map my exit strategy.

Microsoft gave me above-average management, which is why I stayed for five years. At other places, I’ve often lasted around two years and not because I wasn’t contributing, but because I know my value. When it’s recognized and utilized, I can deliver in the top 1%. But being AuDHD, I can spot a lack of direction or leadership almost immediately. Once I see that, it’s no longer emotional, but strategic. (Okay, there was one time it was personal, but I digress…)

This approach has given me an incredibly diverse career in tech. I’ve learned from the good and the not-so-good, and those lessons now shape the advice I pass on to mentees, especially those navigating career changes during this turbulent time in the industry.

  1. Today’s Tech Industry Is Not Yesterday’s

This isn’t just about tech, but about the modern job market in general. We’re no longer in an era where you stay at one company for your entire career. Most pensions are gone and if you speak to millennials, many of them aren’t aware of what a pension is. Stock options, although popular, vest in a few years. 401Ks transfer easily, so this is not an anchor for employees to stay with a company.

In most states, work is at-will. The company doesn’t owe you a lifetime job, and you don’t owe them your loyalty at the expense of your career or worse- your happiness. You’re providing a service: your skills, your ideas, your output. In return, you deserve a constructive work environment and leadership that knows how to lead.

  2. You’re Staying Too Long (and It’s Costing You)

Far too often, I meet people who’ve stayed in roles where they’re undervalued, underpaid, and under-led.

Sometimes they stay because the benefits or salary feel “secure.” Sometimes it’s to see if a bad culture is temporary. But here’s the hard truth: if the issues are long-term, the company isn’t going to change for you.

When I see someone whose salary no longer matches their experience, 90% of the time it’s because they’ve stayed too long. Salary increases inside a company average 1–4% per year. Changing jobs? That can mean a 20–30% jump, even without perfect negotiation skills.

And staying too long can lead to panic moves: jumping at the first job offer just to escape, rather than strategically picking the right next step. That’s why I recommend starting your search nine months to a year before you plan to leave. I care about my mentees’ futures, and I want them to land in a better place than the one they left, not a worse one.

  3. Keep Your CV and LinkedIn Always Current

Your CV and LinkedIn aren’t “when I’m job-hunting” tools, they’re living career assets. Keep them fresh so you’re always ready when opportunity knocks. I can attest that many of my opportunities presented themselves when I wasn’t looking, and they took months to navigate before both parties were satisfied with the result.

Quick checklist:

  • Show impact: List what you’ve done and the value it delivered, including revenue earned, costs saved, projects shipped.
  • Be visible: Blog, guest post, write white papers, present, run webinars, or even write a book. Show that you bring value beyond the job description.
  • Be authentic: Borrow ideas if they resonate, but don’t try to be a carbon copy of someone else. If you just copy what’s already out there, you risk becoming “white noise” in the chaos of the internet.
  • Add detail: Don’t just list a title; explain your responsibilities and achievements.
  • Update quarterly: If you tend to forget, set a calendar reminder.

  4. You’re Only as Good as Your Network

Every single job I’ve had since 2009 came from a personal recommendation. In many cases, I didn’t even know the role existed until someone in my network said, “Hey, you should come work with us.”

Networking doesn’t have to mean “salesy.” It means being visible, building relationships, and staying connected so people think of you when opportunities arise. Conferences, LinkedIn, videos, blogs: these things make you known before you ever walk into the interview.

When a new opportunity comes up, I always talk to people in my network at that company to get the real story. I’ve never regretted reaching out to someone, and it’s ensured I felt more confident in my decision to join an organization.

  5. Don’t Get Hung Up on Titles

Job titles are meaningless these days. Focus on the work itself. If a role fits even 40% of your skills, apply.

Also, once inside a company, keep your eyes open. Many of my roles evolved far beyond the job description I was hired for, and you can see it in my title changes over the years. It happened because I spotted a gap, proposed a new role, and convinced leadership it would help the business. A great manager will recognize the value and make it happen, even at a company as large as Microsoft.

When you know your worth and can demonstrate your impact, you’re not just applying for jobs, you’re creating your own future.

Whether you’re just starting out or deep into your career, remember: your loyalty belongs to your career, not a logo. Learn from each role, build your network, keep your tools sharp, and never stay where you’re undervalued. The right role with the right leadership is out there and you’ll be ready when it comes.

PostgreSQL for the Oracle DBA, Part 6: Tuples, MVCC, and Two Views https://dbakevlar.com/2025/07/postgresql-for-the-oracle-dba-part-6-tuples-mvcc-and-two-views/ Thu, 17 Jul 2025 23:02:45 +0000 https://dbakevlar.com/?p=10485 After our first two weeks of ensuring Grant and I didn’t burn down SQLServerCentral while figuring out how Steve Jones has kept up his pace for so long (quite an impressive feat, I think we’d both agree!), I’m back to working on my comparisons and building more knowledge in PostgreSQL. What caught my attention this week was the simple concept of a row (or tuple), which might seem universal in relational systems; after all, data is data no matter the platform, right? But under the hood, the way databases store, manage, and control visibility of that data can differ drastically. This is especially true when comparing PostgreSQL with how Oracle manages rows.

Both systems implement MVCC (Multi-Version Concurrency Control), which allows a database to deliver high concurrency while ensuring consistency. But the mechanisms, storage structures, and performance trade-offs they use are surprisingly different.

Let’s unpack what a tuple means in PostgreSQL 17 and how it contrasts with a row in Oracle, followed by a deeper dive into each system’s approach to MVCC.

PostgreSQL: The Tuple-Centric, Append-Only Model

In PostgreSQL (focusing on version 17 for this post), a tuple is a single row stored in a heap file. Each tuple is physically independent and contains not only user data but also metadata used for MVCC visibility, such as:

  • xmin: the transaction ID that created the tuple
  • xmax: the transaction ID that deleted or updated it
  • ctid: the tuple’s physical location (block#, offset#)
  • Flags and null bitmap
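
These system columns are queryable directly, which makes them handy for exploring the model yourself. A quick illustration (the table name here is just an example):

```sql
-- Inspect MVCC metadata alongside the user data of any table
SELECT xmin, xmax, ctid, *
FROM orders       -- illustrative table name
LIMIT 5;
```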

When a row is updated, PostgreSQL doesn’t modify it in place. Instead, as the example below demonstrates, it will:

  1. Create a new version of the tuple with the updated data.
  2. Mark the old tuple as expired by setting its xmax.
  3. Finally, leave the old version in place until vacuumed.
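
Here’s a minimal sketch of that behavior in psql, reusing the hypothetical orders table; after the UPDATE, the row shows a new ctid and xmin because a brand-new tuple was written:

```sql
-- Before: note the row's ctid and xmin
SELECT ctid, xmin, xmax FROM orders WHERE id = 1;

UPDATE orders SET status = 'shipped' WHERE id = 1;

-- After: a new ctid and a new xmin, because this is a new tuple version;
-- the old version still sits on disk until VACUUM reclaims it
SELECT ctid, xmin, xmax FROM orders WHERE id = 1;
```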

This append-only update strategy ensures readers never block writers and vice versa. However, it also creates bloat that must be dealt with at some point: dead tuples accumulate and have to be cleaned up by PostgreSQL’s VACUUM or autovacuum process.

Oracle: In-Place Rows and Undo Magic

In contrast, Oracle (whether a simple classic instance or a PDB architecture) uses a more traditional model, where a row is updated in place. To maintain MVCC, Oracle generates undo records that store the before image of the row that was changed.

What happens during an update is basically as follows:

  1. The row is changed in-place in the data block.
  2. An undo record is written to the UNDO tablespace.
  3. Transactions querying a past state of the row can reconstruct it using the UNDO data if needed.
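
Step 3 is what powers features like Oracle’s Flashback Query, which reconstructs past row versions from UNDO. A quick example (table name illustrative; it only works while the needed undo data is still retained):

```sql
-- Oracle: read the row as it looked five minutes ago, rebuilt from UNDO
SELECT *
FROM orders AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE)
WHERE id = 1;
```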

Oracle’s MVCC relies on System Change Numbers (SCNs) instead of transaction IDs. Each operation is tagged with an SCN, and Oracle uses these to determine whether a row is visible to a transaction based on its snapshot.

This model has a major advantage: no need for periodic vacuuming. The cleanup of old data happens automatically via UNDO and space management mechanisms.

Key Differences Between PostgreSQL 17 and Oracle

| Feature | PostgreSQL 17 | Oracle |
| --- | --- | --- |
| Update Strategy | Append-only (creates new tuples) | In-place updates |
| MVCC Mechanism | Tuple versioning via xmin/xmax | Undo-based rollback using SCNs |
| Visibility Control | Transaction IDs and snapshots | SCN-based snapshots |
| Old Data Cleanup | Manual (VACUUM/autovacuum) | Automatic (undo retention) |
| Concurrency Handling | Readers never block writers (tuple isolation) | Readers reconstruct old versions from UNDO |
| Storage Implication | Risk of bloat if vacuum is delayed | Space-efficient, but the UNDO tablespace must sometimes be managed |
| Row Identifiers | ctid (physical location) | ROWID (block, slot, object reference) |
| Index Maintenance | Indexes must be updated for every new tuple | Less frequent due to in-place updates |

Practical Considerations

  • PostgreSQL’s model is elegant in its simplicity and excellent for analytical workloads, where immutability and snapshot isolation shine. But it requires diligent maintenance via autovacuum, especially under high write loads; the monitoring query after this list shows one way to keep an eye on it.
  • Oracle’s undo-based MVCC is space-efficient and stable for high-throughput OLTP systems, but long-running queries can still fail with the classic ORA-01555 “snapshot too old” error when the undo data they need has already been overwritten.
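
As a small example of that diligence on the PostgreSQL side, the standard pg_stat_user_tables statistics view makes dead-tuple buildup easy to spot, showing which tables autovacuum may be falling behind on:

```sql
-- Which tables are accumulating dead tuples, and when were they last vacuumed?
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```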

Summary

Both PostgreSQL 17 and Oracle are mature, powerful relational databases, but the tuple is one aspect where their philosophies diverge.

PostgreSQL views each change as a new truth, preserving history until someone cleans it up. Oracle rewrites reality but keeps a short-term diary just in case someone needs to look back.

Understanding these underlying mechanics is critical to our learning path and to the bigger picture around data consistency, recovery planning, and even optimization strategy. Appreciating how each system handles MVCC at the tuple level gives you a clearer picture of why they behave the way they do.
