This post is the sixth in an ongoing series about how to survive as the only DBA in your organization. Since October of last year, I’ve been assigned to a team responsible for owning and maintaining the development infrastructure. It’s a great team of seasoned professionals, but there’s not a single other DBA. As a result, I’ve had to think very carefully about how I go about my daily work, so as to give our customers consistently good service while still allowing those without a lot of SQL Server-related knowledge to pick up my work when I’m not available.
Why is being generic a good thing?
When I think of the word “generic”, I usually picture those off-brand foods at most grocery stores, with their simplistic labels and lackluster colors. But in the case of processes, being “generic” really means “standardized, yet flexible”. This is a good thing, because it means your processes can answer many different, but ultimately related, needs.
Let’s take an example: a development team needs to trace activity in their database. We’re going to assume that just granting this group rights to run the trace is not an option, since, let’s say, they’ve taken the server down with a poorly done client-side trace in the past (don’t laugh, it happened to me). In any case, let’s look at three options to answer this request:
Have the developers sit with you while you run a Profiler trace
I don’t like this for several reasons, not the least of which is that it is going to take a good chunk of my time. Of course, the developer will probably have no idea what they are looking for, and may not be able to reproduce the condition they are trying to capture on demand. It also still uses Profiler, which, as far as I’m concerned, should be banned.
Script out a one-time trace and have it run on the server
This is better, because it takes a lot less of my time to simply set up a server-side trace and let it run. I can then let the developers read in the trace files via something like a signed stored procedure (a topic for another day, perhaps). But there’s still the one-off aspect: who’s to say that the next time the developers need this, I’ll be around, will have saved the trace definition, and so on?
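As a sketch of what the read-back side might look like, here is a wrapper built on fn_trace_gettable. The path and object names are placeholders, and the module-signing piece is deliberately omitted; in practice, fn_trace_gettable requires ALTER TRACE permission, which is exactly what signing the procedure with a certificate would supply to callers.

```sql
-- Minimal sketch: a wrapper procedure the developers get EXECUTE rights on.
-- Path and names are hypothetical; a real version would be signed with a
-- certificate so callers don't need ALTER TRACE themselves.
CREATE PROCEDURE dbo.ReadDevTrace
AS
BEGIN
    -- DEFAULT for the second argument reads all rollover files in the set
    SELECT TextData, DatabaseName, Duration, StartTime
    FROM fn_trace_gettable(N'D:\Traces\dev_trace.trc', DEFAULT);
END;
```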
Write a templatized script that accepts a database name and a path for trace files, and use it going forward
This, to me at least, is the best option. After a slightly longer initial setup (a one-time cost to write, test, and document the script), setting up subsequent traces will take very little time. In addition, the use of a template means a consistent experience and process for my customers, even when I’m not around. And when I am around, it makes it easier for DBA Junior to handle the request, leaving me to look at more interesting things. By making the script flexible enough to handle different servers and databases, it becomes that much more useful.
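To make the idea concrete, here is a minimal sketch of what such a templatized server-side trace script might look like. The parameter values, trace path, and the particular events and columns are placeholders; a real version would add error handling, documentation, and a matching stop/cleanup script.

```sql
-- Hypothetical template: a parameterized server-side trace.
-- The two "parameters" a requester supplies are the database name and the
-- trace file path (no .trc extension; SQL Server appends it).
DECLARE @DatabaseName   sysname       = N'SalesDB',
        @TracePath      nvarchar(245) = N'D:\Traces\SalesDB_trace',
        @MaxFileSizeMB  bigint        = 50,
        @TraceID        int,
        @on             bit           = 1;

-- Create the trace; option 2 = TRACE_FILE_ROLLOVER
EXEC sp_trace_create @TraceID OUTPUT, 2, @TracePath, @MaxFileSizeMB, NULL;

-- Capture SQL:BatchCompleted (event 12) with a few useful columns:
-- 1 = TextData, 12 = SPID, 13 = Duration, 35 = DatabaseName
EXEC sp_trace_setevent @TraceID, 12, 1,  @on;
EXEC sp_trace_setevent @TraceID, 12, 12, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;
EXEC sp_trace_setevent @TraceID, 12, 35, @on;

-- Filter to the requested database only (column 35, AND, equals)
EXEC sp_trace_setfilter @TraceID, 35, 0, 0, @DatabaseName;

-- Start the trace
EXEC sp_trace_setstatus @TraceID, 1;

-- Hand this back to the requester; you'll need it to stop/close the trace
SELECT @TraceID AS TraceID;
```

Stopping the trace later is just `sp_trace_setstatus @TraceID, 0` followed by `sp_trace_setstatus @TraceID, 2` to close and delete the definition.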
I go so far as to have a “no one-off” policy at work. That is, if I do something, I script it, put some parameters in, and save it off to source control. Then I publish it in our procedures manual, so that if a similar request comes in, the team can handle it right away. It leads to a lot of scripts that don’t see heavy use, but it also means less work in the long term, and a great bag of tricks in the process.
But, can something be too generic?
Sure it can. I’ve fallen into the trap many times of trying to make one process fit way too many needs, only to end up with a monstrous, un-followable mess. If you’ve got a ton of “if this is true, do this, otherwise do that” logic in your process, you might want to consider whether you’re really answering related needs. This is kind of like the process equivalent of that awful stored procedure we’ve all seen; you know, the one that takes twenty-plus input parameters, and repeats every one of them in the WHERE clause as WHERE ((some_field = @some_parameter) OR (@some_parameter IS NULL)). It looks good, but in the end the execution is piss poor.
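For illustration, that catch-all anti-pattern looks something like the hypothetical procedure below. One common mitigation, if you must keep the catch-all shape, is OPTION (RECOMPILE), at the cost of a compile on every execution; past a certain point, though, dynamic SQL that builds only the predicates actually supplied is usually the better answer.

```sql
-- Hypothetical catch-all search procedure (table and names are illustrative).
CREATE PROCEDURE dbo.SearchOrders
    @CustomerID int  = NULL,
    @OrderDate  date = NULL
AS
BEGIN
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders
    WHERE (CustomerID = @CustomerID OR @CustomerID IS NULL)
      AND (OrderDate  = @OrderDate  OR @OrderDate  IS NULL)
    -- One cached plan must serve every parameter combination, so most calls
    -- run with a plan built for a different search. OPTION (RECOMPILE)
    -- trades compile-time CPU for a plan tailored to each call:
    OPTION (RECOMPILE);
END;
```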