Yep, you heard me right. I am going to automate everything I do. Or, at least, try to.
I’m not going to talk in detail about why I’m doing this, other than to say:
- I have better things to do with my time than perform mundane, repeatable tasks.
- I recently re-read John Sansom’s excellent blog post on the subject, and, as is usually the case with John’s work, found it rather inspiring. Bravo John!
Instead, I’m going to talk a little about two kinds of “automation”, and why I generally pick one that might not, at first glance, be the right choice.
Full versus Partial Automation
I will define something that is fully automated as a task or operation which requires absolutely zero manual effort on my part. That is, I don’t have to push a button, flip a switch, type a command, or even know that it is happening, unless something goes wrong. An example of something that is already fully automated in my world is the weekly and daily database maintenance that I have set up using Ola Hallengren’s excellent (and freely available) scripts. These run every night and ensure that my databases are backed up, corruption-free, and optimized. Unless something goes *bump* and the jobs fail, I don’t get so much as an e-mail upon completion.
By contrast, a process that is partially automated requires some manual effort, but is still optimized so as to require as little work on my part as possible. I may need to open a script, enter a few parameters, or run a PowerShell command, but outside of that, all the logic and processing is done behind the scenes, with perhaps a pretty little progress bar showing that work is, well, progressing. An example of this might be setting up a server-side SQL trace; I have a series of scripts ready that, given a database name and a path at which to place the trace files, will set up a standardized trace, grant rights on some signed stored procedures to allow non-DBAs to read the trace data, and even generate a README file to be sent to the requesting team with a series of instructions.
In an ideal world, I suppose that all of the mundane daily work I have to take care of would be in the former (fully automated) category. But in reality, I find that there’s a bit of a diminishing return on putting more and more effort into fully automating tasks. Consider this scenario / requirement: the request and creation of new databases.
At a basic level, the process we follow is something like this:
- A developer requests a new database via our in-house ticketing system.
- We (my team) ensure that the request at least contains the following details:
- The name of the database.
- The server on which to create it (or at least a version of SQL Server).
- The initial size of the database.
- The domain users or group that requires access to the database.
- Assuming we’ve got all the information required, we will go out to the database server where the databases will be placed, and determine a suitable location to place the data files. This is usually determined by the simple rule of “whichever volume has the most space”, with a few exceptions. If there is not enough space on the server, we’ll inform the requestor that there will be a delay, and start the process of ordering more storage.
- Once we know where the data and log files will be placed, we will create the database, take a full backup (to initialize the differential backup chain), and grant the access requested.
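The “whichever volume has the most space” rule from the steps above is easy to sketch. My actual scripts are PowerShell, but here is an illustrative Python sketch of the volume-selection logic; the function name, file format, and candidate volumes are hypothetical, not my real tooling:

```python
import shutil

def pick_data_volume(free_space_by_volume, required_bytes):
    """Pick the candidate volume with the most free space, per the
    'whichever volume has the most space' rule. Returns None if no
    volume can hold the requested size (i.e., time to order storage)."""
    best = max(free_space_by_volume, key=free_space_by_volume.get)
    if free_space_by_volume[best] < required_bytes:
        return None  # not enough space: delay the request, order more storage
    return best

def survey_volumes(paths):
    """Gather free space for a list of mount points / drive roots."""
    return {path: shutil.disk_usage(path).free for path in paths}
```

In practice the survey step would run against the target server (my scripts use WMI for this); the selection logic itself is just the comparison shown here.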
Now, I could certainly envision writing some kind of application that would allow developers to submit requests for new databases and process everything straight through based on the rules outlined above, similar to what I see on my web host’s control panel for MySQL databases. But, at the same time, constructing such a system would take a lot of time and effort. Contrast that with a partial automation solution, such as:
- Provide an InfoPath form for the requests, forcing people to enter the necessary details before submitting the request, thereby eliminating back-and-forth chatter.
- Use a PowerShell script that, when given a server name, database name, and data/log file sizes, will connect via WMI and determine a list of suitable locations for placing the database files, then prompt the user to select one. When selections are made, the database is created and a full backup taken. This eliminates logging into the computer (to look at drives) and stepping through a bunch of GUI screens to create the database.
- Have a second PowerShell script that can accept a text file containing domain groups / users and role names, which will then grant the access required on a specified database. The file could be generated from the InfoPath form. Again, this eliminates some GUI clicks, and minimizes the chance of fat-fingering group / user names.
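The access-granting script in the last bullet boils down to turning each line of the file into a couple of T-SQL statements. Here is a minimal Python sketch of that translation step; the one-principal-per-line, comma-separated file format is an assumption for illustration, not the actual format my scripts use:

```python
def access_statements(lines):
    """Turn 'DOMAIN\\principal,role_name' lines into the T-SQL needed to
    create the database user and add it to the requested database role
    (ALTER ROLE ... ADD MEMBER requires SQL Server 2012 or later)."""
    statements = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        principal, role = (part.strip() for part in line.split(",", 1))
        statements.append(f"CREATE USER [{principal}] FOR LOGIN [{principal}];")
        statements.append(f"ALTER ROLE [{role}] ADD MEMBER [{principal}];")
    return statements
```

For example, a line like `CONTOSO\AppDevs, db_datareader` would yield the `CREATE USER` and `ALTER ROLE` statements for that group, which the script would then execute against the specified database. Generating the statements from a file, rather than clicking through the GUI, is exactly where the fat-fingering risk goes away.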
While this will not prevent someone from having to manually process the request, it will significantly cut down on the pain of doing so, without a great deal of effort / complexity. Especially considering that these requests are fairly infrequent (perhaps one to three a month), I see no reason to spend additional time on providing a fully automated solution, when the partial one provides almost as much value.
Over the course of the year, I’ll be blogging about the various processes / means I use to accomplish this goal.
How do you automate your processes, and how do you determine a “break-even” point, if you will?