Fun in DateTime Calculations

Recently I was working on a task that required various datetime calculations, and it was really fun.

There are usually two types of approaches: one converts the datetime value to a string and derives the result through string operations such as substring, concatenation and conversion; the other uses pure datetime functions such as DATEADD and DATEDIFF. I prefer the second approach, which performs better.

I learned a lot from Itzik Ben-Gan’s article series here: (DateTime Calculations), especially the language-independent (i.e. @@DATEFIRST-agnostic) weekday calculation in part 2 of the series.
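The gist of that trick, as I understand it (my own illustration, not code from the article):

```sql
-- datepart(dw, ...) normally depends on SET DATEFIRST. Shifting the date by
-- @@datefirst - 1 days before taking the weekday cancels that setting out, so
-- the result is always 1 = Monday ... 7 = Sunday (because day 0, i.e.
-- 1900-01-01, was a Monday).
declare @d datetime = '2013-08-12';  -- a Monday
select wkday = datepart(dw, @d + @@datefirst - 1);  -- 1, under any DATEFIRST
```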

-- 1 find the first day of the prev month / current month / next month
declare @dt datetime = getdate();
select prev_mth_1st_day = dateadd(month, datediff(month, 0, @dt)-1, 0)
     , curr_mth_1st_day = dateadd(month, datediff(month, 0, @dt), 0)
     , next_mth_1st_day = dateadd(month, datediff(month, 0, @dt)+1, 0);

-- 2 find the last day of the prev month / current month / next month
select prev_mth_last_day = dateadd(month, datediff(month, 0, @dt), -1)
     , curr_mth_last_day = dateadd(month, datediff(month, 0, @dt)+1, -1)
     , next_mth_last_day = dateadd(month, datediff(month, 0, @dt)+2, -1);

-- 3.1 find the first weekday (Mon-Fri) of the prev month / current month / next month
select prev_mth_1st_wkday = case datepart(dw, dateadd(month, datediff(month, 0, @dt)-1, 0)+@@datefirst-1)
        when 6 then dateadd(month, datediff(month, 0, @dt)-1, 0)+2  -- the 1st is a Saturday
        when 7 then dateadd(month, datediff(month, 0, @dt)-1, 0)+1  -- the 1st is a Sunday
        else dateadd(month, datediff(month, 0, @dt)-1, 0) end
     , curr_mth_1st_wkday = case datepart(dw, dateadd(month, datediff(month, 0, @dt), 0)+@@datefirst-1)
        when 6 then dateadd(month, datediff(month, 0, @dt), 0)+2
        when 7 then dateadd(month, datediff(month, 0, @dt), 0)+1
        else dateadd(month, datediff(month, 0, @dt), 0) end
     , next_mth_1st_wkday = case datepart(dw, dateadd(month, datediff(month, 0, @dt)+1, 0)+@@datefirst-1)
        when 6 then dateadd(month, datediff(month, 0, @dt)+1, 0)+2
        when 7 then dateadd(month, datediff(month, 0, @dt)+1, 0)+1
        else dateadd(month, datediff(month, 0, @dt)+1, 0) end;

-- 3.2 find the first weekend day of the prev month / current month / next month
select prev_mth_1st_wkndday = case when datepart(dw, dateadd(month, datediff(month, 0, @dt)-1, 0)+@@datefirst-1) < 6
        then dateadd(month, datediff(month, 0, @dt)-1, 0) + 6 - datepart(dw, dateadd(month, datediff(month, 0, @dt)-1, 0)+@@datefirst-1)
        else dateadd(month, datediff(month, 0, @dt)-1, 0) end
     , curr_mth_1st_wkndday = case when datepart(dw, dateadd(month, datediff(month, 0, @dt), 0)+@@datefirst-1) < 6
        then dateadd(month, datediff(month, 0, @dt), 0) + 6 - datepart(dw, dateadd(month, datediff(month, 0, @dt), 0)+@@datefirst-1)
        else dateadd(month, datediff(month, 0, @dt), 0) end
     , next_mth_1st_wkndday = case when datepart(dw, dateadd(month, datediff(month, 0, @dt)+1, 0)+@@datefirst-1) < 6
        then dateadd(month, datediff(month, 0, @dt)+1, 0) + 6 - datepart(dw, dateadd(month, datediff(month, 0, @dt)+1, 0)+@@datefirst-1)
        else dateadd(month, datediff(month, 0, @dt)+1, 0) end;

-- 4.1 find the last weekday (Mon-Fri) of the prev month / current month / next month
select prev_mth_last_wkday = case datepart(dw, dateadd(month, datediff(month, 0, @dt), 0)-1+@@datefirst-1)
        when 6 then dateadd(month, datediff(month, 0, @dt), 0)-1-1  -- the last day is a Saturday
        when 7 then dateadd(month, datediff(month, 0, @dt), 0)-1-2  -- the last day is a Sunday
        else dateadd(month, datediff(month, 0, @dt), 0)-1 end
     , curr_mth_last_wkday = case datepart(dw, dateadd(month, datediff(month, 0, @dt)+1, 0)-1+@@datefirst-1)
        when 6 then dateadd(month, datediff(month, 0, @dt)+1, 0)-1-1
        when 7 then dateadd(month, datediff(month, 0, @dt)+1, 0)-1-2
        else dateadd(month, datediff(month, 0, @dt)+1, 0)-1 end
     , next_mth_last_wkday = case datepart(dw, dateadd(month, datediff(month, 0, @dt)+2, 0)-1+@@datefirst-1)
        when 6 then dateadd(month, datediff(month, 0, @dt)+2, 0)-1-1
        when 7 then dateadd(month, datediff(month, 0, @dt)+2, 0)-1-2
        else dateadd(month, datediff(month, 0, @dt)+2, 0)-1 end;

-- 4.2 find the last weekend day of the prev month / current month / next month
select prev_mth_last_wkndday = case when datepart(dw, dateadd(month, datediff(month, 0, @dt), 0)-1+@@datefirst-1) < 6
        then dateadd(month, datediff(month, 0, @dt), 0)-1-datepart(dw, dateadd(month, datediff(month, 0, @dt), 0)-1+@@datefirst-1)
        else dateadd(month, datediff(month, 0, @dt), 0)-1 end
     , curr_mth_last_wkndday = case when datepart(dw, dateadd(month, datediff(month, 0, @dt)+1, 0)-1+@@datefirst-1) < 6
        then dateadd(month, datediff(month, 0, @dt)+1, 0)-1-datepart(dw, dateadd(month, datediff(month, 0, @dt)+1, 0)-1+@@datefirst-1)
        else dateadd(month, datediff(month, 0, @dt)+1, 0)-1 end
     , next_mth_last_wkndday = case when datepart(dw, dateadd(month, datediff(month, 0, @dt)+2, 0)-1+@@datefirst-1) < 6
        then dateadd(month, datediff(month, 0, @dt)+2, 0)-1-datepart(dw, dateadd(month, datediff(month, 0, @dt)+2, 0)-1+@@datefirst-1)
        else dateadd(month, datediff(month, 0, @dt)+2, 0)-1 end;
go

-- 5 find the nth weekday (Mon/Tue/Wed/Thu/Fri/Sat/Sun) of the month @dt is in, counting from the month start
-- example: find the 5th Friday of the month of Aug, 2013
declare @nth int = 5, @dt datetime = '2013-08-12';
declare @dw tinyint = 5; -- 1=Mon, 2=Tue, 3=Wed, 4=Thu, 5=Fri, 6=Sat, 7=Sun

-- note '>' (not '>='): if the 1st falls on the target weekday, it is itself the 1st occurrence
select [1st_nth_wkday] = case when datepart(dw, dateadd(month, datediff(month,0,@dt),0)+@@datefirst-1) > @dw
       then dateadd(day, (@nth-1)*7 + (7-(datepart(dw, dateadd(month, datediff(month,0,@dt),0)+@@datefirst-1)-@dw)), dateadd(month, datediff(month,0,@dt),0))
       else dateadd(day, (@nth-1)*7 + (0-(datepart(dw, dateadd(month, datediff(month,0,@dt),0)+@@datefirst-1)-@dw)), dateadd(month, datediff(month,0,@dt),0))
       end;
go

-- 6 find the nth weekday (Mon/Tue/Wed/Thu/Fri/Sat/Sun) of the month @dt is in, counting from the month end
-- example: find the 2nd last Sunday of the current month
declare @nth int = 2, @dt datetime = current_timestamp;
declare @dw tinyint = 7; -- 1=Mon, 2=Tue, 3=Wed, 4=Thu, 5=Fri, 6=Sat, 7=Sun

select [last_nth_wkday] = case when datepart(dw, dateadd(month, datediff(month,0,@dt)+1,0)-1+@@datefirst-1) >= @dw
       then dateadd(day, -(@nth-1)*7 - (datepart(dw, dateadd(month, datediff(month,0,@dt)+1,0)-1+@@datefirst-1)-@dw), dateadd(month, datediff(month,0,@dt)+1,0)-1)
       else dateadd(day, -(@nth)*7 - (datepart(dw, dateadd(month, datediff(month,0,@dt)+1,0)-1+@@datefirst-1)-@dw), dateadd(month, datediff(month,0,@dt)+1,0)-1)
       end;
go



I believe there can be other interesting datetime-related calculations, such as quarterly ones, but they should be similar to the queries above.
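For instance, the same anchor-date pattern works at quarter granularity; a quick sketch (my own extension, not part of the original queries):

```sql
declare @dt datetime = getdate();
select prev_qtr_1st_day  = dateadd(quarter, datediff(quarter, 0, @dt)-1, 0)
     , curr_qtr_1st_day  = dateadd(quarter, datediff(quarter, 0, @dt), 0)
     , curr_qtr_last_day = dateadd(quarter, datediff(quarter, 0, @dt)+1, -1)
     , next_qtr_1st_day  = dateadd(quarter, datediff(quarter, 0, @dt)+1, 0);
```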

Posted in Uncategorized | 4 Comments

Reliably Drop A Database With CLR Stored Procedure


After reading Greg Low’s post and making my initial attempt, I kept thinking that I needed to come up with a solution that is robust in theory. The only way I can think of is to use a CLR stored procedure that performs the following three steps:

1. alter database <blah> set offline with rollback immediate;

2. find all the files that compose the <blah> database, using

select physical_name from master.sys.master_files where database_id = db_id('blah');

3. delete each file found in step 2.

So here is the C# code for the CLR stored procedure:

using System;
using System.Collections;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.IO;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static int usp_DropDB(SqlString db_name)
    {
        try
        {
            using (SqlConnection conn = new SqlConnection("context connection=true"))
            {
                // step 0: make sure the database exists
                string sql = "select cnt=count(*) from master.sys.databases where name=@dbname";
                SqlCommand scmd = new SqlCommand();
                SqlParameter parmDBName = new SqlParameter("@dbname", SqlDbType.VarChar, 128);
                parmDBName.Value = db_name.Value;
                scmd.Parameters.Add(parmDBName);
                scmd.CommandText = sql;
                scmd.Connection = conn;
                conn.Open();
                Int32 i = (Int32)scmd.ExecuteScalar();
                conn.Close();
                scmd.Dispose();

                if (i == 1)
                {
                    // step 1: take the database offline, killing other sessions
                    sql = "if exists (select * from sys.databases where name=@dbname)";
                    sql = sql + "\r\n alter database [" + db_name.Value + "] set offline with rollback immediate;";

                    SqlCommand cmd = new SqlCommand();
                    SqlParameter pDBName = new SqlParameter("@dbname", SqlDbType.VarChar, 128);
                    pDBName.Value = db_name.Value;
                    cmd.Parameters.Add(pDBName);
                    cmd.CommandText = sql;
                    cmd.Connection = conn;

                    conn.Open();
                    cmd.ExecuteNonQuery();

                    // step 2: record the physical files that compose the database
                    cmd.CommandText = "select physical_name from master.sys.master_files where database_id=db_id(@dbname)";
                    SqlDataReader rd = cmd.ExecuteReader();
                    ArrayList alFilePath = new ArrayList();
                    while (rd.Read())
                    {
                        alFilePath.Add((string)rd.GetString(0));
                    }
                    conn.Close();

                    cmd.CommandText = "drop database [" + db_name.Value + "];";
                    conn.Open();
                    cmd.ExecuteNonQuery();
                    conn.Close();

                    // step 3: delete each file found in step 2
                    for (i = 0; i <= alFilePath.Count - 1; i++)
                    {
                        File.Delete((string)alFilePath[i]);
                    }
                }
                else
                {
                    return 1; // no db found
                }
            }
            return 0; // succeeded
        }
        catch
        {
            return (-1); // -1 meaning an error occurred
        }
    }
}


Compile this with Visual Studio into a DLL; let’s assume the file is located at C:\temp\CLR_SP.dll. Then use the following SQL statements to create the CLR procedure:

use master
-- note: requires 'clr enabled'; EXTERNAL_ACCESS also requires a signed assembly
-- or the database marked trustworthy
create assembly clr_dropDB from 'C:\temp\CLR_SP.dll'
with permission_set = EXTERNAL_ACCESS;
go

create proc dbo.usp_DropDB @db_name nvarchar(128)
as external name clr_dropDB.StoredProcedures.usp_DropDB;
go

-- example
use master;
declare @id int;
exec @id = dbo.usp_DropDB @db_name = N'testdb';
print @id;


My dev/test environment is Visual Studio 2012 Professional and SQL Server 2012 Developer Edition.

Posted in Uncategorized | Leave a comment

Reliably Drop A Database In T-SQL Script


I read with interest Greg Low’s post “Reliably Dropping a Database in a T-SQL Script is Too Hard”, and I think I have a solution; at least so far I cannot prove it does not work.

   1: USE master;
   2: GO
   3:
   4: IF EXISTS(SELECT 1 FROM sys.databases WHERE name = N'Blah')
   5: BEGIN
   6:     use [Blah]
   7:     ALTER DATABASE Blah SET SINGLE_USER WITH ROLLBACK IMMEDIATE; -- all OTHER sessions inside [Blah] will be closed
   8:
   9:     alter database [Blah] set recovery full;
  10:     backup database [Blah] to disk='NUL' with init;
  11:     use master;
  12:     backup log [Blah] to disk='NUL' with norecovery; -- [Blah] db will be in restoring mode
  13:
  14:     DROP DATABASE [Blah];
  15: END;
  16: GO


There are two points worth mentioning here:

1. Before setting the database [Blah] to single_user mode, we need to be inside [Blah] (line 6); this way, all other sessions will be killed by line 7.

2. Line 12’s backup log with norecovery puts the database [Blah] into restoring mode, so no other sessions can access it.

Note, this code is for illustration and easy explanation only. In the real world, lines 6 to 14 (i.e. the IF block) should be wrapped in a dynamic SQL string; otherwise, if the [Blah] database does not exist in the first place, you will get an error.
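One way to do the wrapping, sketched with the database name still hard-coded (in practice you would build the string from a variable):

```sql
if exists (select 1 from sys.databases where name = N'Blah')
    exec (N'use [Blah];
            alter database [Blah] set single_user with rollback immediate;
            alter database [Blah] set recovery full;
            backup database [Blah] to disk = ''NUL'' with init;
            use master;
            backup log [Blah] to disk = ''NUL'' with norecovery;
            drop database [Blah];');
```

Because the dynamic batch is compiled only when EXEC actually runs, a missing [Blah] no longer trips the outer batch.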

Posted in Administration, Operation Productivity | Tagged | 2 Comments

Thoughts About MS Advanced Certification


Now MS has decided to cancel all its advanced certifications. I am here to share some of my thoughts on this cancellation and my expectations for the future.

I have read lots of blog posts discussing this cancellation, most of them against it. Indeed, I agree this is not a decent and compassionate decision from MS. Personally, however, I hope MS will come up with better alternatives later. I have always felt the MS advanced certification lacks the necessary balance between depth and breadth, and the knowledge required for the certification does not seem to have a long life, as it is tied closely to product versions. Yes, the MCM requires deep knowledge, but to the extent that the knowledge may not even be documented on MSDN, and that the SQL Server development team is needed to help prepare the certification exam?

I’d like to use myself as an example. In the days when DOS was still the primary OS, as a university student I spent almost all of my spare time researching viruses: I collected virus samples, debugged them, and figured out their patterns so I could show off how to clean them. I read tons of books such as “Inside DOS Kernel”, “How to Write Device Drivers”, “Advanced Assembly Language”, etc. No doubt, I became very strong in programming skills by studying viruses (viruses were created by really smart, even genius, people). But with Windows replacing DOS and networks replacing isolated PCs, viruses started to change in form and complexity, and I gradually found it harder to keep up with all the knowledge needed to study them; finally I had no choice but to give up. Today, looking back, I feel my time was actually wasted. If I had spent that time studying “The Art of Computer Programming” instead, I would have made better career achievements by now.

Yes, at that time I felt so cool, so proud, and I received huge “respect” from others because I could fix viruses. I believe had MS launched an “MCM for DOS” at that time, I would no doubt have jumped on the bandwagon. However, five-plus years later, the knowledge I gained from studying DOS became useless in Windows. But today (i.e. 20+ years after my virus-study days), or dare I say another 20+ years from now, the usefulness of the knowledge from “The Art of Computer Programming” will still be there.

Virus study is similar to today’s deep-knowledge learning of SQL Server. When we spend lots of time studying the data/index page structure, the log structure, undocumented DBCC commands or even memory structures, we all know that some day these may change with a new product release. As to when, it is hard to say: maybe three years, five years or fifteen years. But one thing is for sure: that day will come. Besides this doomed change, such knowledge is of rare use in real daily work, making a low ROI for your time. To me, this knowledge is good and valuable, but it comes with too high an opportunity cost.

So what advanced certification should MS have and promote to IT community? I think any advanced certification should have the following two properties:

1. Productivity-oriented: At the end of the day, an MCM needs to work for an employer or a client, and should be expected to be exemplary in improving work efficiency. In simple ROI terms, one MCM should be worth two or more non-certified DBAs in handling workload, yet still with better work quality.

2. Long-term value-oriented: If MS had had an MCM certification for SQL Server 2000, could we expect that MCM to still be as valuable as in the SQL Server 2000 era? With the current MCM certification model, I guess not, even though SQL Server 2000 is not that far behind us. (SQL Server 2000 was the mainstream MS database engine until 2005, and lots of companies upgraded to SQL Server 2005 well after 2006 or 2007.) So how can we come up with a certification that, even after 20 years, is still related to one’s capability as a DBA on new SQL Server versions?

How to design and run such a certification is actually another topic. But I hope MS does not operate this level of certification simply as a for-profit business; instead it should run the program like the Olympic Games: everyone is welcome to participate, but only the best will be crowned MCM. Those who win the MCM title should also be rewarded with a big, treasurable prize, like a real pure-gold medal. With such an event, MS will harvest not only the interest of the IT community but also reputation and goodwill that cannot be earned through a traditional certification program. We all know the Olympic Games are not for the faint of heart, and neither should the MCM certification be for DBAs and developers. This way, I am pretty sure the MCM will forever be in high pursuit, and every MCM certification event will be a celebration.

Posted in Uncategorized | Leave a comment

Agile Database Administration


I first heard the term “agile <something>” about 10 years ago. I do not have profound experience of what “agile <something>” really is, but I do know that the key to “agile <something>” is a break from the traditional way of doing <something>, and that this agile way arises out of necessity, because the traditional way simply cannot get <something> done as traditionally expected.

In recent years, many “agile” terms have popped up, like agile development, agile project management, agile requirement management, agile methodology, etc., but I seldom hear “agile database administration” (I have only seen one blog using this term: Agile Database Administration, by Stuart Ainsworth). In my personal understanding, agile database administration is needed to answer the current challenges facing DBAs.

To me, the biggest challenge for today’s DBA is the rapid change in the DBA’s responsibility domain due to technology change. With each new version release, a DBA needs to learn something new and complex and be prepared to use it, or even troubleshoot it, in the real world very soon. For example, when replication is set up, we need to monitor replication status. With mirroring set up, we need to worry about mirroring status. With the Service Broker feature in use, we need to debug Service Broker problems, etc. Besides this, I see more SQL Server instances being installed, which leads to more workload and more new issues.

All this means DBAs need a new methodology to handle current challenges without compromising work quality, and this new methodology can be found/created in the philosophical context of the original Agile Manifesto, i.e. a nimble, quick-response way to handle complex, time-consuming challenges. However, for database administration, I’d modify the manifesto to the following:

Individuals and coordination over processes and tools

Working solution over comprehensive documentation

Stakeholder integration over service level agreements

Proactive maintenance over reactive response


In the spirit of the original twelve principles behind the agile manifesto, I’d also propose the following principles for agile database administration:

The highest priority is to ensure a stable, workable and secure database environment for the customer.

1. Collect just workable (not necessarily complete) requirements to plan the work

2. Acquire just needed (not necessarily all-round) knowledge to start the work

3. Develop just a usable (not necessarily perfect) solution to finish the work

4. Document just key (not necessarily comprehensive) information to wrap up the work

5. Conduct more face-to-face communication than emails to enhance collaboration

6. Work more on automation than manual clicks to reduce errors

7. Focus more on continuous improvements than revolutions to avoid disruption

8. Invest more on common issues than exceptions to maximize ROI

I do hope our DBA community can come up with a more sophisticated agile database administration manifesto and make it a cornerstone of an academic subject, say database administration engineering.

Posted in Administration | Tagged | Leave a comment

Database Administration KPIs – 2/2

In my last post, I discussed the quantitative KPIs which, to me, are useful for “outsiders”, i.e. non-DBAs, to evaluate DBA work performance from a business perspective. However, I believe there should also be some qualitative KPIs that can best be used by “insiders” to evaluate DBA work from a more technical and sophisticated point of view. Here is the list of these KPIs:

1. Database Administration Infrastructure Index: We know that in the real world, we need transportation infrastructure of both good quality and sufficient quantity to boost the economy and production; transportation infrastructure facilitates the quick and efficient exchange of goods that revitalizes an economy. The same theory applies to database administration: DBAs need an infrastructure to quickly and efficiently collect and consume various information in order to do top-tier database administration work. Here, the database administration infrastructure may include a set of hardware and software equipment/tools that composes a database administration framework. One quick example: we often need to reproduce a production issue for further troubleshooting; to do so, we can either wait for the issue to reappear in production, or we can try to “trigger” the same issue in a dedicated DBA lab, if such a lab (including both hardware and software tools) exists. To me, this DBA lab can be considered part of a database administration infrastructure, and I have always dreamed of having a top-class dedicated lab where I could test how far my dreams can fly with various administration ideas. I am still dreaming these days. (After all, where there is a dream, there is a way.)

2. Database Administration Scalability Index: This index evaluates the DBA’s capability and capacity to handle increased duty volume. A strong scalability index means that when duty volume grows, the actual DBA workload changes little or not at all, as shown in the chart below.


Between points A and B, with the duty volume more than doubled, the DBA workload remains almost unchanged. To achieve this goal, there must be lots of automation plus a comprehensive administration platform on a robust administration infrastructure.

One example: if we can automate the restore of a database from one production environment to one QA environment, we should be able to automate multi-database restores from multiple production environments to multiple QA environments. If we further add a parallel restore mechanism to the design, I’d say this database restore practice is very scalable.

3. Database Administration Maturity Index: This index tries to address the following questions:

(i) Are there established rules/policies for a DBA to follow when dealing with administration issues? These rules/policies do not necessarily have to be followed during manual work; they can be embedded into script logic, workflow logic, etc.

Take a restore as an example: before a restore, do we need to check the target server’s disk space to see whether there is sufficient room? I guess we may all have encountered the following scenario: I need to restore a database from a full backup file and then from a differential backup, and the database is big, so the restore from the full backup file may take 8 hours; then the restore from the diff backup fails, because in the diff backup the log file has grown from 10 GB to 18 GB and thus outgrows my 15 GB of log disk space.

If a proper pre-restore check is followed, this embarrassing scenario can be avoided in the first place.
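One possible pre-restore check (the backup path below is hypothetical): ask the backup file what it will restore, then compare that against free disk space on the target.

```sql
-- 1) list the data/log files inside the backup and their sizes after restore
--    (see the Size column, in bytes)
restore filelistonly from disk = N'C:\backup\Blah_diff.bak';

-- 2) check free space per drive on the target server
--    (the undocumented xp_fixeddrives reports free MB per drive letter)
exec master.dbo.xp_fixeddrives;
```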

(ii) Is there a continuous improvement process for DBA daily practices? If there is, what are the evaluation criteria for the result of this process?

Many times, people just do what they are accustomed to and seldom think about what could be changed for the better. Or, in extreme cases, “continuous improvement” is not even encouraged. I remember long, long ago I was working at a company that required the DBA to manually check 4 production servers every day against a 2-page (about 20+ items) checklist document. What felt very bad was that I had to manually check all the SQL Server logs, the Windows event logs (application/system), job history, etc. It could easily take 3 hours to do this check. Imagine having to do it every morning; when interrupted by emails, meetings and phone calls, the process could easily last a whole day. The worst thing was that senior management did not even encourage any automation in this area, for various reasons.

(iii) Does the DBA (/team) have the requirements/resources to improve technical knowledge/skills on a regular basis? And what are the criteria to verify the improvement? Top-tier administration needs top-tier DBAs, and as such, a mature database administration environment should encourage/facilitate DBAs to continuously grow/challenge their knowledge/skills.

(iv) Is database administration corporate knowledge being taken care of? Each business environment may have different business practices/requirements for managing back-end database systems, varying from SQL service installation to common user problems. All of this is precious corporate knowledge and should be shared/maintained properly so that it is not lost due to one DBA’s absence.

(v) How are the collaboration and work relationships between the DBA team and other stakeholder teams, or among members within the DBA team? A mature DBA team should have a strong bond among its members, working for the reputation and good of the team, and should collaborate well with other teams, like network/system admin/application support teams.

In the business world, it is almost impossible to achieve something without interacting with others. One example is a performance issue: when users complain about sudden slowness of an application, it is hard to say whether the issue is caused by the database system or something else, like storage, network, the application itself, or even the user (I once saw a case where the slowness was due to bad data input from the end user). A healthy and positive work relationship avoids unnecessary blame and finger-pointing and speeds up the whole effort.


To me, a well-designed KPI, whether quantitative or qualitative, should be verifiable via both DBA work quantity and quality, and should not be open to abuse for selfish purposes. For example, assume we have a KPI called “Avg DBA daily work time”, which aims to reflect how many hours a DBA needs to handle the daily workload. This KPI can be easily manipulated; we cannot logically link more workload with longer work time, nor does long work time necessarily mean quality work. As such, it is not a proper KPI candidate.

I hope our DBA community can collaborate to come up with a set of KPIs that can be used more broadly. One practical use would be to certify companies offering remote DBA outsourcing services against these KPIs, just as lots of companies today get ISO 9000 certified. If there were such a certification, I believe it would be very popular, not only for companies trying to get certified but also for other companies, which could use the KPIs as yardsticks for internal assessment and evaluation.

Posted in Operation Productivity | Tagged | 1 Comment

Database Administration KPIs – 1/2


As DBAs, we may all have this question: how should my work be evaluated objectively? Or, put another way, what criteria would I hope others (my manager / clients / peers, etc.) use to evaluate my work?

In the DBA world, we love “no news is good news”, and as such, to me, the best DBA (/team) is one who is never seen in any news or spotlight yet holds an irreplaceable position in an organization’s IT hierarchy.

Though this qualitative description gets nods from all the DBAs I have worked with, it is hardly useful to non-DBA stakeholders, like HR or senior management, because an intangible description cannot be used for performance evaluation, which must be based on quantitative and verifiable facts.

So I think a set of quantitative KPIs will help here. But designing these KPIs is not easy, because most DBA work depends on external factors (such as network/hardware/business policies) controlled by others, and we do not want KPIs that fail to reflect the DBA’s own, sole efforts.

With some struggle and thoughts, I have come up with the following KPIs which I believe are usable in most cases:

Avg. Database Availability Time (longer is better): The average time a SQL Server service is running between two unplanned downtime windows. (Planned downtime can be excluded here.)

This KPI, strictly speaking, is not necessarily the full responsibility of the DBA team, as any unexpected network/hardware issue can make the database unavailable. But there are many items a DBA (/team) can control, such as a database log file growing out of control, tempdb outgrowing its disk, or a bad CLR stored procedure causing a mini-dump and a SQL service restart. So this KPI actually aims to indicate the DBA’s efforts to maintain the internal health of the SQL Server system and to avoid any events that may cause the system to be unavailable. At the bottom line, this KPI asks the DBA to turn the unplanned into the planned, for example by sending early warnings to stakeholders; if something is unavoidable, at least we can minimize the shock to end users and help them make a decent “exit”.


Avg. Database Usability Accidents (lower is better): The accidents that leave users unable to use the database system.

Again, this may not be the full responsibility of the DBA, as these accidents can come from many things, like bad queries from 3rd-party applications or bad hardware. This KPI mainly addresses database accidents that, with early intervention, can be prevented from impacting database usability from the end-user perspective: say, long-running reporting queries blocking a user’s regular OLTP application session, or user session timeouts due to continued log file growth caused by a tiny database file growth setting (like 1 KB).


Avg. Daily Workload per Person (higher is better): Average daily DBA service requests fulfilled per DBA.

We can use the number of service requests as a starting point, but initially we could categorize the common DBA service tasks (such as refreshing an environment, installing a SQL Server instance, deploying a script) and assign a payload factor to each category, giving us a meaningful workload indicator. I think four categories, i.e. tiny, small, medium and big, would be a good starting point.
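The payload-factor idea can be sketched as a simple weighted sum (the category names, factors and DBA names below are just my assumptions):

```sql
declare @payload table (category varchar(10) primary key, factor decimal(4,1));
insert @payload values ('tiny', 0.5), ('small', 1.0), ('medium', 3.0), ('big', 8.0);

declare @requests table (dba sysname, category varchar(10));
insert @requests values (N'alice', 'small'), (N'alice', 'big'), (N'bob', 'tiny');

-- workload per DBA = sum of the payload factors of the requests fulfilled
select r.dba, workload = sum(p.factor)
from @requests r
join @payload p on p.category = r.category
group by r.dba;  -- alice = 9.0, bob = 0.5
```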


Avg. Service Fulfillment Time (shorter is better): This KPI should be calculated as completion time per workload unit (one unit of workload can be thought of as one DBA work hour, which may include the real work time plus the necessary documentation/communication time). For example, a SQL Server instance setup task may be worth 5 units, while granting a user read-only access may count for 0.5 units. This KPI indicates to stakeholders how efficiently and responsively a DBA works.

DBA Duty Volume: Information about the number of servers/instances/databases and total database sizes.

This KPI is a good indicator of the DBA’s responsibility scope, i.e. the number of database administration “objects” and their sizes.

Although these quantitative KPIs are useful to non-DBAs, I still feel there should be some qualitative KPIs that are useful to DBAs. In other words, these qualitative KPIs are used by DBAs to evaluate DBAs. For example, as a DBA, if I were asked to evaluate another company’s DBA work, I could use these qualitative KPIs as a starting point for my evaluation. (I will come up with a list of qualitative KPIs in my next post.)

So far, I have not worked in an environment where a DBA team is evaluated against a set of KPIs, and it would be very interesting to hear stories from anyone who has such experience: things like how to maintain (i.e. collect, calculate, report) these KPIs, and their pros and cons.

Posted in Operation Productivity | Tagged | 7 Comments