IT People’s Technical Debt and Reputation Credit

In Canada and the USA, when people apply for a credit card or a loan/mortgage, the lending institutions will always run a credit history check on the applicant and, based on the result, decide whether to approve the application and what loan rate to charge. This practice no doubt greatly reduces the default risk the financial institutions take on.

On the other hand, because of this practice, most people are very careful about their financial well-being, and everyone tries their best to be responsible in their financial decisions so they can avoid a bad credit history. This is a good win-win scenario from both social and personal perspectives.

I have been in the IT industry long enough to see lots of bad outcomes caused by improper trust bestowed on people assigned or hired to take on some task. The most disturbing thing to me is that these people can leave for another job without any "credit trace" left behind for whatever "damage" they have done to their projects or work, and may repeat their faults again (in some cases they may not even realize their faults; I will give an example later).

I define "Technical Debt" as the negative impact (on an employer or the community involved) an IT professional has caused through his professional work, and "Reputation Credit" as the positive impact. I keep dreaming that if an organization existed that collected every IT professional's reputation credit history and, with some formula, assigned each person a credit score, there would be far fewer failed IT projects and far less wasted time and money across IT activities. For example, it would be easier for companies to hire people and justify a compensation package based on one's credit score, just as banks charge a lower rate on a loan when you have a high credit score; and it would be much easier for decision makers to judge whose recommendations are less risky to follow, and thus less likely to let a project slip into a mess.

Here are two true stories whose outcomes, I believe, would have been different if such a "Reputation Credit Bureau" existed:

1. A Canadian IT company hired a new CEO. The company had been service-oriented, providing outsourced application development and consulting, and had always been profitable. The new CEO, for whatever reason, tried to change the company's business model by developing a hardware product, spending millions to set up a new office in the US. After two years, the hardware product had not achieved the expected market response. The CEO then left and was hired by a start-up in Silicon Valley, while the Canadian company had to wrap up the product project, dissolve the US office, and lay off 40% of the staff in the Canadian office. It left a bad taste for everyone affected.

I bet the CEO would get a very bad score (to the brink of bankruptcy) for the technical debt owed to this Canadian company and to all those laid-off people in both the Canadian and US offices. I also bet the Silicon Valley start-up would have second thoughts if it ran a "reputation credit history" report on this candidate.

2. A company started a multi-million-dollar project using some fancy technologies. In the initial technical design discussions, some proposed design points, for example the data modelling, were strongly objected to by the DBA team, who had dealt with data modelling and SQL Server technologies all their careers; but the decision makers ultimately took the opinion of the developer team, of which the technical architect was a member (and who himself had very little knowledge of complex data modelling). The result was a huge productivity underperformance, and the project was delayed again and again. The funny part is that those who strongly recommended the design and methodology left the company within a year, and those who remain are struggling with the issues caused by the flawed design. I believe those who proposed the architecture truly thought it was the best choice, but what puzzles me is how one can (or dares to) stick to an opinion when someone with more domain knowledge on the topic is against it. By analogy, it is as if I dared to stick to my T-SQL solution proposal after Itzik Ben-Gan reviewed it and recommended otherwise. It is simply beyond my understanding how this could ever happen.

I keep thinking that if such a technical debt record existed, those without enough domain knowledge would be very cautious about proposing something they are not sure of themselves, knowing there would be a "penalty" (i.e. a low reputation credit score) down the road if things went south. The correct recommendation from the DBA team might then have been adopted (for the data modelling part), avoiding the current headaches.

Of course, the IT industry is not the financial industry, so this "reputation credit bureau" will not become reality any time soon. But with IT spending so big (and thus such a big financial stake for companies), I still believe that someday such a "reputation credit bureau" will be created in one form or another, and it will benefit not only employers (who can hire the best people they can afford) but also individuals (who will know they will be fairly compensated). Such a "bureau" could sustain itself by charging for credit history queries, just as the current credit bureaus (Equifax / TransUnion) do.

I am looking forward to that day…


New Year Wish List for SQL Server Features


With SQL Server 2016 to be officially released this year, I want to make a wish list of the features I really dream of, and see when (or whether) it will come true down the road.

1. Revolutionize SSMS with features like those in MS Office Word / Excel

With MS announcing in June 2015 its intention to release SSMS independently of the SQL Server database engine, I wish the T-SQL editor in SSMS could become as capable as the current MS Office Word and Excel applications. For example, I could arbitrarily set the color / font of any portion of code. The business reason is that this would let code carry coding conventions or business rules more visibly; for example, I could make every "Delete" red and big-sized.

On the other hand, when we query a table, we get a spreadsheet-styled result set similar to Excel; but in Excel we can automatically create a graph from a result set. Wouldn't it be wonderful to do similar things in SSMS, i.e. when we run a query, we get both the query result and, alongside it, a graph (like a pie or line chart) based on the result set?

2. Revolutionize SQL help

I always enjoy the style and simplicity of the help mechanism in PowerShell; all you need to do is run:

Get-Help -Name <KeyWord>

where KeyWord can be a cmdlet name (either system or user-defined) or a specific topic word, like about_operators.

In the SQL Server world, if you create an object (an SP, trigger, function, etc.), you can use sp_help to find the metadata of the object, but there is no way to see any user-created information in the object (unless you use sp_helptext to read the object's body script).

Besides this shortcoming, another is that we cannot display the syntax of a regular T-SQL statement. For example, you cannot use sp_help 'create index' to find the syntax of "create index".

It would be very productive if we could see T-SQL syntax within SSMS itself, and if we could build a help framework that lets developers put "help" info into stored procedures / functions / views to facilitate understanding of these objects.

3. Incorporate PowerShell into T-SQL

Just as SQL Server 2016 supports the R language, it would be great if we could mix T-SQL with PowerShell; that would open the door to many innovative scripts. For example, deleting backup files older than X days before starting a backup is currently not that simple (we need an SSIS package or a SQL Agent job to handle it).
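The cleanup logic itself is trivial once you can reach the file system; the pain is only that T-SQL cannot do so directly. A minimal sketch of the logic (in Python purely for illustration; the function name and parameters are mine):

```python
import os
import time

def purge_old_backups(folder, days, suffix=".bak"):
    """Delete backup files in `folder` older than `days` days; return deleted names."""
    cutoff = time.time() - days * 86400
    deleted = []
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        # Only touch backup files whose last-modified time is before the cutoff.
        if name.endswith(suffix) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            deleted.append(name)
    return deleted
```

Today you would express the same thing in PowerShell or an SSIS task; the wish is simply to be able to invoke it inline from a T-SQL script.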

4. Create Conditional Foreign Keys

Currently the foreign key (FK) constraint serves the referential integrity check. But as a data architect, I find the traditional FK constraint implementation pretty "ugly" in real-world design.

Think about this example: 

I have a [CUSTOMER] table, and each customer may have multiple phones, so we have a [Phone] table, inside which we have a Customer_ID column referencing [CUSTOMER]'s PK, i.e. [Customer_ID].

Now I also have an [EMPLOYEE] table, and each employee may have multiple phones as well, so I have to create another [Phone2] table, inside which we have an Employee_ID column referencing [EMPLOYEE]'s PK, i.e. [Employee_ID].

Now let's say I also have [ServiceProvider], [ATTORNEY], [TRUSTEE] tables, each of which may have multiple phones, so strictly speaking we would need a [phone] table for each corresponding owner table. This is a mess. Instead, let's say I create one brand-new [Phone] table as follows:

create table dbo.Phone (
  phone_number varchar(20)
, phone_type varchar(10) -- mobile, home, office, fax etc
, owner_id int
, owner_type varchar(20) -- Customer, Employee, ServiceProvider, Trustee, etc
)

This table has a problem implementing an FK constraint: there is no way for me to set up an FK relationship using [owner_id] to reference a PK in the [CUSTOMER] or [EMPLOYEE] table.

What I really dream of is that SQL Server could create a somewhat "conditional" FK, i.e. an FK set up based on a condition. In this case, when [owner_type]='Customer', owner_id would reference [CUSTOMER].[Customer_ID]; when [owner_type]='Employee', owner_id would reference [EMPLOYEE].[Employee_ID]; and so on.

Using pseudo code, it should be like the following:

Alter table dbo.Phone add Constraint [fk_phone_owner] Foreign Key
with [owner_id] reference to
case [owner_type] when 'Customer' then [Customer].Customer_ID
                  when 'Employee' then [Employee].Employee_ID
                  when 'Trustee'  then [Trustee].Trustee_ID

Currently, this type of foreign key integrity can be implemented via triggers, but every DBA knows triggers are best avoided whenever possible. So if the SQL Server engine could implement such conditional FK integrity, it would solve a lot of real-world headaches.
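Until the engine supports this, the check a trigger would have to enforce is essentially a lookup keyed by the owner_type discriminator. A minimal sketch of that rule (in Python for illustration; the sample tables and ID values are made up):

```python
# Parent tables keyed by the owner_type discriminator (sample data).
parents = {
    "Customer": {101, 102},   # [CUSTOMER].[Customer_ID] values
    "Employee": {201},        # [EMPLOYEE].[Employee_ID] values
    "Trustee":  {301, 302},   # [TRUSTEE].[Trustee_ID] values
}

def conditional_fk_ok(owner_type, owner_id):
    """The check a conditional FK (or, today, a trigger) would enforce:
    owner_id must exist in the parent table selected by owner_type."""
    return owner_id in parents.get(owner_type, set())
```

A native conditional FK would simply move this owner_type-driven dispatch into the engine's own referential integrity machinery.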

5. Introduce parallel maintenance for SQL Server instances.

These days, most SQL Server environments contain multiple big databases, and maintenance tasks such as backup / restore / index maintenance can be very time-consuming when done in sequence. Wouldn't it be nice if we could do all the backups in parallel with one T-SQL statement like the following (pseudo code)?

Backup Database [A], [B], [C], [D] to disk='d:\a.bak', 'd:\b.bak', 'd:\c.bak', 'd:\d.bak'

and then have four sessions (or threads) doing the backups at the same time?
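Conceptually the engine would just fan the four backups out to worker threads, the way any client tool can do today. A toy sketch of the fan-out (Python for illustration; backup_one is a stand-in for the real BACKUP DATABASE call):

```python
from concurrent.futures import ThreadPoolExecutor

def backup_one(pair):
    db, target = pair
    # Stand-in for the real work: BACKUP DATABASE [db] TO DISK = 'target'
    return f"backed up {db} to {target}"

def backup_all(pairs):
    # One worker per database so all backups run concurrently;
    # map() preserves the input order of the results.
    with ThreadPoolExecutor(max_workers=len(pairs)) as pool:
        return list(pool.map(backup_one, pairs))
```

The wish is for the engine to do this fan-out itself from a single statement, instead of the DBA scripting N sessions by hand.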

6. Enhance handling mechanism for blocking

Blocking is familiar to every DBA: annoying yet unavoidable. To me, one of the most annoying cases is when, during index maintenance, one index reorganize (or rebuild) is blocked for so long that it exceeds the maintenance window, accomplishing little and wasting the window. Starting with SQL Server 2014, "alter index" has an option to kill itself or the blocker after waiting a user-defined time (check the <low_priority_lock_wait> option).

But from a DBA perspective, I think this option should be extended to other DML statements like Insert / Delete / Update, and to any other T-SQL that may be a blockee (such as Truncate, Select, or partition-related operations).

What's more, I want to know who the blocker is and, better yet, the whole blocking chain. My imagined pseudo option (piggybacking on the existing low-priority-wait syntax; the index/table names are placeholders) would be as follows:

Alter Index [SomeIndex] on [dbo].[SomeTable] Rebuild
with ( Online = On ( Wait_At_Low_Priority (
    Max_Duration = 5 minutes
  , Abort_After_Wait = Blockers
  , Save_Blocker_Info = <xml> )))

Where <xml> can be an XML variable that records the blocking chain info. This info should be populated once MAX_DURATION is reached.

With this blocking chain info, DBAs may come up with more creative ways to manage blocking and minimize its negative impacts.

I hope some of the items will come true before 2020. God bless SQL Server. 


Be Cautious About Invoke-SqlCmd Bug

With PowerShell becoming more popular in daily DBA work, I'd like to mention one huge, ugly bug I recently encountered in what is, IMHO, a cornerstone cmdlet of the SQLPS module: invoke-sqlcmd.

I mentioned this here (and also in MS forum)

But I'll give a simplified version here to verify this bug (for the SQLPS module of SQL 2012 / SQL 2014).

First, let’s create two test tables in TestDB

use testdb
--drop table dbo.s, dbo.t

create table dbo.s (id int identity primary key, b varchar(50));
create table dbo.t (id int primary key, b varchar(50));

OK, now in PowerShell let's run two T-SQL statements, one inserting into each table, with the 2nd statement firing an error (I purposely make it error out by not populating the primary key column):

import-module sqlps;
invoke-sqlcmd -ServerInstance localhost -Database TestDB -Query "insert into dbo.s (b) values ('Hello'); insert into dbo.t (b) values ('World')"

Now you will see the following error


Now, if we check what is in table dbo.s by running select * from dbo.s, we will see two records instead of the expected one record, as shown below


To me, this bug alone will stop me from using invoke-sqlcmd, yet without it, SQLPS seems crippled in many cases.

When I tried to report this as a bug on MS Connect, I found someone had reported a similar issue back in 2013, yet MS closed it as "Won't Fix". I guess this was another bad decision from some PM who did not really appreciate the importance of this bug. The MS Connect item is here.

I hope those DBAs who are keen on using SQLPS will vote it up to make MS change their weird previous decision.

Once SQL 2016 is RTM'ed, I will test it again, see how it goes, and report back here.


Database Administration via CrowdSourcing

According to Wikipedia:

 Crowdsourcing is the process of obtaining needed services, ideas, or content by soliciting contributions from a large group of people, and especially from an online community, rather than from traditional employees or suppliers.

Every DBA must have googled something to address a technical issue, and broadly speaking this can be considered crowdsourcing for work purposes. But that is a very light form of crowdsourcing. To me, real, high-quality, serious crowdsourcing for database administration needs to be treated as a real project, with planning, a timeline, goals, a budget, and some senior DBA resources. The key to success is that senior DBA, who should be able to design the needed system and divide it into small logical units suitable for crowdsourcing.

Some advantages for crowdsourcing in database administration can be:

1. Scalability of database administration capacity: we can harvest the wisdom of the online community to increase our productivity and efficiency instead of relying on more man-hours to meet increased business requirements.

2. Cost control: DBA work is not cheap, yet finding quality resources is challenging most of the time. With proper crowdsourcing, we can minimize cost and reduce burn-out of existing DBAs.

3. Work quality control: through paid crowdsourcing channels, we can usually award the task to the best-rated bidder. Through non-paid crowdsourcing, people vote for the best answer, which acts as a quality-control feature free to everyone.

4. Potential big time savings: in database administration there are quite a few impressive open-sourced tools which, if used properly, may save us a huge amount of time. Developing such free tools alone could easily cost an excellent senior DBA hundreds (if not thousands) of hours.

Currently, crowdsourcing does not attract much attention from IT leaders. I think this is mostly because of a few hurdles:

1. Lack of crowdsourcing management professionals: crowdsourcing is mainly rooted in online communities, but there are a vast number of channels for crowdsourcing opportunities, such as Twitter, LinkedIn, StackOverflow, experts' personal blogs, and various online forums on specific websites (in the SQL Server domain, we have the MS forums, SSC forums, etc.); professional conferences (such as the PASS annual conference) can be a good channel as well. To manage these channels, we need people who know what to ask and how to ask in each different channel, and who can engage the online community in constructive conversations; in the end, the senior DBA should be able to assemble everything into a working solution. This is no easy work.

2. Hard to trace responsibility: if I start a crowdsourcing project and it fails, who takes the responsibility, the online community or me? On the other hand, if I am assigned a specific project and it fails, it is my sole responsibility, which is easy for management to manage.

Crowdsourced administration is not necessarily easy or inexpensive, especially for small projects, but crowdsourcing can be a good way to amplify the capacity of existing resources and maximize investment in technical projects. Most importantly, it can reduce the dependency on hiring more qualified staff before work can start.

Maybe in the future we will see 3rd-party companies dedicated to such crowdsourcing projects, selling the service / solution / product to other companies.


"Rookie of the Year – 2014" Award

Today, I am excited and humbled to learn that I have been awarded "Rookie of the Year – 2014".

I have to say it was probably my best decision in 2014 to start writing for the site, as I am not only paid decently for writing about what I love to share, but also forced to do a lot of R&D in areas I am not very familiar with, such as .NET classes, PowerShell, SMO, etc. This R&D work has benefited me a lot.

Writing an article is different from writing a blog post. Writing a blog post is easy in the sense that you know it will be "published", but writing an article for a highly regarded website is another thing entirely: first, there is no guarantee your article will be published, and second, even if it is, you are likely to get caught out for any bugs or deficiencies in your code or logic. To me, the worry that my article might fall below the accepted quality forces me to spend more time and effort crafting it, from idea selection and article composition to tests in multiple environments (from SQL Server 2005 to SQL Server 2014) and continuous proof-reading and modification. The process is much longer and more painful than writing a blog post.

I really encourage all SQL Server professionals to share their ideas in whatever way they like: blog, forum, or articles. Personally, I'd recommend writing tips if you have some practical ones that are not too long and are easy to follow. It is very rewarding to know that your tip helps others in their daily work, and on the other hand, it is awesome to treat yourself to a good dinner out with the royalties received.


Bug in SQL Server 2014 SQLPS For Registered Servers/Groups

I recently had an opportunity to write an article on managing registered servers under "Local Server Groups", and Greg Robidoux was kind enough to review it for me. He got back to me saying my script did not work under SQL Server 2014 SSMS, though it works under SQL Server 2012. Indeed, I had tested the script only in SQL Server 2008/R2 and SQL Server 2012, because I had not installed SQL Server 2014 (I was expecting SP1 to be released soon).

So here is the background information: we can create registered servers / groups via PowerShell (i.e. the SQLPS environment). What I need to do is start SQL Server 2012 SSMS (abbr. as SSMS 2012 hereafter), connect to my local SQL instance, right-click an object, and from the pop-up menu click "Start PowerShell" as shown below



I then can enter into SQL Server 2012 SQLPS command window, where I can go into SQLServer:\SQLRegistration “folder” as shown below



Then I can use new-item cmdlet to create groups and registered servers.

For example, in the SQLPS 2012 window, if I run "new-item -path Group_A -type directory"



I will see a group "Group_A" appear in SSMS 2012's "Registered Servers" window:



However, if I do the same thing in SQLPS 2014, I will not see “Group_A” in SSMS 2014.

In my case, I prepared a script that will automatically create all the groups / registered servers based on my input, the generated groups / registered servers can be like the following:



I have tested my script in SQL Server 2008 / 2008 R2 / 2012 environments, no issue found, everything went well.

Since Greg told me it did not work in SSMS 2014, I installed a SQL Server 2014 instance on my computer, which already had two SQL instances, one SQL Server 2008 R2 and the other SQL 2012. After installation, I tested my script in the SQL 2014 SQLPS window, and sure enough, it did not work; i.e. in SSMS 2014 I could not see the registered servers / groups shown in Fig-4. However, I noticed a weird thing: after running my script in SQLPS 2014, I saw all the registered servers / groups in SSMS 2012, though not in SSMS 2014, nor in SSMS 2008R2 (of course, I had cleaned everything from SSMS 2012 beforehand). So this got me thinking there must be something wrong in SQL Server 2014.

So my first task in the debugging process was to figure out where this registered servers/groups information is saved. After some googling, I found and verified the answer from this link. (Big thanks to both who asked and answered the question; you are my heroes.)

On my computer, the RegSrvr.xml for SSMS 2014 is located in the following folder (notice \120\, which is the SQL Server 2014 version number)



Similarly, the RegSrvr.xml for SSMS 2012 is located at: (notice \110\, which is the SQL Server 2012 version number)



With this information and the weird observation (i.e. registered servers/groups appearing in SSMS 2012 instead of SSMS 2014), I guessed that SQLPS 2014 probably manipulates the RegSrvr.xml in the wrong folder. So I cleaned up the registered servers/groups in SSMS 2012 and noted the [Date modified] value of RegSrvr.xml in the \110\ folder, say 10:00pm, and likewise the [Date modified] of the RegSrvr.xml in the \120\ folder, say 10:02pm. Then at 10:20pm I ran my script from the SQLPS 2014 window to register all the servers/groups and checked the [Date modified] values in both folders again: no surprise, the file in \110\ changed from 10:00pm to 10:20pm, while the one in \120\ remained at 10:02pm.

So this indeed proves SQLPS 2014 has a bug when it deals with registered servers/groups for “Local Server Groups” in SSMS 2014.

Just for fun, I copied RegSrvr.xml from the \110\ folder to the \120\ folder (overwriting the existing one) and started SSMS 2014 again. Just as expected, all the registered servers/groups appeared in SSMS 2014's "Registered Servers" window. Wow!


Decode SQL Server Job Schedules

As a one-time data architect, I cannot help admiring and appreciating the data model behind SQL Server job schedules. The core design is actually just one table, msdb.dbo.sysschedules, but the beauty/elegance of the design is that it supports many different types of schedule, each with a rich set of variations; for example, a schedule can be: on the weekdays of the 2nd week of every 3 months, run every 30 min between 1:00am and 3:00pm, starting at 1:15am.

Recently, I have been working on a task:

Given two parameters, @Start_DateTime and @End_DateTime, find all (or specific) scheduled jobs that will run during this time window, and list the exact scheduled run times for each job.

I initially tried to google for a solution, but I simply could not find one that met my needs. The closest I could find is "Keep Track of all your jobs schedules", but it has deficiencies that produce incorrect results. For example, if the job schedule is daily, running every hour from 8:00am to 8:00pm, and you want the job schedules between 10:05am and 1:05pm, the result gives you all schedules from 8:00am to 8:00pm instead of just those between 10:05am and 1:05pm; and sometimes it simply returns no job schedules at all.

So I started to work on my own version. I have to say it has been a fun and rewarding journey: my 1st version was about 1,300 lines of complicated if/else/case logic, my 2nd version about 800 lines, and my 3rd (the current) version is about 400 lines. I have tested it against various weird schedules (in SQL Server 2012, and a few SQL Server 2008 cases as well), like a schedule of the 2nd Tuesday/Friday of every 3 months, and a job with multiple daily / weekly schedules, and I always get correct results.

A few explanatory notes about the code, to make it easier to understand:

1. @Start_DateTime and @End_DateTime should be in the future, i.e. both > getdate().

2. I rely on next_run_date/next_run_time of msdb.dbo.sysjobschedules, which is refreshed every 20 minutes as indicated in BOL. This means that if you create a new job with a new schedule, the next_run_date / next_run_time columns will not be populated immediately, and until they are, my script may not return correct results.

3. The most difficult part is the calculation of "the <nth> <weekday> of every <X> months"; the difficulty is that the weekday from datepart(weekday, <@date>) is affected by the @@DateFirst setting (language dependent). Fortunately, I found a language-independent solution in Itzik Ben-Gan's series of articles (DateTime Calculations).
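Two of the fiddly bits above are easy to see in isolation. First, next_run_date / next_run_time are plain integers (e.g. 20160308 and 13000 for 2016-03-08 01:30:00) that have to be glued back together with zero-padding. Second, anchoring the weekday count on a known Monday makes it immune to the language setting, which is the essence of the trick in point 3. Both are sketched here in Python purely for illustration (the function names are mine):

```python
from datetime import date, datetime

def decode_run_datetime(run_date, run_time):
    """Decode msdb's integer pair, e.g. (20160308, 13000) -> 2016-03-08 01:30:00.
    run_time is HHMMSS with leading zeros dropped, hence the zero-padding."""
    return datetime.strptime(f"{run_date:08d} {run_time:06d}", "%Y%m%d %H%M%S")

ANCHOR = date(1900, 1, 1)  # a known Monday (the same anchor as T-SQL's datetime 0)

def weekday_fixed(d):
    """1 = Monday ... 7 = Sunday regardless of any @@DateFirst/language setting,
    mirroring the datepart(dw, @date + @@datefirst - 1) expression in the script."""
    return (d - ANCHOR).days % 7 + 1

def nth_weekday_of_month(year, month, weekday, n):
    """Date of the n-th given weekday of a month, e.g. the 2nd Tuesday."""
    first = date(year, month, 1)
    offset = (weekday - weekday_fixed(first)) % 7
    return first.replace(day=1 + offset + 7 * (n - 1))
```

For instance, nth_weekday_of_month(2016, 3, 2, 2) gives 2016-03-08, the 2nd Tuesday of March 2016, no matter what locale the machine runs under.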

if object_id('dbo.uspCheckJobFutureSchedule', 'P') is not null

    drop proc dbo.uspCheckJobFutureSchedule;
go

create proc dbo.uspCheckJobFutureSchedule

  @Start_DT datetime

, @End_DT datetime

, @JobName varchar(128)=''

, @ScheduleName varchar(128)=''
as
begin --proc

    if object_id('tempdb.dbo.#tmp', 'U') is not null

        drop table #tmp;

    if object_id('tempdb.dbo.#job', 'U') is not null

        drop table dbo.#job;

    set nocount on;

    if object_id('tempdb..#tblCalendar', 'U') is not null

        drop table #tblCalendar;

    create table #tblCalendar (dt smalldatetime, [year] smallint, [month] tinyint, [day] tinyint, [weekday] tinyint, [week_of_month] tinyint)

    declare @tbl table (dt smalldatetime, [year] smallint, [month] tinyint, [day] tinyint, [weekday] tinyint, [week_of_month] tinyint);

    --declare @start_dt datetime, @end_dt datetime;

    declare @diff_days int, @calc_start_dt datetime, @i int, @inner_loop_dt datetime;

    declare @current_dt datetime, @current_week datetime;

    declare @dw int;

    declare @calc_end_dt datetime;

    declare @min_end_date datetime, @max_start_date datetime, @dt datetime;

    declare @next_run_date int, @next_run_time int, @next_run_dt datetime;

    --select @start_dt = '2014-02-21', @end_dt = '2014-04-28';

    -- first get all jobs that are scheduled, with job name and the schedule details 

    create table #tmp (

      schedule_id int

    , jobname sysname

    ,schedule_name    sysname

    ,freq_type    int

    ,freq_interval    int

    ,freq_subday_type    int

    ,freq_subday_interval    int

    ,freq_relative_interval    int

    ,freq_recurrence_factor    int

    ,active_start_date    char(8)

    ,active_end_date    char(8)

    ,active_start_time    char(8)

    ,active_end_time    char(8)

    ,active_start_date_int   int

    ,active_end_date_int int

    ,active_start_time_int int

    ,active_end_time_int   int

    ,next_run_date int

    ,next_run_time int
    );
    insert into #tmp (schedule_id, jobname, schedule_name, freq_type
    , freq_interval
    , freq_subday_type
    , freq_subday_interval
    , freq_relative_interval
    , freq_recurrence_factor
    , active_start_date
    , active_end_date
    , active_start_time
    , active_end_time
    , active_start_date_int
    , active_end_date_int
    , active_start_time_int
    , active_end_time_int
    , next_run_date
    , next_run_time)
    select s.schedule_id, j.name, s.name, s.freq_type
    , s.freq_interval
    , s.freq_subday_type
    , s.freq_subday_interval
    , s.freq_relative_interval
    , s.freq_recurrence_factor
    ,cast(s.active_start_date as char(8))
    ,cast(s.active_end_date as char(8))
    ,substring(right('00000' + cast(s.active_start_time as varchar(6)), 6),1,2)+':'+    substring(right('00000' + cast(s.active_start_time as varchar(6)), 6),3,2)+':'+substring(right('00000' + cast(s.active_start_time as varchar(6)), 6),5,2)
    ,substring(right('00000' + cast(s.active_end_time as varchar(6)), 6),1,2)+':'+    substring(right('00000' + cast(s.active_end_time as varchar(6)), 6),3,2)+':'+substring(right('00000' + cast(s.active_end_time as varchar(6)), 6),5,2)
    , s.active_start_date
    , s.active_end_date
    , s.active_start_time
    , s.active_end_time
    , js.next_run_date
    , js.next_run_time

    from msdb.dbo.sysschedules s

    inner join msdb.dbo.sysjobschedules js

    on js.schedule_id = s.schedule_id

    inner join msdb.dbo.sysjobs j

    on j.job_id = js.job_id

    where s.enabled=1

    and j.enabled =1

    and js.next_run_date > 0

    and s.name = case @ScheduleName when '' then s.name else @ScheduleName end

    and j.name = case @JobName when '' then j.name else @JobName end

    -- delete all job schedules that are beyond the boundary of the @start_dt and @end_dt

    -- for example, if a job is scheduled to be run in Jan, 2020, and if @start_dt='2018-01-01' and @end_dt = '2018-02-01', 

    -- then we do not need to consider this job schedule (i.e. Jan 2020)

    delete from #tmp

    where convert(datetime, active_start_date +' '+ active_start_time ) > @end_dt

    or convert(datetime, active_end_date +' '+ active_end_time ) < @start_dt;

    -- create a temp table to hold those jobs that will be started between @start_dt and @end_dt window

    create table dbo.#job (

      ServerName sysname default @@servername

    , jobname sysname

    , ScheduleName sysname

    , run_datetime datetime
    );
    declare @schedule_id int, @schedule_name sysname, @active_start_dt datetime, @active_end_dt datetime;

    declare @freq_type int, @freq_interval  int

    , @freq_subday_type   int

    , @freq_subday_interval  int

    , @freq_relative_interval  int

    , @freq_recurrence_factor  int

    , @active_start_time_int int

    , @active_end_time_int int;

    declare curSch cursor for

    select distinct schedule_id, schedule_name, active_start_dt=convert(datetime, active_start_date +' '+ active_start_time )

    , active_end_dt=convert(datetime, active_end_date +' '+ active_end_time )

    , freq_type, freq_interval

    , freq_subday_type

    , freq_subday_interval

    , freq_relative_interval

    , freq_recurrence_factor

    , active_start_time_int

    , active_end_time_int

    , next_run_date

    , next_run_time

    from #tmp;

    open curSch;

    fetch next from curSch into @schedule_id, @schedule_name, @active_start_dt, @active_end_dt

    , @freq_type, @freq_interval

    , @freq_subday_type

    , @freq_subday_interval

    , @freq_relative_interval

    , @freq_recurrence_factor

    , @active_start_time_int

    , @active_end_time_int

    , @next_run_date

    , @next_run_time;

    while @@fetch_status = 0

    begin -- loop

        if @freq_type = 1 -- one time only

        begin -- one time only

            if @active_start_dt between @start_dt and @end_dt

                insert into #job (jobname, ScheduleName, run_datetime)

                select jobname, @schedule_name, convert(datetime, active_start_date +' '+ active_start_time)

                from #tmp

                where schedule_id = @schedule_id

        end -- one time only


        else
        begin -- else not one time only

            select @next_run_dt = convert(datetime, cast(@next_run_date as varchar(10)) +' ' + substring(right('00000' + cast(@next_run_time as varchar(6)), 6),1,2)+':'+    substring(right('00000' + cast(@next_run_time as varchar(6)), 6),3,2)+':'+substring(right('00000' + cast(@next_run_time as varchar(6)), 6),5,2))

            select @max_start_date = case when @next_run_dt >= @start_dt then @next_run_dt else @start_dt end,

            @min_end_date = case when @active_end_dt >= @end_dt then @end_dt else @active_end_dt end;

            select @calc_start_dt = dateadd(month, datediff(month, 0, @max_start_date), 0); -- switch to the month's first day for @max_start_date

            select @calc_end_dt = dateadd(month, datediff(month, 0, @min_end_date)+1, 0)-1; -- switch to the month's last day for @min_end_date

            -- generate a calendar table between @calc_start_dt and @calc_end_dt

            set @inner_loop_dt = @calc_start_dt

            truncate table #tblCalendar;

            begin tran

            while @inner_loop_dt <= @calc_end_dt
            begin
                insert into #tblCalendar (dt, [year], [month], [day], [weekday], [week_of_month])
                select @inner_loop_dt, year(@inner_loop_dt), month(@inner_loop_dt), day(@inner_loop_dt), datepart(dw, @inner_loop_dt + @@datefirst -1)
                , datepart(week, @inner_loop_dt) - datepart(week, dateadd(month, datediff(month, 0, @inner_loop_dt), 0)) + 1;
                set @inner_loop_dt = @inner_loop_dt + 1;
            end


            commit tran

            delete from @tbl;

        if @freq_type = 4 -- daily 

        begin -- daily

            insert into @tbl (dt)

            select dt from #tblCalendar

            where datediff(day, @max_start_date, dt) >=0  and datediff(day, dt, @min_end_date)>=0

            and datediff(day, @active_start_dt, dt)%@freq_interval = 0;

        end -- daily

        if @freq_type = 8 -- weekly 

        begin -- weekly

            insert into @tbl (dt)

            select dt from #tblCalendar

            where datediff(day, @max_start_date, dt) >=0  and datediff(day, dt, @min_end_date)>=0

            and datediff(week, @next_run_dt, dt)%@freq_recurrence_factor = 0

            and power(2, datepart(dw, dt+@@datefirst-1)%7) & @freq_interval >=1;

        end -- weekly

        if @freq_type = 16

        begin -- monthly, @freq_type = 16

            insert into @tbl (dt)

            select dt from #tblCalendar

            where datediff(day, @max_start_date, dt) >=0  and datediff(day, dt, @min_end_date)>=0

            and datediff(month, @active_start_dt, dt)%@freq_recurrence_factor = 0

            and [day]=@freq_interval;

        end -- monthly, @freq_type = 16

        if @freq_type = 32

        begin -- monthly, @freq_type = 32

            -- find the days that the scheduled jobs will be run (later, we will loop through time to figure out the exact date/time)

            if @freq_relative_interval in (1, 2, 4, 8) -- first / second / third / fourth
            begin


                if @freq_interval < 8 -- a specific weekday (freq_interval: 1 = Sunday ... 7 = Saturday)
                begin
                    with c as (
                        select dt, [year], [month], [weekday], [day], [week_of_month], rn=row_number() over (partition by [year], [month] order by dt asc)
                        from #tblCalendar
                        where dt between @calc_start_dt and @calc_end_dt
                        and [weekday] = case @freq_interval when 1 then 7 else @freq_interval -1 end
                    )
                    insert into @tbl (dt)
                    select dt from c
                    where datediff(day, @max_start_date, dt) >=0  and datediff(day, dt, @min_end_date)>=0
                    and rn =  log(@freq_relative_interval)/log(2)+1
                    and datediff(month, @calc_start_dt, dt)%@freq_recurrence_factor = 0;
                end


                if @freq_interval = 8

                    insert into @tbl

                    select * from #tblCalendar

                    where datediff(day, @max_start_date, dt) >=0  and datediff(day, dt, @min_end_date)>=0

                    and [day] = log(@freq_relative_interval)/log(2)+1 -- 1st / 2nd / 3rd / 4th day of the month

                    and datediff(month, @calc_start_dt, dt)%@freq_recurrence_factor = 0;

                if @freq_interval=9 -- weekday

                    insert into @tbl

                    select * from #tblCalendar

                    where datediff(day, @max_start_date, dt) >=0  and datediff(day, dt, @min_end_date)>=0

                    and week_of_month =  log(@freq_relative_interval)/log(2)+1

                    and [weekday] <6

                    and datediff(month, @calc_start_dt, dt)%@freq_recurrence_factor = 0;

                if @freq_interval=10 -- weekend

                    insert into @tbl

                    select * from #tblCalendar

                    where datediff(day, @max_start_date, dt) >=0  and datediff(day, dt, @min_end_date)>=0

                    and week_of_month =  log(@freq_relative_interval)/log(2)+1

                    and [weekday] in (6, 7)

                    and datediff(month, @calc_start_dt, dt)%@freq_recurrence_factor = 0;


            end -- @freq_relative_interval in (1, 2, 4, 8)
            else -- @freq_relative_interval = 16 -- last

            begin -- @freq_relative_interval = 16

                if @freq_interval < 8 -- last occurrence of a specific weekday
                begin
                    with c as (
                        select dt, [year], [month], [weekday], [day], [week_of_month], rn=row_number() over (partition by [year], [month] order by dt desc)
                        from #tblCalendar
                        where dt between @calc_start_dt and @calc_end_dt
                        and [weekday] = case @freq_interval when 1 then 7 else @freq_interval -1 end
                    )
                    insert into @tbl
                    select dt, [year], [month], [weekday], [day], [week_of_month] from c
                    where datediff(day, @max_start_date, dt) >=0  and datediff(day, dt, @min_end_date)>=0
                    and rn=1
                    and datediff(month, @calc_start_dt, dt)%@freq_recurrence_factor = 0;
                end


                if @freq_interval = 8 -- last day of the month
                begin
                    with c as (
                        select dt, [year], [month], [weekday], [day], [week_of_month], rn=row_number() over (partition by [year], [month] order by [day] desc)
                        from #tblCalendar
                        where dt between @calc_start_dt and @calc_end_dt
                    )
                    insert into @tbl
                    select dt, [year], [month], [weekday], [day], [week_of_month] from c
                    where datediff(day, @max_start_date, dt) >=0  and datediff(day, dt, @min_end_date)>=0
                    and rn =1
                    and datediff(month, @calc_start_dt, dt)%@freq_recurrence_factor = 0;
                end


                if @freq_interval=9 -- last weekday (Mon - Fri)
                begin
                    with c as (
                        select dt, [year], [month], [weekday], [day], [week_of_month], rn=row_number() over (partition by [year], [month] order by dt desc)
                        from #tblCalendar
                        where dt between @calc_start_dt and @calc_end_dt
                        and [weekday] < 6
                    )
                    insert into @tbl
                    select dt, [year], [month], [weekday], [day], [week_of_month] from c
                    where datediff(day, @max_start_date, dt) >=0  and datediff(day, dt, @min_end_date)>=0
                    and rn=1
                    and datediff(month, @calc_start_dt, dt)%@freq_recurrence_factor = 0;
                end


                if @freq_interval=10 -- last weekend day (Sat / Sun)
                begin
                    with c as (
                        select dt, [year], [month], [weekday], [day], [week_of_month], rn=row_number() over (partition by [year], [month] order by dt desc)
                        from #tblCalendar
                        where dt between @calc_start_dt and @calc_end_dt
                        and [weekday] in (6, 7)
                    )
                    insert into @tbl
                    select dt, [year], [month], [weekday], [day], [week_of_month] from c
                    where datediff(day, @max_start_date, dt) >=0  and datediff(day, dt, @min_end_date)>=0
                    and rn=1
                    and datediff(month, @calc_start_dt, dt)%@freq_recurrence_factor = 0;
                end


            end    -- @freq_relative_interval = 16    

        end -- monthly, @freq_type = 32

        if exists (select * from @tbl) -- we do have days that match the schedule; now loop through the time window to find the exact run times
        begin
            declare curT_daily cursor for

            select dt from @tbl;

            open curT_daily;

            fetch next from curT_daily into @dt;

            while @@fetch_status =0

            begin -- while loop

                    set @inner_loop_dt = dateadd(second, @active_start_time_int/10000*3600+@active_start_time_int%10000/100*60+@active_start_time_int%100, @dt);

                    while @inner_loop_dt <= dateadd(second, @active_end_time_int/10000*3600+@active_end_time_int%10000/100*60+@active_end_time_int%100, @dt)
                    begin
                        if @inner_loop_dt between @max_start_date and @min_end_date
                            insert into #job (jobname, ScheduleName, run_datetime)
                            select jobname, @Schedule_Name, @inner_loop_dt
                            from #tmp
                            where schedule_id = @schedule_id;

                        if @freq_subday_type = 1 -- runs once at the scheduled time
                            set @inner_loop_dt = @inner_loop_dt + 1; -- add a day to step past the end time and exit the loop
                        else if @freq_subday_type = 2 -- every @freq_subday_interval seconds
                            set @inner_loop_dt = dateadd(second, @freq_subday_interval, @inner_loop_dt);
                        else if @freq_subday_type = 4 -- every @freq_subday_interval minutes
                            set @inner_loop_dt = dateadd(minute, @freq_subday_interval, @inner_loop_dt);
                        else if @freq_subday_type = 8 -- every @freq_subday_interval hours
                            set @inner_loop_dt = dateadd(hour, @freq_subday_interval, @inner_loop_dt);
                    end -- inner time loop

                fetch next from curT_daily into @dt;

            end -- while loop

            close curT_daily;

            deallocate curT_daily;
        end -- if exists


        end -- else, not one time only

        fetch next from curSch into @schedule_id, @schedule_name, @active_start_dt, @active_end_dt

        , @freq_type, @freq_interval

        , @freq_subday_type

        , @freq_subday_interval

        , @freq_relative_interval

        , @freq_recurrence_factor

        , @active_start_time_int

        , @active_end_time_int

        , @next_run_date

        , @next_run_time;

    end -- loop

    close curSch;

    deallocate curSch;

    select ServerName, JobName, ScheduleName, Run_DateTime from #job

end -- proc
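Before moving on to the use cases, it may help to see the decoding the procedure leans on. msdb stores schedule dates as integers like 20240315 (YYYYMMDD) and times as integers like 93005 (HHMMSS); the T-SQL above reassembles them with `substring` surgery and `/10000`, `%10000/100`, `%100` arithmetic. A minimal Python sketch of the same idea (the function name is mine, for illustration only):

```python
from datetime import datetime

def msdb_int_to_datetime(run_date: int, run_time: int) -> datetime:
    """Decode msdb's integer date (YYYYMMDD) and time (HHMMSS) columns.

    Same arithmetic as the T-SQL above: /10000 -> hours, /100 %100 -> minutes,
    %100 -> seconds (and similarly for the date part).
    """
    year, month, day = run_date // 10000, run_date // 100 % 100, run_date % 100
    hour, minute, second = run_time // 10000, run_time // 100 % 100, run_time % 100
    return datetime(year, month, day, hour, minute, second)

print(msdb_int_to_datetime(20240315, 93005))  # -> 2024-03-15 09:30:05
```

Any comparison between the generated schedule and msdb.dbo.sysjobhistory needs this decoding first, since sysjobhistory uses the same integer format.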


So how can we use this code in more innovative ways beyond listing future job run times? Here are a few use cases:

1. You need to check whether any job missed a run at its expected date/time.

A job can miss an execution when its previous run overruns the next scheduled time. For example, if a job is scheduled to run every 10 minutes and usually finishes in 5, but the 10:00 am run suddenly takes 12 minutes, then the expected 10:10 am run will not start at all. Other odd incidents can also stop a job from running, such as a failover that leaves the SQL Agent service down at the moment the job is scheduled to run.
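The missed-run check boils down to: for each expected run time, is there an actual run in the history within some tolerance? A hedged Python sketch (the function name, tolerance, and sample data are all illustrative):

```python
from datetime import datetime, timedelta

def missed_runs(expected, actual, tolerance=timedelta(minutes=5)):
    """Return the expected run times that have no actual run within `tolerance`."""
    return [e for e in expected
            if not any(abs(a - e) <= tolerance for a in actual)]

expected = [datetime(2024, 1, 1, 10, 0), datetime(2024, 1, 1, 10, 10)]
actual   = [datetime(2024, 1, 1, 10, 0)]  # the 10:10 run never started
print(missed_runs(expected, actual))      # -> [datetime(2024, 1, 1, 10, 10)]
```

The tolerance absorbs normal start-up jitter; tune it to something smaller than the job's schedule interval.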

2. You need to make sure a job indeed runs.

In some environments I have seen jobs configured with a completion notification, i.e. an email alert is sent whether the job succeeds or fails. But 99% of the time the job succeeds, yet as a DBA I still have to bear the pain of receiving and deleting an email saying "The job xxxx completed." What I really want is an alert only when the job fails and nothing otherwise; but then I need another way to make sure the job indeed ran. And the schedule itself can be something weird, like every 3 months, on the 3rd weekend day, every 2 hours from 10:00 am to 10:00 pm.

The solution: on the 1st day of each month (or any day you like), I run the script and dump the result into a table, let's call it tblJobFutureSchedule; then at the end of every day (or every hour/week, to your own choice), I run a check script that compares msdb.dbo.sysjobhistory against tblJobFutureSchedule to make sure (1) there is no missed job execution and (2) each job indeed ran. If an issue is found, an alert is sent and we can investigate further.
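Even the "weird" schedule above reduces to the same sub-day stepping the procedure's inner time loop performs, driven by freq_subday_type (1 = once at the start time; 2/4/8 = every N seconds/minutes/hours, per sysschedules). A Python sketch of that stepping, with names of my own choosing:

```python
from datetime import datetime, timedelta

# Sub-day frequency mapping, matching sysschedules' freq_subday_type semantics
STEP = {2: lambda n: timedelta(seconds=n),
        4: lambda n: timedelta(minutes=n),
        8: lambda n: timedelta(hours=n)}

def intraday_runs(day_start, day_end, subday_type, interval):
    """Expand one scheduled day into its run times, like the inner time loop."""
    if subday_type == 1:
        return [day_start]                # runs once, at the window start
    t, runs = day_start, []
    while t <= day_end:
        runs.append(t)
        t += STEP[subday_type](interval)
    return runs

# "every 2 hours from 10:00 am to 10:00 pm" -> 7 runs: 10:00, 12:00, ..., 22:00
print(len(intraday_runs(datetime(2024, 1, 6, 10), datetime(2024, 1, 6, 22), 8, 2)))  # -> 7
```

Which days the window falls on (every 3 months, 3rd weekend day, etc.) is handled separately by the calendar-table logic, just as in the procedure.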

3. If there is a scheduled server patching window (implying multiple reboots during the patching process), we can use the code to generate the list of jobs scheduled to run during that window, disable them before patching starts, and re-enable them after the window closes.
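The patching-window check is just a filter over the future-schedule table. A sketch (the job names, window, and table shape are made up for illustration):

```python
from datetime import datetime

# tblJobFutureSchedule rows as (jobname, run_datetime) tuples (illustrative)
schedule = [("NightlyBackup", datetime(2024, 1, 6, 23, 0)),
            ("IndexMaint",    datetime(2024, 1, 7, 1, 0)),
            ("HourlyETL",     datetime(2024, 1, 7, 14, 0))]

patch_start, patch_end = datetime(2024, 1, 7, 0, 0), datetime(2024, 1, 7, 6, 0)

# Distinct jobs with at least one run inside the patching window -> disable these first
to_disable = sorted({job for job, run in schedule if patch_start <= run <= patch_end})
print(to_disable)  # -> ['IndexMaint']
```

In T-SQL this is a one-line `select distinct jobname ... where run_datetime between @patch_start and @patch_end` against the dumped table.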


Posted in Administration, SQL Development