TSQL Tuesday #96: Folks Who Have Made a Difference

 


 

This post is part of TSQL2sday (or T-SQL Tuesday), which was started by Adam Machanic (t/b).  The topic for this month has been set by Ewald Cress (t/b) and is, to quote: “the opportunity to give a shout-out to people (well-known or otherwise) who have made a meaningful contribution to your life in the world of data.”

I suppose there's no better place to start than with Adam Machanic himself, whose T-SQL Tuesday idea has been one of my most valued resources as I've been learning SQL! Thanks Adam!

Context

I'm going to use this post to thank people who have contributed to getting me where I am now in my career.  For some context, before 19th August 2016 the sum of my SQL knowledge was a few modules during my Master's degree at university back in 2006/7.  About all I would have been able to accomplish was a SELECT * FROM Table, at a push. Now, as I type this, whilst still admittedly very junior in the SQL world, over 50% of my main duties every day revolve around SQL in some way or another.

Thank Yous

I want to start with one particularly large thank you. Bear with me, it's a bit of a long one!

At the end of July 2016, the Head of IT where I work left the company.  It was going to be around 3-4 months until he was replaced, which left the IT department as just me and 1 other member of staff. My infinitesimal SQL knowledge somehow left me as the expert in the field!  Little to worry about, though: very little of the work carried out by our department involved any SQL knowledge. I work in the Legal Sector and, whilst our main Case Management System runs on a SQL database, we have full support from the vendors for this.

Fast forward about 6 weeks: it's the morning of 19th August 2016 and I arrive in to work at 07:00 to find that the aforementioned Case Management System has completely and utterly fallen over.  No one can access the system at all. Luckily it's early morning, but if I can't get this fixed, work will utterly grind to a halt!!! All I can figure out is that we have somehow completely run out of disk space.  After a bit of digging, I discover that, for reasons I don't need to get into, adding more disk space to our SQL Server is not an option, and on top of that something called the Transaction Log is full, and is apparently the cause of all my problems! In absolute desperation I send out this tweet and pray:

Notice the timestamp: it's just after 08:00 at this point, and the support company we have do not begin accepting calls until 09:00. Like I said, I was praying! Thankfully, I received a reply from Ryan Yates (t), who dropped a mention to Thomas Rushton (t/b). Thomas basically spent the morning letting me message him back and forth, explaining what my problem might be and how to find out what had gone on, and once our 3rd party support finally kicked in, translated for me what they were telling me and advised me on a whole heap of things to read up on.  So, for my first thank yous: Ryan, thank you for replying to me that day; if you hadn't pointed me towards Thomas I would most likely have been a crying, quivering mess under my desk! And Thomas, I've wanted to say this for a while: thank you!!! Not just for metaphorically holding my hand over Twitter that morning, but our conversations inspired me to learn about SQL, and now, just over a year later, it has become the thing I am most focused on in my career and development, and I owe it entirely to you.  I honestly would not be where I am today if I hadn't spoken to you that morning. Thank you.

More Thank Yous:

Now that the giant one is out of the way, there are a few more people I'd like to thank. I've never met a single one of these people in "real life" and I wouldn't be surprised (or offended) if the vast majority not only didn't realise that they have helped me, but probably couldn't pick me out of a line-up! I can honestly say, though, that each one of these people is owed some sort of thank you for helping me get to where I now am:

Itzik Ben-Gan (t/b): I have bought 3 of your books and each one has proved a truly valuable resource on my journey to learning SQL.  I am currently working through the MCSA course and your Training Kit and Fundamentals books are by far my best resources. Thank you.

Tao Klerks – PoorSQL (t/w) : The PoorSQL website (and latterly the SSMS plugin) have been immensely helpful when trying to reverse engineer existing SQL reports that are in such terrible formatting that they are a nightmare to read. This simple tool has saved me countless headaches! Thank you.

Shane O'Neill (t/b): Shane ends up answering a hell of a lot of the questions I come up with on Twitter, and even joined Reddit to reply to a few long-form questions I posted on the SQL subreddit. His advice has been unbelievably helpful, and it was him who introduced me to #TSQL2sday! He also doesn't judge me too badly when, occasionally, the problems I have turn out to be ridiculous typos / logic oversights on my side! Thank you.

Assorted Other Thank Yous:

Chrissy LeMaire (t/b), Rob Sewell (t/b), Ben Weissman (t/w), Pat Phelan (t/w), Koen Verbeek (t/b), Todd James (t), Taiob Ali (t/b), Derik Hammer (t/b), Andre Kamman (t), Thomas LaRock (t/b), Kenneth Fisher (t/b), Argenis Fernandez (t/b), Allen McGuire (t/b), Mladen Prajdic (t), Jason Kyle (t), Kerry Tyler (t/b), Neil Gelder (t/b), Chris Taylor (t), Constantine Kokkinos (t/b), Andrew Pruski (t/b), Pinal Dave (t/b) and probably more I don't remember.  Each one of these people has either answered questions online, provided links to valuable resources, or in some way has just inspired me to keep going when learning gets tough.  Thank you all!

Lastly, thanks to Randolph West (t) for encouraging me to get this blog posted today when I was whinging on Twitter this morning that I completely spaced on the date and forgot to write it!

SQL Data Types – Strings

This is the first post in what I hope to be a mini-series into the main data types that a beginner will come across when using T-SQL.  First in this series is Character Functions and how to manipulate Strings.

What is a String?

A string is a data type used for values that are made up of ordered sequences of characters, such as “hello world”. A string can contain any sequence of characters, visible or invisible, and characters may be repeated. The number of characters in the string is called its length, and “hello world” has length 11 – made up of 10 letters and 1 space. There is usually a restriction on the maximum length of a string. There is also such a thing as an empty string, which contains no characters – length 0. (credit for the definition of a string goes to BBC Bitesize).  It is fair to assume that most databases contain strings and, as such, this is probably a good place to start this series.

String Manipulation and Functions

Whilst T-SQL may not have been designed with advanced character string manipulation as one of its key features, it is something that I find myself doing quite frequently.  Here is an outline of the main ways T-SQL provides to  manipulate strings.

Concatenation

Concatenation is the act of combining multiple strings together to form one new string.  For instance, "First Name" added to "Surname" to form "Full Name".  T-SQL provides 2 ways to do this: the plus (+) operator, and the CONCAT function.

This first example is string concatenation using the + operator.

SELECT empid
    , country
    , region
    , city
    , country + N',' + region + N',' + city AS 'location'
FROM HR.Employees


When any of the input values is NULL, the value returned using the + operator is also NULL.  Clearly, there will be times when NULL is not what you want in your output.  Luckily, there are 2 ways around this: CONCAT and COALESCE.

CONCAT – this automatically substitutes NULLs with empty strings.

COALESCE (<expression>, N'') – COALESCE returns its first non-NULL argument, so in the example below it is used to show an empty string when N',' + region evaluates to NULL.

SELECT empid
    , country
    , region
    , city
    , CONCAT (country, N',' + region, N',' + city) AS 'location_concat'
    , country + COALESCE( N',' + region, N'') + N',' + city AS 'location_coalesce'
FROM HR.Employees


Both approaches return the same results.

Substrings

There are a few functions to perform actions to a part of a string such as pattern matching, and extracting part of a string.

The SUBSTRING function allows you to extract only a section of a string.  You specify the input string, the position to start from, and the length of the substring required.  E.g., to return abc from abcdef you could use:


SELECT SUBSTRING('abcdef', 1, 3) AS 'substring'

There are also LEFT and RIGHT functions that work in much the same way, except that you only need to specify the number of characters required from the left or right ends of the string.  For instance :


SELECT LEFT('abcdef', 3) AS 'left'

    , RIGHT('abcdef', 3) AS 'right'


Alongside SUBSTRING there are CHARINDEX and PATINDEX, which return the position of particular characters or patterns within a string as a numeric value.  For instance, the below returns 5, as 'ef' starts at the 5th position of the string 'abcdef':


SELECT CHARINDEX('ef', 'abcdef')

PATINDEX works in much the same way but can be used to find patterns rather than a constant string.
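PATINDEX accepts the same wildcard characters as LIKE. As a quick illustration (the pattern here is my own example, not from the Training Kit), this finds the position of the first digit in a string:

```sql
-- PATINDEX takes the pattern first, then the string to search.
-- The pattern needs surrounding % signs to match anywhere in the string.
SELECT PATINDEX('%[0-9]%', 'abc123def') AS 'patindex'  -- returns 4
```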

You can combine a few of these together to get some quite clever results.  Here I used CHARINDEX and LEFT to separate a first name from a surname:


SELECT LEFT('Daniel Blank', CHARINDEX(' ', 'Daniel Blank') -1)

This returns just 'Daniel'.
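Going the other way, the surname can be extracted by combining SUBSTRING with CHARINDEX and LEN. This snippet is my own extension of the example above:

```sql
-- Start one character past the space found by CHARINDEX, and ask for
-- more length than we could ever need; SUBSTRING simply stops at the
-- end of the string.
SELECT SUBSTRING('Daniel Blank',
                 CHARINDEX(' ', 'Daniel Blank') + 1,
                 LEN('Daniel Blank')) AS 'surname'  -- returns 'Blank'
```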

Length

T-SQL has 2 functions for measuring the length of an input value: LEN and DATALENGTH.  LEN returns the length of an input string as a number of characters (ignoring trailing spaces).  DATALENGTH returns the length of the input as the number of bytes it takes up.  For instance:

SELECT LEN ('Daniel Blank') AS 'len'
      , DATALENGTH (N'Daniel Blank') AS 'datalength'

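The difference between the two really shows when you compare a varchar literal with an nvarchar (N'...') literal. Unicode strings use 2 bytes per character, so the same 12-character name takes 12 bytes as varchar and 24 as nvarchar:

```sql
SELECT LEN(N'Daniel Blank')        AS 'len'            -- 12 characters
     , DATALENGTH('Daniel Blank')  AS 'bytes_varchar'  -- 12 bytes
     , DATALENGTH(N'Daniel Blank') AS 'bytes_nvarchar' -- 24 bytes
```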

Alteration

There are three T-SQL functions commonly used to alter an input string: REPLACE, REPLICATE, and STUFF.  These are pretty self-explanatory from the code below, but I will give a brief explanation of each.


SELECT REPLACE ('Daniel.Blank','.',' ')  AS 'replace'

, REPLICATE('a',10) + 'rgh!' AS 'replicate'

, STUFF('abcdefghijklmnop', 5, 5, '12345') AS 'stuff'


 

REPLACE lets you replace part of a string by specifying the input, the substring to be replaced, and the substring to replace with.

REPLICATE allows you to repeat an input string a specified number of times.  In the above, we replicated the letter ‘a’ 10 times.

STUFF deletes a section of a string and adds in another string in its place. You specify the input string, the character position to start the delete, how many characters to delete, and then the string to ‘stuff’ into its place.
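A practical use of STUFF is masking data, for example hiding most of an account number. This is my own example rather than one from the Training Kit:

```sql
-- Delete the first 6 characters and stuff asterisks in their place.
SELECT STUFF('1234567890', 1, 6, '******') AS 'masked'  -- returns '******7890'
```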

Formatting

Lastly we have the functions to apply formatting options to a string.  These are UPPER, LOWER, LTRIM, RTRIM and FORMAT.  The first four are fairly self-explanatory once you see what they do, but FORMAT may take a little explanation.


SELECT UPPER('DAniEl bLANk') AS 'upper'

, LOWER('DAniEl bLANk') AS 'lower'

, LTRIM('              Daniel Blank') AS 'ltrim'

, RTRIM('Daniel Blank              ') AS 'rtrim'

, RTRIM(LTRIM('     DanielBlank     ')) AS 'l_and_r_trim'

, FORMAT(123456,'000000000') AS 'format'


UPPER and LOWER allow you to format a string in either upper or lowercase characters.

LTRIM and RTRIM allow the removal of either leading or trailing spaces respectively.

FORMAT allows you to format an input value based on a format string.  So in the above example the number 123456 is formatted as a character string with a size of 9 characters with leading zeros.
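FORMAT (available from SQL Server 2012 onwards) accepts .NET format strings and an optional culture, which also makes it handy for dates and currency. These examples are my own additions:

```sql
SELECT FORMAT(CAST('2017-11-14' AS date), 'dd/MM/yyyy') AS 'uk_date'   -- '14/11/2017'
     , FORMAT(1234.5, 'C', 'en-GB')                     AS 'currency'  -- '£1,234.50'
```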

Outro

Strings are one of the fundamental data types that you will come across when working with databases.  Above I have covered some of the most common ways of dealing with this data type.  Hopefully someone out there finds it interesting / useful.  For me, it's good to have this typed up to reinforce what I have been covering whilst studying for my MCSA.

Note: A lot of this information has been sourced from the official Microsoft 70-461 Exam Training Kit which I am currently working through.

#TSQL2sday – Recovering Orphaned Files with Powershell and SQL

Being relatively new to SQL, this'll be my first post for TSQL2sday.  For those not in the know, TSQL2sday (or T-SQL Tuesday) was started by Adam Machanic (t/b) and, to quote from this month's host Rob Sewell (t/b), is:

“…a chance for you to join in the SQL Server community and write a blog post on a suggested topic. It makes for a great way to find a bunch of blog posts showing the same subject from many different viewpoints”

This month’s topic (if you haven’t already guessed from the title) is Powershell.

Now, Powershell is my first coding love. I'm not brilliant at it, but I'm definitely more than a beginner.  It's what I spend most of my working life playing with, and it was the reason I eventually got in to trying to learn me some SQL.  Bearing all this in mind, this was the first month that I've felt confident enough to join in, so here goes!

Creating a Scheduled Powershell Task to Clean up Orphaned Files in our SQL Database.

Catchy title, right?  Basically, what I have been playing with this month is using a combination of SQL and Powershell to help with a small mess that we have recently found ourselves in.  For this to make sense, I guess it’s going to need some sort of context.  I’ll try and be brief.

Context

I work at a relatively small but successful law firm, and our practice pretty much lives and dies with our Case Management System (CMS).  If you don't know what that is then, suffice to say, it's like an IT Service Desk tool on steroids, lots of steroids.  This software stores all documents, letters, emails and anything else that could possibly be associated with a legal case, and organises them all in some sort of logical fashion. We have literally millions of these files.  One of the main ways that these files are imported into the CMS is by an Outlook plugin that files the emails and / or attachments.  The user is prompted to enter where they want to file them, and off they go.

We also use folder redirection for our users, so that their files are on a server rather than on their local computers.  It works pretty well, has some issues here or there, but it’s OK. Folder Redirection isn’t the star of this show.

Over the last couple of months, there have been a couple of small issues with the server that houses the folder redirection data.  This has had a knock-on effect where emails filed into the CMS during a folder redirection blip are seemingly lost.

What’s the Problem?

Well, each file in the CMS has a database entry that points to the file's location on a server somewhere.  During a Folder Redirection blip, if you were to simply look at the CMS, it would appear that the files have been imported correctly. However, when trying to open them, you are greeted with “file can not be found”.  What seems to have happened is that the file path contains only the file name, not the actually crucial path part.  After a lot of digging, these files (mostly) seem to have ended up in the root directory of the Folder Redirection share.

The first couple of times this happened, it took us ages to manually figure out where these files were supposed to be filed, drag and drop them there, and then manually update the paths.  The first time, this was irritating. The second time, infuriating. The third time led me to this post: I wanted to never have to do this again!

There are a few things to note about this issue. Firstly, not all files I find in the database have been located in the Folder Redirection share root. Secondly, not all files in the Folder Redirection share root belong to Database entries with missing paths.  Lastly, all files are associated with a legal case, these take the form of an entity (or “customer”) and a matter number.  An example of this would be Entity ABC123 Matter 6 or ABC123/6.  The files associated with this matter need to be stored in a path based on this, so for this matter the files would be stored in \\KSLFS01\ptrdata\docs\A\B\C\ABC123\6.  To get each file going into the correct folder is something I needed to figure out.
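That folder structure (the first three characters of the entity as nested folders, then the entity and matter number) is exactly the sort of thing CONCAT and SUBSTRING can build. The table and column names below are hypothetical, as the real schema belongs to the CMS vendor:

```sql
-- Build \\KSLFS01\ptrdata\docs\A\B\C\ABC123\6 from entity 'ABC123' and matter 6.
SELECT CONCAT('\\KSLFS01\ptrdata\docs\',
              SUBSTRING(Entity, 1, 1), '\',
              SUBSTRING(Entity, 2, 1), '\',
              SUBSTRING(Entity, 3, 1), '\',
              Entity, '\', Matter) AS CorrectPath
FROM dbo.CaseFiles;  -- hypothetical table and column names
```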

The solution

This is my wonderful solution.  Normally I'd use the code formatting of WordPress, but because it's a Powershell script with SQL code in there, it just didn't format well, so excuse the picture:

(screenshot of the script)
My wonderful solution!

This solution is essentially doing 3 separate functions combined together:

  1. Query the database to find all files in the database that meet my missing path criteria.
  2. Check the content of the Folder Redirection share root folder and check to see if the files are in there.
  3. If the files from the 1st step are found in the folder from the 2nd step, migrate the file to the correct location and update its path in the database.

SQL was obviously used to find the relevant files and then update the file path when required.  CONCAT and SUBSTRING were used to form the correct file path, and are things that I had not used before this.
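For anyone curious, the overall shape of the script is sketched below. This is a heavily simplified version with hypothetical server, share, table and column names (the real schema belongs to the CMS vendor), using Invoke-Sqlcmd from the SqlServer module:

```powershell
# 1. Find database entries whose path is just a bare file name (hypothetical schema).
$orphans = Invoke-Sqlcmd -ServerInstance 'SQLSERVER' -Database 'CMS' -Query @"
SELECT FileId, FileName, Entity, Matter
FROM dbo.CaseFiles
WHERE FilePath NOT LIKE '\\%'  -- no UNC path part, just a file name
"@

foreach ($orphan in $orphans) {
    # 2. Is the lost file sitting in the Folder Redirection share root?
    $stray = Join-Path '\\SERVER\FolderRedirection$' $orphan.FileName
    if (Test-Path $stray) {
        # 3. Build the correct folder from the entity / matter reference,
        #    move the file there, and update the database entry.
        $e = $orphan.Entity
        $dest = "\\KSLFS01\ptrdata\docs\$($e[0])\$($e[1])\$($e[2])\$e\$($orphan.Matter)"
        Move-Item -Path $stray -Destination $dest
        Invoke-Sqlcmd -ServerInstance 'SQLSERVER' -Database 'CMS' -Query `
            "UPDATE dbo.CaseFiles SET FilePath = '$dest\$($orphan.FileName)' WHERE FileId = $($orphan.FileId)"
    }
}
```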

What I like about this solution is that it is now something that can be scheduled weekly on the SQL server, and hopefully means that this problem should never really bother us again.

Now, neither the Powershell code nor the SQL code is all that complex here. This is the first time I've really tried to combine the two into something useful.  The Powershell itself is pretty straightforward, but the SQL did take me a bit of research / trial and error to get right (the evolution can be found in this Github Gist, and this Twitter thread).

Outro

Whether what I have come up with is the best way to resolve this issue or not is not important here.  For me, this solution represents progress.  For the last 10 months I have been teaching myself SQL, and I've been trying to learn Powershell for a good 4 years (but the last 2 seriously).  This is the first time I am really seeing the fruits of my labour.  The combination of using SQL and Powershell together turned what would have been hours of tedious work into a relatively quick and painless process.

Going forward, I can see how combining SQL and Powershell could really open the doors to more solutions for problems I already have, problems I don't know exist yet, or just improving some of my already existing solutions.  I want to start using both of these languages to the full, and have already begun looking at resources such as dbatools (t/w), a great Powershell module for SQL admins, and have started reading some books by Itzik Ben-Gan (t/b) and working towards my MCSA.  I'm really hoping that this is the beginning of something special!


Customising the Outlook Contact Cards and Adding User Photos

Recently, I decided that I wanted to add staff photographs to the Global Address List. This is something that I had previously inherited in a former job, and thought would be good to implement for my current employer.

There were a few reasons for wanting to do this :

  • Staff awareness of each other

Whilst working in IT I seem to know everyone in the company (we aren't massive, <200 staff), but it turns out that most people struggle with who people are outside of their own department.

  • Friendly face on emails

People treat each other better (even internally) when they can put a face to who they are communicating with, especially over email.  It’s too easy to dehumanise someone who is just words on a page.

Anyway, the technical stuff.

Group Policy

The first thing we needed to do was to install the Office 2013 Administrative Template files (ADMX/ADML) and Office Customization Tool.  This gives you the extra options to manage a whole host of customisation settings within the Office environment.

The relevant ones for the Contact Card are in User Configuration > Policies > Administrative Templates > Microsoft Office 2013 > Contact Card > Contact Tab.


This part can get a little fiddly.  There are a few default values for the Contact Cards, and if these fields are completed on an AD user then they are displayed.  To display different ones (we wanted the Pager field displayed, which is not a default) you have to replace one of the defaults.

There are 3 types of options here :

  1. Replace AD
  2. Replace Label
  3. Replace MAPI

In all honesty, this whole process seriously confused me, and took a HUGE amount of trial and error to get the result I wanted.   I know a lot of these “how to” posts give full technical explanations, but I still don’t fully understand this.  This is just an explanation of what I did to get it working!

As an example, to add the Pager field (where we store our users' extension numbers) to the Contact Card, I removed the Work Address default value by setting the following policy values:

  • Replace Label – Work Address = "Extension" (this made it so that instead of the field being displayed as Pager, it was displayed as Extension)
  • Replace AD – Work Address = "ms-Exch-Telephone-Personal-Pager" (this is the full AD attribute name for the Pager field in AD – a hell of a lot of googling went into this!)
  • Replace MAPI – Work Address = 975241247 (this is the binary value for the hexadecimal property tag of the Pager property – again, a lot of googling!)

Once the group policy was set up and working (FINALLY!) the next task was to actually add the pictures.

Pictures

Turns out the photos have to be less than 10 KB for them to be stored in AD. I used GIMP to shrink all user photos down until they were all under the size limit, and stored them in a share on one of our servers.  You can set the photos against individual users using Powershell.

To set all photos at once :


$PhotoPath = "\\SERVER\SHARE\photos\Outlook\*.*"

ForEach ($PhotoFile in Get-ChildItem $PhotoPath)
{
    # The file name (minus the 4-character extension, e.g. ".jpg") is the user's identity
    $User = $PhotoFile.Name.Substring(0, $PhotoFile.Name.Length - 4)
    Import-RecipientDataProperty -Identity $User -Picture -FileData ([Byte[]]$(Get-Content -Path $PhotoFile.FullName -Encoding Byte -ReadCount 0))
}

To add a photo for an individual user:


Import-RecipientDataProperty -Identity "Daniel Blank" -Picture -FileData ([Byte[]]$(Get-Content -Path "\\SERVER\SHARE\photos\Outlook\Daniel Blank.jpg" -Encoding Byte -ReadCount 0))

Finally, to remove a user’s photo:


Set-Mailbox "Daniel Blank" -RemovePicture

After all that, here is the end result:

(screenshot of the finished Contact Card)

Final Thoughts

In all honesty, given the amount of googling, trial and error, and photo editing involved, this was a seriously frustrating project. However, it does seem to have gone down well with management, and the users, so I guess it was worth it.

Hope this saves someone else the amount of pain I had to go through to get this sorted!

 

 

Creating Custom Parameters in a Powershell Function

Custom Parameters in Powershell.  My god, I wish I’d started using these earlier than I did!  It would have saved me SO much time, effort and, as it turns out, literally thousands of lines of code!

When I started my current job, one of the first things I wanted to improve was the New Starter process.  It was being done completely manually and was taking a serious amount of time for one of my colleagues.  I figured that the best way of improving this situation was to put together a proper New Starter script.  Now, this was 2/3 years ago, and my Powershell skills were not what they are now.  At the time I thought the best idea was to write a big script, with constant user prompts asking yes or no questions.  This ended up taking me a good 3 months, and in the end was way over 2500 lines.  To be fair, there was nothing wrong with this approach; it got the job done.  That being said, it did have its problems.  Updating it when things changed was a nightmare due to all the hard-coded elements, it was longer than it needed to be, and it was clunky.

Fast forward a couple of years, and I discovered Powershell Functions (along with SWITCH statements, but that will be covered in a later post!).  To say that this blew my mind is an understatement.  This gave me a way to eliminate user questions (like "What is the person's name?", "What is their department?", "What is their job title?" etc.) and simply start using a purpose-built cmdlet of my own!

Anyway, on to using parameters.  This is all it took to get my script using its own set of 6 parameters:


param(
[Parameter(Mandatory=$True,Position=0)]
[ValidateNotNull()]
[string]$Name,

[Parameter(Mandatory=$True,Position=1)]
[string]$Title,

[Parameter(Mandatory=$True, Position=2)]
[string]$Department,

[Parameter(Position=3)]
[switch]$Fee,

[Parameter(Position=4)]
[string]$Initials,

[Parameter(Position=5)]
[int]$Ticket
)

As you can see, it's fairly self-explanatory.  I have 6 parameters, 3 of which are mandatory (i.e. the script will error if they are not filled in) and 3 that are optional.  That was it.  That's all I needed to do.  Yes, there are more things that you can do with parameters, but as a starting point, that was all it needed.  Six parameters, their types set to string, switch or integer, and some marked as mandatory.
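If the param block sits at the top of a function (I'll call it New-Starter here purely for illustration; my real script's name differs), calling it looks just like calling a built-in cmdlet, with named parameters, tab completion, and prompts for any mandatory parameter you leave out:

```powershell
# Named parameters; -Fee is a switch, so its presence alone sets it
# to $true, and -Ticket takes an integer.
New-Starter -Name 'Daniel Blank' -Title 'IT Manager' -Department 'IT' -Fee -Ticket 12345

# Thanks to the Position settings, the mandatory ones also work positionally:
New-Starter 'Daniel Blank' 'IT Manager' 'IT'

# Omit a mandatory parameter and Powershell prompts for it instead of erroring.
```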

If anyone has not ventured into using parameters yet, I strongly advise it.  It literally revolutionised my Powershell coding!

…..god damn it, I wish I had learned this years ago!

Audit File Deletion on Server 2012

We often get complaints that important files have been "deleted" from shared folders and will need restoring.  Restoring them from backups, or even from Previous Versions, is easy, but sometimes you need to know who removed a file, or when it was removed.  This is when auditing comes in to play.

The first step is to enable Auditing on the machine in question (in our case, the server where the Shares reside).  To do this  :

  1. On the server, open Local Group Policy management (gpedit.msc).
  2. Browse to Computer Configuration –> Windows Settings –> Security Settings –> Local Policies –> Audit Policy –> Audit object access.
  3. Select “Success” in the options screen (unless you want to also log when users have failed to delete items).

Once the policy is set up, you then need to configure the auditing itself.  To do this:

  1. Go to the Folder that needs monitoring, right click and select Properties.
  2. Click on the Security tab, then go to Security –> Advanced –> Auditing Tab.
  3. Click the Disable Inheritance button if available.
  4. Click Add then select the principal (i.e. the group or users that you want to monitor) and change the Type drop-down to Success.
  5. In the Basic Permissions, select which events you want to audit (in this case, the deletion events).
  6. Finally, OK out of the menus, watch as the permissions apply, and you are good to go.

To view the audited events, open Event Viewer and, under Windows Logs, choose the Security log, then set up a filter for event ID 4663.  This will show you the delete events for the folder.
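The same filter can be driven from Powershell with Get-WinEvent, which is handy for searching or exporting the results. The share path in the Where-Object filter is a made-up example; substitute your own:

```powershell
# Pull recent 4663 (object access) events from the Security log and
# keep only those mentioning the share we care about.
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4663 } -MaxEvents 500 |
    Where-Object { $_.Message -like '*\\SERVER\Shares\*' } |
    Select-Object TimeCreated, Message |
    Format-List
```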

What Queries Are Currently Running on SQL Server

Sometimes, your SQL Server seems to grind to a halt.  The first thing you’ll probably want to do in this situation is try and see if there are any queries eating all the available resources.

You can find them by running this query:

SELECT sqltext.TEXT
, req.session_id
, req.status
, req.command
, req.cpu_time
, req.total_elapsed_time
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
AS sqltext

Once you have the list of running queries, you can kill an offending task using this, where xxx is the session_id returned by the query above:

 KILL xxx 

And if you want to do some finger pointing, you can find out the offending user with this:

 EXEC sp_who 'xxx' 

Since finding this out (all credit goes to Pinal Dave) it has proved hugely helpful for me on a number of occasions!

Regional Options Preference – Group Policy Settings

I recently created a Group Policy to change the time / date / currency settings from US to UK.  For some reason, these settings were not being applied and I couldn't figure out why.

After a bit of research, I found this post on Experts Exchange explaining why.  To summarise :

In the Regional Options Properties window within Group Policy Management, the settings are, by default, underlined in red.

Whilst underlined in red, these settings are effectively disabled.  You'll notice that when you create a Preference item like the Regional Options and then save it, the settings you made are simply reverted to the defaults when you open it again later.  This is because the settings need to be enabled first.  To do this:

F5 – all settings on the open tab activated (green)
F6 – a single setting changed from red to green (inactive → active)
F7 – a single setting changed from green to red (active → inactive)
F8 – all settings deactivated (red dotted)

Now, for some reason, Microsoft does not specify this anywhere in the window, meaning that unless you happen to know this, there’s no easy way of finding out.

Brilliant!

Drive for Certification

So, I've been saying for nearly a decade that I want to get a Microsoft certification. Vista, Windows 7, Windows 8, Server 2012.  These are all subject areas where I never got round to really pushing myself to certification.  I know some people don't value them (they say that it's just theory with no practice) but I really think having them on your CV is a big plus.  It's at least a way of signalling that people should expect you to have a base level of knowledge in a certain subject area.

Over the last few years I’ve been really getting in to Powershell, and this year SQL Server is my new thing, and I’m really enjoying it.  I’ve decided that this is the year that I finally get certified, and it’s going to be SQL Server certified.  That’s it, no going back.  By the end of the year I want to be well on the way towards my MCSA.

Now, I definitely still have a HUGE amount to learn before I even think about taking exams. And, yes, every time I even look at the books I still find them massively daunting. My confidence, though, is on the way up. I've been watching a course on Pluralsight and so far I'm not just keeping pace, I already know most of what's being covered.  That means that my self-taught 6 months has actually started to work and I'm getting somewhere!

My concern is that my motivation slips away when it gets hard.  Much like this whole blogging thing, I’ve started many times only to become frustrated and leave it.  Not this time though.  This time I’m determined.  This is my certification year, and nothing’s going to stop me.

And, hell, I even bought the books, and if that’s not commitment, then I don’t know what is!

Books!