Export-CSV -NoTypeInformation (#PSHelp)

I ask a lot of questions about scripting online. A. LOT.  It’s probably been one of the best ways I’ve found to learn, and I’ve been doing it for years, mostly on Twitter or Reddit.  Recently I found that my level of coding ability in Powershell has got to the point where I’m actually now able to start answering other people’s questions. Generally it’s the simpler stuff, as a good 80% of what I see is still way beyond me, but it’s a good sign of progress regardless.

Anyway, a question was posted recently using the #pshelp hashtag on Twitter.  How do you replace the header line of multiple csv files at once?

I saw this and maybe cockily thought to myself, “that’s got to be an easy 2 liner, bet I can boost my ego and quickly write out a solution”.  A good 2 hours, 1 Reddit post for help, and a discussion on Twitter later, I was just getting my experimental code snippets together into something workable.  Turns out this had opened up quite the hole in a relatively “simple” task.  I won’t go into the gory details of what I tried and failed at, but here is what was left over from my experiments in the ISE window:


This is what I eventually got working:

$CSVLocation = "C:\Stuff\CSVs"
$NewHeaders = @("6", "7", "8", "9", "10")
$CSVs = Get-ChildItem $CSVLocation -Recurse | Select-Object -ExpandProperty Name

foreach ($CSV in $CSVs) {
    # Import with the new headers prepended; the old header row becomes data row 1
    $data = Import-CSV $CSVLocation\$CSV -Header $NewHeaders
    # Skip the first row (the original headers); the out-of-range upper bound is harmless
    $data2 = $data[1..($data.Length)]
    $data2 | Export-CSV $CSVLocation\$CSV -NoTypeInformation
}

This is actually quite simple. It sets variables for the location of the CSV files and the new header row, then grabs each of the CSV file names.  It then imports each CSV file with the new header row added (which pushes the original headers down to the first data row) and calls this $data.  $data is then copied to $data2, skipping the first row (i.e. the original header row). Finally it exports $data2 back out as a CSV.  There are probably better ways of doing this, but it works well enough for me.

There we go, all in all, once I understood what was going on, this was relatively straight forward. Nice to learn something new! Hope this helps someone else one day 🙂

UPDATE: I had this post scheduled to go tomorrow, but today I found out something that I thought should be tagged on to the end of this: $PSDefaultParameterValues! Basically, Shane O’Neill mentioned this to me on Twitter today and I thought it was brilliant!  It had been mentioned that -NoTypeInformation is now the default in Powershell Core, and I had been wondering if there was something similar I could do outside of Core.

Turns out that all you need to do to stop having to remember to set -NoTypeInformation is this:
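It fits on one line, using the built-in $PSDefaultParameterValues hashtable (put it in your PowerShell profile to make it stick across sessions):

```powershell
# Keys take the form 'CmdletName:ParameterName'
$PSDefaultParameterValues['Export-Csv:NoTypeInformation'] = $true

# From now on this exports without the #TYPE header, no switch needed
Get-Process | Export-Csv C:\Stuff\processes.csv
```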



Pivoting Data in SQL Server

Hi Blog, long time no see! How’s it going?!  I know, I really should up my blog game….

I was recently approached by my firm’s Marketing Manager with a request for some information.  She wanted to know “Which departments have our top clients never done any work for?”.  For some clarity, I work in a law firm with 11 departments. The request seems pretty straightforward at first, but once I got thinking about how the output of this report would be presented, I reconsidered quite how simple a request it was.  She handed me a drawing with her vision for the output.  What she wanted was:

  1. Client’s names down the left,
  2. List of Departments across the top,
  3. Ticks and crosses at the intersection to show whether they had or had not done work for them.

Finding out the basic information she was asking for was not going to be tough, but I had concerns about the presentation. Sure, I could output it in any old way, import into Excel, do some Pivot Table jiggery-pokery and get it to look how she wanted.  But where is the opportunity for learning in that approach?! No, I decided that I was going to have the query itself output in the way she wanted somehow….Pivot Table…… Pivot……. Table….That’s when 2 things hit me:

  1. I had recently been reading a chapter in my SQL MSCA book about pivoting and unpivoting data that I had been struggling with.  I had been waiting for a real world example to sink my teeth into, and this seemed like just the thing.
  2. Pivot Tables in Excel and PIVOT in SQL are not called the same thing by accident!

Technical Bit

Before we can concern ourselves with PIVOTing any data, the first thing you need to do is get the data to be pivoted. This requires 3 elements:

  1. Grouping Column – this is what will be your rows,
  2. Spreading Columns – this will be the columns,
  3. Aggregation Column – this will be what is displayed at the intersection of your rows and columns.

For me, my Grouping Column was the Client’s name, the Spreading Columns were the departments, and the Aggregation Column was whether the client had done work for that department ( I was going to get this data by doing a count of the number of legal matters that had been opened).

Once you have your data, then the next thing to do is to PIVOT it. Below you can see the syntax for setting out your 3 data columns as a CTE to then be pivoted.

WITH PivotData AS
(
    SELECT [grouping column]
        , [spreading column]
        , [aggregation column]
    FROM [source table]
)
SELECT [select list]
FROM PivotData
    PIVOT ([aggregate function]([aggregation column])
        FOR [spreading column] IN ([distinct spreading values]) ) AS P;

When I first saw this, the actual pivoting part looked pretty daunting.  Turns out, it’s really not too bad. It just looks daunting when it’s laid out in front of you. Once you understand what’s going on, it makes a lot of sense.

  • [SELECT LIST] –  This is where you say what you want to take from your original query and lay out as columns.  So for my example, my first column is the clients, and the subsequent columns are each of my departments individually.
  • [AGGREGATE FUNCTION]([AGGREGATION COLUMN]) – You need to tell SQL what to do with the data that you want at the intersection of your columns. For me, this was a count of the number of cases that each client had worked on for each department.
  • [SPREADING COLUMN] IN ([DISTINCT SPREADING VALUES]) – Which values of your specified spreading column (the data that will make up your column headings) you want. My spreading column was our Departments, and my distinct spreading values were the names of the Departments I wanted to check against.

I appreciate that my explanation may still sound somewhat complicated, but hang in there.  When you see my final working code and it’s output, hopefully it’ll all fall into place.

WITH PivotData
AS (
        SELECT E.Name AS 'Client'
            , D.Description AS 'Dept'
            , COUNT(D.Description) AS 'DCount'
        FROM DAB_MattersALL AS M
        JOIN Users AS U ON M.FeeEarnerRef = U.Code
        JOIN Departments AS D ON U.Department = D.Code
        JOIN Entities AS E ON M.Entityref = E.Code
        WHERE M.EntityRef IN (
                         SELECT TOP (20) E.Code AS 'Client'
                         FROM Departments AS D
                         INNER JOIN Users AS U ON D.Code = U.Department
                         INNER JOIN Ac_Billbook AS A ON U.Code = A.SubmittingFeeEarner
                         INNER JOIN Entities AS E ON A.EntityRef = E.Code
                         WHERE A.BillDate >= '2008-01-01 00:00:00'
                             AND A.BillDate < GETDATE()
                         GROUP BY E.Code
                             , E.Name
                         ORDER BY CAST(SUM(A.CostsNet) AS MONEY) DESC
                         )
        GROUP BY E.Name
            , D.Description
    )
SELECT Client
    , [Banking ]
    , [Commercial and IP]
    , [Commercial Property]
    , [Construction]
    , [Corporate]
    , [Employment]
    , [Family]
    , [Intellectual Property]
    , [Licensing]
    , [Litigation]
    , [Management]
    , [Real Estate]
    , [Residential Property]
    , [Tax and Probate]
FROM PivotData
    PIVOT (SUM(DCount)
        FOR Dept IN (
                        [Banking ]
                        , [Commercial and IP]
                        , [Commercial Property]
                        , [Construction]
                        , [Corporate]
                        , [Employment]
                        , [Family]
                        , [Intellectual Property]
                        , [Licensing]
                        , [Litigation]
                        , [Management]
                        , [Real Estate]
                        , [Residential Property]
                        , [Tax and Probate]
                        )) AS P;

That looks like a giant wall of text, right?!  It’s not, don’t worry.  When you break it down it’s simple!  The first part is just setting out what I want (Clients, Departments, and the count of the Departments).

The PIVOT part is saying – give me a column named “Client”, and then subsequent columns named after each department they may have done work for.  At the intersection of each client and each department, give me the number of cases that client has had with that department.

Finally, this is the output (with client names poorly redacted)  that you get from all this :



So, I hope that made sense. Writing this out really helped cement it in my brain as something that I now understand. I find reading about new techniques in SQL (or anything else really) can be really daunting and demoralising until I find a practical application.  That’s when it all seems to make sense.

Pivoting is now something I feel pretty confident with and is another arrow in my quiver of SQL functions I can call upon when necessary.

TSQL Tuesday #96: Folks Who Have Made a Difference




This post is part of TSQL2sday (or T-SQL Tuesday) which was started by Adam Machanic (t/b).  The topic for this month has been set by Ewald Cress‏ (t b) and is, to quote: “the opportunity to give a shout-out to people (well-known or otherwise) who have made a meaningful contribution to your life in the world of data.”

I suppose there’s no better place to start than with Adam Machanic himself, whose T-SQL Tuesday idea has been one of my most valued resources as I’ve been learning SQL! Thanks Adam!


I’m going to use this post to thank people who have contributed to getting me where I am now in my career.  For some context, before 19th August 2016 the sum of my SQL knowledge was a few modules during my Master’s Degree at university back in 2006/7.  About all I would have been able to accomplish would have been SELECT * FROM Table at a push. Now, as I type this, whilst still admittedly very junior in the SQL world, over 50% of my main duties every day revolve around SQL in some way or another.

Thank Yous

I want to start with one particularly large thank you. Bear with me, it’s a  bit of a long one!

At the end of July 2016, the Head of IT where I work left the company.  It was going to be around 3-4 months until he was replaced, which left the IT department as just me and 1 other member of staff. My infinitesimal SQL knowledge somehow left me as the expert in the field!  Little worry though, as little of the work carried out by our department involved any SQL knowledge. I work in the Legal Sector and whilst our main Case Management System runs on a SQL database, we have full support from the vendors for this.

Fast forward about 6 weeks, it’s the morning of 19th August 2016 and I arrive in to work at 07:00 to find that the aforementioned Case Management System has completely and utterly fallen over.  No one can access the system at all. Luckily it’s early morning, but if I can’t get this fixed, work will utterly grind to a halt!!! All I can figure out is that we have somehow completely run out of disk space.  After a bit of digging, I discover that, for reasons I don’t need to get into, adding more disk space to our SQL Server is not an option, and on top of that something called the Transaction Log is full, and is apparently the cause of all my problems! In absolute desperation I send out this tweet and pray:

Notice the timestamp – it’s just after 08:00 at this point, and the support company we have do not begin accepting calls until 09:00. Like I said, I was praying! Thankfully, I received a reply from Ryan Yates (t) who dropped in a mention to Thomas Rushton (t/b). Thomas basically spent the morning letting me message him back and forth, explaining what my problem might be and how to find out what had gone on, and once our 3rd party support finally kicked in, translated what they were telling me and advised me on a whole heap of things to read up on.  So, for my 1st thank yous: Ryan, thank you for replying to me that day – if you hadn’t pointed me towards Thomas I would most likely have been a crying, quivering mess under my desk! And Thomas, I’ve wanted to say this for a while: thank you!!! Not just for metaphorically holding my hand over Twitter that morning, but our conversations inspired me to learn about SQL, and now, just over a year and a bit later, it has become the thing I am most focused on in my career and development and I owe it entirely to you.  I honestly would not be where I am today if I hadn’t spoken to you that morning. Thank you.

More Thank Yous:

Now that the giant one is out of the way, there’s a few more people I’d like to thank. I’ve never met a single one of these people in “real life” and I wouldn’t be surprised (or offended) if the vast majority not only didn’t realise that they have helped me, but probably couldn’t pick me out in a line-up! I can honestly say though, each one of these people is owed some sort of thank you for helping me get to where I now am:

Itzik Ben-Gan (t/b) : I have bought 3 of your books and each one has proved a truly valuable resource on my journey to learning SQL.  I am currently working through the MCSA course and your training kits and fundamental books are by far my best resource. Thank you.

Tao Klerks – PoorSQL (t/w) : The PoorSQL website (and latterly the SSMS plugin) have been immensely helpful when trying to reverse engineer existing SQL reports that are in such terrible formatting that they are a nightmare to read. This simple tool has saved me countless headaches! Thank you.

Shane O’Neill (t/b) : Shane ends up answering a hell of a lot of my questions I come up with on twitter, even joined Reddit to reply to a few on the SQL subreddit when I’ve posted some long form questions there. His advice has been unbelievably helpful, and it was him that introduced me to #TSQL2sday! He also doesn’t judge me too badly when occasionally the problems I have are ridiculous typos / logic oversights on my side! Thank you.

Assorted Other Thank Yous:

Chrissy LeMaire (t/b), Rob Sewell (t/b), Ben Weissman(t/w), Pat Phelan (t/w), Koen Verbeek (t/b), Todd James (t), Taiob Ali (t/b), Derik Hammer (t/b), Andre Kamman (t), Thomas LaRock (t/b), Kenneth Fisher (t/b), Argenis Fernandez (t/b), Allen McGuire(t/b), Mladen Prajdic (t), Jason Kyle (t), Kerry Tyler (t/b), Neil Gelder (t/b), Chris Taylor (t), Constantine Kokkinos (t/b), Andrew Pruski (t/b), Pinal Dave (t/b) and probably more I don’t remember.  Each one of these people have either answered questions online, provided links to valuable resources, or in some way have just inspired me to keep going when learning gets tough.  Thank you all!

Lastly, thanks to Randolph West (t) for encouraging me to get this blog posted today when I was whinging on twitter this morning that I completely spaced on the date and forgot to write it!

SQL Data Types – Strings

This is the first post in what I hope to be a mini-series into the main data types that a beginner will come across when using T-SQL.  First in this series is Character Functions and how to manipulate Strings.

What is a String?

A string is a data type used for values that are made up of ordered sequences of characters, such as “hello world”. A string can contain any sequence of characters, visible or invisible, and characters may be repeated. The number of characters in the string is called its length, and “hello world” has length 11 – made up of 10 letters and 1 space. There is usually a restriction on the maximum length of a string. There is also such a thing as an empty string, which contains no characters – length 0. (credit for the definition of a string goes to BBC Bitesize).  It is fair to assume that most databases contain strings and, as such, this is probably a good place to start this series.

String Manipulation and Functions

Whilst T-SQL may not have been designed with advanced character string manipulation as one of its key features, it is something that I find myself doing quite frequently.  Here is an outline of the main ways T-SQL provides to manipulate strings.


Concatenation is the act of combining multiple strings together to form 1 new string.  For instance, “First Name” added to “Surname” to form “Full Name”.  T-SQL provides 2 ways to do this: the plus (+) operator, and the CONCAT function.

This first example is string concatenation using the + operator.

SELECT empid
    , country
    , region
    , city
    , country + N',' + region + N',' + city AS 'location'
FROM HR.Employees

And this is what is returned :


As you can see, when any of the input values is NULL, the value returned using the + operator is also NULL.  Clearly, there will be times when NULL is not what you want in your output.  Luckily, there are 2 ways around this: CONCAT and COALESCE.

CONCAT – By default, this substitutes NULLs with empty strings.

COALESCE(<expression>, '') – This is used to substitute an empty string when region is NULL in the example below.

SELECT empid
    , country
    , region
    , city
    , CONCAT (country, N',' + region, N',' + city) AS 'location_concat'
    , country + COALESCE( N',' + region, N'') + N',' + city AS 'location_coalesce'
FROM HR.Employees

(not sure why CONCAT didn’t go pink in this code!)

And you can see the results are the same from each of these below :



There are a few functions that act on part of a string, such as pattern matching and extracting a section of a string.

The SUBSTRING function allows you to extract only a section of a string.  You specify the input string, the position to start from, and the length of the substring required. E.g. to return ‘abc’ from ‘abcdef’ you could use:

SELECT SUBSTRING('abcdef', 1, 3) AS 'substring'

There are also LEFT and RIGHT functions that work in much the same way, except that you only need to specify the number of characters required from the left or right ends of the string.  For instance :

SELECT LEFT('abcdef', 3) AS 'left'
    , RIGHT('abcdef', 3) AS 'right'


Alongside SUBSTRING there are CHARINDEX and PATINDEX, which return the position in a string of particular characters or patterns as a numeric value.  For instance, the below returns 5, as ‘ef’ starts at the 5th position of the string ‘abcdef’:

SELECT CHARINDEX('ef', 'abcdef')

PATINDEX works in much the same way but can be used to find patterns rather than a constant string.
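As a quick illustration (the pattern uses the same wildcard syntax as LIKE):

```sql
-- Returns 4: the first character matching the [0-9] pattern is the '1'
SELECT PATINDEX('%[0-9]%', 'abc123') AS 'patindex'
```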

You can combine a few of these together to get some quite clever results.  Here I used CHARINDEX and LEFT to separate a 1st name from a 2nd name :

SELECT LEFT('Daniel Blank', CHARINDEX(' ', 'Daniel Blank') -1)

This just returns ‘Daniel’
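For the flip side, a sketch that grabs the surname instead: start one character after the space, and since SUBSTRING happily accepts an over-long length argument, LEN of the whole string does fine as the third parameter:

```sql
-- Returns 'Blank': everything after the first space
SELECT SUBSTRING('Daniel Blank', CHARINDEX(' ', 'Daniel Blank') + 1, LEN('Daniel Blank')) AS 'surname'
```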


T-SQL has 2 functions for measuring the length of an input value, LEN and DATALENGTH.  LEN returns the length of an input string in terms of the number of characters (ignoring trailing spaces).  DATALENGTH returns  the length of the input in terms of the number of bytes a string takes up.  For instance :

SELECT LEN ('Daniel Blank') AS 'len'
      , DATALENGTH (N'Daniel Blank') AS 'datalength'



There are three supported T-SQL functions used to alter an input string.  REPLACE, REPLICATE, and STUFF.  These are pretty self-explanatory from the code below but I will give a brief explanation of each.

SELECT REPLACE('Daniel.Blank', '.', ' ') AS 'replace'
    , REPLICATE('a', 10) + 'rgh!' AS 'replicate'
    , STUFF('abcdefghijklmnop', 5, 5, '12345') AS 'stuff'



REPLACE lets you replace part of a string by specifying the input, the substring to be replaced, and the substring to replace with.

REPLICATE allows you to repeat an input string a specified number of times.  In the above, we replicated the letter ‘a’ 10 times.

STUFF deletes a section of a string and adds in another string in its place. You specify the input string, the character position to start the delete, how many characters to delete, and then the string to ‘stuff’ into its place.


Lastly we have the functions to apply formatting options to a string.  These are UPPER, LOWER, LTRIM, RTRIM and FORMAT.  The first  four are fairly self-explanatory once you see what they do, but FORMAT may take a little explanation.

SELECT UPPER('DAniEl bLANk') AS 'upper'
    , LOWER('DAniEl bLANk') AS 'lower'
    , LTRIM('              Daniel Blank') AS 'ltrim'
    , RTRIM('Daniel Blank              ') AS 'rtrim'
    , RTRIM(LTRIM('     DanielBlank     ')) AS 'l_and_r_trim'
    , FORMAT(123456, '000000000') AS 'format'


UPPER and LOWER allow you to format a string in either upper or lowercase characters.

LTRIM and RTRIM allow the removal of either leading or trailing spaces respectively.

FORMAT allows you to format an input value based on a format string.  So in the above example the number 123456 is formatted as a character string with a size of 9 characters with leading zeros.
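FORMAT also accepts .NET-style date format strings, which (assuming SQL Server 2012 or later, where FORMAT was introduced) makes reformatting dates for reports a one-liner:

```sql
-- Returns '14/11/2017'
SELECT FORMAT(CAST('2017-11-14' AS date), 'dd/MM/yyyy') AS 'format_date'
```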


Strings are one of the fundamental data types that you will come across when working with databases.  Above I have covered some of the most common ways of dealing with this data type.  Hopefully someone out there finds it interesting / useful.  For me, it’s good to have this typed up to reinforce what I have been covering whilst studying for my MCSA.

Note: A lot of this information has been sourced from the official Microsoft 70-461 Exam Training Kit which I am currently working through.

#TSQL2sday – Recovering Orphaned Files with Powershell and SQL

Being relatively new to SQL, this’ll be my first post for TSQL2sday.  For those not in the know, TSQL2sday (or T-SQL Tuesday) was started by Adam Machanic (t/b) and, I quote from this month’s host Rob Sewell (t/b), is :

“…a chance for you to join in the SQL Server community and write a blog post on a suggested topic. It makes for a great way to find a bunch of blog posts showing the same subject from many different viewpoints”

This month’s topic (if you haven’t already guessed from the title) is Powershell.

Now, Powershell is my 1st coding love. I’m not brilliant at it, but I’m definitely more than a beginner.  It’s what I spend most of my working life playing with, and it was the reason I eventually got in to trying to learn me some SQL.  Bearing all this in mind, this was the first month that I’ve felt confident enough to join in, so here goes!

Creating a Scheduled Powershell Task to Clean up Orphaned Files in our SQL Database.

Catchy title, right?  Basically, what I have been playing with this month is using a combination of SQL and Powershell to help with a small mess that we have recently found ourselves in.  For this to make sense, I guess it’s going to need some sort of context.  I’ll try and be brief.


I work at a relatively small but successful law firm, and our practice pretty much lives and dies with our Case Management System (CMS).  If you don’t know what that is then, suffice to say, it’s like an IT Service Desk tool on steroids, lots of steroids.  This software stores all documents, letters, emails and anything else that could possibly be associated with a legal case, and organises them all in some sort of logical fashion. We have literally millions of these files.  One of the main ways that these files are imported into the CMS is by an Outlook plugin that files the email and / or attachments.  The user is prompted to enter where they want to file them, and off they go.

We also use folder redirection for our users, so that their files are on a server rather than on their local computers.  It works pretty well, has some issues here or there, but it’s OK. Folder Redirection isn’t the star of this show.

Over the last couple of months, there have been a couple of small issues with the server that houses the folder redirection data.  This has seemingly had a knock-on effect where emails filed into the CMS during a folder redirection blip are lost.

What’s the Problem?

Well, each file in the CMS has a database entry that points to the file’s location on a server somewhere.  During a Folder Redirection blip, if you were to simply look at the CMS, it would appear that the files have been imported correctly. However, when trying to open them, you are greeted with “file can not be found”.  What seems to have happened is that the file path contains only the file name, not the crucial path part.  After a lot of digging, these files (mostly) seem to have ended up in the root directory of the Folder Redirection share.

The first couple of times this happened, it took us ages to manually figure out where these files were supposed to be, drag and drop them there, and then manually update the paths.  The first time, this was irritating. The second time, infuriating. The third time led me to this post – I wanted to never have to do this again!

There are a few things to note about this issue. Firstly, not all files I find in the database have been located in the Folder Redirection share root. Secondly, not all files in the Folder Redirection share root belong to Database entries with missing paths.  Lastly, all files are associated with a legal case, these take the form of an entity (or “customer”) and a matter number.  An example of this would be Entity ABC123 Matter 6 or ABC123/6.  The files associated with this matter need to be stored in a path based on this, so for this matter the files would be stored in \\KSLFS01\ptrdata\docs\A\B\C\ABC123\6.  To get each file going into the correct folder is something I needed to figure out.

The solution

This is my wonderful solution.  Normally I’d use the code formatting of WordPress, but because it’s a Powershell script with SQL code in there, it just didn’t format well, so excuse the picture:

My wonderful solution!

This solution is essentially doing 3 separate functions combined together:

  1. Query the database to find all files in the database that meet my missing path criteria.
  2. Check the content of the Folder Redirection share root folder and check to see if the files are in there.
  3. If the files from the 1st step are found in the folder from the 2nd step, migrate the file to the correct location and update it’s path in the database.
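The three steps above can be sketched roughly like this (every server, table and column name here is invented for illustration – the real schema is different – and Invoke-Sqlcmd is assumed to be available from the SQL Server PowerShell module):

```powershell
# Sketch only: names are placeholders, not the real CMS schema
$params = @{ ServerInstance = "SQLSERVER"; Database = "CMS" }

# 1. Find database entries whose path is just a bare file name
$orphans = Invoke-Sqlcmd @params -Query "SELECT DocId, FileName, CorrectPath
                                         FROM   Documents
                                         WHERE  FilePath NOT LIKE '\\%'"

foreach ($orphan in $orphans) {
    # 2. Is the stray file sitting in the Folder Redirection share root?
    $stray = Join-Path "\\SERVER\FolderRedirection" $orphan.FileName
    if (Test-Path $stray) {
        # 3. Move it home and point the database entry at the new location
        Move-Item -Path $stray -Destination $orphan.CorrectPath
        Invoke-Sqlcmd @params -Query "UPDATE Documents
                                      SET    FilePath = '$($orphan.CorrectPath)'
                                      WHERE  DocId = $($orphan.DocId)"
    }
}
```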

SQL was obviously used to find the relevant files and then update the file path when required.  CONCAT and SUBSTRING were used to form the correct file path, and are things that I had not used before this.
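A taste of what that CONCAT/SUBSTRING combination looks like (EntityRef, MatterNo and Documents are invented names for the example): for an entity of ABC123 and matter 6 it builds \\KSLFS01\ptrdata\docs\A\B\C\ABC123\6.

```sql
-- Column/table names are illustrative only
SELECT CONCAT('\\KSLFS01\ptrdata\docs\'
        , SUBSTRING(EntityRef, 1, 1), '\'
        , SUBSTRING(EntityRef, 2, 1), '\'
        , SUBSTRING(EntityRef, 3, 1), '\'
        , EntityRef, '\'
        , MatterNo) AS 'CorrectPath'
FROM Documents
```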

What I like about this solution is that it is now something that can be scheduled weekly on the SQL server, and hopefully means that this problem should never really bother us again.

Now, neither the Powershell code, nor the SQL code, are all that complex here. This is the first time I’ve really tried to combine the two into some useful function.  The Powershell itself is pretty straight forward but the SQL did take me a bit of research / trial and error to get right (the evolution can be found in this Github Gist, and this Twitter thread).


Whether what I have come up with is the best way to resolve this issue or not is not important here.  For me, this solution represents progress.  For the last 10 months I have been teaching myself SQL, and I’ve been trying to learn Powershell for a good 4 years (but the last 2 seriously).  This is the first time I am really seeing the fruits of my labour.  The combination of using SQL and Powershell together made what would have been hours of tedious work into a relatively quick and painless process.

Going forward, I can see how combining SQL and Powershell could really open the doors to more solutions for problems I already have, problems I don’t know exist yet, or just improving some of my already existing solutions.  I want to start using both of these languages to the full, and have already begun looking at resources such as DBA Tools (t/w), a great Powershell module for SQL admins, and have started reading some books by Itzik Ben-Gan (t/b), and working towards my MCSA.  I’m really hoping that this is the beginning of something special!


Customising the Outlook Contact Cards and Adding User Photos

Recently, I decided that I wanted to add staff photographs to the Global Address List. This is something that I had previously inherited in a former job, and thought would be good to implement for my current employer.

There were a few reasons for wanting to do this :

  • Staff awareness of each other

Whilst working in IT, I seem to know everyone in the company (we aren’t massive, <200 staff), but it turns out that most people struggle with who people are outside of their own department.

  • Friendly face on emails

People treat each other better (even internally) when they can put a face to who they are communicating with, especially over email.  It’s too easy to dehumanise someone who is just words on a page.

Anyway, the technical stuff.

Group Policy

First thing we needed to do was to install the Office 2013 Administrative Template files (ADMX/ADML) and Office Customization Tool.  This gives the extra options to be able to manage a whole host of customisation settings within the office environment.

The relevant ones for the Contact Card are in User Configuration>Policies > Administrative Templates > Microsoft 2013 > Contact Card > Contact Tab


This part can get a little fiddly.  There are a few default values for the Contact Cards, and if these fields are completed for an AD User then they are displayed.  To display different ones (we wanted to have the Pager field displayed, which is not a default) you have to replace one of the defaults.

There are 3 types of options here :

  1. Replace AD
  2. Replace Label
  3. Replace MAPI

In all honesty, this whole process seriously confused me, and took a HUGE amount of trial and error to get the result I wanted.   I know a lot of these “how to” posts give full technical explanations, but I still don’t fully understand this.  This is just an explanation of what I did to get it working!

As an example, to add the Pager field (where we store our users’ extension numbers) to the Contact Card, I removed the Work Address default value by setting the following policy values:

  • Replace Label – Work Address = “Extension” (this made it so that instead of the field being displayed as Pager, it was displayed as Extension)
  • Replace AD – Work Address = “ms-Exch-Telephone-Personal-Pager” (this is the full AD Attribute name for the Pager field in AD – a hell of a lot of googling went into this!)
  • Replace MAPI – Work Address = 975241247 (this is the binary value for the hexadecimal property tag for the Pager property – again, a lot of googling!)

Once the group policy was set up and working (FINALLY!) the next task was to actually add the pictures.


Turns out the photos have to be less than 10 Kb for them to be stored in AD. I used GIMP to shrink all user photos down until they were under the size limit and stored them in a share on one of our servers.  You can set the photos against the individual users using Powershell.

To set all photos at once :

$PhotoPath = "\\SERVER\SHARE\photos\Outlook\*.*"

ForEach ($PhotoFile in gci $PhotoPath)
{
    # Strip the 4-character extension (e.g. ".jpg") to get the identity
    $User = $PhotoFile.Name.Substring(0, $PhotoFile.Name.Length - 4)
    Import-RecipientDataProperty -Identity $User -Picture -FileData ([Byte[]]$(Get-Content -Path $PhotoFile.FullName -Encoding Byte -ReadCount 0))
}

To add a photo for an individual user:

Import-RecipientDataProperty -Identity "Daniel Blank" -Picture -FileData ([Byte[]]$(Get-Content -Path "\\SERVER\SHARE\photos\Outlook\Daniel Blank.jpg" -Encoding Byte -ReadCount 0))

Finally, to remove a user’s photo:

Set-Mailbox "Daniel Blank" -RemovePicture
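
And if you ever need to clear the photos back out in bulk, Set-Mailbox can be fed from the pipeline. A sketch (use with care – this hits every mailbox):

# Strip the stored photo from every mailbox
Get-Mailbox -ResultSize Unlimited | Set-Mailbox -RemovePicture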

After all that, here is the end result :


Final Thoughts

In all honesty, given the amount of googling, trial and error, and photo editing involved, this was a seriously frustrating project.  However, it does seem to have gone down well with management and the users, so I guess it was worth it.

Hope this saves someone else the amount of pain I had to go through to get this sorted!



Creating Custom Parameters in a Powershell Function

Custom Parameters in Powershell.  My god, I wish I’d started using these earlier than I did!  It would have saved me SO much time, effort and, as it turns out, literally thousands of lines of code!

When I started my current job, one of the first things I wanted to improve was the New Starter process.  It was being done completely manually and was taking a serious amount of time for one of my colleagues.  I figured that the best way of improving this situation was to put together a proper New Starter script.  Now, this was 2/3 years ago, and my Powershell skills were not what they are now.  At the time I thought the best idea was to write a big script, with constant user prompts asking yes or no questions.  This ended up taking me a good 3 months, and in the end was way over 2500 lines.  To be fair, there was nothing wrong with this approach; it got the job done.  That being said, it did have its problems.  Updating it when things changed was a nightmare due to all the hard-coded elements, it was longer than it needed to be, and it was clunky.

Fast forward a couple of years, and I discovered Powershell Functions (along with SWITCH statements, but that will be covered in a later post!).  To say that this blew my mind is an understatement.  This gave me a way to eliminate user prompts (like “What is the person’s name?”, “What is their department?”, “What is their job title?” etc.) and simply start using a purpose-built cmdlet of my own!

Anyway, on to using parameters.  This is all it took to get my script using my own set of 6 parameters:



param (
    # Placeholder names for illustration - only the attributes matter here
    [Parameter(Mandatory=$True, Position=1)]
    [string]$FullName,
    [Parameter(Mandatory=$True, Position=2)]
    [string]$JobTitle,
    [Parameter(Mandatory=$True, Position=3)]
    [string]$Department,
    [string]$Manager,
    [int]$ExtensionNumber,
    [switch]$Temp
)
As you can see, it’s fairly self explanatory.  I have 6 parameters, 3 of which are mandatory (i.e. the script will error if they are not filled in) and 3 that are optional.  That was it.  That’s all that I needed to do.  Yes, there are more things that you can do with parameters, but as a starting point, that was all it needed.  Six parameters, their types set to string, switch or integer, and some marked as mandatory.

If anyone has not ventured into using parameters yet, I strongly advise it.  It literally revolutionised my Powershell coding!
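
For anyone wondering what that looks like in practice, here’s a made-up call (the script name and parameter names are just for illustration):

# Mandatory parameters error if missed; optional ones can simply be skipped
.\New-Starter.ps1 -FullName "Daniel Blank" -JobTitle "Technician" -Department "IT" -Temp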

…..god damn it, I wish I had learned this years ago!

Audit File Deletion on Server 2012

We often get complaints that important files have been “deleted” from Shared folders that will need restoring.  Restoring them from backups, or even from Previous Versions, is easy, but sometimes you need to know who removed the file, or when the file was removed.  This is when Auditing comes in to play.

The first step is to enable Auditing on the machine in question (in our case, the server where the Shares reside).  To do this:

  1. On the Server, go to Local Group Policy management.
  2. Browse to Computer Configuration –> Windows Settings –> Security Settings –> Local Policies –> Audit Policy –> Audit object access.
  3. Select “Success” in the options screen (unless you want to also log when users have failed to delete items).
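
If you’d rather not click through the policy editor, the rough command-line equivalent is the built-in auditpol tool (note it works at the more granular subcategory level rather than the top-level policy):

# Enable success auditing for file system object access
auditpol /set /subcategory:"File System" /success:enable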

Once the policy is set up you then need to configure the auditing itself.  To do this:

  1. Go to the Folder that needs monitoring, right click and select Properties.
  2. Click on the Security tab, then go to Security –> Advanced –> Auditing Tab.
  3. Click the Disable Inheritance button if available.
  4. Click Add then select the principal (i.e. the group or users that you want to monitor) and change the Type drop-down to Success.
  5. In the Basic Permissions, select which events you want to audit (in this case, the deletion events – Delete and Delete Subfolders and Files).
  6. Finally, OK out of the menus, watch as the permissions apply and you are good to go.

To view the audited events, open Event Viewer and under Windows Logs choose the Security log, then set up a filter for event ID 4663.  This will show you the delete events for the folder.
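
If you’d rather pull these out with Powershell than click through Event Viewer, something along these lines does the same filtering (a sketch – adjust -MaxEvents to taste):

# Most recent file-access audit events (ID 4663) from the Security log
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4663 } -MaxEvents 50 |
    Select-Object TimeCreated, Message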

What Queries Are Currently Running on SQL Server

Sometimes, your SQL Server seems to grind to a halt.  The first thing you’ll probably want to do in this situation is try and see if there are any queries eating all the available resources.

This can be found by running this query:

SELECT sqltext.TEXT
, req.session_id
, req.status
, req.command
, req.cpu_time
, req.total_elapsed_time
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS sqltext

Once you have the list of running queries, you can kill the offending one using its session_id:

 KILL xxx 

And if you want to do some finger pointing, you can find out the offending user with this:

 EXEC sp_who 'xxx' 

Since finding this out (all credit goes to Pinal Dave) it has proved hugely helpful for me on a number of occasions!