Export-CSV -NoTypeInformation (#PSHelp)

I ask a lot of questions about scripting online. A. LOT.  It's probably been one of the best ways I've found to learn, and I've been doing it for years, mostly on Twitter or Reddit.  Recently I found that my PowerShell has gotten to the point where I'm actually able to start answering other people's questions. Generally it's the simpler stuff, as a good 80% of what I see is still way beyond me, but it's a good sign of progress regardless.

Anyway, a question was posted recently using the #pshelp hashtag on Twitter: how do you replace the header line of multiple CSV files at once?

I saw this and, perhaps cockily, thought to myself, "that's got to be an easy two-liner; bet I can boost my ego and quickly write out a solution".  A good 2 hours, 1 Reddit post for help, and a discussion on Twitter later, I was just getting my experimental code snippets together into something workable.  Turns out this had opened up quite the rabbit hole in a relatively "simple" task.  I won't go into the gory details of what I tried and failed at, but here is what was left over from my experiments in the ISE window:


This is what I eventually got working:

$CSVLocation = "C:\Stuff\CSVs"
$NewHeaders = @("6", "7", "8", "9", "10")
$CSVs = Get-ChildItem $CSVLocation -Filter *.csv -Recurse | Select-Object -ExpandProperty FullName

foreach ($CSV in $CSVs) {
    $data = Import-Csv $CSV -Header $NewHeaders
    $data2 = $data | Select-Object -Skip 1
    $data2 | Export-Csv $CSV -NoTypeInformation
}

This is actually quite simple. It sets variables for the location of the CSV files and the new header row, then grabs the full path of each CSV file.  Each file is imported with the new header row applied, which pushes the original header down to become the first data row; this is $data.  $data2 is then $data with that first row (i.e. the original header) skipped. Finally it exports $data2 back out over the original CSV.  There are probably better ways of doing this, but it works well enough for me.

There we go. All in all, once I understood what was going on, this was relatively straightforward. Nice to learn something new! Hope this helps someone else one day 🙂

UPDATE: I had this post scheduled to go tomorrow, but today I found out something that I thought should be tagged on to the end of this… $PSDefaultParameterValues! Basically, Shane O'Neill mentioned this to me on Twitter today and I thought it was brilliant!  It had been mentioned that -NoTypeInformation is now the default in PowerShell Core, and I had been wondering if there was something similar I could do outside of Core.

Turns out that all you need to do to stop having to remember to set -NoTypeInformation is this:
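In case the snippet doesn't come through, here's a minimal sketch of the idea (the output path is just an example):

```powershell
# Make Export-Csv behave as if -NoTypeInformation were always supplied.
# This is session-wide; put it in your profile to make it stick.
$PSDefaultParameterValues['Export-Csv:NoTypeInformation'] = $true

# From now on, this no longer writes the "#TYPE ..." line at the top:
Get-Process | Select-Object Name, Id | Export-Csv C:\Stuff\procs.csv
```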



#TSQL2sday – Recovering Orphaned Files with Powershell and SQL

Being relatively new to SQL, this'll be my first post for TSQL2sday.  For those not in the know, TSQL2sday (or T-SQL Tuesday) was started by Adam Machanic (t/b) and, to quote this month's host Rob Sewell (t/b), is:

“…a chance for you to join in the SQL Server community and write a blog post on a suggested topic. It makes for a great way to find a bunch of blog posts showing the same subject from many different viewpoints”

This month's topic (if you haven't already guessed from the title) is PowerShell.

Now, PowerShell is my first coding love. I'm not brilliant at it, but I'm definitely more than a beginner.  It's what I spend most of my working life playing with, and it was the reason I eventually got in to trying to learn me some SQL.  Bearing all this in mind, this was the first month that I've felt confident enough to join in, so here goes!

Creating a Scheduled PowerShell Task to Clean up Orphaned Files in our SQL Database.

Catchy title, right?  Basically, what I have been playing with this month is using a combination of SQL and PowerShell to help with a small mess that we have recently found ourselves in.  For this to make sense, I guess it's going to need some sort of context.  I'll try and be brief.


I work at a relatively small but successful law firm, and our practice pretty much lives and dies with our Case Management System (CMS).  If you don't know what that is then, suffice to say, it's like an IT Service Desk tool on steroids, lots of steroids.  This software stores all documents, letters, emails and anything else that could possibly be associated with a legal case, and organises them all in some sort of logical fashion. We have literally millions of these files.  One of the main ways that these files are imported into the CMS is by an Outlook plugin that files the email and / or attachments.  The user is prompted to enter where they want to file them, and off they go.

We also use folder redirection for our users, so that their files are on a server rather than on their local computers.  It works pretty well, has some issues here or there, but it’s OK. Folder Redirection isn’t the star of this show.

Over the last couple of months, there have been a few small issues with the server that houses the Folder Redirection data.  This has had a knock-on effect where emails filed into the CMS during a Folder Redirection blip are seemingly lost.

What’s the Problem?

Well, each file in the CMS has a database entry that points to the file's location on a server somewhere.  During a Folder Redirection blip, if you were to simply look at the CMS, it would appear that the files have been imported correctly. However, when trying to open them, you are greeted with "file cannot be found".  What seems to have happened is that the file path contains only the file name, not the actually crucial path part.  After a lot of digging, these files (mostly) seem to have ended up in the root directory of the Folder Redirection share.

The first couple of times this happened, it took us ages to manually figure out where these files were supposed to go, drag and drop them there, and then manually update the paths.  The first time, this was irritating. The second time, infuriating. The third time led me to this post; I never wanted to have to do this again!

There are a few things to note about this issue. Firstly, not all files I find in the database have ended up in the Folder Redirection share root. Secondly, not all files in the Folder Redirection share root belong to database entries with missing paths.  Lastly, all files are associated with a legal case; these take the form of an entity (or "customer") and a matter number.  An example of this would be Entity ABC123, Matter 6, or ABC123/6.  The files associated with this matter need to be stored in a path based on this, so for this matter the files would be stored in \\KSLFS01\ptrdata\docs\A\B\C\ABC123\6.  Figuring out how to route each file into its correct folder was the crux of the problem.
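That entity/matter-to-path mapping can be sketched in PowerShell like this (the function name and parameter names are mine for illustration, not from our production script):

```powershell
# Build the CMS storage path for an entity/matter pair, e.g. ABC123/6.
# The first three characters of the entity code become nested folders.
function Get-MatterPath {
    param(
        [Parameter(Mandatory)][string]$Entity,   # e.g. "ABC123"
        [Parameter(Mandatory)][int]$Matter,      # e.g. 6
        [string]$Root = '\\KSLFS01\ptrdata\docs'
    )
    $a = $Entity.Substring(0, 1)
    $b = $Entity.Substring(1, 1)
    $c = $Entity.Substring(2, 1)
    Join-Path $Root "$a\$b\$c\$Entity\$Matter"
}

Get-MatterPath -Entity 'ABC123' -Matter 6
# \\KSLFS01\ptrdata\docs\A\B\C\ABC123\6
```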

The solution

This is my wonderful solution.  Normally I'd use the code formatting of WordPress but, because it's a PowerShell script with SQL code in there, it just didn't format well, so excuse the picture:

My wonderful solution!

This solution is essentially doing 3 separate functions combined together:

  1. Query the database to find all files in the database that meet my missing path criteria.
  2. Check the content of the Folder Redirection share root folder and check to see if the files are in there.
  3. If the files from the 1st step are found in the folder from the 2nd step, migrate the file to the correct location and update it’s path in the database.
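The three steps above can be sketched roughly like this. This is not my exact code; the instance, database, table and column names are all placeholders, and it assumes the SqlServer module's Invoke-Sqlcmd is available:

```powershell
# Requires the SqlServer module for Invoke-Sqlcmd
Import-Module SqlServer

$ShareRoot = '\\KSLFS01\ptrdata'   # Folder Redirection share root
$Instance  = 'SQLSERVER01'         # hypothetical SQL instance

# 1. Find database entries whose "path" is just a bare file name
$orphans = Invoke-Sqlcmd -ServerInstance $Instance -Database CMS -Query @"
SELECT FileId, FileName, Entity, Matter
FROM   dbo.Documents
WHERE  FilePath NOT LIKE '%\%';
"@

foreach ($o in $orphans) {
    # 2. Is the file sitting in the share root?
    $stray = Join-Path $ShareRoot $o.FileName
    if (Test-Path $stray) {
        # 3a. Move it to the correct entity/matter folder...
        $e    = $o.Entity
        $dest = Join-Path "$ShareRoot\docs" `
            ("{0}\{1}\{2}\{3}\{4}" -f $e[0], $e[1], $e[2], $e, $o.Matter)
        Move-Item $stray (Join-Path $dest $o.FileName)
        # 3b. ...and fix the path in the database
        Invoke-Sqlcmd -ServerInstance $Instance -Database CMS -Query "
            UPDATE dbo.Documents
            SET    FilePath = '$dest\$($o.FileName)'
            WHERE  FileId = $($o.FileId);"
    }
}
```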

SQL was obviously used to find the relevant files and then update the file path when required.  CONCAT and SUBSTRING were used to form the correct file path, and are things that I had not used before this.
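To give a flavour of the T-SQL side, here's a hedged sketch of how CONCAT and SUBSTRING can build the path (the table and column names are invented for illustration, and the query is embedded in a PowerShell here-string as in my script):

```powershell
# SUBSTRING peels off the first three characters of the entity code,
# and CONCAT stitches the folder levels together into a full UNC path.
$query = @"
SELECT FileId,
       CONCAT('\\KSLFS01\ptrdata\docs\',
              SUBSTRING(Entity, 1, 1), '\',
              SUBSTRING(Entity, 2, 1), '\',
              SUBSTRING(Entity, 3, 1), '\',
              Entity, '\', Matter, '\', FileName) AS NewPath
FROM   dbo.Documents;
"@
```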

What I like about this solution is that it is now something that can be scheduled weekly on the SQL server, and hopefully means that this problem should never really bother us again.

Now, neither the PowerShell code nor the SQL code is all that complex here. This is the first time I've really tried to combine the two into something useful. The PowerShell itself is pretty straightforward, but the SQL did take me a bit of research / trial and error to get right (the evolution can be found in this GitHub Gist, and this Twitter thread).


Whether what I have come up with is the best way to resolve this issue is not important here.  For me, this solution represents progress.  For the last 10 months I have been teaching myself SQL, and I've been trying to learn PowerShell for a good 4 years (the last 2 seriously).  This is the first time I am really seeing the fruits of my labour.  Combining SQL and PowerShell turned what would have been hours of tedious work into a relatively quick and painless process.

Going forward, I can see how combining SQL and PowerShell could really open the doors to solutions for problems I already have, problems I don't know exist yet, or just improvements to some of my existing solutions.  I want to start using both of these languages to the full, and have already begun looking at resources such as dbatools (t/w), a great PowerShell module for SQL admins. I've also started reading some books by Itzik Ben-Gan (t/b) and am working towards my MCSA.  I'm really hoping that this is the beginning of something special!


Creating Custom Parameters in a Powershell Function

Custom parameters in PowerShell.  My god, I wish I'd started using these earlier than I did!  They would have saved me SO much time, effort and, as it turns out, literally thousands of lines of code!

When I started my current job, one of the first things I wanted to improve was the New Starter process.  It was being done completely manually and was taking a serious amount of time for one of my colleagues.  I figured that the best way of improving this situation was to put together a proper New Starter script.  Now, this was 2-3 years ago, and my PowerShell skills were not what they are now.  At the time I thought the best idea was to write one big script, with constant user prompts asking yes or no questions.  This ended up taking me a good 3 months, and in the end it was well over 2,500 lines.  To be fair, there was nothing wrong with this approach; it got the job done.  That being said, it did have its problems.  Updating it when things changed was a nightmare due to all the hard-coded elements, it was longer than it needed to be, and it was clunky.

Fast forward a couple of years, and I discovered PowerShell functions (along with switch statements, but they'll be covered in a later post!).  To say that this blew my mind is an understatement.  It gave me a way to eliminate user prompts (like "What is the person's name?", "What is their department?", "What is their job title?", etc.) and simply start using a purpose-built cmdlet of my own!

Anyway, on to using parameters.  This is all it took to get my script to use my own set of 6 parameters:


[Parameter(Mandatory=$True, Position=2)]

As you can see, it's fairly self-explanatory.  I have 6 parameters, 3 of which are mandatory (i.e. the script will error if they are not supplied) and 3 that are optional.  That was it.  That's all I needed to do.  Yes, there are more things you can do with parameters, but as a starting point, that was all it needed: six parameters, their types set to string, switch or integer, and some marked as mandatory.
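The attribute line that survived above is one line of a param() block. A full six-parameter block along those lines (function and parameter names invented for illustration, not my actual ones) looks like:

```powershell
# A purpose-built cmdlet: three mandatory parameters, three optional.
function New-Starter {
    param(
        [Parameter(Mandatory=$True, Position=0)][string]$FirstName,
        [Parameter(Mandatory=$True, Position=1)][string]$LastName,
        [Parameter(Mandatory=$True, Position=2)][string]$Department,
        [string]$JobTitle,        # optional
        [int]$EmployeeNumber,     # optional
        [switch]$Contractor       # optional flag
    )
    "Creating $FirstName $LastName ($Department)"
}

# Omitting -FirstName/-LastName/-Department makes PowerShell prompt or error:
New-Starter -FirstName Jane -LastName Doe -Department Legal -Contractor
```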

If anyone has not ventured into using parameters yet, I strongly advise it.  It literally revolutionised my PowerShell coding!

…god damn it, I wish I had learned this years ago!