Saturday, November 28, 2015

The ContentViewDefinition with name DocumentsBackend does not exist in configuration files.

I recently attempted to access the Documents section of my Sitefinity v. 8.1 site when I was suddenly faced with the following error message:

"The ContentViewDefinition with name DocumentsBackend does not exist in configuration files."

Well, I immediately suspected that my site's upgrade to Sitefinity v. 8.1 had not gone as smoothly as hoped.

After contacting Sitefinity Support, I was informed of the following:

The error message stating that LibraryImagesBackend can't be found is due to the fact that this view was replaced after Sitefinity 6.0 and is no longer in use.  The site must have started experiencing this after an upgrade path passing through Sitefinity 6.0.

To resolve the error, force the upgrade to run again by making the configuration change below to SystemConfig.config in the App_Data\Sitefinity\Configuration folder:


<systemConfig xmlns:config="urn:telerik:sitefinity:configuration" xmlns:type="urn:telerik:sitefinity:configuration:type" config:version="5.4.4040" build="4040">
<add name="Libraries" />

After restarting my site to trigger the upgrade, I could view my Images once again!

Wednesday, November 25, 2015

Installing Windows Features dynamically using PowerShell

I recently had to install a set of Windows IIS features on a Windows Server 2008 R2 box, when I was suddenly faced with this error message:

"The all option is not recognized in this context."

In the past, I had been using this DISM article to execute my commands on Windows Server 2012 and Windows Server 2012 R2: https://technet.microsoft.com/en-us/library/hh824822.aspx

Even though the article says that it applies to Windows Server 2008 R2, according to the error message, this does not appear to work!

Therefore, I decided to write my own custom PowerShell script that would allow me to "dynamically" install Windows Features without manually having to re-write my existing Windows Feature command for different platforms.

This is finally what I came up with:

$IISFeatures = @("NetFx3", "IIS-ASPNET", "IIS-NetFxExtensibility", "IIS-ApplicationDevelopment", "IIS-WebServer", "IIS-WebServerRole", "IIS-DefaultDocument", "IIS-CommonHttpFeatures", "IIS-ISAPIFilter", "IIS-ISAPIExtensions", "IIS-RequestFiltering", "IIS-Security")

$IISFeatureNames = New-Object System.Collections.ArrayList

Clear-Host

foreach ($feature in $IISFeatures)
{
    $FeatureName = "/FeatureName:" + $feature
    Write-Host $FeatureName
    #ArrayList.Add returns the new index, so cast to [void] to suppress the output
    [void]$IISFeatureNames.Add($FeatureName)
}

#Dynamically enable the list of features
& DISM /Online /Enable-Feature $IISFeatureNames

When I want to add or install a new Windows Feature, I simply add the name of the Windows Feature to the $IISFeatures array.

You can get a list of available Windows Features by running this command:

dism /online /get-features | more

Setting up and deploying your own NuGet server Step-by-Step

You may have read this article about how to set up your own NuGet Server: https://docs.nuget.org/create/hosting-your-own-nuget-feeds

However, this article leaves out several critical elements required to set up, configure and deploy your own NuGet Server.

Before you get started, though, you will need to make sure your systems are set up with the necessary prerequisites, and then follow these high-level steps:

  1. Make sure you install .NET Framework v. 4.5 (or later) on your target web server
  2. Make sure you install Web Deploy for IIS on your target web server (http://www.iis.net/downloads/microsoft/web-deploy)
  3. I am using Visual Studio 2015 for this example, so make sure you have either Visual Studio 2013 or Visual Studio 2015 in order to build and publish the NuGet Server web deployment package.
  4. From the Tools menu in Visual Studio, you can select "Create GUID" to generate the GUID that you will need for your NuGet Server API Key.  You will need to generate the GUID in "Registry Format" and then remove the curly braces ({}) from the GUID before entering it into your NuGet Server's Web.config file.
  5. From Visual Studio, you will need to Publish your Web Application as a Web Deploy package.
  6. On the target web server, you will need to select the option for "Import Application" to begin the process of importing the Web Deploy package. 
  7. If you have done everything correctly, when you browse to http://myserver/NuGet, you will see the NuGet Server landing page!
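
The API key from step 4 goes into the NuGet.Server application's Web.config. As a minimal sketch (the GUID value shown is a placeholder, not a real key):

```xml
<!-- Web.config of the NuGet.Server web application -->
<appSettings>
  <!-- Paste your generated GUID (without curly braces) as the API key -->
  <add key="apiKey" value="00000000-0000-0000-0000-000000000000" />
  <!-- When true, pushing and deleting packages requires the API key -->
  <add key="requireApiKey" value="true" />
</appSettings>
```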
 

Tuesday, November 24, 2015

Setting up a Bootstrap menu in ASP.NET MVC

If you want to set up a Bootstrap menu in ASP.NET MVC, obviously, the first part of doing that is to learn how to create a Bootstrap menu.

Of course, when you initially create an ASP.NET MVC project, the template will create a very basic Bootstrap menu for you.  However, if you want to do something more sophisticated, you need to know how Bootstrap menus work, as outlined here: http://www.w3schools.com/bootstrap/bootstrap_navbar.asp

Therefore, if you want to construct a Bootstrap menu with dropdown lists using ASP.NET MVC, you will probably end up with code similar to this:

<div class="navbar-collapse collapse">
        <ul class="nav navbar-nav">
            <li>@Html.ActionLink("Home", "Index", "Home")</li>
            <li>@Html.ActionLink("About", "About", "Home")</li>
            <li>@Html.ActionLink("Contact", "Contact", "Home")</li>
            <li class="dropdown">
                <a class="dropdown-toggle" data-toggle="dropdown" href="#">
                    Dropdown Links Menu
                    <span class="caret"></span>
                </a>
                <ul class="dropdown-menu">
                    <li>@Html.ActionLink("Link1", "Method1", "Link")</li>
                    <li>@Html.ActionLink("Link2", "Method2", "Link")</li>
                    <li>@Html.ActionLink("Link3", "Method3", "Link")</li>
                    <li>@Html.ActionLink("Link4", "Method4", "Link")</li>
                </ul>
            </li>
        </ul>
</div>

Monday, November 23, 2015

Attribute routing in ASP.NET MVC

You may be familiar with Attribute-based routing in ASP.NET Web API, but you may not know that you have the same capability in ASP.NET MVC!

With the release of ASP.NET MVC 5, you also have Attribute-based routing available to you in the ASP.NET MVC Framework, as described here: http://blogs.msdn.com/b/webdev/archive/2013/10/17/attribute-routing-in-asp-net-mvc-5.aspx

However, instead of making your changes to support attribute-based routing in the WebApiConfig.cs file found in the App_Start folder, you instead make your changes to the RouteConfig.cs file also in the App_Start folder.

You can then use routing attributes in a similar manner as you add routes in ASP.NET Web API, such as using Route Prefixes and standard Route conventions.
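
As a minimal sketch (the controller and route names here are hypothetical), enabling attribute routing in RouteConfig.cs and decorating a controller might look like this:

```csharp
// App_Start/RouteConfig.cs
public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // Enable attribute-based routing in ASP.NET MVC 5
        routes.MapMvcAttributeRoutes();

        // Convention-based routes can still be registered afterwards
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }
}

// A controller using a RoutePrefix and a Route attribute
[RoutePrefix("products")]
public class ProductsController : Controller
{
    // Responds to GET /products/5
    [Route("{id:int}")]
    public ActionResult Details(int id)
    {
        return View();
    }
}
```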

In earlier releases of ASP.NET MVC, you would have to define an ActionName attribute to distinguish different routes as described here: http://www.codeproject.com/Articles/846403/Can-we-overload-MVC-controller-action-methods-MVC

Thursday, November 19, 2015

The end of an Era for Sitefinity and Small Business Owners

I have been using Sitefinity since v. 3.7, when the licensed edition still cost only $799.  Later, after Sitefinity v. 4.0 was released, the Small Business Edition was released at a cost of $499, and the Standard Edition price was subsequently jacked up significantly.

Not too long ago, Sitefinity got rid of their Sitefinity Community Edition, making Sitefinity no longer a viable option for non-profit, community or open-source projects.

Today, with the current release of Sitefinity v. 8.2, Sitefinity has done away with the Small Business Edition and now the cheapest edition of Sitefinity (Standard Edition) is priced at a whopping $3000! (http://www.sitefinity.com/editions)

Therefore, Small Business Owners that are looking for a reasonably priced and affordable ASP.NET-based Content Management system will now have no choice but to look elsewhere.

Sitefinity is now only priced for medium-sized to large corporations with big budgets to spend on their Content Management Systems for their publicly-facing websites.

Well, as a Small Business Owner myself, and on behalf of all the other Small Business Owners out there: so long, Sitefinity, it was nice knowing you.  Too bad you could not stick around to meet our needs and budgets!!




Downloading a file from an ASP.NET MVC View

If you want to allow a user to download a file from your ASP.NET MVC View, then you will need to return a FileStreamResult (or any of the derived classes from FileResult): https://msdn.microsoft.com/en-US/library/system.web.mvc.fileresult%28v=vs.118%29.aspx

This article provided some great insight on how to accomplish this: https://gist.github.com/johnmmoss/8ee16837513ab69de4f3

Since I needed to render an RDF file to the browser, this was the code that I ultimately ended up using:

//Requires namespaces: System.IO, System.Text, System.Web.Mvc
[HttpGet]
public FileStreamResult DownloadTurtleFile()
{
 
    string rdfContent = GetRDFContent();
 
    byte[] fileContent = Encoding.Unicode.GetBytes(rdfContent);
    var stream = new MemoryStream(fileContent);
    var fileStreamResult = new FileStreamResult(stream, "application/x-turtle");
    fileStreamResult.FileDownloadName = "RDFFile.ttl";
    return fileStreamResult;
}

RDF processing in C#

I recently started working on a project that used the RDF standard (http://www.w3.org/RDF/)

I needed to develop an application that leveraged this standard using C#/.NET, so I naturally began hunting for any .NET libraries that would make handling this platform much easier.

Fortunately, I found such a library called dotNetRDF!  http://dotnetrdf.org/

Even better, this library is also available as a NuGet package!


Wednesday, November 18, 2015

Data at the root level is invalid. Line 1, position 1.

I was recently working on a project that required using Xml so I naturally used LINQ-to-XML to load my documents.

Unfortunately, as soon as I used the XDocument.Parse method, I encountered this dreaded error message:

"Data at the root level is invalid.  Line 1, position 1."

After doing some searching on the Internet, it appears this error is very prevalent, though the solution was not readily obvious!

Fortunately, this article provides some insight on how to workaround this problem:  http://stackoverflow.com/questions/2111586/parsing-xml-string-to-an-xml-document-fails-if-the-string-begins-with-xml

I ended up using a combination of an XmlReader and a set of XmlReaderSettings to ignore the DTD instruction in the Xml Document, as follows:

//Requires namespaces: System.IO, System.Text, System.Xml, System.Xml.Linq
public static string RemoveDeclarationFromXDocument(string xmlDoc)
{
    XmlReaderSettings settings = new XmlReaderSettings();
    settings.DtdProcessing = DtdProcessing.Ignore; //Ignore any DTDs in the Xml Document
 
    XDocument xdoc;
 
    //The Xml Document is encoded using UTF-16 rather than UTF-8
    //therefore, you need to use Encoding.Unicode instead of Encoding.UTF8
    using (var xmlStream = new MemoryStream(Encoding.Unicode.GetBytes(xmlDoc)))
    {
        using (XmlReader xmlReader = XmlReader.Create(xmlStream, settings))
        {
            xdoc = XDocument.Load(xmlReader);
        }//using
    
    }//using
 
    //The XDocument string provides the Xml Document without the Xml Declaration heading
    return xdoc.ToString();
}

Tuesday, November 17, 2015

Windows 10 Exciting Licensing Changes!!

Microsoft has changed the Windows 10 licensing scheme to allow Windows 10 to be activated using valid Windows 7 or Windows 8 license keys!!

Wow!  How cool is that??

This now allows existing Windows 7 or Windows 8 users to install a clean/fresh copy of Windows 10 using their existing license keys, rather than having to install Windows 10 as an upgrade over an existing Windows 7 or Windows 8 installation.

This removes a lot of the hassle of trying to rule out incompatible software on an existing Windows 7 or Windows 8 installation, and instead allows performing a clean install and adding compatible software from the very beginning.

Way to go Microsoft!!

You can read more about this change here: http://www.forbes.com/sites/gordonkelly/2015/11/16/microsoft-windows-10-free-upgrade-rule-changes/?utm_campaign=yahootix&partner=yahootix

Where can I find the System.Web.Http assembly?

Visual Studio 2015 now suggests "Potential fixes" when you have forgotten to reference a particular assembly in your project.  However, if the assembly is part of a NuGet package, it offers no insight into which NuGet package reference you have to add to your project!  You are still left on your own to figure out which NuGet package is required!

Well, in the case of the System.Web.Http assembly, this belongs to the Microsoft.AspNet.WebApi.Core NuGet package.
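
Adding the package from the NuGet Package Manager Console in Visual Studio is a one-liner:

```powershell
Install-Package Microsoft.AspNet.WebApi.Core
```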



Once you add this reference to your project via NuGet Package Manager, your assembly reference error message should disappear!

Monday, November 16, 2015

Unable to install .NET Framework 3.5 on Windows Server 2012 R2

I was recently attempting to install .NET Framework v. 3.5 on my Windows Server 2012 R2 virtual machine, when I suddenly discovered that the feature was never being installed!

Well, a quick search of the problem revealed this article: http://blogs.technet.com/b/askpfeplat/archive/2014/09/29/attempting-to-install-net-framework-3-5-on-windows-server-2012-r2-fails-with-error-code-0x800f0906-or-the-source-files-could-not-be-downloaded-even-when-supplying-source.aspx

This also led me to the following subsequent articles:

https://support.microsoft.com/en-us/kb/3002547

https://support.microsoft.com/en-us/kb/3005628

After installing the required update on my virtual machine, I was finally able to install .NET Framework v. 3.5 once again!
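
For reference, the feature itself can then be installed from PowerShell. The -Source path here is an assumption about where your installation media is mounted (adjust the drive letter to match your system):

```powershell
# Install .NET Framework 3.5 on Windows Server 2012 R2,
# using the SxS folder on the mounted installation media as the source
Install-WindowsFeature NET-Framework-Core -Source D:\sources\sxs
```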


Thursday, November 12, 2015

Cannot find server certificate with thumbprint when restoring a SQL Server database

I was recently attempting to restore a database backup that I received from a vendor when I suddenly encountered the following error message:

"Cannot find server certificate with thumbprint..."
Well, as it turns out, based on this Microsoft support article: https://support.microsoft.com/en-us/kb/2300689, the database was backed up using Transparent Data Encryption (TDE).  Therefore, without the certificate information, I would not be able to restore the backups back to my database server.

If the vendor was willing to provide me with the certificate information, then I could use the following method to restore the database: http://sqlserverzest.com/2013/10/03/sql-server-restoring-a-tde-encrypted-database-to-a-different-server/

However, considering that would probably compromise their security, the best thing to do is what the Microsoft support article advises:


ALTER DATABASE testDB SET ENCRYPTION OFF;

Once TDE is turned off, a backup of the database can be created and then provided to us to restore!

Importing Certificates using PowerShell

If you would like to be able to import certificates into the Certificate store using PowerShell, there is now a built-in set of PowerShell cmdlets that allows you to do this!

Starting with Windows 8.1 and Windows Server 2012 R2, you can use either of the following PowerShell cmdlets:

  • Import-Certificate
  • Import-PfxCertificate
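
As a minimal sketch (the file paths and store locations here are examples, not from the original post):

```powershell
# Import a .cer certificate into the Local Machine's Trusted Root store
Import-Certificate -FilePath "C:\Certs\MyCert.cer" -CertStoreLocation Cert:\LocalMachine\Root

# Import a password-protected .pfx (certificate + private key) into the Personal store
$pfxPassword = Read-Host -AsSecureString "Enter PFX password"
Import-PfxCertificate -FilePath "C:\Certs\MyCert.pfx" -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword
```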

You can read more about these PowerShell Cmdlets here:

https://technet.microsoft.com/en-us/%5Clibrary/hh848630%28v=wps.630%29.aspx

https://technet.microsoft.com/en-us/library/hh848625%28v=wps.630%29.aspx

Unfortunately, if you are using an older version of Windows (such as Windows Server 2008 R2 or even Windows Server 2012), you will need to implement your own custom PowerShell script to accomplish this similar to what is implemented here:

http://www.orcsweb.com/blog/james/powershell-ing-on-windows-server-how-to-import-certificates-using-powershell/

Where is MakeCert.exe?


If you are using Windows 8.1 or Windows Server 2012 R2 with the Windows 8.1 SDK, you can find makecert.exe here:

C:\Program Files (x86)\Windows Kits\8.1\bin\x86\makecert.exe

There is also a 64-bit version available here:

C:\Program Files (x86)\Windows Kits\8.1\bin\x64\makecert.exe

For guidance on using makecert.exe, you can check out these articles:


https://msdn.microsoft.com/en-us/library/bfsktky3%28v=vs.110%29.aspx

 https://msdn.microsoft.com/en-us/library/windows/desktop/aa386968%28v=vs.85%29.aspx

https://redmondmag.com/articles/2015/01/16/create-a-self-signed-certificate.aspx

http://blogs.msdn.com/b/winsdk/archive/2009/11/13/steps-to-sign-a-file-using-signtool-exe.aspx

 

Wednesday, November 11, 2015

Where is SignTool.exe?

If you are looking for SignTool.exe and you are running Windows 7 or Windows Server 2008 R2 with the Windows 7 SDK, you can find SignTool.exe here:

C:\Program Files (x86)\Microsoft SDKs\Windows\v7.1A\bin\signtool.exe

If you are using Windows 8.1 or Windows Server 2012 R2 with the Windows 8.1 SDK, you can find SignTool.exe here instead:

C:\Program Files (x86)\Windows Kits\8.1\bin\x86\signtool.exe

This article also provides excellent screenshots of where to find SignTool.exe: revolution.screenstepslive.com/s/revolution/m/10695/l/112948-installing-signtool-exe

Code signing an executable or installation package

If you want to sign your installation packages on Windows, you will need the tool called SignTool that ships with the Windows SDK.

You can find more information about how to use SignTool here:

https://msdn.microsoft.com/en-us/library/windows/desktop/aa387764%28v=vs.85%29.aspx

https://msdn.microsoft.com/en-us/library/8s9b9yaz%28v=vs.110%29.aspx

This article has some good examples on using SignTool to sign a file: https://msdn.microsoft.com/en-us/library/windows/desktop/aa388170%28v=vs.85%29.aspx

If you want to learn how to generate your own Code Signing Certificate using makecert.exe, you should definitely check out this article: http://blogs.msdn.com/b/winsdk/archive/2009/11/13/steps-to-sign-a-file-using-signtool-exe.aspx

Discovering Windows file locks

If you ever encounter an issue whereby Windows informs you that a file or folder is locked, you may be wondering how to detect what applications or processes are holding that file lock!

Fortunately, there are several tools that allow you to accomplish just that!

One of the tools that I have used frequently in the past is Unlocker: http://emptyloop.com/unlocker/

However, in recent months it seems to have been labeled as an "Unsafe site", so you may not even find this website in Google search results.

Lucky for us, there is another company which produces a similar product called LockHunter, which provides much of the same functionality and still displays in Google search results: http://lockhunter.com/index.htm

Using one of these tools, you can easily discover the source of the file or folder lock and either release the lock or even delete the file or folder directly from within the tool!

How cool is that??


Tuesday, November 10, 2015

Creating a Self-Extracting Executable using WinRAR

I recently had a requirement to create a Self-Extracting Executable, and I immediately thought of using WinRAR to accomplish this task.

Tools such as 7-Zip and WinZip can also accomplish this, but the functionality is not fully built into those tools: 7-Zip requires extra modules to create an SFX, while WinZip requires the purchase of a separately licensed piece of software called WinZip Self-Extractor.

Therefore, for the $29 price tag of WinRAR, the ability to create Self-Extracting Executables directly, without any additional modules or licensed software, is definitely appealing.

Unfortunately, the documentation/help file for creating an SFX using WinRAR is pretty poor.  Therefore, it took a few hours of playing around with different command-line switches before I finally came across the right combination.

Ultimately, I used a commands text file based on the example provided in the WinRAR Help documentation:

Title=Calculator 3.05
Text
{
Calculator is shareware. Anyone may use this
software during a test period of 40 days.
Following this test period or less, if you wish
to continue to use Calculator, you MUST register.
}
Path=Calculator
Overwrite=1
Setup=setup.exe

This was the PowerShell script that I used to run WinRAR:
 
$ZipFile = "C:\Temp\Temp.zip"
$CmdsFile = "C:\Temp\Test.txt"
$WinRarExe = "C:\Program Files\WinRAR\WinRAR.exe"
$RarFileArgs = @"
a -sfx $ZipFile -z$CmdsFile
"@
Write-Host """$WinRarExe""" $RarFileArgs
Start-Process $WinRarExe  $RarFileArgs


This is also a variation that converts an existing archive into an SFX directly:
$ZipFile = "C:\Temp\Temp.zip"
$CmdsFile = "C:\Temp\Test.txt"
$WinRarExe =  $Env:ProgramFiles + "\WinRAR\WinRAR.exe"
$RarFileArgs = @"
s $ZipFile -z$CmdsFile
"@
 
Write-Host """$WinRarExe""" $RarFileArgs
Start-Process $WinRarExe  $RarFileArgs

Date and Time formatting in PowerShell

If you need to know how to format Dates and Times in PowerShell, you will definitely want to check out this article: https://technet.microsoft.com/en-us/library/ee692801.aspx

The most common type of date formatting you will probably use is something like the following:
 
$currentDate = Get-Date -format yyyyMMdd

That is all there is to it!!


Passing parameters to a PowerShell build step in Jetbrains TeamCity

If you are using PowerShell scripts as part of your build steps in Jetbrains TeamCity, you may come across this article which describes how to pass parameters to your PowerShell scripts:  https://thebutlershome.wordpress.com/2013/04/28/team-city-powershell-and-parameters/

However, the example is a bit outdated for the latest version of TeamCity (v. 9.1.3 as of this writing).

Instead, you will need to pass parameters as Script arguments in this format:

-arg1 "test1" -arg2 "test2"

Using the "=" sign while passing parameter arguments will lead to unexpected behavior in TeamCity.
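
On the PowerShell side, the script then declares matching parameters. A minimal sketch (the parameter names here are just examples):

```powershell
# Script invoked by the TeamCity PowerShell build step
param(
    [string]$arg1,
    [string]$arg2
)

Write-Host "arg1 = $arg1"
Write-Host "arg2 = $arg2"
```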

In addition, the option for "Execute .ps1 script with -File argument" no longer exists.  Instead, it has been replaced with "Execute .ps1 from external file"





Jetbrains TeamCity vs. Jenkins

I have been using Jetbrains TeamCity for many years, pretty much in lieu of many other continuous integration build systems, including Team Foundation Server.

However, I recently had the opportunity to work with Jenkins as part of a migration to TeamCity, so I thought this was a perfect opportunity to compare and contrast the two platforms!


  1. First up, from a visual perspective, Jenkins uses a lot more pictures/images/icons than TeamCity.  TeamCity is a rather text-heavy web application and is not very visually appealing.
  2. Both Jenkins and TeamCity are built on Java and support multiple platforms including Windows. 
  3. Both platforms are free, but TeamCity has a limitation of 20 build configurations and 3 build agents.  You can purchase an additional 10 build configurations for a nominal cost of $299 or buy an Enterprise Server license for $1999 which offers unlimited build configurations. 
  4. Both Jenkins and TeamCity support plug-ins, however, Jenkins seems to have a wider variety of plug-ins available for the platform than TeamCity.  On the other hand, TeamCity also has a great deal of functionality built directly into the platform that Jenkins would otherwise leverage through a plug-in.
  5. Next up, in terms of overall ease-of-use and usability, TeamCity wins hands-down! 
Here are some of the things I noticed immediately:

  1. When I want to copy a project in Jenkins, I have to select "Copy existing item" and then MANUALLY  type in the name of the project I wish to copy.  In TeamCity, I simply select the project/build configuration I want to copy and from the "More..." menu item, choose "Copy build configuration"
  2. When creating a new item in Jenkins, I am not presented with an option to create a "build configuration template" which is a reusable template for creating future build configurations.  TeamCity presents the option to create either a build configuration or a build template directly from the Project Settings page.
  3. In order to create a build configuration, TeamCity provides a wizard-like navigation to configure the various aspects of a build configuration while Jenkins provides a very simple (if not downright primitive) top-down layout for displaying all of the build configuration settings.
  4. When creating builds in Jenkins, you can create a parameterized build, but you cannot easily parameterize individual build steps (such as the Execute Windows batch command build step)!  In TeamCity, every single build step is capable of accepting parameters including the Command Line build step.
  5. Jenkins has no concept of VCS Roots.  Instead, a single VCS repository has to be configured for each build configuration that you create.  TeamCity provides a centralized repository of VCS Roots which allows you to configure 1 or more VCS Roots which can then be shared readily among all of your build configurations.  This is particularly handy if you are using build templates or builds which need to be chained.
  6. After a check-in occurs triggering a build, Jenkins will display the check-in comments from the associated check-ins, but TeamCity will not only display the check-in comments and the person who committed the check-in, but also provide you with a diff view of the files that were changed as part of the check-in!!
  7. TeamCity offers built-in issue tracking integration with JIRA, Bugzilla and YouTrack.  https://confluence.jetbrains.com/display/TCD9/Integrating+TeamCity+with+Issue+Tracker   TeamCity also offers plug-ins for various other issue tracking systems such as Team Foundation Server. https://blog.jetbrains.com/teamcity/2014/11/integrating-teamcity-and-visual-studio-online-work-items/
  8. TeamCity also allows you to store your TeamCity project settings in a variety of source control repositories such as Git, Mercurial, Perforce and Subversion.  https://confluence.jetbrains.com/display/TCD9/Storing+Project+Settings+in+Version+Control
  9. Jenkins has the option to view the Console Output, but TeamCity has not only the option to view the build log, but also to be able to download the build log for a further detailed review!  You can even download the full build log as a large text file or a zipped version of the build log!
  10. Jenkins has no option to name any of your build steps to distinguish what they are doing.  The only information you get about build steps in Jenkins is what they are such as "Execute Windows batch command" or "Invoke Ant" etc.  TeamCity allows you to name each of your build steps to better discern what each build step is actually accomplishing.
  11. When creating build triggers in Jenkins, you have to manually type in syntax to specify a build schedule (even for the Poll SCM option).  However, with TeamCity you get a nice User Interface by which you can choose options such as a VCS Trigger (which will poll the VCS every time a check-in occurs), or use a Schedule Trigger which allows specifying the schedule for which a particular build configuration will run such as daily, weekly or advanced.
  12. When creating a build chain or build dependency, once again Jenkins requires you to MANUALLY type in the name of the project that is part of the build chain, while TeamCity allows you to easily choose among the various existing projects.  
  13. Once you configure build steps in Jenkins, there is no easy way to re-arrange the order of the build steps.  TeamCity allows you to easily drag and drop the build steps in order to re-arrange their order.
  14. In Jenkins, the only option to prevent a build step from executing is to Delete the build step.  However, TeamCity allows you to easily disable a build step rather than simply deleting it.  Therefore, if you are copying a build configuration or using a build template, you can selectively modify your build configuration to disable certain build steps while still inheriting from the parent build template.
  15. Jenkins has no built-in build step for building Visual Studio solutions while TeamCity not only has a built-in build step for Visual Studio solutions, but can also auto-detect Visual Studio solution build steps based on the source control repository as well as even NuGet build steps!!
  16. In Jenkins, there is no option to copy an existing build step directly.  Instead, you have to create a brand new build step and then copy and paste the appropriate commands into the new build step.  However, in TeamCity, you can directly copy a build step and then modify/edit as needed to suit your needs.
  17. In Jenkins, when you create a build step, there is no option to change the type of build step once it has been created.  Instead, you have to delete the build step and create a brand new one.  However, in TeamCity, by simply choosing the type of build step from a drop down list, you can change the build step from something like a Command Line Build Step to a PowerShell Build Step!  This is extremely convenient when you want to provide more sophisticated functionality and need to switch to something more powerful such as PowerShell.  TeamCity keeps all of your information intact while you begin the process of switching it over to a PowerShell script instead.
  18. When you view your build steps in Jenkins, you can only view them top-down, and Jenkins provides no details as to how many build steps are in your build configuration.  However, TeamCity provides you with a good visual of all of your build steps, allowing you to perform a variety of actions on them such as reordering, editing, disabling, copying or deleting them, as well as a left hand navigation view which tells you exactly how many Build Steps exist in your Build Configuration!
This is just a small sampling of the differences between Jenkins and TeamCity.  Once you begin truly evaluating TeamCity for yourself, you will soon discover that TeamCity will also be your Continuous Integration build server/platform of choice!!

Saturday, November 7, 2015

Formatting strings in PowerShell

If you have done programming in .NET/C# before, you are probably very familiar with using the string.Format function to provide structure and formatting for your strings.

However, if you are new to PowerShell, you may not know how to perform the same task in PowerShell.

Fortunately, using string formatting in PowerShell is just as easy as in .NET/C#!

You can read more about how to do string formatting in PowerShell here: http://blogs.technet.com/b/heyscriptingguy/archive/2013/03/12/use-powershell-to-format-strings-with-composite-formatting.aspx

The most common implementation is definitely this:
"This is my first string {0} and this is my second string {1}"  -f "Scrooge", "McDuck"

Friday, November 6, 2015

Working with double quotes in PowerShell

If you have worked with PowerShell for any length of time, you will soon realize that escaping double quote characters is a troublesome task, because double quotes are not only used to define strings but also to expand variables inside of strings!

Needless to say, at some time or another you will need to escape double quotes in your strings (particularly when working with paths used for command-line tools).

Fortunately, TechNet has an excellent article on managing double quotes in your PowerShell scripts: http://blogs.technet.com/b/heyscriptingguy/archive/2010/07/28/writing-double-quotation-marks-to-a-text-file-using-windows-powershell.aspx

Though they are not the most readable option for a single line of text, here-strings are definitely the easiest technique to use: https://technet.microsoft.com/en-us/library/ee692792.aspx

The reason here-strings are not the most readable is that they require the @" to begin on one line, the text on a separate line, and finally the "@ on a third line!  Therefore, a single line of text suddenly needs to be stretched across 3 lines in your PowerShell script!
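
For example, a single line containing double quotes stretches across three lines as a here-string:

```powershell
$text = @"
He said "hello" and then left.
"@
Write-Host $text
```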

A situation in which I needed to use Here-Strings was when I was using the command-line tool RoboCopy:

 
#$SrcFileDirectory and $DestDirectory are assumed to be defined earlier in the script
$RoboCopy = "$env:SystemRoot\System32\Robocopy.exe"
$RoboCopyArgs = @"
"$SrcFileDirectory" "$DestDirectory" /MIR /FFT /Z /XA:H /W:5
"@
Write-Host "These are the RoboCopy Args:"
Write-Host $RoboCopyArgs
Start-Process -FilePath $RoboCopy -Verb RunAs -ArgumentList $RoboCopyArgs
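If the console is already running elevated, a simpler alternative is the call operator, since PowerShell quotes each variable argument for you when invoking a native command. This is just a sketch, assuming $SrcFileDirectory and $DestDirectory are defined as above:

```powershell
# Invoke RoboCopy directly; each variable is passed as a single argument,
# even if it contains spaces, so no here-string quoting is needed
& "$env:SystemRoot\System32\Robocopy.exe" $SrcFileDirectory $DestDirectory /MIR /FFT /Z /XA:H /W:5
```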

Beyond Compare now supports 64-bit!

If you have been using Beyond Compare as your differencing tool of choice for a while now, you may not have known that Beyond Compare now supports 64-bit!

The latest release of Beyond Compare ships with multi-platform support, so it will install as either a 32-bit application or a 64-bit application.

Therefore, if you are upgrading an earlier release of Beyond Compare, it will simply upgrade your existing 32-bit installation.

However, on fresh new installations of Beyond Compare, it will install as a 64-bit application!


You can download the latest release of Beyond Compare from here: http://scootersoftware.com/download.php

Thursday, November 5, 2015

Windows Photo Viewer on Windows 10

I recently performed a fresh installation of Windows 10 and was frustrated to discover that I no longer had the right click and "Preview" option when I clicked on Photos!

Well, as it turns out, Windows 10 by default removes the Windows Photo Viewer functionality for viewing pictures, which removes access to that handy "Preview" feature.

Fortunately, though, I found a handy article which provides the necessary registry entries to bring back Windows Photo Viewer in Windows 10: http://www.howtogeek.com/225844/how-to-make-windows-photo-viewer-your-default-image-viewer-on-windows-10/

Once I merged/added these registry entries back to my Windows 10 system, I once again had my "Preview" option available when I right-clicked on my photos/pictures in Windows Explorer!

Connector attribute SSLCertificateFile must be defined when using SSL with APR

I was recently setting up SSL for Apache Tomcat based on this article: https://support.comodo.com/index.php?/Knowledgebase/Article/View/646/0/tomcat-ssl-connector

However, after setting up SSL in my server.xml file according to the article, I received the following error message:

"Connector attribute SSLCertificateFile must be defined when using SSL with APR"

Well, as it turns out, this line in my server.xml file was causing problems:

<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

Since the AJP connector's redirectPort pointed at the same port I was using for Apache Tomcat SSL, it was causing conflicts and throwing this exception!

 

Therefore, my solution consisted of two parts:

First, remove the redirectPort attribute from the AJP Connector element:

<Connector port="8009" protocol="AJP/1.3" />

Next, remove the APR Listener element from the server.xml file:

<Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />

I created a handy little PowerShell script to accomplish these tasks:



[CmdletBinding()]
Param(
[Parameter(Mandatory=$True,Position=1)]
[string]$KeyStoreFile,
[Parameter(Mandatory=$True)]
[string]$KeyStorePwd,
[Parameter(Mandatory=$False)]
[string]$PortNumber="8443"
)
 
$BaseScriptDir = "C:\MyCerts"
$ApacheTomcatDir = "C:\Program Files\Apache Software Foundation\Tomcat 8.0"
$ApacheTomcatConfDir = "$ApacheTomcatDir\conf"
$JavaCertFile = "$BaseScriptDir\$KeyStoreFile"
$ServerXMLFile = "$ApacheTomcatConfDir\server.xml"
$TomcatServiceName = "Tomcat8"
$BakFileExtension = ".bak"
$AJPPortNumber = "8943"
 
Write-Host "Copy the Java SSL Certificate for Apache Tomcat SSL"
Copy-Item $JavaCertFile $ApacheTomcatConfDir
 
Write-Host "Create a backup of the Apache Tomcat server.xml file"
Copy-Item $ServerXMLFile ($ServerXMLFile + $BakFileExtension)
 
#Stop the Apache Tomcat Service
Stop-Service $TomcatServiceName
 
#Read the content of the XML File
$serverXMLDoc = [xml](Get-Content $ServerXMLFile)
 
#Update the Http Protocol connector
$connectorXPath = "//Connector[@protocol='HTTP/1.1']"
$connectorNode = Select-Xml -Xml $serverXMLDoc -XPath $connectorXPath | Select-Object -ExpandProperty Node
 
#Use lowercase string literals for boolean attributes to match Tomcat's server.xml conventions
$connectorNode.SetAttribute("port", $PortNumber)
$connectorNode.SetAttribute("SSLEnabled", "true")
$connectorNode.SetAttribute("maxThreads", "150")
$connectorNode.SetAttribute("scheme", "https")
$connectorNode.SetAttribute("secure", "true")
$connectorNode.SetAttribute("keystoreFile", "$ApacheTomcatConfDir\$KeyStoreFile")
$connectorNode.SetAttribute("keystorePass", $KeyStorePwd)
$connectorNode.SetAttribute("clientAuth", "false")
$connectorNode.SetAttribute("sslProtocol", "TLS")
$connectorNode.SetAttribute("maxHttpHeaderSize", "8192")
 
$connectorNode.RemoveAttribute("redirectPort")
 
#Update the AJP connector
$AJPConnectorXPath = "//Connector[@protocol='AJP/1.3']"
$AJPConnectorNode = Select-Xml -Xml $serverXMLDoc -XPath $AJPConnectorXPath | Select-Object -ExpandProperty Node
 
$AJPConnectorNode.RemoveAttribute("redirectPort")
 
#Remove the AJP Listener
$ListenerXPath = "//Listener[@className='org.apache.catalina.core.AprLifecycleListener']"
$ListenerNode = [System.Xml.XmlElement](Select-Xml -Xml $serverXMLDoc -XPath $ListenerXPath | Select-Object -ExpandProperty Node)
 
$ServerXPath = "//Server"
$ServerNode = [System.Xml.XmlElement](Select-Xml -Xml $serverXMLDoc -XPath $ServerXPath | Select-Object -ExpandProperty Node)
 
#[void] suppresses the removed node from being written to the pipeline output
[void]$ServerNode.RemoveChild($ListenerNode)
 
#Save the changes and update the server.xml file
$serverXMLDoc.Save($ServerXMLFile)
 
#Restart the Apache Tomcat service
Start-Service $TomcatServiceName


 

Windows Explorer integration with Team Foundation Server

If you are used to tools such as TortoiseHg or TortoiseSVN that have Windows Explorer integration, you may be looking for similar features and functionality with Team Foundation Server.

Fortunately, this is available by installing the Team Foundation Server Power Tools!

You can install TFS 2013 Power Tools from here: https://visualstudiogallery.msdn.microsoft.com/f017b10c-02b4-4d6d-9845-58a06545627f

You can install TFS 2015 Power Tools from here: https://visualstudiogallery.msdn.microsoft.com/898a828a-af00-42c6-bbb2-530dc7b8f2e1

Removing all empty folders in a Directory Tree using PowerShell

If you have a requirement to remove all empty folders in a Directory Tree using PowerShell, you might come across this article: https://technet.microsoft.com/en-us/library/ff730953.aspx

Unfortunately, this article has a major flaw in the script which ends up listing out BOTH the parent as well as the child subdirectories!!

Therefore, if you pipe the Remove-Item command to the results, you will end up removing the entire directory tree structure rather than just the empty folders!

Instead, you will want to modify the script to something like the following:
 
[CmdletBinding()]
Param(
[Parameter(Mandatory=$True,Position=1)]
[string]$RootDir
)
 
Clear-Host
 
$emptyDir = Get-ChildItem $RootDir -Recurse | Where-Object {$_.PSIsContainer -eq $True} | Where-Object {$_.GetFiles().Count -eq 0 -and $_.GetDirectories().Count -eq 0}
 
foreach ($emptyDirItem in $emptyDir)
{
    Write-Host $emptyDirItem.FullName
    Remove-Item $emptyDirItem.FullName
}#foreach

If, instead, you prefer piping commands to each other, you can do something like the following:


[CmdletBinding()]
Param(
[Parameter(Mandatory=$True,Position=1)]
[string]$RootDir
)
 
Clear-Host
 
Get-ChildItem $RootDir -Recurse | Where-Object {$_.PSIsContainer -eq $True} | Where-Object {$_.GetFiles().Count -eq 0 -and $_.GetDirectories().Count -eq 0} `
| Remove-Item -Recurse #-WhatIf
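One caveat: a single pass only removes the deepest empty folders, so a parent folder that becomes empty once its empty children are deleted will survive the pass. A minimal sketch (assuming $RootDir is defined as above, and PowerShell 3.0 or later) that repeats the sweep until nothing is left to remove:

```powershell
# Repeat the sweep until a pass finds no empty folders, so parents that
# become empty after their children are removed are also cleaned up
do {
    $emptyDirs = Get-ChildItem $RootDir -Recurse |
        Where-Object { $_.PSIsContainer -eq $True } |
        Where-Object { $_.GetFiles().Count -eq 0 -and $_.GetDirectories().Count -eq 0 }

    foreach ($dir in $emptyDirs)
    {
        Remove-Item $dir.FullName
    }#foreach
} while ($emptyDirs.Count -gt 0)
```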

That is all there is to it!

Wednesday, November 4, 2015

Working with INI files in PowerShell

I recently had a requirement to construct a file that would store information that I could then use to easily read back the information to construct a directory structure on the file system.

I did not want to construct an elaborate XML document to store this information, so the obvious choice became an .ini file; however, I was not sure if PowerShell had built-in support for INI files.

Well, fortunately, a PowerShell scripting expert provided the functionality for reading INI files for us!

http://blogs.technet.com/b/heyscriptingguy/archive/2011/08/20/use-powershell-to-work-with-any-ini-file.aspx

You can download his PowerShell script from here: https://gallery.technet.microsoft.com/scriptcenter/ea40c1ef-c856-434b-b8fb-ebd7a76e8d91
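For a quick sense of the approach, here is a minimal sketch (not the Scripting Guy's full implementation) of parsing an INI file into a nested hashtable; the function name and file path are illustrative:

```powershell
# Minimal INI parser sketch: [Section] headers become top-level hashtable
# keys, and key=value lines become entries under the current section
function Get-IniContent {
    param([string]$Path)
    $ini = @{}
    $section = "NO_SECTION"
    $ini[$section] = @{}
    foreach ($line in Get-Content $Path) {
        $line = $line.Trim()
        if ($line -match '^\[(.+)\]$') {
            $section = $Matches[1]
            $ini[$section] = @{}
        }
        elseif ($line -match '^([^;#=]+)=(.*)$') {
            $ini[$section][$Matches[1].Trim()] = $Matches[2].Trim()
        }
    }
    return $ini
}

# Usage: $config = Get-IniContent "C:\Temp\Folders.ini"
#        $config["Paths"]["RootDir"]
```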

Working with Collections in PowerShell

If you have to work with Collections in PowerShell, you basically have 3 major options:

  1. Arrays
  2. ArrayLists
  3. Hashtables

Arrays are definitely the most common collections used in PowerShell scripts since they are built directly into the PowerShell framework and are by far the easiest to use: https://technet.microsoft.com/en-us/library/ee692791.aspx

Hashtables are similar to Dictionary collections and are also built directly into the PowerShell framework: https://technet.microsoft.com/en-us/library/ee692803.aspx?f=255&MSPPError=-2147217396

ArrayLists are NOT built into the PowerShell framework, but instead uses the capabilities of the .NET Framework in order to provide this functionality: https://technet.microsoft.com/en-us/library/ee692802.aspx.  However, given that you can tap into nearly any library that is provided by the .NET Framework using PowerShell, you could also potentially tap into Queues and Stacks as well! 
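A quick sketch of all three options in action (variable names and values here are purely illustrative):

```powershell
# Array: built in; note that += creates a brand-new array each time
$array = @("one", "two")
$array += "three"

# ArrayList (.NET): resizable with efficient appends; Add() returns the
# new index, so cast to [void] to suppress the pipeline output
$list = New-Object System.Collections.ArrayList
[void]$list.Add("one")
[void]$list.Add("two")

# Hashtable: built in; key/value lookups
$hash = @{ FirstName = "Scrooge"; LastName = "McDuck" }
$hash["FirstName"]
```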

To learn more about Collections that are available with the .NET Framework, you can check out these articles:

https://msdn.microsoft.com/en-us/library/ybcx56wz.aspx

https://msdn.microsoft.com/en-us/library/0ytkdh4s%28v=vs.110%29.aspx



Tuesday, November 3, 2015

Looping through all files and folders in PowerShell

I recently had a requirement to loop through all Files and Folders in a PowerShell script and I readily found this PowerShell article which addressed this requirement: http://blogs.technet.com/b/heyscriptingguy/archive/2014/02/03/list-files-in-folders-and-subfolders-with-powershell.aspx

However, the behavior was not what I desired or expected!

Instead of looping through and listing all of the files along with their associated directory structures, it simply listed out all of the directory names and then later listed out the file names!

Rather, I wanted the behavior just as I would expect when performing the same operation in C#: https://msdn.microsoft.com/en-us/library/bb513869.aspx
 
Well, after reading through a large number of PowerShell articles, I discovered a "fix" to making PowerShell behave the way I wanted as follows:

$files = Get-ChildItem -Path "$DirPath\*" -Recurse

foreach ($filePath in $files)
{
    $directoryPath = [System.IO.Path]::GetDirectoryName($filePath)
    $fileName = [System.IO.Path]::GetFileName($filePath)

    Write-Host $directoryPath
    Write-Host $fileName
}#foreach

The solution was much simpler than I expected!  I simply had to append a "\*" (wildcard character) to the end of my directory path!

I also was able to come up with an alternative solution as well:

$files = Get-ChildItem -Path $DirPath -Recurse -File  #the -File switch requires PowerShell 3.0 or later

foreach ($filePath in $files)
{
    $directoryPath = $filePath.DirectoryName
    $fileName = $filePath.Name

    Write-Host $directoryPath
    Write-Host $fileName
}#foreach

That was all that was needed to achieve the same behavior as in C#!!


Monday, November 2, 2015

Passing null or empty strings as parameters to PowerShell

 

If you have a parameter that is mandatory in your PowerShell script, but you also need to be able to pass null or empty string values to it, then you have to tell PowerShell to accept those values, because by default a Mandatory parameter rejects both!

Fortunately, PowerShell has a mechanism to do just this!

If you want to be able to accept null values for a PowerShell parameter, you can write something like the following:

[Parameter(Mandatory=$True)]
[AllowNull()]
$MyParameter


On the other hand, if you need to be able to accept empty strings, you can do something such as this instead:



[Parameter(Mandatory=$True)]
[AllowEmptyString()]
[string]$MyParameter
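Putting both together, here is a minimal sketch of a script (the file name Test-Params.ps1 and the parameter names are hypothetical) that accepts an empty string and a null value for its mandatory parameters:

```powershell
[CmdletBinding()]
Param(
    [Parameter(Mandatory=$True)]
    [AllowEmptyString()]
    [string]$LogPath,

    [Parameter(Mandatory=$True)]
    [AllowNull()]
    $Credential
)

# Both of these calls now succeed without PowerShell re-prompting:
# .\Test-Params.ps1 -LogPath "" -Credential $null
Write-Host "LogPath: '$LogPath'"
```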

If you want to read more about the possible Parameter attribute declarations, you can check out this TechNet article here: https://technet.microsoft.com/en-us/library/hh847743%28v=wps.630%29.aspx