Monday, December 19, 2011

Ubuntu Desktop 11.10 on Hyper-V piix4_smbus

So, I needed a small virtual machine that had the Hyper-V Integration Components working in it (I need the clean shutdown action).

XP is just too large, but usually works well.  Linux, why not.  And the RedHat family just incorporated the latest of the Hyper-V Linux Integration Components.  Perfect.

Now, the flavor.  RedHat?  No, no license.  Fedora? Well, now that I think of it, I forgot this one before.  CentOS?  Great for a server.  Ubuntu.  Yep, I chose Ubuntu.

I created a VM and installed Ubuntu Server without a hitch.  Just be sure to use a Legacy Network Adapter during the install process.

I then created a VM and tried to install Ubuntu Desktop.  Big fail.
First of all, for the life of me I could not get the installer screen to popup.  I could not figure out what was going on.  After all, the server VM went straight to the language selection.

Well, at the bottom of the screen is this cryptic graphic of a keyboard, an equal sign, and what looks to be a person with arms and legs spread.  After a few reboots I interpreted this symbol to mean: “if you want to install, hit enter now”.
Hey, the language selection pops up!

Now, I select my language and hit enter.  Error message, drop to a prompt.  Fail two.

The error:  piix4_smbus 0000:07.3: SMBus base address uninitialized – upgrade BIOS or use force_addr=0xaddr

I only play a Linux guy on TV.  A bit of searching showed me that this error is common on VirtualBox, that the suggested settings change is not a safe thing to try, and that I needed to re-compile my initrd.

This is a brand new build and freshly downloaded media.  Crazy talk!

Well, I looked around and figured out a workaround.
I managed to get the installer to load by selecting the “other options” at the installation selection menu (F6) and setting “acpi=off” (highlight it, enter or spacebar, ESC to close the options dialog).

Oh, and a nice feature: on the Windows 8 Developer Preview I actually had a mouse during the GUI installer of Ubuntu Desktop.

Now, enabling the Integration Components..

I did just a couple things.  Bring up a terminal using “Ctrl + Alt + t”.
Then run “sudo -i” to switch to root for the remainder of the session, or type sudo at the beginning of each command.

Using nano (I never got any handle on vi and have always fallen back to nano), open /etc/initramfs-tools/modules ( “nano /etc/initramfs-tools/modules” ) and add these four lines:
hv_vmbus
hv_storvsc
hv_blkvsc
hv_netvsc

Save that.  Then generate an updated initramfs with “update-initramfs -u” and reboot.

If you had rebooted prior to this you may have noticed a storage layer inaccessibility error that is now gone.

I then ran “apt-get update” and “apt-get upgrade” – two hours go by…  Not necessary, but it does update all packages.

This entire time I have been using a Legacy Network Adapter with the Ubuntu Desktop VM.  After the update completed I shutdown the VM, removed the Legacy Network Adapter, added a (synthetic) Network Adapter and all was good.

Friday, December 16, 2011

Provisioning Server (PVS) Cache on device issues

I recently ran into some issues streaming VMs and taking advantage of the local storage on the hypervisors for the local cache.

Many of these issues were due to my lack of experience with the product, so I am making notes here for future use.

First, if you are not familiar with Provisioning Server or have not taken a look at it for a few revisions, go download the latest version.  There have been a lot of improvements since the last time I looked at it (a few years ago). 

Cache on Device Hard Drive is actually from the PVS feature to stream to bare metal.  In the VM world it allows me to use local storage that is relatively inexpensive (unless you use SSDs – but hey, just another possibility) and just fine when you don’t want to persist the cache disk.

Okay, enough of that, back to my cache disk issue notes.
Firstly, when you create your Template VM, be sure to create the local cache disk as a local drive of that VM.

Secondly, don’t forget to format this disk for your streamed OS.  So, for Windows, format it NTFS (I am just booting the VM that will be my template to WinPE and using DiskPart to avoid ACLs – see the sketch at the end of this post).
Third, make that cache disk large enough.  This was my most perplexing puzzle and I fiddled for a couple days with this off and on.

When I booted the VM and logged in I would get this strange error message:  “Windows created a temporary paging file on your computer”

This issue is specific to using the Cache on Device setting when your vDisk is set to shared mode.  The root is that the default page file location cannot be written to, as it is the shared vDisk image.  The ancillary detail is that the page file gets moved to the device cache disk, which is some drive other than %systemdrive%.

So, I had my local cache, all formatted, as small as it thought it needed to be.  I booted my vDisk and got the error.  I set the vDisk to private mode and fiddled around with paging file settings.  I went back to my template VM, booted in shared mode, no paging file.

I searched and found all kinds of related errors; moving it off C:, using a SCSI attached VHD instead of IDE (I happen to be using XenServer so this was not it), ACLs on files within the VHD, also obscure errors where the programs are expecting specific folders to exist on the cache disk.  All of these are totally valid errors and entirely possible in their scenario, just not mine.

Very frustrating.  It was when I booted the VM template in private mode with the cache disk attached and attempted to move the page file that I got a useful error:  the destination is not large enough.

Duh!  My device cache disk (the virtual disk of the VM) was not large enough for the caching of temporary files AND the page file of the VM operating system.  So silly.

I went back and recreated my template VM, with a larger cache disk – taking into account any dynamic memory settings.

Once I made this larger VHD (formatted with Diskpart) and booted using private image mode with cache on device hard drive all was good.
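For reference, here is the sort of DiskPart sequence I run from WinPE to prepare the cache disk – a minimal sketch, where the disk number, label, and drive letter are examples to adjust for your own VM:

select disk 1
clean
create partition primary
format fs=ntfs label=WriteCache quick
assign letter=D
exit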

Thursday, December 15, 2011

Changing a VM vNIC MAC from dynamic to static

Through my VM creation exercise I am taking advantage of Hyper-V and its practice of assigning MAC addresses from a pool, so that I don’t need to manage all that in my script.

Well, if you have not noticed, I have Provisioning Server in my environment.  And that means that I need to have static MAC addresses.  And I am using Export and Import to copy my MasterVM.

This means that I cannot set the MasterVM with a static MAC.  I would at least have to modify it back to dynamic and then still set the individual VMs to static.

So, I just set the MasterVM to dynamic and do the rest when I assign the MAC to the VM by powering it on and off (I power the VM on to cause Hyper-V to assign the MAC to the vNIC – again, taking advantage of what the system does for me).

I have this array of new virtual machine names, $arrNewVms, that I want to loop through.  I then power on each VM (I have to wait for the power on to complete), then I power off, then I modify the MAC.

Here is where things get tricky.  To discover the MAC of the VM I had previously used an association class, but that does not return a modifiable object.  So I have to actually get the VM vNIC settings object, modify it, and then send it back through the ModifyVirtualSystemResources method.

$arrUpdatedVms = @()
foreach ($vmName in $arrNewVms) {
    $vm = Get-WmiObject Msvm_ComputerSystem -Filter "ElementName='$vmName'" -Namespace "root\virtualization" -ComputerName $hypervHost
    $Result = $vm.RequestStateChange(2) # start
    ProcessWMIJob $Result # wait for the power on to finish so the MAC gets assigned
    $vssd = $vm.getRelated("Msvm_VirtualSystemSettingData") | where {$_.SettingType -eq 3}
    $vmLegacyEthernet = $vssd.getRelated("Msvm_EmulatedEthernetPortSettingData") # this returns information, not an actionable object
    foreach ($e in $vmLegacyEthernet) {
        $mac = $e.Address
        $macDash = $mac.Substring(0,2) + "-" + $mac.Substring(2,2) + "-" + $mac.Substring(4,2) + "-" + $mac.Substring(6,2) + "-" + $mac.Substring(8,2) + "-" + $mac.Substring(10,2)
        $arrUpdatedVms += ($vmName + "," + $macDash)
    }
    $vm.RequestStateChange(3) # stop

    # Set the MAC to static by first getting the vNIC settings as a modifiable object
    $vnic = Get-WmiObject Msvm_EmulatedEthernetPortSettingData -Filter "Address='$mac'" -Namespace "root\virtualization" -ComputerName $hypervHost
    $vnic.StaticMacAddress = $true # a boolean property, so use $true rather than 1
    $VMManagementService.ModifyVirtualSystemResources($vm.__PATH, @($vnic.psbase.gettext(1))) # the method expects an array of embedded instances
}

Dealing with the Hyper-V default MAC pool

This is my little hack around dealing with the Hyper-V default MAC address pool range.

If you have been following, I have a bunch of virtual machines that I am creating.  And it is important that each has a unique MAC address.

Here is where software behavior gets into my way.

By default a Hyper-V Server MAC address pool holds only 256 MAC addresses (the last octet, 00 – FF).  But a Hyper-V server can support up to 384 running VMs, and VMs can have multiple virtual NICs.  So my pool can get depleted pretty quickly.

The other issue is that when assigning a MAC address to a VM the MACs are taken from the pool sequentially.  Not a big issue, and it actually adds some predictability.  However, once the end of the list is reached it starts over from the beginning.

The last issue: a Hyper-V Server only reserves the MAC of a VM while that VM is on that host.  Here is where my scenario causes all the problems.  I have a cluster of Hyper-V Servers, I am using one of them to create all my VM copies, then my VMs will migrate away.  Once a VM is no longer on the host where it was created, its MAC is marked available in the pool (this is not tracked at the cluster level).

Combine this with the circular assignment action and this MAC can be assigned a second time, to a different VM.

SCVMM can solve this, as it tracks static MACs for all VMs that it manages.  But why use SCVMM just to prevent MAC duplication?

My hack?  Make the pool larger.  Only modify the octet that is necessary to achieve it.  And, randomize it to reduce the possibility of collision from a different host.

In a previous post I modified the MAC pool of a Hyper-V Server.  Here is where I embellish it to help with this duplicate MAC address issue.  Some interesting parts are converting from hexadecimal to decimal and then back again.

Additional background: leave the Hyper-V prefix "00155D" alone and modify some part of the last 3 octets.  Increasing the next to last octet should give enough MAC addresses to prevent the range from rolling over and to cover the number of VMs in a cluster (maximum of 1000).  I just want to make the pool bigger, and the last octet of the MaximumMacAddress is always 255 (FF).

$VMManagementService = Get-WmiObject -Namespace root\virtualization -Class Msvm_VirtualSystemManagementService -ComputerName $hypervHost

$vsmssd = Get-WmiObject Msvm_VirtualSystemManagementServiceSettingData -Namespace "root\virtualization" -ComputerName $hypervHost

$intOctFive = [Convert]::ToInt32(($vsmssd.MaximumMacAddress.Substring(8,2)),16)

If ($intOctFive -lt 254) {
    $newIntOct = Get-Random -minimum ($intOctFive + 1) -maximum 255
    # PadLeft keeps the octet two characters wide when the random value lands below 0x10
    $newStrOct = ([Convert]::ToString($newIntOct, 16)).ToUpper().PadLeft(2, '0')
    $vsmssd.MaximumMacAddress = ($vsmssd.MaximumMacAddress.Substring(0,8) + $newStrOct + $vsmssd.MaximumMacAddress.Substring(10,2))
}
Else {
    $intOctFour = [Convert]::ToInt32(($vsmssd.MaximumMacAddress.Substring(6,2)),16)
    $newIntOct = Get-Random -minimum ($intOctFour + 1) -maximum 255
    $newStrOct = ([Convert]::ToString($newIntOct, 16)).ToUpper().PadLeft(2, '0')
    $vsmssd.MaximumMacAddress = ($vsmssd.MaximumMacAddress.Substring(0,6) + $newStrOct + $vsmssd.MaximumMacAddress.Substring(8,4))
}

$VMManagementService.ModifyServiceSettings($vsmssd.psbase.gettext(1))
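To make that concrete with a worked example: given the MaximumMacAddress from the dump in the post below, 00155D748DFF, octet five is 8D (141 decimal).  Get-Random then picks a value between 142 and 254 – say 195, which is C3 in hex – and the new MaximumMacAddress becomes 00155D74C3FF.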

Modifying Hyper-V Server settings with WMI

Here is a little thing that I recently put together to modify Server level settings of my Hyper-V Server.  Most of my focus so far has been on the VMs themselves, why not tweak the server?

Why might you need to change settings of the Hyper-V server itself?  Um, because you can, and therefore you should.  You might need to change the MAC address range, or the default path, or some other crazy thing.

What can you change?

Well, first, you have to get the Msvm_VirtualSystemManagementServiceSettingData WMI object.  This class is the settings of the hypervisor and things that the management OS must keep in check.

If you get the WMI object and then type it out in the PowerShell window you will see something like this:


__GENUS                    : 2
__CLASS                    : Msvm_VirtualSystemManagementServiceSettingData
__SUPERCLASS               : CIM_SettingData
__DYNASTY                  : CIM_ManagedElement
__RELPATH                  : Msvm_VirtualSystemManagementServiceSettingData.InstanceID="Microsoft:BJEDAR2"
__PROPERTY_COUNT           : 13
__DERIVATION               : {CIM_SettingData, CIM_ManagedElement}
__SERVER                   : BJEDAR2
__NAMESPACE                : root\virtualization
__PATH                     : \\BJEDAR2\root\virtualization:Msvm_VirtualSystemManagementServiceSettingData.InstanceID="Microsoft:BJEDAR2"
BiosLockString             :
Caption                    : Hyper-V Virtual System Management Service
DefaultExternalDataRoot    : C:\Users\Public\VMs\
DefaultVirtualHardDiskPath : C:\Users\Public\VMs\
Description                : Settings for the Virtual System Management Service
ElementName                : Hyper-V Virtual System Management Service
InstanceID                 : Microsoft:BJEDAR2
MaximumMacAddress          : 00155D748DFF
MinimumMacAddress          : 00155D680A00
NumaSpanningEnabled        : True
PrimaryOwnerContact        :
PrimaryOwnerName           : Administrators
ScopeOfResidence           :

Now, how can I do something useful with this?  Read on.  I am going to modify the MAC address range of the host.

First, you have to get an instance of the Virtual Machine Management Service, as this will give you the method you need to actually modify the WMI objects.  If you don’t do this, you only have a bunch of read-only properties through CIM.

$VMManagementService = Get-WmiObject -Namespace root\virtualization -Class Msvm_VirtualSystemManagementService -ComputerName $hypervHost

Now, I want to get the object I typed out above:

$vsmssd = Get-WmiObject Msvm_VirtualSystemManagementServiceSettingData -Namespace "root\virtualization" -ComputerName $hypervHost

And see the MAC address range:

$vsmssd.MinimumMacAddress
$vsmssd.MaximumMacAddress

Then change the value:

$vsmssd.MaximumMacAddress = "00155D74FFFF"

Then put the change:

$VMManagementService.ModifyServiceSettings($vsmssd.psbase.gettext(1))

A return value of “0” means success; a return of “4096” means the change was accepted as a long-running job, and you need to check the job itself to see whether it succeeded or went bad.
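If you do get “4096” back, you can watch the job using the same pattern as the ProcessWMIJob helper in my cloning script (below) – a minimal sketch:

$Result = $VMManagementService.ModifyServiceSettings($vsmssd.psbase.gettext(1))
if ($Result.ReturnValue -eq 4096) {
    $Job = [WMI]$Result.Job
    while ($Job.JobState -eq 4) {  # 4 = running
        Start-Sleep 1
        $Job.PSBase.Get()          # refresh the job object
    }
    if ($Job.JobState -ne 7) { Write-Error $Job.ErrorDescription }  # 7 = completed
}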

Making VMs highly available with PowerShell

If you have been following you will know that I have a bunch of copies of a MasterVM.

These were created using Export / Import, so they are all nice and tidy in their own folders on the same volume.  If your target volume was a CSV (Cluster Shared Volume) then you are all set for this one and are ready for a multi-node cluster.  If your destination was not a CSV, but possibly another local volume, you can still make your VMs Highly Available, but you won’t be able to migrate them.  This is fine for testing, but not really of high value in production.

Here is the script:

if ($highlyAvailable -match "yes") {
    write-progress -Id 1 "Making each VM highly available" -Status "Executing"
    # Make a remote session here to run the below loop passing in $arrUpdatedVms
    $session = New-PSSession -computerName $hypervHost
    Invoke-Command -scriptblock { Import-Module FailoverClusters } -session $session
    Import-PSSession -module FailoverClusters -session $session
    foreach($e in $arrUpdatedVms){
        $vm = $e.Split(",")
        $deviceName = $vm[0]
        Add-ClusterVirtualMachineRole -VirtualMachine $deviceName
    }
    write-progress -Id 1 "Making each VM highly available" -Completed $TRUE
}

The most interesting thing that I do here is that I am using Import-PSSession to bring the Clustering cmdlets (installed on the remote Hyper-V server) into my local script execution session.  This allows me to use the cmdlets as if they were installed locally, but without installing them.  This is pretty useful.

The hitch is that your networking cannot incur any blips.  Your session must remain solid or the script simply begins to fail.

Monday, December 12, 2011

Joining PVS target devices to the domain

In my last post I created a bunch of PVS target devices (the VMs that I previously copied from my MasterVM).

I now want to have Provisioning Server create the Active Directory computer accounts for the VMs and do its magic of making each VM a unique machine without requiring sysprep – this is a nifty feature.

In my last post I imported the PVS cmdlets (MCLI) so I won’t repeat that step.

$arrUpdatedVms is simply my array of VMs, and all I need is the target device name (also the VM name) so I am doing a split.

if ($joinDomain -match "yes") {
    write-progress -Id 1 "Creating domain computer accounts" -Status "Executing"
    foreach($e in $arrUpdatedVms){
        $vm = $e.Split(",")
        $deviceName = $vm[0]
        #Create Active Directory Computer Accounts
        & Mcli-Run AddDeviceToDomain -p deviceName=$deviceName, OrganizationUnit=$ou
    }
    write-progress -Id 1 "Creating domain computer accounts" -Completed $TRUE
}

That is it.  Really fast.

Monday, November 28, 2011

Linking VMs to copies of the PVS Collection Target Device template.

This is all about taking a Target Device Template that exists within a Provisioning Server (PVS) Farm Collection and copying it to create target devices from a bunch of VMs.

If you have been following I took a MasterVM on a Hyper-V server and I copied that into a number of differently named / unique virtual machines (this includes local cache VHD and settings).  And then I cycled through powering them on, getting their assigned MAC address and powering them off.

I now want to add those VMs into PVS as Target Devices.

From the last step in my script I have this array of VM Names and MAC addresses; $arrUpdatedVms

One hairy bit of this is adding the PVS cmdlets into the session.  I do this as multiple steps but it could be a one-liner as well.

$installutil = $env:systemroot + '\Microsoft.NET\Framework64\v2.0.50727\installutil.exe'
& $installutil "$env:ProgramFiles\Citrix\Provisioning Services Console\McliPSSnapIn.dll"
Add-PSSnapin McliPSsnapin

Now, I want to iterate through the array and add each VM to the Collection, and I want to duplicate the settings of the Collection Template to save myself work later on.

(one note – this checks for the variable $copyTemplate to match “yes” – this just gives me a choice)

write-progress -Id 1 "Addding the VMs to the PVS collection" -Status "Executing"
foreach($e in $arrUpdatedVms){
    $vm = $e.Split(",")
    $deviceName = $vm[0]
    $deviceMac = $vm[1]
    "siteName=$siteName, collectionName=$collectionName, deviceName=$deviceName, deviceMac=$deviceMac"
    if($copyTemplate -match "yes"){
        #Optimally a Template was created that defines the vDisk and other settings. 
        & Mcli-Add Device -r siteName=$siteName, collectionName=$collectionName, deviceName=$deviceName, deviceMac=$deviceMac, copyTemplate=1
    }
    else{
        #adding the device
        & Mcli-Add Device -r siteName=$siteName, collectionName=$collectionName, deviceName=$deviceName, deviceMac=$deviceMac
    }
}
write-progress -Id 1 "Addding the VMs to the PVS collection" -Completed $TRUE

Don’t blink, this is a pretty fast loop to go through all the VMs.  Especially compared to the copy operation.

Tuesday, November 22, 2011

Hyper-V WMI powering on VMs and getting MAC addresses

Previously I have written about formatting MAC addresses from Hyper-V WMI into a format to be consumed by another program and about duplicating a Master (or Template) VM into multiple virtual machines.
Today I am going to put that together and produce an output that contains each VM as a name, MAC address pair in an array.
My Master / Template VM has the MAC address set to dynamic, so this property is going to repeat.  I am also going to take advantage of the Hyper-V process of booting the VM to assign the MAC address.
From my previous duplication script I have an array of VMs: $arrNewVms
$arrUpdatedVms = @()

write-progress -Id 1 "Discovering MAC addresses" -Status "Executing"
foreach ($vmName in $arrNewVms) {
    $vm = Get-WmiObject Msvm_ComputerSystem -Filter "ElementName='$vmName'" -Namespace "root\virtualization" -ComputerName $hypervHost
    $Result = $vm.RequestStateChange(2) # start
    ProcessWMIJob $Result
    $vssd = $vm.getRelated("Msvm_VirtualSystemSettingData") | where {$_.SettingType -eq 3}
    $vmLegacyEthernet = $vssd.getRelated("Msvm_EmulatedEthernetPortSettingData")
    # $vnic = $vm.GetRelated("Msvm_EmulatedEthernetPort") # this only returns while the VM is running and $mac = $e.PermanentAddress
    foreach ($e in $vmLegacyEthernet) {
        $mac = $e.Address
        $macDash = $mac.Substring(0,2) + "-" + $mac.Substring(2,2) + "-" + $mac.Substring(4,2) + "-" + $mac.Substring(6,2) + "-" + $mac.Substring(8,2) + "-" + $mac.Substring(10,2)
        $arrUpdatedVms += ($vmName + "," + $macDash)
    }
    $vm.RequestStateChange(3) # stop
}
write-progress -Id 1 "Discovering MAC addresses" -Completed $TRUE
$arrUpdatedVms
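Each entry in $arrUpdatedVms ends up as a simple name,MAC pair – for example (values made up):  VM00001,00-15-5D-68-0A-14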
In this I refer to a helper function, ProcessWMIJob.  This is handy in that it forces me to wait until the Start VM process is completed.  If I didn’t use it, my VMs would not be started and therefore would not in turn power off – it becomes a bit of a timing issue.
The Function can be found at the beginning of the script here.
Next time, something completely different.

Thursday, November 17, 2011

Copying a MasterVM into many with Hyper-V WMI

Now, I am not going to take any credit for this brilliant little piece of PowerShell.  All the credit goes to Taylor Brown.

Back many moons ago Taylor was publishing a bunch of PowerShell scripts that use the Hyper-V WMI interface to do things.  (In more recent times Ben Armstrong has taken up this task.)

Taylor’s original post is here: http://blogs.msdn.com/b/taylorb/archive/2008/06/07/hyper-v-wmi-cloning-virtual-machines-using-import-export.aspx

Did you notice that date?  The year was 2008 on that original post.

This is actually a highly useful bit of code if you want to make bunches of VMs that are identical in settings.  You create the first “MasterVM” – you could also call it a template.  Then copy it.

I did tweak Taylor’s PowerShell a bit, but only to remove an error that annoyed me with Write-Progress, and to add parameters for pre-pending and post-pending the VM names, the count to start at, and the number of copies to create.

I will add more to this later as this is just the beginning.  After all of this creation of VMs there are other settings that can be applied and considerations that you must have in regards to the system as a whole.

What this script does is use the Hyper-V Export and Import feature.  The Export creates a copy of the VM (and all of its associated physical objects) in a nice tidy folder in the path that is specified.

This target can be local storage or if your target Hyper-V server is a node of a cluster it can be a CSV.  And the plus of using Export to copy is that the VM is all set to be Highly Available.

The Import takes advantage of Hyper-V doing all the work to generate the new virtual objects, apply security settings, modify any snapshots files, handle differencing disks, etc.  Let the system do all that work so I don’t have to.  After all, someone already went to all the trouble to build the method, and it gets tested.

Without further delay, here is my version of the script.

param
(
[string]$masterVm = $(Throw "MasterVM required"),
[string]$exportPath = $(Throw "Path required to the Hyper-V local storage where VMs will be created"),
[string]$namePrePend = $(Throw "A string to add to the front of the new vm name"),
[string]$namePostPend = $(Throw "A string to add to the end of the new vm name"),
[string]$hypervHost = $(Throw "The Hyper-V host where the template VM is registered and where VMs will be created"),
[int] $startAt = $(Throw "The number to begin the VM creation with (ie. 2 = start with VM 2)"),
[int]$numCopies = $(Throw "The number of VMs to create")

)

function ProcessWMIJob
{
    # Is there a way to loop through this and have these process and manage multiple jobs asynchronously?  The problem might be in overloading the storage layer.
    param (
    [System.Management.ManagementBaseObject]$Result
    )

    if ($Result.ReturnValue -eq 4096) {
        $Job = [WMI]$Result.Job

        while ($Job.JobState -eq 4) {
            Write-Progress -Id 2 -ParentId 1 $Job.Caption -Status "Executing" -PercentComplete $Job.PercentComplete
            Start-Sleep 1
            $Job.PSBase.Get()
        }
        if ($Job.JobState -ne 7) {
            Write-Error $Job.ErrorDescription
            Throw $Job.ErrorDescription
        }
    }
    elseif ($Result.ReturnValue -ne 0) {
        Throw $Result.ReturnValue
    }
    Write-Progress -Id 2 -Status "Complete" -Completed $TRUE
}

#Main Script Body
"Creation script is starting at: "+(Get-Date -uFormat "%j, %T")

$VMManagementService = Get-WmiObject -Namespace root\virtualization -Class Msvm_VirtualSystemManagementService -ComputerName $hypervHost
$SourceVm = Get-WmiObject -Namespace root\virtualization -Query "Select * From Msvm_ComputerSystem Where ElementName='$masterVm'" -ComputerName $hypervHost
$a = $startAt

$arrNewVms = @()

write-progress -Id 1 "Cloning Vm's" -Status "Executing"

while ($a -lt ($startAt + $numCopies)) {
    $tempVMName = ($namePrePend + $a.ToString("00000") + $namePostPend)
    $tempVMName
    # a simple test to see if the VM already exists and try the next
    if ((Get-WmiObject -Namespace root\virtualization -Query "Select * From Msvm_ComputerSystem Where ElementName='$tempVMName'" -ComputerName $hypervHost) -eq $NULL) {

        $VMSettingData = Get-WmiObject -Namespace root\virtualization -Query "Associators of {$SourceVm} Where ResultClass=Msvm_VirtualSystemSettingData AssocClass=Msvm_SettingsDefineState" -ComputerName $hypervHost
        $VMSettingData.ElementName = $tempVMName

        $Result = $VMManagementService.ModifyVirtualSystem($SourceVm, $VMSettingData.PSBase.GetText(1))
        ProcessWMIJob $Result

        $Result = $VMManagementService.ExportVirtualSystem($SourceVm, $TRUE, "$exportPath")
        ProcessWMIJob $Result

        $Result = $VMManagementService.ImportVirtualSystem("$exportPath\$tempVMName", $TRUE)
        ProcessWMIJob $Result

        $arrNewVms += $tempVMName
    }
    $a++

}
write-progress -Id 1 "Cloning Vm's" -Completed $TRUE

$VMSettingData = Get-WmiObject -Namespace root\virtualization -Query "Associators of {$SourceVm} Where ResultClass=Msvm_VirtualSystemSettingData AssocClass=Msvm_SettingsDefineState" -ComputerName $hypervHost
$VMSettingData.ElementName = $masterVm

$Result = $VMManagementService.ModifyVirtualSystem($SourceVm, $VMSettingData.PSBase.GetText(1))
ProcessWMIJob $Result

"The following were created:"
$arrNewVms
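For reference, a hypothetical invocation of the script (the script file name and every value here are examples only, not anything canonical):

.\Copy-MasterVm.ps1 -masterVm "MasterVM" -exportPath "C:\ClusterStorage\Volume1\VMs" -namePrePend "XD" -namePostPend "-PVS" -hypervHost "HyperV01" -startAt 1 -numCopies 25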

Monday, November 14, 2011

Hyper-V WMI MAC address to a different format

Here is a simple one that I don’t want to lose.

In my previous post I had mentioned how to get the MAC address that is assigned to the vNIC of a VM.

Get the VM, then find the associated vNIC (and make sure the VM is running):

$vm = Get-WmiObject Msvm_ComputerSystem -Filter "ElementName='$vmName'" -Namespace "root\virtualization" -ComputerName $HyperVHost

$vnicEmulated = $vm.GetRelated("Msvm_EmulatedEthernetPort")

$vnicSynthetic = $vm.GetRelated("Msvm_SyntheticEthernetPort")

In the comments of my previous post Ben Armstrong made the following suggestion (which works rather nicely):

$vssd = $vm.getRelated("Msvm_VirtualSystemSettingData") | where {$_.SettingType -eq 3}
$vmLegacyEthernet = $vssd.getRelated("Msvm_EmulatedEthernetPortSettingData")

The MAC address is the “PermanentAddress” property of the port objects ($vnicEmulated / $vnicSynthetic) and the “Address” property of the $vmLegacyEthernet settings objects – so pay attention to the property you are fetching.

A quick way to pluck it out is just to loop through the collection like this:

foreach ($e in $vnicEmulated) {
    $mac = $e.PermanentAddress
}

This gives you a MAC address that looks like this:  00155D680A14

This is all fine and dandy if the reason you need the MAC can deal with that format.  Mine can’t: I must have it defined with dashes.

I went searching for fancy REGEX ways to handle this and could not come up with anything – but I did find some nice validators for MAC addresses, the best in the Splunk forums.

So, here is my less sophisticated but useful part:

$macDash = $mac.Substring(0,2) + "-" + $mac.Substring(2,2) + "-" + $mac.Substring(4,2) + "-" + $mac.Substring(6,2) + "-" + $mac.Substring(8,2) + "-" + $mac.Substring(10,2)

This gives me the following result:  00-15-5D-68-0A-14

Just add that into the foreach loop above and move on.
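For what it’s worth, a single regex replace can also produce the dashed format – a minimal sketch:

# insert a dash after every pair of characters except the last pair
$macDash = $mac -replace '(..)(?!$)', '$1-'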

Friday, November 11, 2011

Hyper-V WMI association NULL returns and stopped VMs

I spent way too much time staring at this today until it dawned on me that my VM might need to be running for certain associations to be activated and working.

All I was trying to do was to get a PowerShell script to return the MAC address of the Legacy Network Adapter of a VM.

Ben Armstrong has been more than generous at giving samples to the community and the updated associations sample (that should have given me exactly what I wanted) is here.

In his example he merrily selects his VM:

$vm = gwmi MSVM_ComputerSystem -filter "ElementName='Home Server'" -namespace "root\virtualization"

Then he very easily returns the vNIC:

$vnic = $vm.GetRelated("MSVM_EmulatedEthernetPort")

That is all fine and dandy for the example.  But I tried it on a VM that is powered off, and you know what?  I got nothing back.  A NULL came back from WMI.

I searched for quite a long time trying to figure out what I was doing wrong.

In the end, I discovered I was doing nothing wrong.  The VM has to be running for this association to return the vNIC. 

I actually find this interesting.  Once the object is created, it exists, and it has properties.  I already powered the VM on and off for the MAC address to be automatically assigned.  And then I can’t query it.  It makes little sense to me.  Maybe I can get Ben to respond and fill in that gap in my understanding.

In the end I simply modified my script.  I have to briefly power on the VM and then off again to automatically assign the MAC, so why not take advantage of that.

Here is what I simply have now:

$vm = Get-WmiObject Msvm_ComputerSystem -Filter "ElementName='$vmName'" -Namespace "root\virtualization"
$vm.RequestStateChange(2) # start (this is asynchronous - give it a moment before querying)
$vnic = $vm.GetRelated("Msvm_EmulatedEthernetPort") # this only returns while the VM is running
$vm.RequestStateChange(3) # stop

Wednesday, November 2, 2011

A dollar for your error

Last week I was trying desperately to understand why something was failing in a PowerShell script of mine.

I am still trying to figure out a good way of understanding what the problem is, but I ran across a nifty PowerShell feature that I had never heard of before.

$error

$error is a nifty thing.  It is an array with methods.  Within your script or session it is always silently capturing your error messages in the background while you move along.

Now, the real nifty part of this was when I was trying to capture errors within someone else's cmdlets as commands just kept failing and I could not understand why.

So, before I invoked the command I would clear $error with $error.Clear()

Then execute the command (which always had a clean exit by the way).

Then I would interrogate $error to see what went wrong.  Wow, it was huge.

Come to find out, the creator of the cmdlets that I was trying to debug had -SilentlyContinue all over the place.  So there were all kinds of failures, but things just plugged along.

$error let me dig all this out.  All kinds of errors that were hidden by the developer.  All it required was a bunch of time on my part to unwind things and find the actual error that I was after.
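A minimal sketch of the pattern (Some-VendorCmdlet is a hypothetical stand-in for the cmdlet I was actually debugging):

$error.Clear()                                    # start with a clean slate
Some-VendorCmdlet -ErrorAction SilentlyContinue   # hypothetical cmdlet that swallows its failures
if ($error.Count -gt 0) {
    $error[0] | Format-List * -Force              # the most recent error, in full detail
}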

Thursday, October 27, 2011

Editing a web.config using PowerShell–adding a node method 2

InnerXML is your friend to add a bunch of XML quickly.

The bit of XML that I want to add to my web.config is:

<location path="FederationMetadata">
    <system.web>
        <authorization>
            <allow users="*" />
        </authorization>
    </system.web>
</location>

The super duper easy way is to simply dump in your XML as text.

Begin by reading in your XML file (just as in the previous posts).

$path = (( Get-Website | where {$_.Name -match ($role.Id)} ).PhysicalPath + "\Mine\Site\web.config")
$file = get-item -path $path

# Get the content of the config file and cast it to XML to easily find and modify settings.

$xml = [xml](get-content $file)

Create the new ‘location’ element object.

$location = $xml.CreateElement("location")

Add the ‘path’ attribute.

$location.SetAttribute("path", "FederationMetadata")

Populate it with the XML text.

$location.Set_InnerXML(
@"
<system.web>
    <authorization>
        <allow users="*" />
    </authorization>
</system.web>

"@
)

Append it into the document.

$xml.configuration.AppendChild($location)

And don’t forget to save back to the file system.

$xml.Save($file.FullName)

All done.  Super simple, very little tree hopping.

Tuesday, October 25, 2011

IIS PowerShell and Type Mismatch when Set-Item

If you cannot tell I have been doing a lot with PowerShell and web sites (IIS) lately. 

Let’s just say that it is amazing what you can do with an Azure Web Role when you get creative and are not afraid of scripts.  Oh, and since this is Azure, it is IIS 7.

A couple time now I have run into an error with IIS where I am unable to apply settings using Set-Item.

The error I get in return is rather cryptic and gives no leads as to the real culprit:

$dsAppPool | Set-Item
Set-Item : Type mismatch. (Exception from HRESULT: 0x80020005 (DISP_E_TYPEMISMATCH))
At line:1 char:22
+ $dsAppPool | Set-Item <<<<
    + CategoryInfo          : InvalidArgument: (:) [Set-Item], COMException
    + FullyQualifiedErrorId : path,Microsoft.PowerShell.Commands.SetItemCommand

I don’t know about you, but this error does not give me much to go on.  Well, a little; I know I have a type mismatch.

In this specific example I am setting an application pool to run under a user account.

I fetch my username and password into a variable.  These are strings.

But when I do a GetType() against the variable, it does not come back as a plain System.String.  And this is the root of the type mismatch.

A simple cast to string fixes the problem:

$dsAppPool = (Get-ChildItem IIS:\AppPools | where {$_.Name -match ("Authentication")} )
$dsAppPool.processModel.identityType = "SpecificUser"
$dsAppPool.processModel.userName = [string]$siteUser
$dsAppPool.processModel.password = [string]$siteUserPwd
$dsAppPool | Set-Item

The same trick works in other areas of Application Pools as well.  Here I am modifying the default IIS cache flushing behavior (note the TimeSpan declarations):

$mooAppPool = ( Get-ChildItem IIS:\AppPools | where {$_.Name -match ("Moo")} )

$mooAppPool.processModel.idleTimeout = [System.TimeSpan]::FromMinutes(43200)  # Idle Time-Out
$mooAppPool.recycling.periodicRestart.time = [System.TimeSpan]::FromMinutes(432000)  # Regular Time Interval

$mooAppPool | Set-Item

$schPath = $mooAppPool.ItemXPath + "/recycling/periodicRestart"
$p = $mooAppPool.recycling.periodicRestart.schedule.Collection.Count

Do {
    Remove-WebconfigurationProperty $schPath -Name schedule.collection -AtIndex ($p - 1) -Force
    $p--
}
Until ($p -eq "0")

Friday, October 21, 2011

Setting the WinRM HTTPS listener

A recent puzzle of mine has been to configure the HTTPS listener for WinRM.  You might ask; why?  Because it is supported!  And you supposedly can.

This applies to both WinRM and PowerShell remoting.

I say “supposedly” because this has been a messy trip.  And all of the documentation has been of nearly no help, leading me to believe that it is just supposed to work and I must be an idiot for it not working.

Back in 2008 I wrote about securing WinRM using a self-signed certificate.  Well, guess what is now blocked: using self-signed certificates for WinRM.

So, we move forward, attempting to embrace the security model.

One thing to understand, when using HTTPS with WinRM you must:

  • have a certificate with a CN that matches the hostname of the server
  • use the hostname to connect to the remote machine

In my case I have a private CA that I use, and I simply import the CRL into each server as it is not published to allow CRL lookup.

There has been a litany of errors along the way, far too many for me to attempt to capture.  Let’s just say that many have been very obscure and strange failures, with error messages that have very little to do with the actual problem.

In the end there are two commands that you want to work on your Server; they are:

winrm create winrm/config/listener?Address=*+Transport=HTTPS @{Hostname=($env:COMPUTERNAME);CertificateThumbprint=($sslCert.Thumbprint)}

(Note: if you run this from within PowerShell rather than cmd.exe, quote the @{…} portion so PowerShell passes it to winrm.exe as literal text instead of parsing it as a hash table.)

or

Set-WSManQuickConfig -UseSSL

To get to this point you need to have a certificate authority, and you need to get your certificate onto your server.  The CN of your certificate must match the servername (and this is picky too – the case / mixed case must match or you will get an error).  A tip: pay attention to the error message, as it contains the proper string that it is looking for.

WinRM runs under the NETWORK SERVICE security context.  This is pretty restrictive, so it most likely does not have access to the private key of your certificate.  Because it doesn’t have access it can’t do the encryption and nothing happens.  The setup fails.

Here is the little piece of script that fixes it all up:

$sslCert = Get-ChildItem Cert:\LocalMachine\My | where {$_.Subject -match $env:COMPUTERNAME}

"The installed SSL certificate: " + $sslCert.Subject

###  Give the Network Service read permissions to the private key.

$sslCertPrivKey = $sslCert.PrivateKey

$privKeyCertFile = Get-Item -path "$ENV:ProgramData\Microsoft\Crypto\RSA\MachineKeys\*" | where {$_.Name -eq $sslCertPrivKey.CspKeyContainerInfo.UniqueKeyContainerName}

$privKeyAcl = (Get-Item -Path $privKeyCertFile.FullName).GetAccessControl("Access")

$permission = "NT AUTHORITY\NETWORK SERVICE","Read","Allow"

$accessRule = new-object System.Security.AccessControl.FileSystemAccessRule $permission

$privKeyAcl.AddAccessRule($accessRule)

"Modifying the permissions of the certificate's private key.."

Set-Acl $privKeyCertFile.FullName $privKeyAcl

After you run this, then you can configure the HTTPS listener for WinRM using either of the commands I previously mentioned.
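Once the listener is in place, a quick sanity check is to enumerate the listeners and confirm the HTTPS one shows up with your certificate thumbprint:

winrm enumerate winrm/config/listener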

Monday, October 17, 2011

Editing a web.config using PowerShell–changing a value method 2

A few days back I had a post about changing a value of an XML document, in which I treated the returned item as an array and simply looped through the array.

I have had a reason to handle things a bit differently.

Let’s consider the same bit of XML (this time the attribute in question is alternateAddress):

<routingPolicy alternateAddress="off" />

However, this time let’s treat this like an object.

The first thing is to fetch the XML element in a way that PowerShell keeps it as an element.

I began last time by attempting to get the node using:

$xml.GetElementsByTagName("routingPolicy")

If I use the GetType() method PowerShell returns that it is an XmlElementList.  However, if I try to type dot then tab I cannot walk through the XML. 

This must be because the containing element is actually an XML element in its own right.

Okay I think, then I will get that element in the same way and I try:

$xml.GetElementsByTagName("alternateAddress")

Nothing.  What gives?  Of course – alternateAddress is an attribute of the routingPolicy element, not an element itself, so there is no tag by that name to find.  I must need to get at this a different way.  Using a little bit of object handling from another XML handling post I tried to treat it as an object.

$altAddr = $xml.GetElementsByTagName("routingPolicy") | where {$_.alternateAddress -match "off"}

If I use the GetType() method on this object I see that I have an XmlElement.  And I can modify its attributes with a simple dot notation.

$altAddr.alternateAddress = "off"

So, I achieved the same result as a few posts ago, but actually handling this as an object instead of as an array.

Before:

$routePolicy = @($xml.GetElementsByTagName("routingPolicy"))

foreach ($e in $routePolicy) {
    if ($e.alternateAddress -eq "off" ) {
        $e.alternateAddress = "on"
    }
    $e
}

After:

$altAddr = $xml.GetElementsByTagName("routingPolicy") | where {$_.alternateAddress -match "off"}
$altAddr.alternateAddress = "on"

And that After could probably be changed to a one liner.  I just find many one liners difficult to understand, and someone else will most likely have to figure out what my script does.

Monday, October 10, 2011

Finding files with PowerShell without ‘and’

There is simply something intrinsically simply about using a selection statement with an “and” in it.

I constantly want to select for a file using a where clause with an “and” – where just does not like this.  The error that I get each time I try to do this is: “Unexpected token 'and' in expression or statement.”

To get around that I started using a double pipe.

Now, I have to admit, I am not a PowerShell professional.  And many PowerShell one-liners leave me stumped when I try to read and understand what is happening.

Let me try to give you this one.

My goal – find a particular MSI file that exists in a folder.

There could be multiple MSI files, and there could be multiple files with the same name before the extension.  So I have to filter the mess.

I instinctively want to write the line this way:

$msiFile = get-childitem $exPath | where ({$_.Extension -match "msi"} and {$_.Name -match "WindowsIdentityFoundation"})

However, this gives me the “unexpected token ‘and’” error every time.  The quick solution is a double filter, accomplished using a pipe and a pipe.

If I write the line this way it works:

$msiFile = get-childitem $exPath | where {$_.Extension -match "msi"} | where {$_.Name -match "WindowsIdentityFoundation"}

What is happening here? 

First is the get-childitem at the literal path $exPath.  Then that gets piped to the first where, filtering on the MSI extension.  Then that gets piped to the second where, filtering on the name and selecting the correct MSI.  The end result gets returned to my variable $msiFile.
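For completeness, PowerShell does have a logical -and operator – my mistake was the bare word ‘and’ between two separate script blocks.  Both tests can live inside a single where clause, like this:

$msiFile = get-childitem $exPath | where {$_.Extension -match "msi" -and $_.Name -match "WindowsIdentityFoundation"}

Either way works; pick whichever reads better to you.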

Wednesday, October 5, 2011

Editing a web.config using PowerShell–adding a node method 1

This is an extension of a recent bunch of posts about XML and handling XML in PowerShell.

One way of adding a node and content is to build the node out of XML objects.  Some might consider this the true ‘object’ way of working through the problem.

The bit of XML that I want to add to my web.config is:

<location path="FederationMetadata">
  <system.web>
    <authorization>
      <allow users="*" />
    </authorization>
  </system.web>
</location>

This is a walkthrough of building that as pure XML objects using PowerShell.  And then inserting it into a very specific location in the larger XML document.

First, I have to load in the web.config and make it an XML document. 

$file = get-item -path $path

$xml = [xml](get-content $file)

Now, Create the ‘location’ node as an XML document.

$federationLocation = $xml.CreateElement("location")

Set the attribute of ‘path’

$federationLocation.SetAttribute("path", "FederationMetadata") 

Create a nested node ‘system.web’

$fedSystemWebNode = $xml.CreateElement("system.web") 

Create a nested node ‘authorization’

$fedAuthNode = $xml.CreateElement("authorization")

Create the ‘allow’ element

$fedAllowNode = $xml.CreateElement("allow")

Add the ‘users’ attribute

$fedAllowNode.SetAttribute("users", "*") 

Now, for the really fun part.  And the important part to pay attention to.

Reassemble the nodes in the correct order

Begin by appending the innermost node to its parent

$fedAuthNode.AppendChild($fedAllowNode)

Repeat with the next node and its parent

$fedSystemWebNode.AppendChild($fedAuthNode)

Repeat the process again

$federationLocation.AppendChild($fedSystemWebNode) 

Now, I want to insert the XML document Node into a very specific place in the web.config XML configuration file.  That location is before the XML path ‘configuration/system.web’.

$xml.configuration.InsertBefore($federationLocation, ($xml.configuration.Item("system.web")))

That is it.  Lots of lines, but performed in a true object style.

Oh, and don’t forget to save back to the file system.

$xml.Save($file.FullName)

Next time, the quick and dirty string method.

Monday, October 3, 2011

Editing a web.config using PowerShell–changing a value

It has been some time since I have been working with XML using PowerShell.  I needed to refresh my brain and work through some issues.

In bunches of searching I ran across only one article that really tries to explain what is going on: http://www.codeproject.com/KB/powershell/powershell_xml.aspx

And coupling this understanding with Tobias Weltner’s chapter on XML starts to put the pieces together:  http://powershell.com/cs/blogs/ebook/archive/2009/03/30/chapter-14-xml.aspx

And, there are clues here: http://devcentral.f5.com/weblogs/Joe/archive/2009/01/27/powershell-abcs---x-is-for-xml.aspx and here: http://blogs.technet.com/b/heyscriptingguy/archive/2010/07/29/how-to-add-data-to-an-xml-file-using-windows-powershell.aspx

I finally sat down with a senior developer, who has always been very helpful, and he helped me wrap my brain around manipulating XML using PowerShell.  A bit of shared screen time and he could quickly recognize what PowerShell was doing to the XML document when handling each section.

This has prompted me to attempt to capture this understanding before I get pulled to a new project and I forget it all.

This is part 1 of a series of handling XML in PowerShell.  I decided to tackle something really simple.  Editing a value. 

The ‘partial’ XML kind of way.  I call this partial because I am only using XML handling to find the collection, not to modify it.  But this is a quick trick that I have used quite a bit to modify settings and configuration files.

This is an Azure related example, and in Azure there are no hard paths, so we keep things relative.

# Discover the path of the web.config file

$path = (( Get-Website | where {$_.Name -match ($role.Id)} ).PhysicalPath + "\Mine\Site\web.config")
$file = get-item -path $path

# Get the content of the config file and cast it to XML to easily find and modify settings.

$xml = [xml](get-content $file)

I want to edit the following line in the file:  <routingPolicy alternateFoo="off" />  I want to change the value to “on”

$routePolicy = $xml.GetElementsByTagName("routingPolicy")

This returns an object, $routePolicy.  And looking at that object I see no methods, but it does have a count property.  I actually got a collection back (or an array), not some XML element.

One way to modify this is to go along with it being an array and simply step through it, taking advantage of the dot-tag type of XML browsing.

foreach ($e in $routePolicy) {
    if ($e.alternateFoo -eq "off" ) {
    $e.alternateFoo = "on"
    }
    $e
}

That is what I consider the really simple way.  It is quick and dirty and gets the job done.

Now, what I just did is not very “object oriented”, which is the power of PowerShell.  So, how do I get in there and change this setting?  What is it really?

The way this setting is defined is that alternateFoo is an attribute of  <routingPolicy />.  It is not a node or an element within it.

Here is where things get really funky.  The following will modify that attribute without having to do the loop.  The important part is that each step of the way we are selecting each node as an individual object, and therefore its own XML document within the larger XML document.  The difference here is that I have to know the path: “routingPolicy” is a node under “system.web”, which is itself a node of the larger “configuration”.

$xml.configuration.Item("system.web").Item("routingPolicy").SetAttribute("alternateFoo", "on")

Easier to code, but more difficult to understand what is really happening.  It is this stuff that is really not well written out there.

Oh, don’t forget to save the file back to the file system.

$xml.Save($file.FullName)

Thursday, September 8, 2011

The Cloud–a place for my stuff

I keep trying to come up with new ways to describe clouds and what it means to run things in the cloud and to do things in the cloud.

To quote a really old George Carlin skit; a place for my stuff.  “That is the whole meaning of life, to find a place for my stuff”.

In the cloud world there are private places and public places.  A private cloud is a cloud that you are responsible for.  A public cloud is a cloud that someone else is responsible for.

The “location” of your cloud infers the physical location and who owns and manages that physical location.  If it is private, then you own or manage that physical location.  If it is public then someone else does.

In the end, a cloud is not a new thing, but a new word describing an old thing.  It describes infrastructure.  The new thing is how you interact with that infrastructure – this is where underwear gets all knotted up.

When moving to clouds, someone loses control.  And it is that change in control that stings: someone other than me is messing with my stuff.

Personally, I love my SkyDrive.  Cloud based storage that I can interact with from anywhere.  A place for my document and picture (files) stuff.

I also like Gmail and Hotmail.  Most folks don’t think of these as cloud either, but they are.  Because they are off, somewhere else.  And, they are a place for my email stuff.

I like Azure.  It is my cloud based infrastructure.  I don’t have to worry about controlling it, because Azure does that.  I just tell it to run my stuff and it does.  A place for my workload stuff.

So, for me, I have a place for my stuff.  I know where that physical place is.  Coming from the IT Pro background, I have a need to know that physical place, to link my stuff with a location, even if that location is a service.

But, I just feel good that I have a place for my stuff.  And that I can get to my stuff.  From any of my other stuff.  And I don’t have to manage all aspects of my stuff.

Tuesday, September 6, 2011

How to ask a question in a technical forum

A friend of mine recently ran across this and forwarded it to me.

Personally, as a TechNet forum moderator, and frequent forum contributor, and as a person who handles a few forums for my employer; I find the information in this KB both lighthearted and highly useful at the same time.

http://support.microsoft.com/kb/555375

The title of the KB article:  “How to ask a question”

Please, check it out.

Thank you Daniel Petri!

Thursday, September 1, 2011

PowerShell to select a certificate and encrypt a password

Here is a quick little script to encrypt a password with PowerShell.  Yes, it requires the user to select a certificate and enter the password, but that would be easy to change.

I have found this very handy when encrypting passwords for use in Azure Role settings, such as Azure Connect.  Secure String does not work because in Azure my local machine keys are not available; however, I can use a Service Certificate to encrypt on my end and therefore decrypt on the Azure end.

I have one assumption – that the certificate is in the LocalMachine Personal certificate store.

$password = Read-Host -Prompt "Enter the password to encrypt"

$certs = dir cert:\LocalMachine\My
[System.Reflection.Assembly]::LoadWithPartialName("System.Security") | Out-Null
$collection = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2Collection
$certs | ForEach-Object { $collection.Add($_) } | Out-Null
$cert = [System.Security.Cryptography.x509Certificates.X509Certificate2UI]::SelectFromCollection($collection, "", "Select a certificate", 0)
$thumbprint = $cert[0].thumbprint
$pass = [Text.Encoding]::UTF8.GetBytes($password)
$content = new-object Security.Cryptography.Pkcs.ContentInfo -argumentList (,$pass)
$env = new-object Security.Cryptography.Pkcs.EnvelopedCms $content
$env.Encrypt((new-object System.Security.Cryptography.Pkcs.CmsRecipient(gi cert:\LocalMachine\My\$thumbprint)))
write-host "Writing encrypted password, cut/paste the text below the line to CSCFG file"
[Convert]::ToBase64String($env.Encode()) | Out-File .\encrypted_password.txt
Invoke-Item ".\encrypted_password.txt"
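And for the other end of the trip, a minimal sketch of decrypting (this assumes it runs where the certificate’s private key is installed – in my case, inside the Azure role):

[System.Reflection.Assembly]::LoadWithPartialName("System.Security") | Out-Null
$encrypted = Get-Content .\encrypted_password.txt
$env = New-Object Security.Cryptography.Pkcs.EnvelopedCms
$env.Decode([Convert]::FromBase64String($encrypted))
$env.Decrypt()   # locates the matching certificate and private key in the machine stores
$password = [Text.Encoding]::UTF8.GetString($env.ContentInfo.Content)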

Thursday, August 25, 2011

Duplicating an IIS application in Azure with PowerShell

I have a need to add an additional application pool into IIS in my Azure Web Role.

The requirements from the developer are:

  1. I copy an existing application folder under IIS to a new folder.
  2. I turn this copy of the folder into an application.
  3. I add this application to the same application pool of the application I am copying.

I knew what I needed to do.  It can be done straight through the IIS administration console and Explorer: find the physical location of the IIS site, copy the folder, then in IIS right-click the folder and convert it to an application, selecting the existing application pool.  Sounds easy enough.  But scripting it was not that straightforward – or was it?

First, this is an Azure Web Role – no “Default Web Site” happening here.  Second, I have an installer that is creating the site I am duplicating.  Third, I have an entire folder of IIS files to copy.

Begin with adding the Azure Service Runtime snap-in and importing the IIS PowerShell module.  And get information about the role where this script is running (we need that later).

Import-Module WebAdministration

add-pssnapin microsoft.windowsazure.serviceruntime

$role = Get-RoleInstance -Current

I have variables that I am passing in that will be Virtual Directory paths (note the leading forward slash).

$auth_virtual_path = "/MyApp/Authentication"
$fed_virtual_path = "/MyApp/FederatedIdentity"

Get the site object through the provider.  I get it this way because I need the PSPath attribute later on.

$site = Get-ChildItem IIS:\sites | where {$_.Name -match ($role.id)}

I mentioned there is no default web site, instead it is named with the Role Instance Id.

I now work through where IIS created the physical location of the default web site and stuck my virtual paths.  This is Azure, no assumptions as to where things might happen.

$authSitePath = (($site.PhysicalPath) + $auth_virtual_path) -replace "/", "\"
"The physical path of " + $auth_virtual_path + " is:  " + $authSitePath
$fedSitePath = (($site.PhysicalPath) + $fed_virtual_path) -replace "/", "\"
"The physical path of " + $fed_virtual_path + " will be:  " + $fedSitePath

Copy the original folder and contents to the new folder.

Copy-Item $authSitePath -destination $fedSitePath -Recurse

Get the existing web application because I need to assign my copy to the same application pool.

$authApp = Get-WebApplication | where {$_.Path -match $auth_virtual_path}

Create the string for the new PSPath location.

$fedAuthPsPath = (($site.PSPath) + $fed_virtual_path) -replace "/", "\"

Convert the folder that I copied into an application and add it to the existing application pool.

ConvertTo-WebApplication -ApplicationPool $authApp.applicationPool -PSPath $fedAuthPsPath

That is it. 

Tuesday, August 23, 2011

NetJoinDomain failed with error code 8557

This was an interesting little one that happened in my Azure Service while using Azure Connect to join my Role instances to my on-premise domain controller.  Let me lay out the scenario..

Trying to apply some best practice to my environment, I am using a regular domain user account in my Role configuration for Azure Connect (why would you ever embed a domain administrator account in a static configuration file?!?!).


DomJoiner is simply a regular user, users can join machines to a domain.

Everything was working along perfectly fine until yesterday.  I applied the Roles to my Virtual Network group in the Azure Portal and nothing happened.  My machines did not reboot (domain join), they did not appear in the domain, nothing.

Finally I ran across a specific Azure Connect log file “integrator.log” found at %programfiles%\Windows Azure Connect\Endpoint\Logs

Within this log I could see the configuration being received, Azure Connect linking up and my error:

RRAS interface connected

DNS server configured on RRAS interface

NetJoinDomain failed with error code 8557. Target domain......

Oh, an error code – let’s go trolling.  Search was letting me down.  All the error references were for Windows 2000 Active Directory and I am using Server 2008 R2.  Also, no references to the error and Azure.  I can’t imagine I am the only one that has seen this.

Finally, the details of two articles had the knowledge:  KBs 251335 and 314462.  Out of the box, a regular user can join a maximum of 10 computers to a domain.

I opened ADSIEDIT.msc, selected the properties of the correct naming context, then cleared the setting ms-DS-MachineAccountQuota.
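
If you would rather script the change than click through ADSIEDIT, something like this should do the trick – a minimal sketch, assuming your naming context is DC=contoso,DC=com (a placeholder for your own domain DN):

# Sketch: clear ms-DS-MachineAccountQuota on the domain naming context.
# "DC=contoso,DC=com" is a placeholder - substitute your own domain DN.
$nc = [ADSI]"LDAP://DC=contoso,DC=com"
# 1 = ADS_PROPERTY_CLEAR; removes the value, same as clearing it in ADSIEDIT
$nc.PutEx(1, "ms-DS-MachineAccountQuota", $null)
$nc.SetInfo()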

This all happened because I am following the prudent practice of using a regular user account (not a domain administrator) to join my Azure Role instances to my domain with Connect.  Then again, most developers I know would only be using a Domain Administrator account and may never see this issue.

Friday, August 19, 2011

Importing a Certificate Revocation List with PowerShell

This was an interesting one and a follow-up to my post about importing a Certificate (.cer) with PowerShell.

I now ran into the situation where I have an application that strictly enforces certificate validation using the .Net libraries.  The disconnect came into play because the application was checking the Certificate Revocation List of the certificate that I issued from my private Certificate Authority.

Mind you, if you use a public Certificate Authority you will most likely never see problems, as long as your machine can get to the internet to test the revocation list.  In my case my Certificate Authority server is not on the internet, but my application is (yes, flying high in Azure).

The second hitch came because PowerShell does not have a native way to deal with certificate revocation lists; the certificate handling classes (System.Security.Cryptography.X509Certificates) only cover certificates.

Thank goodness that my target system is an Azure Web Role with IIS installed, as that gave me a tool: certutil.exe.

In my Azure Web Role I have a folder with all my scripts and additional binaries for use with Start Up Tasks.

In my PowerShell script I test for the path where the script is running from:

$exPath = Split-Path -parent $MyInvocation.MyCommand.Definition

I then look in this same path (that same scripts folder) for the .cer and .crl (focusing now on just the .crl):

"Looking for included *.crl.."
$crlFile = get-childitem $exPath | where {$_.Extension -match "crl"}

If I found one, I try to import it into the same store where I imported the root CA certificate of my private Certificate Authority:

if ($crlFile -ne $NULL) {
    "Discovered a .crl as part of the Service, installing it in the LocalMachine\Root certificate store.."
    certutil -addstore Root $crlFile.FullName
}

Put it all together and it looks like this:


#################################################################################
###  Import a .crl to the LocalMachine\Root Certificate store
<# This allows the addition of a .crl from a private or Enterprise (non-public) Certificate Authority.
   The .crl is simply included with the other scripts in the Role project in Visual Studio #>

# the path where this script is running from
$exPath = Split-Path -parent $MyInvocation.MyCommand.Definition

"Looking for included *.crl.."
$crlFile = get-childitem $exPath | where {$_.Extension -match "crl"}
if ($crlFile -ne $NULL) {
    "Discovered a .crl as part of the Service, installing it in the LocalMachine\Root certificate store.."
    certutil -addstore Root $crlFile.FullName
}

Wednesday, August 17, 2011

Getting remote PowerShell output back to your local machine

PowerShell remoting has been around for some time now, but I have not had a reason to use it.  I finally found one: I needed some information from within a file on a remote system.

This is where WMI let me down.  It could give me information about the file(s) I needed, but I needed to read and parse the content of the file – and I needed to process that content at the client machine that was calling the remote machine.

Searching on this really did very little for me; in the end the solution was relatively simple: assign my session output to a variable.

So…  PSRemoting.  First of all you need a PSCredential.  I am hitting remote systems, so I need to build a credential object using a username and password.  My password begins as plain text.

$securePass = ConvertTo-SecureString $password -AsPlainText -force
$creds = New-Object System.Management.Automation.PSCredential($userName,$securepass)

I then open a PS session.

$s = new-pssession -ComputerName $computername -Credential $creds

Now, I can simply invoke commands using this session.  Like this.

invoke-command -Session $s -ScriptBlock {
    $file = get-wmiobject CIM_DataFile -filter 'Extension="config" and FileName="MyFile"'
    # CIM_DataFile holds the full path of the file in its Name property
    $fileContents = get-content -path ([string]$file.Name)
}

On the screen I have exactly what I want.  I have captured it to $fileContents.  The problem is that I cannot use $fileContents; it does not exist within the local scope – it exists within the remote (PSSession) scope.

My C# classes make me think of global versus remote variables, but that isn’t quite right.  It is closer to two separate execution contexts that do not share state – and I want to pass data between them.  And I don’t just want to pass a variable in; that direction is straightforward.
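
For completeness, here is that straightforward direction – pushing a local value into the remote scope with -ArgumentList (a minimal sketch; $localPath is a hypothetical local variable):

invoke-command -Session $s -ScriptBlock {
    param($path)
    # $path now carries the value supplied from the local session
    "Remote session received: $path"
} -ArgumentList $localPath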

Come to find out, I needed to change where I capture $fileContents.

$fileContents = invoke-command -Session $s -ScriptBlock {
    $file = get-wmiobject CIM_DataFile -filter 'Extension="config" and FileName="MyFile"'
    # return the content instead of assigning it remotely; the output comes back to the caller
    get-content -path ([string]$file.Name)
}

Now, I have a local object of $fileContents that I can manipulate.

Just remember to close your remote PS session when you are done, especially if you are going to make multiple calls.
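
Cleanup is a one-liner:

Remove-PSSession $s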

Thursday, July 28, 2011

Using PowerShell to write and execute CMD and BAT

Here is the important part, right up front:  remember your encoding when using Out-File.

There, done.

No, seriously – this is one of those silly things that I just spent a while figuring out.

I have a PowerShell script and it is querying for variables for me.  I have an executable that I need to run, and it will not call properly from PowerShell.  The answer: write your commands to a CMD file and then simply call it.

"@echo on" | out-File -filepath "$exPath\Thing.cmd"
'"' + $ThingExe + '"' + ' config /user:"' + $AccountName + '" /pwd:"' + $Password + '" /dsn:"' + $exPath + '\sql.dsn"' | out-File -filepath "$exPath\Thing.cmd" -append -noclobber
"net stop ThingService" | out-File -filepath "$exPath\Thing.cmd" -append -noclobber
"net start ThingService" | out-File -filepath "$exPath\Thing.cmd" -append -noclobber

That is the PoSh to write the CMD file.  Great.  Now, execute it.

Immediately, an error:  ‘<box>@’ is not the name of a command

What?!?  Where is this <box> character coming from?  It is coming from Unicode: Out-File defaults to Unicode (UTF-16 LE) encoding, and cmd.exe trips over the byte order mark at the front of the file.
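
If you are curious, you can see the offending bytes for yourself – in Windows PowerShell, -Encoding Byte reads the raw bytes:

Get-Content "$exPath\Thing.cmd" -Encoding Byte -TotalCount 2
# 255 254 -> 0xFF 0xFE, the UTF-16 LE byte order mark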

Simply edit each and every Out-File with -Encoding ASCII.  The first line will look like this:

"@echo on" | out-File -encoding ASCII -filepath "$exPath\Thing.cmd"

Remember, you need to add the encoding to all Out-File commands that write to the file.  If you don’t, the lines missing it will end up looking like “@  e c h o  o n” while the lines with ASCII encoding will look like “@echo on”.

And then to execute the CMD just add the line to your script: 

& $exPath\Thing.cmd

PS – for me the $exPath is the current path where the script is executing.  You can get this with:

$exPath = Split-Path -parent $MyInvocation.MyCommand.Definition

Wednesday, July 20, 2011

Hyper-V appears to run out of RAM when there is plenty

Here is one issue that I have been tipping folks off to in the forum for some time.

The scenario is:  I have an environment, it is running great, totally stable.  At some point try to do something and I am told there are not enough resources.

If I start to look at memory counters it appears that the management OS of the Hyper-V Server is running out of RAM.

The other thing – if you do all the math, there is adequate RAM in the system for the management OS and all of the VMs.

What I have described is the behavior that is seen.  The messages that folks see on the screen make them believe that the server does not have enough RAM to do what it needs to do, such as starting a VM.  But it really does.

The other symptom that might be seen is that the system appears to be sluggish.

The resolution that I tell folks is to be sure to log out of the Hyper-V Server when they are done administering it.

This is where an unmentioned common thread appears:  in almost all of these cases the symptom is on a Server 2008 (or 2008 R2) Full installation with the Hyper-V role.

And the most important part is that the server is administered through remote sessions, and the administrators do not log out of their sessions – they simply disconnect.

What is happening is that the user shell is slowly consuming more and more resources.  This gets especially high if you open the VM consoles using the console application, or if you leave the consoles open and disconnect.

What I have noticed is that the simple practice of logging out of your remote session cleans up all this extra used RAM – which reinforces that this is a user-level behavior.

My recommendation to you – always log out.  If you have administrators that don’t comply, use an old-fashioned Group Policy to automatically log out disconnected sessions.  Since these are VMs and VM console sessions, nothing will be lost.
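
If you prefer to script the policy rather than click through the Group Policy editor, the setting behind “Set time limit for disconnected sessions” lives in the registry – a hedged sketch, assuming the standard machine policy key (verify against your own GPO before relying on it):

# Sketch: log off disconnected sessions after 5 minutes.
# MaxDisconnectionTime is expressed in milliseconds.
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
New-ItemProperty -Path $key -Name MaxDisconnectionTime -PropertyType DWord -Value 300000 -Force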

The other little trick – if this is a situation where the system won’t let you power on a VM, simply try to power it on three times.  That magical third attempt forces RAM recovery and the system will reclaim resources.

Also, this ties into the fact that the RAM of the management OS is dynamic (it always has been) and limited – it cannot consume all of the RAM of the hardware.

Monday, July 18, 2011

Granting Network Service permissions to a Certificate's Private Key

There are many, many reasons why you want your applications to run under the more restricted Network Service instead of the more privileged Local System.

The problem that you run in to is when certificates are involved.

A Local Machine Certificate is generally available to processes running as Local System by default.

In my case I have Azure injecting the certificate into the role for me and I have a legacy application component that needs to be able to use that certificate.  For security reasons this service runs as the pretty restricted Network Service. 

If I simply add the application and point it to the certificate, it cannot use the certificate to perform encryption, because the application does not have access to the private key.  Once again, script it!

My script assumes one thing: that you have already gotten the actual SSL Certificate that you want to use.  There are lots of ways to get the certificate; here is what I used:

$sslCert = Get-ChildItem Cert:\LocalMachine\My | where {$_.Subject -match "cloudapp.net"}

Here is the script that does the rest:

$sslCertPrivKey = $sslCert.PrivateKey
# find the physical key container file under MachineKeys
$privKeyCertFile = Get-Item -path "$ENV:ProgramData\Microsoft\Crypto\RSA\MachineKeys\*"  | where {$_.Name -eq $sslCertPrivKey.CspKeyContainerInfo.UniqueKeyContainerName}
# fetch only the Access properties of the ACL (see the gotcha below)
$privKeyAcl = (Get-Item -Path $privKeyCertFile.FullName).GetAccessControl("Access")
# grant Network Service read access to the key file
$permission = "NT AUTHORITY\NETWORK SERVICE","Read","Allow"
$accessRule = new-object System.Security.AccessControl.FileSystemAccessRule $permission
$privKeyAcl.AddAccessRule($accessRule)
Set-Acl $privKeyCertFile.FullName $privKeyAcl

From the certificate we can discover its private key.  Using that, I can turn to the file system and find the physical key container file.

The gotcha resides in one line, when setting $privKeyAcl.  Notice the GetAccessControl(“Access”) – that fetches only the Access properties; if you don’t use it you get all properties, and you will end up with an error when you try to Set or Add the new permissions.  (Thank you to Bilal Aslam for posting the workaround here.)

The rest simply covers modifying the ACL of the file system object.
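
To sanity-check the result, read the ACL back and look for the Network Service entry:

(Get-Acl -Path $privKeyCertFile.FullName).Access | where {$_.IdentityReference -match "NETWORK SERVICE"}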

I hope you found this one useful.

Thursday, July 14, 2011

FirstLogonCommands to configure your images on deployment when user context is required

This is an old trick and I am sure that there are more elegant ways to handle this.  However, I thought I would share how I am using AutoAdminLogon and FirstLogonCommands in my sysprep unattend answer file to do some heavy lifting for me: modifying server settings and driving application install and configuration.

This came about through using an Azure VM role and being forced to complete as much setup as I can without having to get to the console of the VM.  It is amazing how much you can automate when you get creative.

Let me get one thing out of the way: can this be used securely?  Sure, why not, with some proper precautions.  First, encrypt your admin user password; second, don’t use the built-in administrator; third, know that no one can get to the console of the machine (totally headless); fourth, disable the admin account and log out as your last step.

All of the settings that I am referring to are in the “Microsoft-Windows-Shell-Setup” section of your unattend.xml answer file.

First – you have to create and provide a password for the local administrator account.  Note, I am naughty by using the built-in local administrator.

<UserAccounts>
  <AdministratorPassword>
    <Value>IdidntEncryptMineButYouShould</Value>
    <PlainText>true</PlainText>
  </AdministratorPassword>
</UserAccounts>

Then I need to enable AutoAdminLogon:

<AutoLogon>
  <Password>
    <Value>IdidntEncryptMineButYouShould</Value>
    <PlainText>true</PlainText>
  </Password>
  <Username>Administrator</Username>
  <LogonCount>1</LogonCount>
  <Enabled>true</Enabled>
</AutoLogon>

Then I define all of my tasks that run when the first user with administrator credentials logs on to the machine:

<FirstLogonCommands>
  <SynchronousCommand wcm:action="add">
    <Order>1</Order>
    <CommandLine>C:\MyFolder\vjRedist64\install.exe /q</CommandLine>
    <Description>Install Visual J# Redistribution</Description>
  </SynchronousCommand>
  <SynchronousCommand wcm:action="add">
    <Order>2</Order>
    <CommandLine>%WINDIR%\System32\WindowsPowerShell\v1.0\PowerShell.exe -command start-sleep 120</CommandLine>
    <Description>Wait for j# install</Description>
  </SynchronousCommand>
  <SynchronousCommand wcm:action="add">
    <Order>3</Order>
    <CommandLine>%WINDIR%\System32\WindowsPowerShell\v1.0\PowerShell.exe -command Import-Module ServerManager; Add-WindowsFeature Web-Server; Add-WindowsFeature Web-Asp-Net; Add-WindowsFeature Web-Windows-Auth; Add-WindowsFeature Web-Metabase</CommandLine>
    <Description>Add ASP.Net and IIS6 Metabase compatibility</Description>
  </SynchronousCommand>
  <SynchronousCommand wcm:action="add">
    <Order>4</Order>
    <CommandLine>%WINDIR%\System32\WindowsPowerShell\v1.0\PowerShell.exe -command set-executionpolicy remotesigned -force >> C:\Users\Public\Documents\setExecution.log</CommandLine>
    <Description>Set the ExecutionPolicy to RemoteSigned for the setup script to run</Description>
  </SynchronousCommand>

</FirstLogonCommands>

Note that there is a sequence number; this way you can simulate a workflow by having your tasks or scripts execute in a synchronous order.  You just have to watch out for those tasks that run off on their own threads and don’t execute within the context of the command window where the script executes (that Visual J# installer is a perfect example).  You can manage these spawned processes with PowerShell, but not with a batch command.
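
And if you follow my own advice above and disable the admin account as the very last step, a hypothetical closing entry might look like this (Order 5, continuing the sequence; note the XML-escaped ampersand):

<SynchronousCommand wcm:action="add">
  <Order>5</Order>
  <CommandLine>%WINDIR%\System32\cmd.exe /c net user Administrator /active:no &amp; logoff</CommandLine>
  <Description>Disable the built-in administrator and log off</Description>
</SynchronousCommand>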

Wednesday, July 13, 2011

Using the Azure Fabric to add certificates to your VM Role

A really useful feature of Azure is that it can inject elements into the Role instances as it applies the configuration.

This is super, extra useful because all roles are sysprep’d images.  This includes your VM Roles. 

If you follow the Azure rules for creating your VM Roles you must prepare the VHD image with sysprep.

I don’t think this matters much if you only have one instance – but the Azure assumption is multiple instances of any role.  With that assumption, the use of sysprep applies.

The problem is certificates.  If I sysprep my VHD, I break the private key of my certificate, as a new private key is generated.

The Visual Studio interface does not have a Certificates tab for the VM Role.  However, don’t let this stop you.  It is a simple edit of the Service Definition and the Service Configuration.

In the ServiceDefinition.csdef add a Certificate entry that names the certificate and the certificate store in which to place it.

<Certificates>
  <Certificate name="MyCertificate" storeLocation="LocalMachine" storeName="My" />
</Certificates>

This example places the certificate “MyCertificate” in the Local Machine Personal store.

In the ServiceConfiguration.cscfg add a mapping entry for the certificate you added to your Azure Service and this Role.

<Certificates>
  <Certificate name="MyCertificate" thumbprint="8F4A08C8A0**************A**E482****CF4AB" thumbprintAlgorithm="sha1" />
</Certificates>

This maps the thumbprint that Azure knows to the name you assigned the certificate, and to the store in which to place it.  And since the certificate you load into Azure includes both the public and private keys, the certificate is fully functional once the Role instance is provisioned.
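
Once the instance is provisioned, a quick way to confirm the injection worked is to look for the private key from inside the VM – reusing the kind of lookup from my earlier certificate post (the cloudapp.net match is just my example):

$cert = Get-ChildItem Cert:\LocalMachine\My | where {$_.Subject -match "cloudapp.net"}
$cert.HasPrivateKey   # True if Azure delivered both the public and private keys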

Now, all this being described: if you use a Web or Worker role, just use the Certificates tab in the GUI.  Hopefully as the VM Role evolves it will become as easy – for all the same reasons.