Do IT urself

THE INFRAITHOBBIEGEEKBLOG


Powershell commands to create Hyper-v 2012 VMs

In this post I wanted to point out some interesting ways to create VMs.

A VM can host several types of virtual disks.

First, there are two file formats: VHD and the new VHDX. A VHD can be up to 2 TB, whereas a VHDX can grow up to 64 TB. VHDX files are also more resilient to hard VM shutdowns, but they are only supported by WS 2012 and, I think, Windows 8.
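If you already have disks in the old format, they can be converted; a quick sketch (the paths are just examples, and the disk must not be attached to a running VM):

```powershell
# Convert an existing VHD to the new VHDX format
Convert-VHD -Path "C:\ClusterStorage\Volume1\OLD.VHD" -DestinationPath "C:\ClusterStorage\Volume1\OLD.VHDX"
```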

Then there are three types of disks :

  • Fixed : the complete volume of the disk is allocated at creation time. This kind of disk performs better but uses the total allocated space from the start.
  • Dynamic : this type of virtual disk creates a small file containing only the hosted data. As you fill the disk inside the VM, the host file grows.
  • Differencing : this type of disk is based on another disk called the parent disk. The child disk contains only the changes you make in the VM. Both disks (parent and child) can be Fixed or Dynamic, but the format must be the same (VHD or VHDX). This kind of disk reduces disk space usage.
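You can check which type a given disk is (and, for a differencing disk, its parent) with Get-VHD; a quick sketch, using one of the example paths from this post:

```powershell
# Inspect an existing virtual disk: VhdType is Fixed, Dynamic or Differencing,
# and ParentPath is filled in for differencing disks
Get-VHD -Path "C:\ClusterStorage\Volume1\VM001.VHDX" |
    Select-Object VhdFormat, VhdType, ParentPath, Size, FileSize
```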

If you use the UI or the New-VM command line, the default dynamic type will be used. If you want a fixed or differencing disk, you’ll need to create the disk separately.

Of course you can use the UI to create VHDs and VMs; it’s great to build a few VMs, but if you want to create several VMs that always use the same configuration, you really should write some PowerShell scripts.

Here are the commands to create each sort of disk :

Dynamic without source :

New-VHD -Path "C:\ClusterStorage\Volume1\BASE2012.VHDX" -Dynamic -SizeBytes 127GB -ComputerName hyperv01

Dynamic with source :

New-VHD -Path "C:\ClusterStorage\Volume1\VM001.VHDX" -ParentPath "C:\ClusterStorage\Volume1\BASE2012.VHDX" -Dynamic -SizeBytes 127GB -ComputerName hyperv01

Fixed without source :

New-VHD -Path "C:\ClusterStorage\Volume1\BASE2012.VHDX" -Fixed -SizeBytes 60GB -ComputerName hyperv01

Fixed with source :

New-VHD -Path "C:\ClusterStorage\Volume1\VM001.VHDX" -SourceDisk C:\ClusterStorage\Volume1\BASE2012.VHDX -Fixed -SizeBytes 60GB -ComputerName hyperv01

(Note : -SourceDisk actually expects a physical disk number, not a VHD path; to create a fixed copy of an existing virtual disk you can also use Convert-VHD with -VHDType Fixed.)

Differencing disk :

New-VHD -Path "C:\ClusterStorage\Volume1\VM001.VHDX" -ParentPath "C:\ClusterStorage\Volume1\BASE2012.VHDX" -Differencing -SizeBytes 127GB -ComputerName hyperv01

(The source is a base disk)

To create a new VM you can :

Create a VM and a new VHD (Dynamic) :

New-VM -Name VM01 -NewVHDPath "c:\ClusterStorage\Volume1\VM01.VHDX" -NewVHDSizeBytes 60GB -ComputerName Hyperv01

Create a VM with no disk :

New-VM -Name VM01 -NoVHD -ComputerName Hyperv01

Create a VM and attach an existing VHD :

New-VM -Name VM01 -VHDPath "c:\ClusterStorage\Volume1\VM01.VHDX" -ComputerName Hyperv01

OK, now that we have all the basic commands, let’s create a script to build 5 VMs with fixed-size disks (CreateVMFixedVhd) :

$vhdpath = "C:\ClusterStorage\Volume1\VHD\"
$vmnetworkName = "External01"
$memorySize = 1GB
$VmtoCreate = 5
$DiskSize = 60GB
$HyperVhost = "hyperv01"

1..$VmtoCreate | % {
New-VHD -Path "${vhdpath}VM00$_.VHDX" -Fixed -SizeBytes $DiskSize -ComputerName $HyperVhost
New-VM -Name VM00$_ -VHDPath "${vhdpath}VM00$_.VHDX" -MemoryStartupBytes $memorySize -SwitchName $vmnetworkName -ComputerName $HyperVhost
Start-VM VM00$_ -ComputerName $HyperVhost
}

The problem with this method is that you’ll have to install the OS on every VM. Thus, we need a source disk. To do so we will first create a base VM to build the base image :

New-VHD -Path "C:\ClusterStorage\Volume1\VHD\BASE2012.VHDX" -Fixed -SizeBytes 60GB -ComputerName hyperv01

New-VM -Name VMBase2012 -VHDPath "C:\ClusterStorage\Volume1\VHD\BASE2012.VHDX" -ComputerName Hyperv01

Set-VMDvdDrive -VMName VMBase2012 -Path C:\ClusterStorage\Volume1\ISO\Win2012_RTM.iso -ComputerName Hyperv01

Start-VM VMBase2012 -ComputerName Hyperv01

  • Install the OS
  • Install the integration services and make any changes you want
  • Sysprep it and shut down :
  • c:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown
  • Remove the VMBase2012 VM from Hyper-V Manager (command : Remove-VM VMBase2012)
  • This command deletes the VM but not the VHD

Then we’ll be able to use this disk as a source for our new virtual machines.

I chose to build several VMs using differencing disks.

$vhdpath = "C:\ClusterStorage\Volume1\VHD\"
$vmnetworkName = "External01"
$memorySize = 1GB
$VmtoCreate = 5
$DiskSize = 80GB
$HypervHost = "hyperv01"
1..$VmtoCreate | % {
New-VHD -Path "${vhdpath}VM00$_.VHDX" -ParentPath "${vhdpath}BASE2012.VHDX" -Differencing -SizeBytes $DiskSize -ComputerName $HypervHost
New-VM -Name VM00$_ -VHDPath "${vhdpath}VM00$_.VHDX" -MemoryStartupBytes $memorySize -SwitchName $vmnetworkName -ComputerName $HypervHost
Start-VM VM00$_ -ComputerName $HypervHost
}

Then, if you want to add these virtual machines to your failover cluster :

$VmtoAdd = 5
$ClusterName = "Cluster01"
1..$VmtoAdd | % {
Add-ClusterVirtualMachineRole -VirtualMachine VM00$_ -Name VM00$_ -Cluster $ClusterName
}

Enjoy

Source :

http://technet.microsoft.com/en-us/library/hh848559.aspx

http://technet.microsoft.com/library/hh847239.aspx


Create a two node HyperV Cluster

After trying the two node no-share "cluster" I decided to move to a more traditional cluster. Why ? First because I am curious, and second because a cluster offers failover ! Basically, what we need in addition is shared storage, meaning SAS, Fibre Channel or iSCSI volumes.
Microsoft Best Practice : Each host that you want to cluster must have access to the storage array.

  • The Multipath I/O (MPIO) feature must be added on each host that will access the Fibre Channel or iSCSI storage array. You can add the MPIO feature through Server Manager. If the MPIO feature is already enabled before you add a host to VMM management, VMM will automatically enable MPIO for supported storage arrays by using the Microsoft-provided Device Specific Module (DSM). If you already installed vendor-specific DSMs for supported storage arrays and then add the host to VMM management, the vendor-specific MPIO settings will be used to communicate with those arrays. If you add a host to VMM management before you add the MPIO feature, you must add the MPIO feature and then manually configure MPIO to add the discovered device hardware IDs. Or, you can install vendor-specific DSMs.
  • If you are using a Fibre Channel storage array network (SAN), each host must have a host bus adapter (HBA) installed, and zoning must be correctly configured. For more information, see your storage array vendor’s documentation.
  • If you are using an iSCSI SAN, make sure that iSCSI portals have been added and that the iSCSI initiator is logged into the array. Additionally, make sure that the Microsoft iSCSI Initiator Service on each host is started and set to Automatic. For more information about how to create an iSCSI session on a host when storage is managed through VMM, see the source below.
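The MPIO and iSCSI initiator prerequisites above can be scripted on each host; a minimal sketch (feature and service names as on WS 2012):

```powershell
# Add the MPIO feature and make sure the iSCSI initiator service starts automatically
Add-WindowsFeature Multipath-IO
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI
```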

Source : http://technet.microsoft.com/en-us/library/gg610630.aspx

For my test lab I’ll use iSCSI storage. Unfortunately I don’t have a physical NAS or SAN, so I’ll use the new iSCSI server role in Win 2012. To do so I’ll create a new server named Infra02 with a normal config (1 CPU, 1 GB RAM, joined to the domain). In my previous post I had two 750 GB volumes mounted on each Hyper-V server; I just moved them to the new server INFRA02.

 

Storage pools : before doing anything, let’s explain the new Storage Pool service provided in Win 2012. Usually, on a server with several disks, you build a RAID unit using a physical RAID card. But if the server doesn’t have a RAID card, if you have a JBOD device connected to it, if you want to put NAS storage units together (a strange thing, but why not), if you have different disk types and volumes, or all of these things, then you can use the Storage Pool service proposed by Microsoft. A storage pool lets us group several storage units into one logical pool. You can then use this pool to create virtual disks.
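To see which disks are candidates for a pool, you can start by listing the physical disks; a quick sketch:

```powershell
# List physical disks and whether they are eligible to join a storage pool
Get-PhysicalDisk | Format-Table FriendlyName, CanPool, Size, OperationalStatus
```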

So you could combine the 3 DAS storage units on a server, maybe with your old JBODs and a NAS array, to create one or two pools, and then create several virtual disks that will be presented as iSCSI volumes.

In my case the two 750 GB volumes represent two physical Direct Attached Storage (DAS) units. The INFRA02 server will thus let us build a real iSCSI storage device for the Hyper-V cluster.

OK, I could create iSCSI virtual disks directly on the physical disks but, I repeat, we are in a test environment.

First of all we need to add the iSCSI server feature on INFRA02 :

add-WindowsFeature FS-iSCSITarget-Server

To build the cluster we’ll need :

  • 1 Volume for the cluster (witness disk)
  • Shared Storage for Virtual machines

So, to begin, and in order to be able to run our own scripts, we need to set the execution policy.

Set-ExecutionPolicy Unrestricted

Then we need to create a storage pool, add our disks to it and create the virtual disks.
To do so, create a script containing the following code (createStorage.ps1) and put it under c:\scripts :

$disks = Get-PhysicalDisk -CanPool $true
$storagesub = Get-StorageSubSystem
New-StoragePool -FriendlyName StoragePool01 -StorageSubSystemFriendlyName $storagesub.FriendlyName -PhysicalDisks $disks
$newSpace = New-VirtualDisk -StoragePoolFriendlyName StoragePool01 -FriendlyName Storage01 -Size (1500GB) -ResiliencySettingName Simple -ProvisioningType Fixed

You’ll then have a new pool "StoragePool01" and a new disk named "Storage01".

  • In the Server Manager disk view, bring the disk online
  • Initialize it
  • Create a partition using the full disk space
  • Use the X: letter
  • (Script coming. One day … or not)
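In the meantime, the manual steps can be sketched in PowerShell, assuming the new virtual disk is the only offline disk on INFRA02 (adjust the selection to your setup):

```powershell
# Bring the new disk online, initialize it and create an X: partition on the full disk
$disk = Get-Disk | Where-Object { $_.IsOffline -eq $true } | Select-Object -First 1
Set-Disk -Number $disk.Number -IsOffline $false
Initialize-Disk -Number $disk.Number -PartitionStyle GPT
New-Partition -DiskNumber $disk.Number -UseMaximumSize -DriveLetter X
Format-Volume -DriveLetter X -FileSystem NTFS -Confirm:$false
```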
Then we are going to create the iSCSI target and assign the three disks to it :

New-IscsiServerTarget -TargetName FileCluster -InitiatorIds "IPAddress:10.0.0.10","IPAddress:10.0.0.11"

#Witness Disk

New-IscsiVirtualDisk -DevicePath X:\iScsiVirtualDisks\witness.VHD -Size 5GB
Add-IscsiVirtualDiskTargetMapping -TargetName FileCluster -DevicePath X:\iScsiVirtualDisks\witness.VHD

#Storage disks

1..2 | % {
New-IscsiVirtualDisk -DevicePath X:\iScsiVirtualDisks\LUN0$_.VHD -Size 700GB
Add-IscsiVirtualDiskTargetMapping -TargetName FileCluster -DevicePath X:\iScsiVirtualDisks\LUN0$_.VHD
}

In the iSCSI target I also allowed 10.0.0.10 and 10.0.0.11 to connect to the iSCSI server.

Create another script named "Connectiscsi.ps1" with :

Set-Service MSiSCSI -StartupType automatic

Start-Service MSiSCSI

New-iSCSITargetPortal -TargetPortalAddress 10.0.0.2

Get-iSCSITarget | Connect-iSCSITarget

Get-iSCSISession | Register-iSCSISession

Execute the script remotely on both servers with :

1..2 | % {Invoke-Command -ComputerName Hyperv0$_ -FilePath "C:\script\Connectiscsi.ps1" }

So far our Storage is on the network and our two HyperV hosts can access it !

Now we can configure the cluster.
On our management server (INFRA01 for me) run the following command to install the Clustering Remote Server Administration Tool:

add-WindowsFeature RSAT-Clustering

Run the following command to install failover clustering feature on each HyperV server :

1..2 | % {Invoke-Command -ComputerName Hyperv0$_ -scriptblock {add-windowsFeature Failover-Clustering} }

From one Hyper-V host, open the Server Manager disk view and :

  • Bring the disk online
  • Initialize it
  • Create a partition using the full disk space
  • Do not assign a letter
  • (Script coming. One day … or not)
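A sketch of those same steps in PowerShell, to run on one node only (no drive letter this time):

```powershell
# On one Hyper-V node: bring the iSCSI disks online, initialize and format them,
# without assigning drive letters
Get-Disk | Where-Object { $_.IsOffline -eq $true } | ForEach-Object {
    Set-Disk -Number $_.Number -IsOffline $false
    Initialize-Disk -Number $_.Number -PartitionStyle GPT
    New-Partition -DiskNumber $_.Number -UseMaximumSize |
        Format-Volume -FileSystem NTFS -Confirm:$false
}
```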

Add a new network adapter to each Hyper-V server and configure it as follows : Hyperv01 : 10.0.1.1/24, Hyperv02 : 10.0.1.2/24. This network directly links the two Hyper-V servers and will be used as a redundancy network.

Here we go ! We now have two hosts linked by two networks, and each host is connected to 3 iSCSI volumes (1 for the witness and 2 for storage). We now have all the prerequisites to build the cluster.

To check that everything is OK you can (and should) run a cluster validation test.
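The validation test itself can be run from PowerShell; a sketch using the two node names from this lab:

```powershell
# Run the cluster validation report against both future nodes
Test-Cluster -Node hyperv01, hyperv02
```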

If you have no errors and have dealt with the warnings, then you can create your cluster :

New-Cluster -Name Cluster01 -Node hyperv01,hyperv02 -StaticAddress 10.0.1.3, 10.0.0.3 -NoStorage

Then we’ll need to add all available storage to the cluster :

Get-ClusterAvailableDisk -Cluster Cluster01 | Add-ClusterDisk

Configure Cluster Quorum :

Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 2" -Cluster Cluster01

Add the two other disks as Cluster Shared Volumes :

Add-ClusterSharedVolume "Cluster Disk 1","Cluster Disk 3" -Cluster Cluster01

These two shared volumes are now available on both Hyper-V servers under :

C:\ClusterStorage

We can therefore re-configure Hyper-V to use C:\ClusterStorage\Volume1 as the default storage location :

1..2 | % {Invoke-Command -ComputerName Hyperv0$_ -scriptblock {Set-VMHost -VirtualHardDiskPath C:\ClusterStorage\Volume1\VHD\ -VirtualMachinePath C:\ClusterStorage\Volume1\VM\} }

To test the failover cluster, create a test virtual machine :

New-VM VM01 -MemoryStartupBytes 1GB -ComputerName Hyperv01

Add-ClusterVirtualMachineRole -VirtualMachine VM01 -Name VM01 -Cluster Cluster01

Start-VM VM01 -ComputerName Hyperv01

Cut the network on Hyperv01 and watch the VM shut down and restart on Hyperv02 !
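If you prefer a softer test than pulling the network cable, you can drive the failover from PowerShell; a sketch (names as in this lab):

```powershell
# Planned failover: move the clustered VM role to the second node
Move-ClusterVirtualMachineRole -Name VM01 -Node hyperv02 -Cluster Cluster01

# Or simulate a node failure by stopping the cluster service on Hyperv01
Stop-ClusterNode -Name hyperv01 -Cluster Cluster01
```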

In my next post I’ll come with more VM creation options and powershell code.

Enjoy


Build a two node Hyper-V 2012 no-share live migration “Cluster”

In this post I’ll describe the steps to build a two node “cluster”. Why is cluster in quotes ? Simply because we will not use the Microsoft clustering features, and you won’t get fault tolerance with this infrastructure, but you will be able to manually live migrate VMs between the two hosts.
To build this infrastructure I’ll use my lab environment, which is a normal computer (8 CPUs, 32 GB, 3 TB) under Win7 with the VMware Workstation tech preview.

  • Infra01 :
    1.5 GB RAM
    2 CPUs
    40 GB disk
  • Hyperv01 and Hyperv02 :
    10 GB RAM
    4 CPUs
    40 GB disk

To install the VMs I’m selecting the unsupported Hyper-V guest OS version in Workstation.
(screenshot)

Then install Windows Server 2012. As I explained in a previous post, I prefer to install my servers with the full GUI version and then disable it after configuration. So :

(screenshot)

Then it’s a normal install.
Install the VMware Tools.
Rename your servers.

Set network :

With Cmd

netsh interface ip set address name=Ethernet static 10.0.0.2 255.255.255.0
 netsh interface ip set dns name=Ethernet static 10.0.0.1

Or PowerShell

Get-NetIPAddress
Set-NetIPInterface -InterfaceAlias Ethernet -Dhcp Disabled
New-NetIPAddress -InterfaceAlias Ethernet -IPAddress 10.0.0.10 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias Ethernet -ServerAddresses 10.0.0.1

Disable Firewall :
Set-NetFirewallProfile -Enabled False

Install HyperV :

(screenshot)

Install-WindowsFeature Hyper-V

and if you also want the management tools

Install-WindowsFeature RSAT-Hyper-V-Tools

I asked myself a question : can I build a standalone infrastructure, I mean one not joined to an Active Directory domain, like I can with ESX servers ? Well, you can with … one server. But if you want to migrate VMs without joining the Hyper-V hosts to a domain, you’ll get this kind of message :

(screenshot)

So now join your HyperV server to your domain :

Add-Computer -DomainName myLab.local -Credential myLab\Administrator -Restart

Then we are going to configure Hyper-V settings.
I think you don’t want to host VM and VHD on your C drive so :

Set-VMHost -VirtualHardDiskPath E:\HYPERV\VHD\ -VirtualMachinePath E:\HYPERV\VM\

If you want to do the same on HYPERV02 you can connect directly to it, or do it remotely like this :

New-PSSession -ComputerName HYPERV02
Enter-PSSession -Id 1

# and then use the same configuration line. This avoids copy errors and divergent configurations.

(screenshot)

Configure the vSwitch : create an External switch on each host.

Then we need to configure the live migration settings. If you look at the settings in the UI you’ll see that there are two options :

(screenshot)

I’ll not go into detailed security considerations here, but basically the difference is that with CredSSP you have to do all your management tasks from a Hyper-V host, and that’s not really what I want here.

So choose Kerberos authentication.

Now if you try to move a virtual machine you will get this wonderful error :

To solve this we need to configure delegation between our servers.

Open an Active Directory Users and Computers console, then in the Hyperv01 properties add the delegation in the Delegation tab :

Do the same for Hyperv02.
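The same delegation can be scripted with the ActiveDirectory module; a sketch, assuming the mylab.local domain from this post (the two service types are the ones the delegation dialog uses for live migration: cifs and the migration service):

```powershell
Import-Module ActiveDirectory

# Allow Hyperv01 to delegate to Hyperv02 for storage (cifs) and live migration
$services = "cifs/Hyperv02.mylab.local",
            "Microsoft Virtual System Migration Service/Hyperv02.mylab.local"
Get-ADComputer Hyperv01 |
    Set-ADObject -Add @{ "msDS-AllowedToDelegateTo" = $services }
```

Repeat in the other direction for Hyperv02.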

Now it’s time to move VMs. To do so, create an empty VM named VM01.

From INFRA01, open a PowerShell command prompt and open a PSSession on Hyperv01 and Hyperv02.

From the host where the VM is located, run the following command :

Move-VM VM01 Hyperv02 -IncludeStorage

And it should work … or not !

If like me you have the following error message : AccessDenied,Microsoft.HyperV.Powershell.Commands.MoveVMCommand

It is probably because you installed the Hyper-V role before joining the hosts to the domain.

To solve the problem, simply add "Domain Admins" or any group you want to use to the Hyper-V Administrators local group on each Hyper-V host.
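A sketch of that fix, from an elevated prompt on each host (domain and group names as in this lab):

```powershell
# Add the Domain Admins group to the built-in Hyper-V Administrators local group
net localgroup "Hyper-V Administrators" "MYLAB\Domain Admins" /add
```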

 

Enjoy


Windows 2012 : Full or Core install ?

You may already have noticed that Microsoft changed the default install mode of WS 2012 to the Core version ! What a change !

Where all Unix-based servers are proud of their black and white screens, Microsoft has ignored them since the beginning, trying to build the most simple, fashionable and powerful user interface. Moreover, MS guys sometimes laugh at Unix admins, considering them old-school guys using Vi, Emacs or other light and powerful tools.

And today, in 2012, 19 years after the first Windows NT release, we should install and use the Core version ! Why ? What happened ?

Well, I don’t really know what is driving Microsoft in this direction, but I just want to analyze the pros and cons and look at some figures.

First of all I installed two completely default WS 2012 systems, a Full and a Core, and here are the basic figures for CPU, memory and disk usage after install, without any workload.

(Screenshots : CPU, memory and disk usage for the Full and Core installs.)

So if you install a Core version you will basically save :

  • 3 processes
  • 68 threads
  • 137 MB of RAM
  • 2.87 GB of disk

You’ll also, of course, have fewer files on disk and fewer I/Os.

OK ! These are the default figures. Now let’s see what Microsoft tells us :

  • Greater stability. Because a Server Core installation has fewer running processes and services than a Full installation, the overall stability of Server Core is greater. Fewer things can go wrong, and fewer settings can be configured incorrectly.

My Comment : That’s right. But how many times did your Win 2003 or 2008 R2 crash because of an MS process ? The main extra process in the full version is of course explorer.exe. This process sometimes crashes, but on my PC, when I have 20 windows open, twelve applications and 5 VMs running !

  • Simplified management. Because there are fewer things to manage on a Server Core installation, it’s easier to configure and support a Server Core installation than a Full one—once you get the hang of it.

My Comment : This is my main point. I think they forget something here : it’s easier IF you are a PowerShell expert ! How many of you are ?

  • Reduced maintenance. Because Server Core has fewer binaries than a Full installation, there’s less to maintain. For example, fewer hot fixes and security updates need to be applied to a Server Core installation. Microsoft analyzed the binaries included in Server Core and the patches released for Windows Server 2000 and Windows Server 2003 and found that if a Server Core installation option had been available for Windows Server 2000, approximately 60 percent of the patches required would have been eliminated, while for Windows Server 2003, about 40 percent of them would have.

My Comment : OK, that’s true. Let’s see : over the last 6 months, from February to July, we had 9 – 6 – 6 – 7 – 7 and 9 patches per month. What is your patching routine ? How many patches really need a reboot ? How many patches apply to both Full and Core ? And what is the difference between downloading 35 patches, installing and rebooting, versus downloading 5 patches, installing and rebooting ? Especially every 6 months …

  • Reduced memory and disk requirements. A Server Core installation on x86 architecture, with no roles or optional components installed and running at idle, has a memory footprint of about 180 megabytes (MB), compared to about 310 MB for a similarly equipped Full installation of the same edition. Disk space needs differ even more—a base Server Core installation needs only about 1.6 gigabytes (GB) of disk space compared to 7.6 GB for an equivalent Full installation. Of course, that doesn’t account for the paging files and disk space needed to archive old versions of binaries when software updates are applied. See Chapter 2 for more information concerning the hardware requirements for installing Server Core.

My Comment : (these Microsoft figures are for WS 2008.) Right, we saw that : you’ll save about 150 MB of RAM and 2.8 GB of disk. If you have a huge infrastructure you may care about it.

  • Reduced attack surface. Because Server Core has fewer system services running on it than a Full installation does, there’s less attack surface (that is, fewer possible vectors for malicious attacks on the server). This means that a Server Core installation is more secure than a similarly configured Full installation.

My Comment : Have you ever been the victim of a virus or hacking attack because of a Windows GUI security vulnerability ? Yes ? Then review your network security.

Source : http://technet.microsoft.com/en-us/library/dd184076.aspx

Two other important things pointed out by the MS article : Server Core will not improve performance, and Server Core only supports the following roles :

  • AD DS
  • AD LDS
  • DNS
  • DHCP
  • File Services
  • Print Services
  • Streaming Media Services
  • Web Server (IIS)
  • Hyper-V

OK, I may seem to be a Server Core opponent, but I’m not. In fact I love it, just in another way; let me explain.

My advice is the following : don’t install WS 2012 as a Core version. Instead, use the full version, configure your server, make sure you can manage it remotely, and then, after testing everything, disable the GUI. You’ll benefit from lower memory utilization, fewer processes, lower security risks etc., but not from lower disk space usage, because the files for the GUI stay on the disk.

To remove the GUI, use the GUI (yes, the GUI to remove the GUI) or use the following PowerShell command :

Remove-WindowsFeature Server-GUI-Shell, Server-Gui-Mgmt-Infra

By doing that, if one day you are not able to do something remotely, or you lose communication with the server, or whatever, go to the server console and re-enable the GUI with the following PowerShell :

Add-WindowsFeature Server-GUI-Shell, Server-Gui-Mgmt-Infra

Enjoy

Julien


Hyper-v Platform on Windows 8

Windows 8 includes a really nice feature that will surely help the development of Hyper-V Server 2012 ! This feature is simply a Hyper-V platform for Windows 8.

To try it, I installed a Windows 8 Consumer Preview build 8250 on the VMware Workstation 2012 Tech Preview.

Then I changed the guest OS version, and the processor settings to enable virtualization.

Then go to Control Panel / Programs / "Turn Windows features on or off", turn on the Hyper-V platform and tools, and reboot.

If the option is greyed out and tells you that your processor does not have Second Level Address Translation (SLAT), just reboot the VM.
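The same feature can also be enabled from an elevated PowerShell prompt instead of the Control Panel; a sketch (feature name as on Windows 8):

```powershell
# Enable the Hyper-V platform and management tools on Windows 8, then reboot
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All
```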

Then you can open the Hyper-V management console and enjoy

Next we’ll see what we can do with it (Replica, vSwitch, storage etc.).

Julien


Hyper-v 2012 Nested in VmWare Workstation

If you ever tried to install Hyper-V in a VM on VMware Workstation, you certainly encountered this message :

(screenshot)

To solve this you’ll have to :

  1. Upgrade your Workstation version

(screenshot)

  2. Shutdown the VM
  3. Modify VM OS Version

(screenshot)

And enjoy

(screenshot)

Julien