With Hybrid Cloud functionality you get the benefits of a local private Cloud
and the public Cloud together, such as:
- Active Directory Federation Services (ADFS) with Office 365 and Windows Azure,
to provision Active Directory users from on-premises to the Cloud with
Microsoft Forefront Identity Manager or DirSync.
- Creating websites in the Cloud on Windows Azure in a few minutes.
- Using Windows Azure Blob Storage for archiving.
- Managing and provisioning Virtual Machines with System Center 2012 SP1 from
on-premises to the Cloud into Windows Azure. (On October 18th 2013, Windows
Server 2012 R2 and System Center 2012 R2 will be released.)
- Using SQL databases for your on-premises applications or for your Cloud
applications.
- Managing applications for mobile devices.
The business benefits are:
- You pay only for what you use (TCO pricing).
- Fast time to market: provisioning of users, Virtual Machines and websites is
really fast.
- Any time, any place, 24/7 availability.
Here you can find more information on Windows Azure:
To begin with private Cloud computing on-premises, you start with architecture
and design to match the business requirements. When the design of Microsoft
Hyper-V Clustering, SQL, System Center 2012 Virtual Machine Manager, VPN
Gateway to Windows Azure and ADFS is done, don't forget disaster recovery with
System Center 2012 Data Protection Manager and Windows Azure Backup for your
Private Cloud solution.
Before you start installing Windows Server 2012, you should check
these best practices:
Disclaimer: As with all Best Practices, not every recommendation can (or
should) be applied. Best Practices are general guidelines, not hard-and-fast
rules that must be followed. As such, you should carefully review each item to
determine if it makes sense in your environment. If implementing one (or more)
of these Best Practices seems sensible, great; if it doesn't, simply ignore it.
In other words, it's up to you to decide if you should apply these in your
setting.
GENERAL (HOST):
⎕ Use Server Core, if possible, to reduce OS overhead, reduce potential
attack surface, and to minimize reboots (due to fewer software updates).
⎕ Ensure hosts are up-to-date with recommended Microsoft updates, to ensure
critical patches and updates – addressing security concerns or fixes to the core
OS – are applied.
⎕ Ensure all applicable Hyper-V hotfixes and Cluster hotfixes (if applicable)
have been applied. Review the following sites and compare them to your
environment, since not all hotfixes will be applicable:
⎕ Ensure hosts have the latest BIOS version, as well as the latest firmware
for other hardware devices (such as Fibre Channel HBAs, NICs, etc.), to
address any known issues and supportability concerns.
⎕ Host should be domain joined, unless security standards dictate
otherwise. Doing so makes it possible to centralize the management of policies
for identity, security, and auditing. Additionally, hosts must be domain joined
before you can create a Hyper-V High-Availability Cluster.
⎕ RDP Printer Mapping should be disabled on hosts, to remove any chance of
a printer driver causing instability issues on the host machine.
- Preferred method: Use Group Policy with host servers in their
own separate OU
- Computer Configuration –> Policies –> Administrative Templates
–> Windows Components –> Remote Desktop Services –> Remote
Desktop Session Host –> Printer Redirection –> Do not allow client
printer redirection –> Set to "Enabled"
⎕ Do not install any other Roles on a host besides the Hyper-V role and the
Remote Desktop Services roles (if VDI will be used on the host).
- When the Hyper-V role is installed, the host OS becomes the "Parent
Partition" (a quasi-virtual machine), and the hypervisor is placed
between the parent partition and the hardware. As a result, it is
not recommended to install additional (non-Hyper-V and/or VDI related)
roles.
⎕ The only Features that should be installed on the host are: Failover
Cluster Manager (if host will become part of a cluster), Multipath
I/O (if host will be connecting to an iSCSI SAN, Spaces and/or Fibre
Channel), or Remote Desktop Services if VDI is being used. (See
explanation above for reasons why installing additional features is not
recommended.)
⎕ Configure antivirus exclusions for Hyper-V, so real-time scanning does not
interfere with virtual machine files and processes. Exclude:
- All folders containing VHD, VHDX, AVHD, VSV and ISO files
- Default virtual machine configuration directory, if used
(C:\ProgramData\Microsoft\Windows\Hyper-V)
- Default snapshot files directory, if used
(%systemdrive%\ProgramData\Microsoft\Windows\Hyper-V\Snapshots)
- Custom virtual machine configuration directories, if applicable
- Default virtual hard disk drive directory
- Custom virtual hard disk drive directories
- Snapshot directories
- Vmms.exe (Note: May need to be configured as process exclusions within
the antivirus software)
- Vmwp.exe (Note: May need to be configured as process exclusions within
the antivirus software)
- Additionally, when you use Cluster Shared Volumes, exclude the CSV
path “C:\ClusterStorage” and all its subdirectories.
- For more information: http://social.technet.microsoft.com/wiki/contents/articles/2179.hyper-v-anti-virus-exclusions-for-hyper-v-hosts.aspx
⎕ The default path for Virtual Hard Disks (VHD/VHDX) should be set to a
non-system drive, because storing disks on the system drive can cause disk
latency and creates the potential for the host to run out of disk space.
⎕ If you choose to save the VM state as the Automatic Stop Action, the
default virtual machine path should be set to a non-system drive, because a
.bin file is created that matches the size of memory reserved for the virtual
machine. A .vsv file may also be created in the same location as the .bin
file, adding to the disk space used for each VM. (The default path is:
C:\ProgramData\Microsoft\Windows\Hyper-V.)
⎕ If you are using iSCSI: In Windows Firewall with Advanced
Security, enable iSCSI Service (TCP-In) for Inbound and iSCSI
Service (TCP-Out) for outbound in Firewall settings on each host, to allow
iSCSI traffic to pass to and from host and SAN device. Not enabling these rules
will prevent iSCSI communication.
To set the iSCSI firewall rules via netsh, you can use the following command:
netsh advfirewall firewall set rule group="iSCSI Service" new enable=yes
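Alternatively, a minimal PowerShell sketch (assuming the NetSecurity module
built into Server 2012) that enables the same rule group and verifies the
result:
Enable-NetFirewallRule -DisplayGroup "iSCSI Service"
Get-NetFirewallRule -DisplayGroup "iSCSI Service" | ft DisplayName, Direction, Enabled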
⎕ Periodically run performance counters against the host, to ensure optimal
performance.
- Recommend using the Hyper-V performance counter that can be extracted
from the (free) Codeplex PAL application:
- Install PAL on a workstation and open it, then click on the
Threshold File tab.
- Select “Microsoft Windows Server 2012 Hyper-V” from the Threshold
file title, then choose Export to Perfmon template file. Save
the XML file to a location accessible to the Hyper-V host.
- Next, on the host, open Server Manager –> Tools –> Performance
Monitor
- In Performance Monitor, click on Data Collector Sets –> User Defined. Right
click on User Defined and choose New –> Data Collector Set. Name
the collector set “Hyper-V Performance Counter Set” and select Create
from a template (Recommended) then choose Next. On the next screen,
select Browse and then locate the XML file you exported from the PAL
application. Once done, this will show up in your User Defined Data
Collector Sets.
- Run these counters in Performance Monitor for 30 minutes to 1 hour
(during high usage times) and look for disk latency, memory and CPU issues,
etc.
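- If you prefer scripting the collection over the Performance Monitor GUI, a
minimal sketch using the built-in logman utility (the template path is
hypothetical) would be:
- logman import "Hyper-V Performance Counter Set" -xml "C:\Temp\HyperV-PAL.xml"
- logman start "Hyper-V Performance Counter Set"
- logman stop "Hyper-V Performance Counter Set" (run this after the
sampling window has elapsed)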
PHYSICAL NICs:
⎕ Ensure NICs have the latest firmware, which often address known issues
with hardware.
⎕ Ensure latest NIC drivers have been installed on the host, which resolve
known issues and/or increase performance.
⎕ NICs should not use APIPA (Automatic Private IP Addressing). APIPA is
non-routable and not registered in DNS.
⎕ VMQ should be enabled on VMQ-capable physical network adapters bound to
an external virtual switch.
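- A sketch for checking and enabling VMQ from PowerShell ("NIC1" is a
placeholder adapter name):
- Get-NetAdapterVmq (lists VMQ capability and current state per adapter)
- Enable-NetAdapterVmq -Name "NIC1"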
⎕ TCP Chimney Offload is not supported with Server 2012 software-based NIC
teaming, because TCP Chimney offloads the entire networking stack to the
NIC. If software-based NIC teaming is not used, however, you can leave it
enabled.
- TO SHOW STATUS:
- From an elevated command-prompt, type the following:
- netsh int tcp show global
- (The output should show Chimney Offload State disabled)
- TO DISABLE TCP Chimney Offload:
- From an elevated command-prompt, type the following:
- netsh int tcp set global chimney=disabled
⎕ Jumbo frames should be turned on and set to 9000 or 9014 (depending on
your hardware) for the CSV, iSCSI and Live Migration networks. This can
significantly increase throughput (up to 6x) while also reducing
CPU cycles.
- End-to-End configuration must take place – NIC, SAN, Switch
must all support Jumbo Frames.
- You can enable Jumbo frames when using crossover cables (for Live
Migration and/or Heartbeat), in a two node cluster.
- To verify Jumbo frames have been successfully configured, run the
following command from all your Hyper-V host(s) to your iSCSI SAN:
- ping 10.50.2.35 -f -l 8000
- This command will ping the SAN (e.g. 10.50.2.35) with an 8K packet
from the host. If replies are received, Jumbo frames are properly
configured.
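- Jumbo frames can also be configured from PowerShell; a sketch, assuming the
common "*JumboPacket" advanced-property keyword ("iSCSI1" is a placeholder
adapter name, and the keyword/value vary by NIC vendor):
- Get-NetAdapterAdvancedProperty -Name "iSCSI1" -RegistryKeyword "*JumboPacket"
- Set-NetAdapterAdvancedProperty -Name "iSCSI1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014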
⎕ NICs used for iSCSI communication should have all networking protocols (on
the Local Area Connection Properties) unchecked, with the exception of:
- Manufacturers protocol (if applicable)
- Internet Protocol Version 4
- Internet Protocol Version 6
- Unbinding other protocols (not listed above) helps eliminate non-iSCSI
traffic/chatter on these NICs.
⎕ NIC Teaming should not be used on iSCSI NICs; MPIO is the preferred method.
NIC teaming can be used on the Management, Production (VM traffic),
CSV/Heartbeat and Live Migration networks.
- For more information on NIC Teaming:
- For more information on MPIO:
- Microsoft Multipath I/O (MPIO) Users Guide for Windows Server 2012:
- Managing MPIO with Windows PowerShell on Windows Server 2012:
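- As a sketch, MPIO can be installed and set to automatically claim
iSCSI-attached disks with the following PowerShell (a reboot may be required):
- Install-WindowsFeature Multipath-IO
- Enable-MSDSMAutomaticClaim -BusType iSCSI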
⎕ If you are using NIC teaming for Management, CSV Heartbeat and/or Live
Migration, create the team(s) before you begin assigning
Networks.
⎕ If using aggregate (switch-dependent) NIC teaming in a guest VM, only
SR-IOV NICs should be used on guest.
⎕ If using NIC teaming inside a guest VM, follow this order:
METHOD #1:
- Open the settings of the Virtual Machine
- Under Network Adapter, select Advanced Features.
- In the right pane, under NIC Teaming, tick "Enable this
network adapter to be part of a team in the guest operating system."
- Once inside the VM, open Server Manager. In the All Servers view,
enable NIC Teaming from Server Manager.
METHOD #2:
- Use the following PowerShell command (Run as Administrator) on the
Hyper-V host where the VM currently resides:
- Set-VMNetworkAdapter -VMName contoso-vm1 -AllowTeaming On
- This PowerShell command turns on resiliency if one or more of the
teamed NICs goes offline.
- Once inside the VM, open Server Manager. In the All Servers view,
enable NIC Teaming from Server Manager.
⎕ When creating virtual switches, it is best practice to uncheck Allow management operating
system to share this network adapter, in order to create a dedicated network
for your VM(s) to communicate with other computers on the physical network. (If
the management adapter is shared, do not modify protocols on the NIC.)
Please note: we fully support and even recommend (in some cases) using
the virtual switch to separate networks for Management, Live Migration,
CSV/Heartbeat and even iSCSI; for example, two 10 GbE NICs that are split out
using VLANs and QoS.
⎕ Recommended network configuration when clustering:
Min # of Networks on Host | Host Management | VM Network Access | CSV/Heartbeat   | Live Migration   | iSCSI
5                         | "Management"    | "Production"      | "CSV/Heartbeat" | "Live Migration" | "iSCSI"
** CSV/Heartbeat & Live Migration Networks can be crossover cables
connecting the nodes, but only if you are building a two (2) node cluster.
Anything above two (2) nodes requires a switch. **
⎕ Turn off cluster communication on the iSCSI network.
- In Failover Cluster Manager, under Networks, the iSCSI network
properties should be set to “Do not allow cluster network communication on
this network.” This prevents internal cluster communications as well as CSV
traffic from flowing over the same network.
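- A sketch of the same setting in PowerShell, assuming your iSCSI cluster
network is named "iSCSI" (Role 0 = no cluster communication, 1 = cluster
only, 3 = cluster and client):
- (Get-ClusterNetwork "iSCSI").Role = 0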
⎕ Redundant network paths are strongly encouraged (multiple switches) –
especially for your Live Migration and iSCSI network – as it provides resiliency
and quality of service (QoS).
VLANS:
⎕ If aggregate NIC Teaming is enabled for Management and/or Live Migration
networks, the physical switch ports the host is connected to should be set to
trunk (promiscuous) mode. The physical switch should pass all traffic to the
host for filtering.
⎕ Turn off VLAN filters on teamed NICs. Let the teaming software or the
Hyper-V switch (if present) do the filtering.
VIRTUAL NETWORK ADAPTERS (NICs):
⎕ Legacy Network Adapters (a.k.a. emulated NIC drivers) should only be used
for PXE booting a VM or when installing non-Hyper-V-aware guest operating
systems. Hyper-V's synthetic NICs (the default NIC selection; a.k.a. synthetic
NIC drivers) are far more efficient, because they use a dedicated VMBus to
communicate between the virtual NIC and the physical NIC; as a result, CPU
cycles are reduced, and there are far fewer hypervisor/guest transitions per
operation.
Example:
The first thing you want to do is create a team out of the two NICs and
connect the team to a Hyper-V virtual switch. For instance, with PowerShell:
New-NetLbfoTeam Team1 -TeamMembers NIC1,NIC2 -TeamNicName TeamNIC1
New-VMSwitch TeamSwitch -NetAdapterName TeamNIC1 -MinimumBandwidthMode Weight -AllowManagementOS $false
Next, you want to create multiple vNICs on the parent partition, one for
each kind of traffic (two for SMB). Here's an example:
Add-VMNetworkAdapter -ManagementOS -Name SMB1 -SwitchName TeamSwitch
Add-VMNetworkAdapter -ManagementOS -Name SMB2 -SwitchName TeamSwitch
Add-VMNetworkAdapter -ManagementOS -Name Migration -SwitchName TeamSwitch
Add-VMNetworkAdapter -ManagementOS -Name Cluster -SwitchName TeamSwitch
Add-VMNetworkAdapter -ManagementOS -Name Management -SwitchName TeamSwitch
After this, you want to configure the NICs properly. This includes setting IP
addresses and creating separate subnets for each kind of traffic. You can
optionally put each one on a different VLAN.
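For instance, a sketch with placeholder addresses (management-OS vNICs appear
as "vEthernet (<name>)" adapters):
New-NetIPAddress -InterfaceAlias "vEthernet (SMB1)" -IPAddress 192.168.101.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (SMB2)" -IPAddress 192.168.102.11 -PrefixLength 24
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName SMB1 -Access -VlanId 101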
Since you now have lots of NICs and you're already in manual-configuration
territory anyway, you might want to help SMB Multichannel by pointing it to
the NICs that should be used by SMB. You can do this by configuring SMB
Multichannel constraints instead of letting SMB try all the different paths.
For instance, assuming that your Scale-Out File Server name is SOFS, you
could use:
New-SmbMultichannelConstraint -ServerName SOFS -InterfaceAlias SMB1, SMB2
Last but not least, you might also want to set QoS for each kind of traffic,
using the facilities provided by the Hyper-V virtual switch. One way to do it
is:
Set-VMNetworkAdapter -ManagementOS -Name SMB1 -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name SMB2 -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name Migration -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name Cluster -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -ManagementOS -Name Management -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -VMName * -MinimumBandwidthWeight 1
DISK:
⎕ New disks should use the VHDX format. Disks created in earlier Hyper-V
iterations should be converted to VHDX, unless there is a need to move the VHD
back to a 2008 Hyper-V host.
- The VHDX format supports virtual hard disk storage capacity of up to 64
TB, improved protection against data corruption during power failures (by
logging updates to the VHDX metadata structures), and improved alignment
of the virtual hard disk format to work well on large sector disks.
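- A sketch for converting an existing disk (paths are placeholders; the VM
must be off and the disk not in use):
- Convert-VHD -Path "D:\VMs\disk1.vhd" -DestinationPath "D:\VMs\disk1.vhdx"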
⎕ Disks should be fixed-size in a production environment, to increase disk
throughput. Differencing and dynamic disks are not recommended for production,
due to their increased disk read/write latency.
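- A sketch for creating a fixed-size VHDX from PowerShell (path and size are
placeholders):
- New-VHD -Path "D:\VMs\data1.vhdx" -SizeBytes 100GB -Fixed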
⎕ Use caution when using snapshots. If not properly managed, snapshots can
cause disk space issues, as well as additional physical I/O overhead.
Additionally, if you are hosting 2008 R2 (or earlier) Domain Controllers,
reverting to an earlier snapshot can cause USN rollbacks. Windows Server 2012
has been updated to help better protect Domain Controllers from USN
rollbacks; however, you should still limit usage.
⎕ The recommended minimum free space on CSV volumes containing Hyper-V
virtual machine VHD and/or VHDX files:
- 15% free space, if the partition size is less than 1TB
- 10% free space, if the partition size is between 1TB and 5TB
- 5% free space, if the partition size is greater than 5TB
- To enumerate current volume information, including the percentage free,
you can use the following PowerShell command:
- Get-ClusterSharedVolume "Cluster Disk 1" | fc *
- Review the “PercentageFree” output
⎕ Creating a storage pool using Fibre Channel or iSCSI LUNs is not
supported.
- For more information see:
⎕ The page file on a Hyper-V host should be managed by the OS and not
configured manually.
MEMORY:
⎕ Use Dynamic Memory on all VMs (unless not supported).
- Dynamic Memory adjusts the amount of memory available to a virtual
machine, based on changes in memory demand using a memory balloon driver,
which helps use memory resources more efficiently.
- For more information:
⎕ Guest operating systems should be configured with the recommended (minimum) memory:
- 2048MB is recommended for Windows
Server 2012 (e.g. 2048 – 4096 Dynamic Memory). (The minimum
supported is 512 MB)
- 2048MB is recommended for Windows
Server 2008, including R2 (e.g. 2048 – 4096 Dynamic Memory). (The
minimum supported is 512 MB)
- 1024MB is recommended for Windows 7 (e.g. 1024 – 2048 Dynamic Memory).
(The minimum supported is 512 MB)
- 1024MB is recommended for Windows Vista (e.g. 1024 – 2048 Dynamic
Memory). (The minimum supported is 512 MB)
- 512MB is recommended for Windows Server 2003 R2 w/SP2 (e.g. 256 – 2048
Dynamic Memory). (The minimum supported is 128 MB.)
- 512MB is recommended for Windows Server 2003 w/SP2 (e.g. 256 – 2048
Dynamic Memory). (The minimum supported is 128 MB).
- 512MB is recommended for Windows XP. Important: XP does not support
Dynamic Memory. (The minimum supported is 64 MB). Note:
Support for Windows XP Ends April 2014!
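- A sketch for configuring Dynamic Memory on a Server 2012 guest (the VM
name and sizes are placeholders within the recommended range):
- Set-VMMemory -VMName "contoso-vm1" -DynamicMemoryEnabled $true -StartupBytes 2048MB -MinimumBytes 1024MB -MaximumBytes 4096MB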
CLUSTER:
⎕ Set preferred network for CSV communication, to ensure the correct
network is used for this traffic. (Note: This will only need to be run on one of
your Hyper-V nodes.)
- The lowest metric in the output generated by the following PowerShell
command will be used for CSV traffic
- Open a PowerShell command-prompt (using “Run as administrator”)
- First, you’ll need to import the “FailoverClusters” module. Type the
following at the PS command-prompt:
- Import-Module FailoverClusters
- Next, we’ll request a listing of networks used by the host, as well as
the metric assigned. Type the following:
- Get-ClusterNetwork | ft Name, Metric, AutoMetric, Role
In order to change which network interface is used for CSV traffic,
use the following PowerShell command:
- (Get-ClusterNetwork "CSV Network").Metric=900
- This will set the metric of the network named "CSV Network" to 900
⎕ Set preferred network for Live Migration, to ensure the correct
network(s) are used for this traffic:
- Open Failover Cluster Manager, Expand the Cluster
- Next, right click on Networks and select Live Migration Settings
- Use the Up / Down buttons to list the networks in order from most
preferred (at the top) to least preferred (at the bottom)
- Uncheck any networks you do not want used for Live Migration traffic
- Select Apply and then press OK
- Once you have made this change, it will be used for all VMs in the
cluster
⎕ The Cluster Shutdown Time (ShutdownTimeoutInMinutes registry entry)
should be set to an acceptable number
- Default is set using the following calculation (which can be too high,
depending on how much physical memory is installed)
- (100 / 64) * physical RAM
- For example, a 96GB system would have a 150-minute timeout: (100/64)*96
= 150
- Suggest setting the timeout to 15, 20 or 30 minutes, depending on the
number of VMs in your environment.
- Registry Key: HKLM\Cluster\ShutdownTimeoutInMinutes
- Enter minutes in Decimal value.
- Note: Requires a reboot to take effect
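- A sketch of the same change via PowerShell, run on a cluster node (the
reboot is still required):
- Set-ItemProperty -Path "HKLM:\Cluster" -Name ShutdownTimeoutInMinutes -Value 30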
⎕ Run the Cluster Validation periodically to remediate any issues
- NOTE: If all LUNs are part of the cluster, the validation test will
skip all disk checks. It is recommended to set up a small test-only LUN and
share it on all nodes, so full validation testing can be completed.
- If you need to test a LUN running virtual machines, the LUN will need
to be taken offline.
- For more information: http://technet.microsoft.com/en-us/library/cc732035(WS.10).aspx#BKMK_how_to_run
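- A sketch for running validation from PowerShell (node names are
placeholders):
- Test-Cluster -Node "HV01","HV02"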
⎕ Consider enabling CSV Cache if you have VMs that are used primarily for
read requests and are less write-intensive, such as pooled VDI VMs; it can
also be leveraged to reduce VM boot storms.
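- A sketch for enabling CSV Cache on Server 2012 (the 512 MB size and CSV
name are placeholders):
- (Get-Cluster).SharedVolumeBlockCacheSizeInMB = 512 (reserves host RAM for the cache, cluster-wide)
- Get-ClusterSharedVolume "Cluster Disk 1" | Set-ClusterParameter CsvEnableBlockCache 1 (enables the cache on that CSV)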
HYPER-V REPLICA:
⎕ If utilizing Hyper-V Replica, update inbound traffic rules on the firewall
to allow TCP port 80 and/or port 443 traffic. (In Windows Firewall, enable the
"Hyper-V Replica HTTP Listener (TCP-In)" rule on each node of the cluster.)
To enable HTTP (port 80) replica traffic, you can run the following from an
elevated command-prompt:
netsh advfirewall firewall set rule group="Hyper-V Replica HTTP" new enable=yes
To enable HTTPS (port 443) replica traffic, you can run the following from
an elevated command-prompt:
netsh advfirewall firewall set rule group="Hyper-V Replica HTTPS" new enable=yes
⎕ Compression is recommended for replication traffic, to reduce bandwidth
requirements.
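A sketch for enabling compression when configuring replication from
PowerShell (server and VM names are placeholders):
Enable-VMReplication -VMName "contoso-vm1" -ReplicaServerName "replica01.contoso.com" -ReplicaServerPort 80 -AuthenticationType Kerberos -CompressionEnabled $true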
⎕ Configure guest operating systems for VSS-based backups to enable
application-consistent snapshots for Hyper-V Replica.
⎕ Integration services must be installed before primary or Replica virtual
machines can use an alternate IP address after a failover.
⎕ Virtual hard disks with paging files should be excluded from replication,
unless the page file is on the OS disk.
⎕ Test failovers should be performed monthly, at a minimum, to verify that
failover will succeed and that virtual machine workloads will operate as
expected after failover.
⎕ Hyper-V Replica requires the Failover Clustering Hyper-V Replica Broker
role to be configured if either the primary or the replica server is part of
a cluster.
⎕ Feature and performance optimization of Hyper-V Replica can be further
tuned by using the registry keys mentioned in the article below:
CLUSTER-AWARE UPDATING:
⎕ Place all Cluster-Aware Updating (CAU) Run Profiles on a single file
share accessible to all potential CAU Update Coordinators. (Run Profiles are
configuration settings that can be saved as an XML file, called an Updating
Run Profile, and reused for later Updating Runs.)
http://technet.microsoft.com/en-us/library/jj134224.aspx
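A minimal sketch for starting an on-demand Updating Run from PowerShell (the
cluster name is a placeholder):
Invoke-CauRun -ClusterName "HVCluster" -CauPluginName Microsoft.WindowsUpdatePlugin -Force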
SMB 3.0 FILE SHARES:
⎕ An Active Directory infrastructure is required, so you can grant
permissions to the computer account of the Hyper-V hosts.
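- A sketch for creating such a share and granting the hosts' computer
accounts full control (share name, path and accounts are placeholders):
- New-SmbShare -Name VMStore -Path D:\Shares\VMStore -FullAccess "CONTOSO\HV01$","CONTOSO\HV02$"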
⎕ Loopback configurations (where the computer that is running Hyper-V is
used as the file server for virtual machine storage) are not supported.
Similarly, running the file share in VMs that are hosted on compute nodes
that will serve other VMs is not supported.
VIRTUAL DOMAIN CONTROLLERS (DCs):
⎕ Domain Controller VMs should have "Shut down the guest operating system"
applied as the Automatic Stop Action setting (in the virtual machine settings
on the Hyper-V host).
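- A sketch of the same setting via PowerShell (the VM name is a placeholder):
- Set-VM -Name "DC01" -AutomaticStopAction ShutDown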
INTEGRATION SERVICES:
⎕ Ensure Integration Services (IS) have been installed on all VMs.
Integration Services significantly improve interaction between the VM and the
physical host.
⎕ Be certain you are running the latest version of integration services –
the same version as the host(s) – in all guest operating systems, as some
Microsoft updates make changes/improvements to the Integration Services
software. (When a new Integration Services version is updated on the host(s) it
does not automatically update the guest operating systems.)
- Note: If Integration Services are out of date, you will see 4010
events logged in the event viewer.
- You can discover the version for each of your VMs on a host by running
the following PowerShell command:
- Get-VM | ft Name, IntegrationServicesVersion
- If you’d like a PowerShell method to update Integration Services on
VMs, check out this blog: http://gallery.technet.microsoft.com/scriptcenter/Automated-Install-of-Hyper-edc278ef
OFFLOADED DATA TRANSFER (ODX) Usage:
⎕ If your SAN supports ODX (see this post for help; also check with your
hardware vendor), you should strongly consider enabling ODX on your Hyper-V
hosts, as well as on any VMs that connect directly to SAN storage LUNs.
- To enable ODX, open PowerShell (using Run as Administrator) and type
the following:
- Set-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name "FilterSupportedFeaturesMode" -Value 0
- Be sure to run this command on every Hyper-V host that connects to the
SAN, as well as any VM that connects directly to the SAN.
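- To verify the current state afterwards (0 = ODX enabled, 1 = disabled):
- Get-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name "FilterSupportedFeaturesMode"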