Windows Server 2012 #HyperV – #SCVMM Design – Best Practices #WindowsAzure #Winserv
With Hybrid Cloud functionality you get the benefits of a local private Cloud
and the public Cloud combined, along with the business benefits that come with each.
Here you can find more information on Windows Azure.
To begin with private Cloud computing on-premises, you start with an
architecture and design that match the business requirements.
When the design of Microsoft Hyper-V clustering, SQL, System Center 2012
Virtual Machine Manager, the VPN gateway to Windows Azure, and ADFS is done, don't forget
disaster recovery with System Center 2012 Data Protection Manager and Windows Azure Backup for your private Cloud solution.
Before you start installing Windows Server 2012, you should check
these best practices:
Disclaimer: As with all best practices, not every recommendation can
– or should – be applied. Best practices are general guidelines, not hard-and-fast
rules that must be followed. As such, you should carefully review each item to
determine whether it makes sense in your environment. If implementing one (or more)
of these best practices seems sensible, great; if it doesn't, simply ignore it.
In other words, it's up to you to decide whether to apply them in your setting.
GENERAL (HOST):
⎕ Use Server Core, if possible, to reduce OS overhead, reduce potential
attack surface, and to minimize reboots (due to fewer software updates).
⎕ Ensure hosts are up-to-date with recommended Microsoft updates, to ensure
critical patches and updates – addressing security concerns or fixes to the core
OS – are applied.
⎕ Ensure all applicable Hyper-V
hotfixes and Cluster hotfixes (if applicable) have been applied. Review the
following sites and compare them to your environment, since not all hotfixes will
be applicable:
· Update List for Windows Server 2012 Hyper-V: http://social.technet.microsoft.com/wiki/contents/articles/15576.hyper-v-update-list-for-windows-server-2012.aspx
· List of Failover Cluster Hotfixes: http://social.technet.microsoft.com/wiki/contents/articles/15577.list-of-failover-cluster-hotfixes-for-windows-server-2012.aspx
· Failover Cluster Management snap-in crashes after you install update
2750149 on a Windows Server 2012-based failover cluster:
http://support.microsoft.com/kb/2803748
⎕ Ensure hosts have the latest BIOS version, and that other hardware
devices (such as Fibre Channel HBAs, NICs, etc.) have the latest firmware, to
address any known issues and supportability concerns.
⎕ Host should be domain joined, unless security standards dictate
otherwise. Doing so makes it possible to centralize the management of policies
for identity, security, and auditing. Additionally, hosts must be domain joined
before you can create a Hyper-V High-Availability Cluster.
· For more information: http://technet.microsoft.com/en-us/library/ee941123(v=WS.10).aspx
⎕ RDP Printer Mapping should be disabled on hosts, to remove any chance of
a printer driver causing instability issues on the host machine.
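One way to disable RDP printer mapping is the Terminal Services policy registry value; a minimal sketch (verify the setting against your own GPO standards before applying it):

```powershell
# Disable RDP printer mapping via the Terminal Services policy key
# (equivalent to the "Do not allow client printer redirection" policy).
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
Set-ItemProperty -Path $key -Name fDisableCpm -Value 1 -Type DWord
```

In a domain environment, the equivalent Group Policy setting is usually preferable to editing the registry directly.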
⎕ Do not install any other Roles on a host besides the Hyper-V role and the
Remote Desktop Services roles (if VDI will be used on the host).
⎕ The only Features that should be installed on the host are: Failover
Clustering (if the host will become part of a cluster), Multipath
I/O (if the host will be connecting to an iSCSI SAN, Spaces and/or Fibre
Channel), or Remote Desktop Services if VDI is being used. (See the
explanation above for reasons why installing additional features is not
recommended.)
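A minimal host build along these lines might look as follows (feature names as in Windows Server 2012; install only what applies to your host):

```powershell
# Install only the Hyper-V role plus the features this host actually needs.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools  # cluster nodes only
Install-WindowsFeature -Name Multipath-IO                                 # iSCSI/FC SAN hosts only
```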
⎕ Anti-virus software should exclude Hyper-V specific files, per the
"Antivirus Exclusions for Hyper-V Hosts" article.
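If your anti-virus product is manageable via PowerShell, the exclusions can be scripted. Windows Defender's cmdlets are shown purely as an illustration (your AV product will differ), and the path and extensions below are typical Hyper-V defaults, not the article's exhaustive list:

```powershell
# Illustrative exclusions only - follow the Microsoft article for the full list.
Add-MpPreference -ExclusionPath "C:\ProgramData\Microsoft\Windows\Hyper-V"
Add-MpPreference -ExclusionExtension "vhd","vhdx","avhd","avhdx","vsv","bin"
Add-MpPreference -ExclusionProcess "vmms.exe","vmwp.exe"
```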
⎕ The default path for virtual hard disks (VHD/VHDX) should be set to a
non-system drive, since placing them on the system drive can cause disk latency
as well as create the potential for the host to run out of disk space.
⎕ If you choose to save the VM state as the Automatic Stop Action, the
default virtual machine path should be set to a non-system drive, because a
.bin file is created that matches the size of the memory reserved for the
virtual machine. A .vsv file may also be created in the same location as
the .bin file, adding to the disk space used by each VM. (The default path is:
C:\ProgramData\Microsoft\Windows\Hyper-V.)
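Both default paths can be moved in one step; a sketch, where the D: drive and folder names are assumptions for your environment:

```powershell
# Point default VM configuration and VHD storage at a non-system drive.
Set-VMHost -VirtualHardDiskPath "D:\Hyper-V\Virtual Hard Disks" `
           -VirtualMachinePath  "D:\Hyper-V"
```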
⎕ If you are using iSCSI: In Windows Firewall with Advanced
Security, enable iSCSI Service (TCP-In) for Inbound and iSCSI
Service (TCP-Out) for outbound in Firewall settings on each host, to allow
iSCSI traffic to pass to and from host and SAN device. Not enabling these rules
will prevent iSCSI communication.
To set the iSCSI firewall rules via netsh, you can use the following
command:
netsh advfirewall firewall set rule group="iSCSI Service" new enable=yes
⎕ Periodically run performance counters against the host, to ensure optimal
performance.
GENERAL (VMs):
⎕ Ensure you are running only supported guests in your environment. For a
complete listing, refer to the following list: http://blogs.technet.com/b/schadinio/archive/2012/06/26/windows-server-2012-hyper-v-list-of-supported-client-os.aspx
PHYSICAL NICs:
⎕ Ensure NICs have the latest firmware, which often address known issues
with hardware.
⎕ Ensure latest NIC drivers have been installed on the host, which resolve
known issues and/or increase performance.
⎕ NICs should not use APIPA (Automatic Private IP Addressing). APIPA is
non-routable and not registered in DNS.
⎕ VMQ should be enabled on VMQ-capable physical network adapters bound to
an external virtual switch.
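VMQ state can be checked and enabled from PowerShell; a sketch, where the adapter name is a placeholder:

```powershell
# List adapters and whether VMQ is capable/enabled, then enable it
# on the adapter bound to the external virtual switch.
Get-NetAdapterVmq
Enable-NetAdapterVmq -Name "NIC1"   # "NIC1" is a placeholder adapter name
```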
⎕ TCP Chimney Offload is not supported with Server 2012 software-based NIC
teaming, because TCP Chimney offloads the entire networking stack to the
NIC. If software-based NIC teaming is not used, however, you can leave it
enabled.
⎕ Jumbo frames should be turned on and set to 9000 or 9014 (depending on
your hardware) for the CSV, iSCSI and Live Migration networks. This can
significantly increase throughput (up to 6x) while also reducing
CPU cycles.
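A sketch of setting and verifying jumbo frames (adapter name and target IP are placeholders; the exact value, 9000 vs. 9014, is driver-specific):

```powershell
# "*JumboPacket" is the standard advanced-property keyword for most drivers.
Set-NetAdapterAdvancedProperty -Name "iSCSI1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
# Verify end-to-end (8972 = 9000 minus IP/ICMP headers); -f sets Don't Fragment.
ping 10.0.0.2 -f -l 8972
```

If the ping fails with "Packet needs to be fragmented", a device in the path (NIC or switch) is not passing jumbo frames.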
⎕ NICs used for iSCSI communication should have all networking protocols (on
the Local Area Connection Properties) unchecked, with the exception of:
⎕ NIC Teaming should not be used on iSCSI NICs; MPIO is the best method.
NIC teaming can be used on the Management, Production (VM traffic), CSV
Heartbeat and Live Migration networks.
⎕ If you are using NIC teaming for Management, CSV Heartbeat and/or Live
Migration, create the team(s) before you begin assigning
Networks.
⎕ If using aggregate (switch-dependent) NIC teaming in a guest VM, only
SR-IOV NICs should be used on guest.
⎕ If using NIC teaming inside a guest VM, follow this order:
METHOD #1:
METHOD #2:
⎕ When creating virtual switches, it is best practice to uncheck Allow management operating
system to share this network adapter, in order to create a dedicated network
for your VM(s) to communicate with other computers on the physical network. (If
the management adapter is shared, do not modify protocols on the NIC.)
Please note: we fully support and even recommend (in some cases) using
the virtual switch to separate networks for Management, Live Migration,
CSV/Heartbeat and even iSCSI. For example, two 10 GbE NICs that are split out
using VLANs and QoS.
⎕ Recommended network configuration when clustering:
** CSV/Heartbeat & Live Migration Networks can be crossover cables
connecting the nodes, but only if you are building a two (2) node cluster.
Anything above two (2) nodes requires a switch. **
⎕ Turn off cluster communication on the iSCSI network.
⎕ Redundant network paths are strongly encouraged (multiple switches) –
especially for your Live Migration and iSCSI network – as it provides resiliency
and quality of service (QoS).
VLANS:
⎕ If aggregate NIC Teaming is enabled for Management and/or Live Migration
networks, the physical switch ports the host is connected to should be set to
trunk (promiscuous) mode. The physical switch should pass all traffic to the
host for filtering.
⎕ Turn off VLAN filters on teamed NICs. Let the teaming software or the
Hyper-V switch (if present) do the filtering.
VIRTUAL NETWORK ADAPTERS (NICs):
⎕ Legacy Network Adapters (a.k.a. Emulated NIC drivers) should only be used
for PXE booting a VM or when installing non-Hyper-V aware Guest operating
systems. Hyper-V’s synthetic NICs (the default NIC selection; a.k.a. Synthetic
NIC drivers) are far more efficient, due to using a dedicated VMBus to
communicate between the virtual NIC and the physical NIC; as a result, there are
reduced CPU cycles, as well as much lower hypervisor/guest transitions per
operation.
Example :
The first thing you want to do is create a team out of the two NICs and
connect the team to a Hyper-V virtual switch. For instance, with PowerShell:
New-NetLbfoTeam Team1 -TeamMembers NIC1, NIC2 -TeamNicName TeamNIC1
New-VMSwitch TeamSwitch -NetAdapterName TeamNIC1 -MinimumBandwidthMode Weight -AllowManagementOS $false
Next, you want to create multiple vNICs on the parent partition, one for
each kind of traffic (two for SMB). Here’s an example:
Add-VMNetworkAdapter -ManagementOS -Name SMB1 -SwitchName TeamSwitch
Add-VMNetworkAdapter -ManagementOS -Name SMB2 -SwitchName TeamSwitch
Add-VMNetworkAdapter -ManagementOS -Name Migration -SwitchName TeamSwitch
Add-VMNetworkAdapter -ManagementOS -Name Cluster -SwitchName TeamSwitch
Add-VMNetworkAdapter -ManagementOS -Name Management -SwitchName TeamSwitch
After this, you want to configure the NICs properly. This includes
setting IP addresses and creating separate subnets for each kind of traffic. You
can optionally put each on a different VLAN.
Since you have lots of NICs now and you're already in manual-configuration
territory anyway, you might want to help SMB Multichannel by pointing it to
the NICs that should be used by SMB. You can do this by configuring SMB
Multichannel constraints instead of letting SMB try all the different paths. For
instance, assuming that your Scale-Out File Server name is SOFS, you could
use:
New-SmbMultichannelConstraint -ServerName SOFS -InterfaceAlias SMB1, SMB2
Last but not least, you might also want to set QoS for each kind of traffic,
using the facilities provided by the Hyper-V virtual switch. One way to do it
is:
Set-VMNetworkAdapter -ManagementOS -Name SMB1 -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name SMB2 -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name Migration -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name Cluster -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -ManagementOS -Name Management -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -VMName * -MinimumBandwidthWeight 1
There is a great TechNet page with details on this and other network
configurations at http://technet.microsoft.com/en-us/library/jj735302.aspx
DISK:
⎕ New disks should use the VHDX format. Disks created in earlier Hyper-V
iterations should be converted to VHDX, unless there is a need to move the VHD
back to a 2008 Hyper-V host.
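The conversion can be done with Convert-VHD; a sketch, where the VM name, paths and controller location are assumptions for your environment:

```powershell
# The VM must be off; Convert-VHD writes a new file, so check free space first.
Convert-VHD -Path "D:\VMs\web01.vhd" -DestinationPath "D:\VMs\web01.vhdx"
# Repoint the VM's disk at the new file; delete the old VHD once verified.
Set-VMHardDiskDrive -VMName web01 -ControllerType IDE -ControllerNumber 0 `
    -ControllerLocation 0 -Path "D:\VMs\web01.vhdx"
```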
⎕ Disks should be fixed-size in a production environment, to increase disk
throughput. Differencing and dynamic disks are not recommended for production,
due to their increased disk read/write latency times.
⎕ Use caution when using snapshots. If not properly managed, snapshots can
cause disk space issues, as well as additional physical I/O overhead.
Additionally, if you are hosting 2008 R2 (or earlier) Domain Controllers,
reverting to an earlier snapshot can cause USN
rollbacks. Windows Server 2012 has been updated to help better protect
Domain Controllers from USN rollbacks; however, you should still limit
usage.
⎕ The recommended minimum free space on CSV volumes containing Hyper-V
virtual machine VHD and/or VHDX files:
⎕ It is not supported to create a storage pool using Fibre Channel or iSCSI
LUNs.
⎕ The page file on a Hyper-V host should be managed by the OS and not
configured manually.
MEMORY:
⎕ Use Dynamic Memory on all VMs (unless not supported).
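Dynamic Memory can be configured per VM; a sketch, where the VM name and memory values are placeholders to size for your workloads:

```powershell
# The VM must be off to change its memory configuration.
Set-VMMemory -VMName "web01" -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB
```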
⎕ Guest OSes should be configured with (at minimum) the recommended amount of memory.
CLUSTER:
⎕ Set preferred network for CSV communication, to ensure the correct
network is used for this traffic. (Note: This will only need to be run on one of
your Hyper-V nodes.)
In order to change which network interface is used for CSV traffic,
use the following PowerShell command:
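One common approach, shown here purely as an illustration with a placeholder network name, is to give the intended CSV network the lowest cluster metric, since CSV traffic prefers the lowest-metric cluster network:

```powershell
# Inspect current metrics, then pin the CSV network below the others.
Get-ClusterNetwork | Format-Table Name, Metric, AutoMetric
(Get-ClusterNetwork "CSV Network").Metric = 900
```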
⎕ Set the preferred network(s) for Live Migration, to ensure the correct
networks are used for this traffic.
⎕ The Cluster Shutdown Time (ShutdownTimeoutInMinutes registry entry)
should be set to an acceptable number.
⎕ Run Cluster Validation periodically to remediate any issues.
⎕ Consider enabling CSV Cache if you have VMs that are used primarily for
read requests and are less write-intensive, such as pooled VDI VMs; it can
also be leveraged to reduce VM boot storms.
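One way this was exposed in Windows Server 2012 (the property names changed in later releases, and the cache size and CSV name below are assumptions):

```powershell
# Reserve 512 MB of host RAM for the CSV read cache (WS2012 property name).
(Get-Cluster).SharedVolumeBlockCacheSizeInMB = 512
# Then enable the cache per CSV:
Get-ClusterSharedVolume "Cluster Disk 1" | Set-ClusterParameter CsvEnableBlockCache 1
```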
HYPER-V REPLICA:
⎕ If utilizing Hyper-V Replica, update inbound traffic on the firewall to
allow TCP port 80 and/or port 443 traffic. (In Windows Firewall, enable the
"Hyper-V Replica HTTP Listener (TCP-In)" rule on each node of the cluster.)
To enable HTTP (port 80) replica traffic, you can run the following from an
elevated command-prompt:
netsh advfirewall firewall set rule group="Hyper-V Replica HTTP" new enable=yes
To enable HTTPS (port 443) replica traffic, you can run the following from
an elevated command-prompt:
netsh advfirewall firewall set rule group="Hyper-V Replica HTTPS" new enable=yes
⎕ Compression is recommended for replication traffic, to reduce bandwidth
requirements.
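Compression can be toggled per replicated VM; a sketch, where the VM name is a placeholder:

```powershell
# Compress replication traffic over the wire to reduce bandwidth usage.
Set-VMReplication -VMName "web01" -CompressionEnabled $true
```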
⎕ Configure guest operating systems for VSS-based backups to enable
application-consistent snapshots for Hyper-V Replica.
⎕ Integration services must be installed before primary or Replica virtual
machines can use an alternate IP address after a failover.
⎕ Virtual hard disks with paging files should be excluded from replication,
unless the page file is on the OS disk.
⎕ Test failovers should be performed monthly, at a minimum, to verify that
failover will succeed and that virtual machine workloads will operate as
expected after failover.
⎕ Hyper-V Replica requires that the Failover Clustering Hyper-V Replica Broker
role be configured if either the primary or the replica server is part of a
cluster.
⎕ Feature and performance optimization of Hyper-V Replica can be further
tuned by using the registry keys mentioned in the article below:
CLUSTER-AWARE UPDATING:
⎕ Place all Cluster-Aware Updating (CAU) Run Profiles on a single file
share accessible to all potential CAU Update Coordinators. (Run Profiles are
configuration settings that can be saved as an XML file, called an Updating Run
Profile, and reused for later Updating Runs: http://technet.microsoft.com/en-us/library/jj134224.aspx)
SMB 3.0 FILE SHARES:
⎕ An Active Directory infrastructure is required, so you can grant
permissions to the computer account of the Hyper-V hosts.
⎕ Loopback configurations (where the computer that is running Hyper-V is
used as the file server for virtual machine storage) are not supported.
Similarly, running the file share in VMs that are hosted on compute nodes that
will serve other VMs is not supported.
VIRTUAL DOMAIN CONTROLLERS (DCs):
⎕ Domain Controller VMs should have "Shut down the guest operating system"
applied as the Automatic Stop Action setting (in the virtual machine settings
on the Hyper-V host).
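This can be set from PowerShell; a sketch, where the VM name is a placeholder:

```powershell
# "ShutDown" performs a clean guest shutdown instead of saving state,
# which avoids resuming a DC from a stale saved state.
Set-VM -Name "DC01" -AutomaticStopAction ShutDown
```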
INTEGRATION SERVICES:
⎕ Ensure Integration Services (IS) have been installed on all VMs. IS
significantly improves interaction between the VM and the physical host.
⎕ Be certain you are running the latest version of Integration Services –
the same version as the host(s) – in all guest operating systems, as some
Microsoft updates make changes/improvements to the Integration Services
software. (When a new Integration Services version is installed on the host(s),
it does not automatically update the guest operating systems.)
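A quick way to spot out-of-date guests, as a sketch:

```powershell
# Compare each guest's reported IS version against the host's current version.
Get-VM | Select-Object Name, IntegrationServicesVersion
```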
OFFLOADED DATA TRANSFER (ODX) Usage:
⎕ If your SAN supports ODX (see
this post for help; also check with your hardware vendor), you should
strongly consider enabling ODX on your Hyper-V hosts, as well as any VMs that
connect directly to SAN storage LUNs.
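ODX is controlled by a file-system registry value; a sketch to check and enable it (0 enables ODX, 1 disables it):

```powershell
$fsKey = "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem"
# Check the current state before changing anything.
Get-ItemProperty $fsKey -Name FilterSupportedFeaturesMode
# 0 = ODX enabled (the default); set explicitly if it was disabled.
Set-ItemProperty $fsKey -Name FilterSupportedFeaturesMode -Value 0
```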
Wednesday, 28 May 2014