Wednesday 2 July 2014

Data Warehouse Jobs fails when upgrading Service Manager 2012 from SP1 to R2


I recently performed upgrades of a Service Manager 2012 SP1 environment at a customer and our own environment, from SP1 to Service Manager 2012 R2.
While following the pre-upgrade and upgrade steps as specified in http://technet.microsoft.com/en-us/library/dn520902.aspx, including disabling the Data Warehouse Jobs, both upgrades were successful but when enabling the Data Warehouse Jobs some of the jobs and job modules started failing.
The jobs that were failing in both environments were:
  • Transform.Common
  • Load.Common
  • Load.OMDWDataMart
  • Load.CMDWDataMart
Upon examining the jobs more closely, I found that not all job modules failed, only a subset of them. For example:

At the Data Warehouse server I would find a lot of these events in the Operations Manager log:
Error Event ID 33502, Source Data Warehouse:
ETL Module Execution failed:

ETL process type: Load

Batch ID: 136704

Module name: LoadCMDWDataMartPowerActivityDayFact

Message: UNION ALL view ‘CMDWDataMart.dbo.PowerActivityDayFactvw’ is not updatable because a primary key was not found on table ‘[CMDWDataMart].[dbo].[PowerActivityDayFact_2013_Jun]‘.

..and..

Warning Event ID 33503, Source Data Warehouse:
An error countered while attempting to execute ETL Module:

ETL process type: Load

Batch ID: 136704

Module name: LoadCMDWDataMartPowerActivityDayFact

Message: UNION ALL view ‘CMDWDataMart.dbo.PowerActivityDayFactvw’ is not updatable because a primary key was not found on table ‘[CMDWDataMart].[dbo].[PowerActivityDayFact_2013_Jun]‘.
Each of the transform and load jobs would generate these error messages.
I started examining the Data Warehouse SQL databases and found that the error messages were correct: the primary key constraint really was missing on the tables that the error messages referred to.
So what to do?

Well, luckily I know my way around SQL Server and T-SQL commands. I found that not all Fact tables were missing the primary key (PK). For example, the primary key constraint was missing from dbo.PowerActivityDayFact_2013_Jun, but it was in place for dbo.PowerActivityDayFact_2013_Jul (and the other months for the fact).
So all I needed to do was script the PK from a table where it existed, update the table name and PK name, and run the T-SQL command to create the missing primary key.
A little more information step-by-step:
  1. First of all, I disabled all the Data Warehouse Jobs.
  2. After that, I started by resuming the Transform.Common job.
  3. I examined the event log and found the tables that were missing the primary key.
  4. I scripted the primary key from a table where it was present, changed the table name and PK name, and ran the script on the database to create it on the table where it was missing. The database for the tables updated via the Transform.Common job is DWRepository.
  5. In my environment there were only two tables missing a primary key in the DWRepository database.
  6. I ran the Transform.Common job again, this time successfully.
I repeated this process for each of the other jobs. These jobs also used different databases, and had different numbers of tables where the primary key was missing:
  1. Load.Common
    1. Database DWDataMart, 59 tables with the primary key missing (phew!)
  2. Load.OMDWDataMart
    1. Database OMDWDataMart, 7 tables with primary key missing
  3. Load.CMDWDataMart
    1. Database CMDWDataMart, 9 tables with primary key missing
So it took a while to read through the event logs and find all the tables, but in the end every job was able to run successfully and I could enable the job schedules again.
How do you script the primary key and create the missing one on the table?

I recommend that you really know your way around SQL Server to do these things, and most importantly: Do a full backup of the affected databases first!
This is the process I used to script and create primary keys, each step repeated for each table:
  1. For example, dbo.EntityManagedTypeFact_2013_Jun was missing the primary key, but it was present on the next month's table, dbo.EntityManagedTypeFact_2013_Jul.
  2. In SQL Server Management Studio, expand the database and the table where the key exists. Expand Keys, right-click the key, and select Script Key as, CREATE To and New Query Editor Window.
  3. The generated CREATE script is then shown in a new query editor window.
  4. Since the primary key was missing on the ..Fact_2013_Jun table, I updated the script so that Jun replaced Jul in the table and constraint names.
  5. And then I executed the script to create the missing primary key.
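To give an idea of what the final fix looks like, here is a rough sketch of that kind of script, wrapped in Invoke-Sqlcmd so it can be run from PowerShell. The server name, database name, constraint name and column list below are placeholders; always script the real definition from the month where the key still exists, and take that full backup first.

# Hypothetical example: the CREATE script taken from the ..._2013_Jul table, renamed for ..._2013_Jun.
# Copy the real column list and constraint name from the existing key - the ones below are placeholders.
$createPk = @"
ALTER TABLE [dbo].[EntityManagedTypeFact_2013_Jun]
ADD CONSTRAINT [PK_EntityManagedTypeFact_2013_Jun] PRIMARY KEY CLUSTERED
(
    [DateKey] ASC,
    [EntityManagedTypeKey] ASC
)
"@
Invoke-Sqlcmd -ServerInstance 'DWSQLSERVER' -Database 'DWRepository' -Query $createPk

If the SQL Server PowerShell module (Invoke-Sqlcmd) is not available, simply paste the ALTER TABLE statement into a Management Studio query window instead.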
What about the other environment?

I found that basically the same tables which missed the primary key in the first environment, also missed the primary key in the second environment.
The only difference was that in the first environment, it was always “Jun” tables that were missing the PK. And in the second environment it was “Jan” (and a few “Feb”), but exactly the same tables in the same databases! In fact, I collected all script commands in one main script for each database from the first environment, and after a quick find and replace of month I was able to run the exact same script in the second environment.
One other thing I noted was that these fact table months were also the oldest ones in the respective databases (Jun in one environment, Jan/Feb in the other).
Why does this happen?

I don’t really know. I would like to think I followed the upgrade steps methodically. I will at a later time upgrade other environments from SP1 to R2, and will update this blog if I learn more.
The strange thing is I have experienced something similar to this at another time when upgrading another environment (not these) from 2012 RTM to SP1. But that time the problem was that the MP sync job failed because of already existing primary keys. The solution at that time was to DELETE primary keys, which then would be recreated automatically with the MP sync job.

Wednesday 28 May 2014

Rebuilding the WMI Repository


If you experience odd behavior when using WMI, such as application errors or scripts that used to work no longer working, you may have a corrupted WMI repository. To fix a corrupted WMI repository, use these steps:

Windows XP and Windows Vista

Click Start, Run and type CMD.EXE
Note: In Windows Vista, you need to open an elevated Command Prompt window. To do so, click Start, click All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator.
Type this command and press Enter:
net stop winmgmt
Using Windows Explorer, rename the folder %windir%\System32\Wbem\Repository. (For example, %windir%\System32\Wbem\Repository_bad). %windir% represents the path to the Windows directory, which is typically C:\Windows.
Switch to Command Prompt window, and type the following and press ENTER after each line:
net start winmgmt
EXIT
Courtesy: The above is excerpted from the Microsoft TechNet article WMI Isn't Working!
© 2007 Microsoft Corporation. All rights reserved.

For Windows XP Service Pack 2

Click Start, Run and type the following command:
rundll32 wbemupgd, UpgradeRepository
This command is used to detect and repair a corrupted WMI Repository. The results are stored in the setup.log (%windir%\system32\wbem\logs\setup.log) file.

For Windows Vista

Open an elevated Command Prompt window. To do so, click Start, click All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator.
Type the following command:
winmgmt /salvagerepository
The above command performs a consistency check on the WMI repository, and if an inconsistency is detected, rebuilds the repository. The content of the inconsistent repository is merged into the rebuilt repository, if it can be read.

For Windows Server 2003

Use the following command to detect and repair a corrupted WMI Repository:
rundll32 wbemupgd, RepairWMISetup

Re-registering the WMI components (Ref: WMI FAQ)

The .DLL and .EXE files used by WMI are located in %windir%\system32\wbem. You might need to re-register all the .DLL and .EXE files in this directory. If you are running a 64-bit system you might also need to check for .DLLs and .EXE files in %windir%\sysWOW64\wbem.
To re-register the WMI components, run the following commands at the command prompt:
  • cd /d %windir%\system32\wbem
  • for %i in (*.dll) do RegSvr32 -s %i
  • for %i in (*.exe) do %i /RegServer
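If you would rather do the re-registration from PowerShell, the following is an equivalent sketch of the two loops above (run it from an elevated PowerShell prompt; repeat it in %windir%\sysWOW64\wbem on 64-bit systems if needed):

# Re-register all WMI DLLs and EXEs in the wbem folder (equivalent of the cmd loops above)
Set-Location "$env:windir\System32\wbem"
Get-ChildItem -Filter *.dll | ForEach-Object { regsvr32.exe /s $_.FullName }
Get-ChildItem -Filter *.exe | ForEach-Object { & $_.FullName /RegServer }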
 

Note that neither of the above methods restores missing files related to Windows Management Instrumentation (WMI). If WMI modules are missing, use the comprehensive repair procedure below, which restores all of the missing WMI modules.

 

Comprehensive rebuild method

Important note: If you've installed a Service Pack, you need to insert a Windows XP CD with the Service Pack integrated (called a slipstreamed Windows XP CD). If you don't have one, you may point to the %Windir%\ServicePackFiles\i386 folder for a recent version of the system files required during the WMI repair, or you may create a slipstreamed Windows XP CD and insert it when prompted.
Click Start, Run and type the following command, and press ENTER:
rundll32.exe setupapi,InstallHinfSection WBEM 132 %windir%\inf\wbemoc.inf
Insert your Windows XP CD into the drive when prompted. The repair process should take a few minutes to complete. Then restart Windows for the changes to take effect.

Can’t remove additional Exchange mailboxes

I’ve added some additional Exchange mailboxes to my account but now I can’t seem to remove them anymore in Outlook as they don’t show up in my account settings or the additional mailboxes list.
Right clicking on a mailbox and choosing “Close <mailbox>” produces the error:
"This group of folders is associated with an e-mail account. To remove the account, click the File Tab, and on the Info tab, click Account Settings. Select the e-mail account, and then click Remove."
As they are not listed there, how can I still close these mailboxes?
Restarting Outlook is a first quick check to see if there are any pending account changes which can only be processed via a restart. Most likely, this isn’t going to bring you anything though.
More likely is that you are using Outlook 2007 with SP3 or Outlook 2010 and the mailbox is hosted on an Exchange 2010 server with Service Pack 1 or later. In addition, you have been granted “Full Access” permissions on the mailbox by your Exchange administrator.
In that case, the additional mailbox is being added automatically via the “Auto-Mapping” feature of Exchange.
(Screenshots: folder list with an auto-mapped mailbox; Account Settings showing the configured Exchange account; the Exchange additional mailbox list.)
Auto-mapped mailboxes are not exposed in your account or additional mailbox list.

Removing an auto-mapped mailbox

As you’ve noticed, removing a mailbox that has been added via the auto-mapping feature is not possible via the traditional way. In fact, as an end-user, there is nothing you can do to remove it in Outlook.
However, you can ask your Exchange administrator to remove the auto-mapping attribute for your account from the additional mailbox you’ve been granted “Full Access” to.
Your administrator can then run the following PowerShell command in the Exchange Management Shell:
Add-MailboxPermission -Identity <shared mailbox alias> -User <your mailbox alias> -AccessRights FullAccess -InheritanceType All -Automapping $false
Once this property has been removed, the additional mailbox will automatically disappear within a few minutes after you restart Outlook.
Notes for your Exchange administrator
  • This command requires Exchange 2010 SP2 or later.
  • This command removes the reference to the user with the Full Access permissions from the msExchDelegateListLink property of the additional mailbox (a way to inspect this property is sketched after this list).
  • The additional mailboxes for a user are propagated via the AlternateMailbox attribute with AutoDiscover.
  • For more info see: Disable Outlook Auto-Mapping with Full Access Mailboxes
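If the administrator wants to check which users are currently auto-mapped to the shared mailbox before making the change, the msExchDelegateListLink attribute mentioned above can be inspected with the Active Directory module. A sketch, where 'SharedMailboxAccount' is a placeholder for the shared mailbox's account name:

# Show the users currently auto-mapped to the shared mailbox (account name is a placeholder)
Import-Module ActiveDirectory
Get-ADUser -Identity 'SharedMailboxAccount' -Properties msExchDelegateListLink |
    Select-Object -ExpandProperty msExchDelegateListLink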

Re-adding the mailbox

If you want to re-add the mailbox again, there is no need for you to bother your Exchange administrator and ask him/her to reset the auto-mapping attribute.
You can re-add the additional mailbox via the traditional way:
  • Outlook 2007
    Tools-> Account Settings…-> double click on your Exchange account-> button: More Settings…-> tab Advanced-> button: Add…
  • Outlook 2010
    File-> Account Settings-> Account Settings…-> double click on your Exchange account-> button: More Settings…-> tab Advanced-> button: Add…

SCCM 2012 Reporting for dummies: Report Builder 2.0 is not installed





There’s a strange issue in SCCM 2012 when you attempt to create a new report by clicking Create Report in the Reporting > Reports node in the Monitoring tab of the SCCM 2012 console. It’s the same error as when you attempt to modify one of the existing reports and the error message you get is:
“Report Builder 2.0 is not installed as a click-once application on report server ‘-FQDN of your server-’. Go to ‘http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=66ab3dbb-bf3e-4f46-9559-ccc6a4f9dc19’ for download information.”

This happens because, by default, SCCM 2012 attempts to open version 2.0 of Report Builder and has a check in place for 2.0 explicitly. Unfortunately, if you are running SQL Server 2008 R2, the version of Report Builder that comes with it is 3.0, even though SCCM works fine with 3.0. This is documented quite well in this TechNet article:
http://technet.microsoft.com/en-us/library/gg712698.aspx#BKMK_SQLReportingServices
Luckily, the workaround for this is as easy as changing a simple registry value. Open up the registry editor and navigate to this key:
HKLM\Software\Wow6432Node\Microsoft\ConfigMgr 10\AdminUI\Reporting

Once here, change the ReportBuilderApplicationManifestName from “ReportBuilder_2_0_0_0.application” to “ReportBuilder_3_0_0_0.application”.
If you’ve changed the “2” to a “3” and you still get this error message you may need to close the console and reopen it again as administrator. You should now get into the Create Report Wizard or Edit without any issues!
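If you prefer to script the change instead of editing the registry by hand, a minimal sketch (run from an elevated PowerShell prompt on the machine where the console is installed; verify the exact key path in regedit first, as it may appear as “ConfigMgr10” without the space depending on your build) is:

# Point the console at Report Builder 3.0 instead of 2.0 - verify the key path in regedit first
$key = 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\ConfigMgr 10\AdminUI\Reporting'
Set-ItemProperty -Path $key -Name 'ReportBuilderApplicationManifestName' -Value 'ReportBuilder_3_0_0_0.application'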

Converting evaluation versions of Windows Server 2012 to full retail versions

Most evaluation versions can be converted to full retail versions, but the method varies slightly depending on the edition. Before you attempt to convert the version, verify that your server is actually running an evaluation version. To do this, do either of the following:

  • From an elevated command prompt, run slmgr.vbs /dlv; evaluation versions will include “EVAL” in the output.
  • From the Start screen, open Control Panel. Open System and Security, and then System. View Windows activation status in the Windows activation area of the System page. Click View details in Windows activation for more information about your Windows activation status.

If you have already activated Windows, the Desktop shows the time remaining in the evaluation period.
If the server is running a retail version instead of an evaluation version, see the “Upgrading previous licensed versions” section of this document for instructions to upgrade to Windows Server 2012.
If the server is running an evaluation version of Windows Server 2012 Standard or Windows Server 2012 Datacenter, you can convert it to a retail version as follows:

  • If the server is a domain controller, see http://technet.microsoft.com/library/hh472160.aspx for important steps to follow before proceeding.
  • Read the license terms.
  • From an elevated command prompt, determine the current edition name with the command DISM /online /Get-CurrentEdition. Make note of the edition ID, an abbreviated form of the edition name. Then run DISM /online /Set-Edition:<edition ID> /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX /AcceptEula, providing the edition ID and a retail product key. The server will restart twice.

For the evaluation version of Windows Server 2012 Standard, you can also convert to the retail version of Windows Server 2012 Datacenter in one step using this same command and the appropriate product key.
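Putting those commands together, the sequence from an elevated prompt looks roughly like this. The product key is a placeholder for your own retail or volume-license key, and ServerStandard is only an example target edition; check the output of /Get-TargetEditions for the IDs valid on your server.

# Confirm this is an evaluation build and see which editions it can be converted to
cscript.exe "$env:windir\System32\slmgr.vbs" /dlv
DISM /online /Get-CurrentEdition
DISM /online /Get-TargetEditions

# Convert to a retail edition; the server restarts twice afterwards
DISM /online /Set-Edition:ServerStandard /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX /AcceptEula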

How To Configure SCOM To Monitor for Changes To The Domain Admins Group


One of the demos that I do in my lab uses an Opalis workflow that is triggered by a SCOM rule watching the security event log on my domain controller for any changes to the domain admins group.  Once the alert gets triggered in SCOM, Opalis picks it up, disables the offending account, removes it from the domain admins group, populates the ‘notes’ field with some text indicating why the account is disabled, closes the alert in SCOM and sends an Exchange email to the administrator.
This blog post describes how to configure the SCOM piece.  If you want to know how to configure the Opalis piece – I created a separate post HERE.
I’ve been asked a number of times how I did this in SCOM.  Here you go.  It’s pretty simple. 
The first thing you’ll have to do – if you haven’t done this already – is to enable auditing on your DCs.  This is done via GPO.  I won’t cover the details of that here – but this KB will walk you through the process.  Basically, auditing of directory services objects (adds/moves/changes) is not enabled by default – you have to do that manually, and it's a prerequisite to making this process work properly.
Alright, enough of that...let's head over to the SCOM Admin Console...
Authoring –> Management Pack Objects –> Rules –> Create a New Rule
Essentially what we’re doing here is creating an alert that gets triggered by a specific event ID in the DC’s security log.  In our case, it’s 4728 for Server 2008 R2 domain controllers.  If your DCs are not 2008 R2, the event ID is different – you’ll have to look it up.
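If you want to confirm that your DCs are actually logging these events before you build the rule, a quick check from an elevated PowerShell prompt on a domain controller looks roughly like this (4728 is “A member was added to a security-enabled global group”):

# List the most recent 4728 events from the Security log (run elevated on a DC)
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4728 } -MaxEvents 10 |
    Format-Table TimeCreated, Id, Message -AutoSize -Wrap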
Here are the steps.  Make sure you create a custom management pack – don’t stick this in the default…it’s not good practice.
image
image
Make sure that you select a DC in the ‘target computer’ field:
image
Data Source is where we configure the event id and parameters – Response is where we configure the description field of the actual event and any other customizations (in our case populating a custom field):
image
Click the “…” to configure “Parameter 3” – for the values, you’ll just type those in:
image
By the way – here’s what the actual event log looks like:
image
image
You do have options here – what priority level and severity (affects how it alerts in SCOM) and then what you want the alert description to look like.  You can also make changes to the Alert Name which is what you’ll see top line in the ‘alert view’ in SCOM:
image
In the Custom Fields section, I used #2 and populated that with the text DAACCESS.
I populate CustomField1 with the domain\username information (I use that in the Opalis workflow)
image
The reason I did this is because that’s how I’m telling Opalis what to look for when an event pops in SCOM.  Now, every time this rule gets triggered and this alert pops in SCOM, Opalis will pick it up and start our workflow.
image

System Center Orchestrator 2012 : Installation step by step

Hi everyone,
Microsoft just released the beta of System Center Orchestrator 2012 (previously known as Opalis). http://blogs.technet.com/b/systemcenter/archive/2011/06/15/announcing-the-system-center-orchestrator-beta.aspx
This beta is public and can be downloaded and tested by everyone.
First, I invite you to read the prerequisites:
http://technet.microsoft.com/en-us/library/hh201965.aspx
In these TechNet articles you will find useful information, like this one: even in a lab environment, don’t try to install System Center Orchestrator 2012 Beta on a domain controller; it will not pass the requirement check (thanks to Adam and Ravi for pointing that out to me).
You will see that Microsoft did an awesome job on the installation process. It’s really easier than installing Opalis 6.3.
Basically, to install this beta you will need a server with:
  • Windows 2008 R2 Server (RTM or SP1, both supported)
  • SQL 2008 R2
  • IIS
  • .Net Framework 4
  • Silverlight
1. Download the System Center Orchestrator 2012 Beta file at the Microsoft Download Center : http://www.microsoft.com/download/en/details.aspx?id=26503
image
2. Extract the content of the file.
image
3. Run the setupsco.exe file.
image
4. If you want to install all the features (Runbook Designer, Console/Web Service, Runbook server) on the same server, select Install Orchestrator. The installer also gives you the option to install each feature independently.
image
5. No product key required for the beta, just accept the license agreement.
image
6. Select all the features that you want to install. Select everything for this first install.
image
7. By expanding, you get information about each feature and its prerequisites:
image
image
image
image
8. The installer has a prerequisites checker:
image
9. There are two levels of prerequisite results: Critical issues will block the installation process, while warnings can be bypassed.
image
10. Orchestrator requires a service account to run runbooks and access remote system resources.
image
11. Go to your Active Directory console, create a new account, and add it to the local Administrators group of the server.
image
12. You also have to authorize this account to "Log on as a service". Enable this right in Administrative Tools -> Local Security Policy -> Local Policies -> User Rights Assignment.
image
13. Open "Log on as a service" policy and enter the service account that you just created.
image
14. Back to the Orchestrator 2012 beta installer and click on the Test button.
image
15. Database connection: go to your SQL Server, open Management Studio, and manually create the Orchestrator database.
image
16. Add your service account as a login (credential) on the SQL Server.
image
17. Grant this account the db_owner role on the Orchestrator database.
image
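If you prefer to do steps 15 to 17 from a query window instead of clicking through Management Studio, here is a rough T-SQL sketch run through Invoke-Sqlcmd (the instance name, the database name Orchestrator and the account CONTOSO\svc-orch are all assumptions for illustration):

# Sketch of steps 15-17: create the database, add the service account as a login and grant it db_owner
$sql = @"
CREATE DATABASE [Orchestrator];
GO
CREATE LOGIN [CONTOSO\svc-orch] FROM WINDOWS;
GO
USE [Orchestrator];
CREATE USER [CONTOSO\svc-orch] FOR LOGIN [CONTOSO\svc-orch];
EXEC sp_addrolemember 'db_owner', 'CONTOSO\svc-orch';
"@
Invoke-Sqlcmd -ServerInstance 'SQLSERVER' -Query $sql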
18. Go back to the Orchestrator installer, select Existing database and choose the database that you just created.
image
19. Orchestrator uses a group (a local group or an Active Directory group) to authorize access to the Runbook Designer. Go to your Active Directory console, create a new security group, and add your user account and the service account created above as group members.
20. Go back to the Orchestrator installer and select the group you just created:
image
21. Check the box Grant remote access for the Runbook Designer.
image
22. Configure the port for the web service and Orchestrator Console.
image
23. Select the installation folder.
image
24. Review the summary; note that you can change any parameter directly from there.
image
25. Installation in progress…
image
26. After a few minutes, the installation is finished.
image
27. You can now enjoy the new Runbook Designer.
image
28. And also the new Web Console based on IIS, .Net and Silverlight.
image
As you can see, Microsoft really did an awesome job on the installation process. The next post will present all the new features of System Center Orchestrator 2012 in detail.

Windows Server 2012 #HyperV – #SCVMM Design – Best Practices #WindowsAzure #Winserv

Functional Cloud design
With Hybrid Cloud functionality you get the benefits of a local private cloud and the public cloud together, like:
  • Active Directory Federation Services (ADFS) with Office 365 and Windows Azure, to provision Active Directory users from on-premises to the cloud with
    Microsoft Forefront Identity Manager or DirSync.
  • Making websites in the cloud on Windows Azure in a few minutes.
  • Using Windows Azure BLOB storage for archiving.
  • Managing and provisioning virtual machines with System Center 2012 SP1 from on-premises to the cloud into Windows Azure.
    (On October 18th 2013, Windows Server 2012 R2 and System Center 2012 R2 will be released.)
  • Using SQL databases for your on-premises applications or for your cloud applications.
  • Managing applications for mobile devices.
The business benefits are :
  • You only pay for what you are using. (TCO Pricing)
  • Time to market: provisioning of users, virtual machines and websites is really fast
  • Any time any place 24/7
Here you can find more information on Windows Azure :
To begin with private Cloud computing on-premises, you begin with architecture and design to match the business requirements.
Hyper-V Cluster 2012 networking design
When the design of Microsoft Hyper-V clustering, SQL, System Center 2012 Virtual Machine Manager, the VPN gateway to Windows Azure and ADFS is done, don’t forget
disaster recovery with System Center 2012 Data Protection Manager and Windows Azure Backup for your private cloud solution.
Before you start installing Windows Server 2012 you should check these best practices:
Disclaimer: As with all Best Practices, not every recommendation can – or should – be applied. Best Practices are general guidelines, not hard, fast rules that must be followed. As such, you should carefully review each item to determine if it makes sense in your environment. If implementing one (or more) of these Best Practices seems sensible, great; if it doesn’t, simply ignore it. In other words, it’s up to you to decide if you should apply these in your setting.

 
GENERAL (HOST):
⎕ Use Server Core, if possible, to reduce OS overhead, reduce potential attack surface, and to minimize reboots (due to fewer software updates).
⎕ Ensure hosts are up-to-date with recommended Microsoft updates, to ensure critical patches and updates – addressing security concerns or fixes to the core OS – are applied.
⎕ Ensure all applicable Hyper-V hotfixes and Cluster hotfixes (if applicable) have been applied. Review the following sites and compare it to your environment, since not all hotfixes will be applicable:
· Failover Cluster Management snap-in crashes after you install update 2750149 on a Windows Server 2012-based failover cluster:
http://support.microsoft.com/kb/2803748
⎕ Ensure hosts have the latest BIOS version, as well as other hardware devices (such as Synthetic Fibre Channel, NIC’s, etc.), to address any known issues/supportability
⎕ Host should be domain joined, unless security standards dictate otherwise. Doing so makes it possible to centralize the management of policies for identity, security, and auditing. Additionally, hosts must be domain joined before you can create a Hyper-V High-Availability Cluster.
⎕ RDP Printer Mapping should be disabled on hosts, to remove any chance of a printer driver causing instability issues on the host machine.
  • Preferred method: Use Group Policy with host servers in their own separate OU
    • Computer Configuration –> Policies –> Administrative Templates –> Windows Components –> Remote Desktop Services –> Remote Desktop Session Host –> Printer Redirection –> Do not allow client printer redirection –> Set to “Enabled”
⎕ Do not install any other Roles on a host besides the Hyper-V role and the Remote Desktop Services roles (if VDI will be used on the host).
  • When the Hyper-V role is installed, the host OS becomes the “Parent Partition” (a quasi-virtual machine), and the hypervisor partition is placed between the parent partition and the hardware. As a result, it is not recommended to install additional (non-Hyper-V and/or VDI related) roles.
⎕ The only Features that should be installed on the host are: Failover Cluster Manager (if host will become part of a cluster), Multipath I/O (if host will be connecting to an iSCSI SAN, Spaces and/or Fibre Channel), or Remote Desktop Services if VDI is being used. (See explanation above for reasons why installing additional features is not recommended.)
⎕ Anti-virus software should exclude Hyper-V specific files using the Hyper-V: Antivirus Exclusions for Hyper-V Hosts article, namely:
    • All folders containing VHD, VHDX, AVHD, VSV and ISO files
    • Default virtual machine configuration directory, if used (C:\ProgramData\Microsoft\Windows\Hyper-V)
    • Default snapshot files directory, if used (%systemdrive%\ProgramData\Microsoft\Windows\Hyper-V\Snapshots)
    • Custom virtual machine configuration directories, if applicable
    • Default virtual hard disk drive directory
    • Custom virtual hard disk drive directories
    • Snapshot directories
    • Vmms.exe (Note: May need to be configured as process exclusions within the antivirus software)
    • Vmwp.exe (Note: May need to be configured as process exclusions within the antivirus software)
    • Additionally, when you use Cluster Shared Volumes, exclude the CSV path “C:\ClusterStorage” and all its subdirectories.
  • For more information: http://social.technet.microsoft.com/wiki/contents/articles/2179.hyper-v-anti-virus-exclusions-for-hyper-v-hosts.aspx
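For what it's worth, if the antivirus on a host happens to be Windows Defender (more common on later Windows Server builds), those exclusions can be scripted roughly as follows; other antivirus products have their own consoles and mechanisms for exclusions, so treat this purely as a sketch:

# Defender-only sketch of the exclusions listed above; adjust paths to your own VM/VHD locations
Add-MpPreference -ExclusionPath 'C:\ClusterStorage', 'C:\ProgramData\Microsoft\Windows\Hyper-V'
Add-MpPreference -ExclusionExtension 'vhd', 'vhdx', 'avhd', 'vsv', 'iso'
Add-MpPreference -ExclusionProcess 'Vmms.exe', 'Vmwp.exe'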
⎕ The default path for virtual hard disks (VHD/VHDX) should be set to a non-system drive, because keeping it on the system drive can cause disk latency as well as create the potential for the host to run out of disk space.
⎕ If you choose to save the VM state as the Automatic Stop Action, the default virtual machine path should be set to a non-system drive, because a .bin file is created that matches the size of the memory reserved for the virtual machine.  A .vsv file may also be created in the same location as the .bin file, adding to the disk space used for each VM. (The default path is: C:\ProgramData\Microsoft\Windows\Hyper-V.)
⎕ If you are using iSCSI: In Windows Firewall with Advanced Security, enable iSCSI Service (TCP-In) for Inbound and iSCSI Service (TCP-Out) for outbound in Firewall settings on each host, to allow iSCSI traffic to pass to and from host and SAN device. Not enabling these rules will prevent iSCSI communication.
To set the iSCSI firewall rules via netsh, you can use the following command:
netsh advfirewall firewall set rule group="iSCSI Service" new enable=yes
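On Windows Server 2012 the same thing can also be done with the built-in NetSecurity PowerShell module, assuming the default rule group name:

# PowerShell equivalent: enable the built-in iSCSI Service firewall rule group (inbound and outbound)
Enable-NetFirewallRule -DisplayGroup 'iSCSI Service'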
⎕ Periodically run performance counters against the host, to ensure optimal performance.
  • Recommend using the Hyper-V performance counters that can be extracted from the (free) Codeplex PAL application:
  • Install PAL on a workstation and open it, then click on the Threshold File tab.
    • Select “Microsoft Windows Server 2012 Hyper-V” from the Threshold file title, then choose Export to Perfmon template file. Save the XML file to a location accessible to the Hyper-V host.
  • Next, on the host, open Server Manager –> Tools –> Performance Monitor
  • In Performance Monitor, click on Data Collector Sets –> User Defined. Right click on User Defined and choose New –> Data Collector Set. Name the collector set “Hyper-V Performance Counter Set” and select Create from a template (Recommended), then choose Next. On the next screen, select Browse and then locate the XML file you exported from the PAL application. Once done, this will show up in your User Defined Data Collector Sets.
  • Run these counters in Performance Monitor for 30 minutes to 1 hour (during high usage times) and look for disk latency, memory and CPU issues, etc.
GENERAL (VMs):
⎕ Ensure you are running only supported guests in your environment. For a complete listing, refer to the following list: http://blogs.technet.com/b/schadinio/archive/2012/06/26/windows-server-2012-hyper-v-list-of-supported-client-os.aspx
PHYSICAL NICs:
⎕ Ensure NICs have the latest firmware, which often address known issues with hardware.
⎕ Ensure latest NIC drivers have been installed on the host, which resolve known issues and/or increase performance.
⎕ NICs should not use APIPA (Automatic Private IP Addressing). APIPA is non-routable and not registered in DNS.
⎕ VMQ should be enabled on VMQ-capable physical network adapters bound to an external virtual switch.
⎕ TCP Chimney Offload is not supported with Server 2012 software-based NIC teaming, because TCP Chimney offloads the entire networking stack to the NIC. If software-based NIC teaming is not used, however, you can leave it enabled.
  • TO SHOW STATUS:
    • From an elevated command-prompt, type the following:
      • netsh int tcp show global
        • (The output should show Chimney Offload State disabled)
  • TO DISABLE TCP Chimney Offload:
    • From an elevated command-prompt, type the following:
      • netsh int tcp set global chimney=disabled
⎕ Jumbo frames should be turned on and set to 9000 or 9014 (depending on your hardware) for the CSV, iSCSI and Live Migration networks. This can significantly increase throughput (up to 6x) while also reducing CPU cycles.
  • End-to-end configuration must take place – NIC, SAN and switch must all support jumbo frames.
  • You can enable jumbo frames when using crossover cables (for Live Migration and/or Heartbeat) in a two node cluster.
  • To verify jumbo frames have been successfully configured, run the following command from all your Hyper-V host(s) to your iSCSI SAN:
    • ping 10.50.2.35 -f -l 8000
      • This command will ping the SAN (e.g. 10.50.2.35) with an 8K packet from the host. If replies are received, jumbo frames are properly configured.
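Jumbo frames themselves are switched on per NIC in the driver's advanced properties. On Server 2012 this can also be scripted, although the exact keyword and value are driver dependent; a sketch, assuming an adapter named "iSCSI1" and a driver that exposes the common *JumboPacket keyword:

# Check and set jumbo frames on one adapter - keyword and value vary by NIC vendor
Get-NetAdapterAdvancedProperty -Name 'iSCSI1' -RegistryKeyword '*JumboPacket'
Set-NetAdapterAdvancedProperty -Name 'iSCSI1' -RegistryKeyword '*JumboPacket' -RegistryValue 9014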
⎕ NICs used for iSCSI communication should have all networking protocols (on the Local Area Connection Properties) unchecked, with the exception of:
  • Manufacturer's protocol (if applicable)
  • Internet Protocol Version 4
  • Internet Protocol Version 6
  • Unbinding other protocols (not listed above) helps eliminate non-iSCSI traffic/chatter on these NICs.
⎕ NIC Teaming should not be used on iSCSI NIC’s. MPIO is the best method. NIC teaming can be used on the Management, Production (VM traffic), CSV Heartbeat and Live Migration networks.
⎕ If you are using NIC teaming for Management, CSV Heartbeat and/or Live Migration, create the team(s) before you begin assigning Networks.
⎕ If using aggregate (switch-dependent) NIC teaming in a guest VM, only SR-IOV NICs should be used on guest.
⎕ If using NIC teaming inside a guest VM, follow this order:
METHOD #1:
  • Open the settings of the Virtual Machine
    • Under Network Adapter, select Advanced Features.
    • In the right pane, under Network Teaming, tick “Enable this network adapter to be part of a team in the guest operating system”.
  • Once inside the VM, open Server Manager. In the All Servers view, enable NIC Teaming from Server Manager.
METHOD #2:
  • Use the following PowerShell command (Run as Administrator) on the Hyper-V host where the VM currently resides:
    • Set-VMNetworkAdapter -VMName contoso-vm1 -AllowTeaming On
      • This PowerShell command turns on resiliency if one or more of the teamed NICs goes offline.
    • Once inside the VM, open Server Manager. In the All Servers view, enable NIC Teaming from Server Manager.
⎕ When creating virtual switches, it is best practice to uncheck Allow management operating system to share this network adapter, in order to create a dedicated network for your VM(s) to communicate with other computers on the physical network. (If the management adapter is shared, do not modify protocols on the NIC.)
Please note: we fully support and even recommend (in some cases) using the virtual switch to separate networks for Management, Live Migration, CSV/Heartbeat and even iSCSI.  For example two 10GB NIC’s that are split out using VLANs and QoS.
⎕ Recommended network configuration when clustering:
A minimum of five networks on the host:
  • Host Management – “Management”
  • VM Network Access – “Production”
  • CSV/Heartbeat – “CSV/Heartbeat”
  • Live Migration – “Live Migration”
  • iSCSI – “iSCSI”
** CSV/Heartbeat & Live Migration Networks can be crossover cables connecting the nodes, but only if you are building a two (2) node cluster. Anything above two (2) nodes requires a switch. **
⎕ Turn off cluster communication on the iSCSI network.
  • In Failover Cluster Manager, under Networks, the iSCSI network properties should be set to “Do not allow cluster network communication on this network.” This prevents internal cluster communications as well as CSV traffic from flowing over the same network.
⎕ Redundant network paths are strongly encouraged (multiple switches) – especially for your Live Migration and iSCSI network – as it provides resiliency and quality of service (QoS).
VLANS:
⎕ If aggregate NIC Teaming is enabled for Management and/or Live Migration networks, the physical switch ports the host is connected to should be set to trunk (promiscuous) mode. The physical switch should pass all traffic to the host for filtering.
⎕ Turn off VLAN filters on teamed NICs. Let the teaming software or the Hyper-V switch (if present) do the filtering.
VIRTUAL NETWORK ADAPTERS (NICs):
⎕ Legacy Network Adapters (a.k.a. emulated NIC drivers) should only be used for PXE booting a VM or when installing non-Hyper-V-aware guest operating systems. Hyper-V’s synthetic NICs (the default NIC selection; a.k.a. synthetic NIC drivers) are far more efficient, because they use a dedicated VMBus to communicate between the virtual NIC and the physical NIC; as a result, CPU cycles are reduced and there are far fewer hypervisor/guest transitions per operation.
Example:
The first thing you want to do is create a team out of the two NICs and connect the team to a Hyper-V virtual switch, for instance with PowerShell:
New-NetLbfoTeam Team1 -TeamMembers NIC1, NIC2 -TeamNicName TeamNIC1
New-VMSwitch TeamSwitch -NetAdapterName TeamNIC1 -MinimumBandwidthMode Weight -AllowManagementOS $false
Next, you want to create multiple vNICs on the parent partition, one for each kind of traffic (two for SMB). Here’s an example:
Add-VMNetworkAdapter -ManagementOS -Name SMB1 -SwitchName TeamSwitch
Add-VMNetworkAdapter -ManagementOS -Name SMB2 -SwitchName TeamSwitch
Add-VMNetworkAdapter -ManagementOS -Name Migration -SwitchName TeamSwitch
Add-VMNetworkAdapter -ManagementOS -Name Cluster -SwitchName TeamSwitch
Add-VMNetworkAdapter -ManagementOS -Name Management -SwitchName TeamSwitch
After this, you want to configure the NICs properly. This will include setting IP addresses, creating separate subnets for each kind of traffic. You can optionally put them each on a different VLAN.
Since you have lots of NICs now and you’re already in manual configuration territory anyway, you might want to help the SMB Multichannel by pointing it to the NICs that should be used by SMB. You can do this by configuring SMB Multichannel constraints instead of letting SMB try all different paths. For instance, assuming that your Scale-Out File Server name is SOFS, you could use:
New-SmbMultichannelConstraint -ServerName SOFS -InterfaceAlias SMB1, SMB2
Last but not least, you might also want to set QoS for each kind of traffic, using the facilities provided by the Hyper-V virtual switch. One way to do it is:
Set-VMNetworkAdapter -ManagementOS -Name SMB1 -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name SMB2 -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name Migration -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name Cluster -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -ManagementOS -Name Management -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -VMName * -MinimumBandwidthWeight 1
There is a great TechNet page with details on this and other network configurations at http://technet.microsoft.com/en-us/library/jj735302.aspx
DISK:
⎕ New disks should use the VHDX format. Disks created in earlier Hyper-V iterations should be converted to VHDX, unless there is a need to move the VHD back to a 2008 Hyper-V host.
  • The      VHDX format supports virtual hard disk storage capacity of up to 64 TB,      improved protection against data corruption during power failures (by      logging updates to the VHDX metadata structures), and improved alignment      of the virtual hard disk format to work well on large sector disks.
⎕ Disks should be fixed in a production environment, to increase disk throughput. Differencing and Dynamic disks are not recommended for production, due to increased disk read/write latency times (differencing/dynamic disks).
⎕ Use caution when using snapshots. If not properly managed, snapshots can cause disk space issues, as well as additional physical I/O overhead. Additionally, if you are hosting 2008 R2 (or earlier) Domain Controllers, reverting to an earlier snapshot can cause USN rollbacks. Windows Server 2012 has been updated to help better protect Domain Controllers from USN rollbacks; however, you should still limit usage.
⎕ The recommended minimum free space on CSV volumes containing Hyper-V virtual machine VHD and/or VHDX files:
  • 15% free space, if the partition size is less than 1TB
  • 10% free space, if the partition size is between 1TB and 5TB
  • 5% free space, if the partition size is greater than 5TB
  • To enumerate current volume information, including the percentage free, you can use the following PowerShell command:
    • Get-ClusterSharedVolume "Cluster Disk 1" | fc *
      • Review the "PercentageFree" output
⎕ It is not supported to create a storage pool using Fiber Channel or iSCSI LUNs.
⎕ The page file on the Hyper-V host should be managed by the OS and not configured manually.
MEMORY:
⎕ Use Dynamic Memory on all VMs (unless not supported).
⎕ Guest OS should be configured with (minimum) recommended memory:
  • 2048MB is recommended for Windows Server 2012 (e.g. 2048 – 4096 Dynamic Memory). (The minimum supported is 512 MB.)
  • 2048MB is recommended for Windows Server 2008, including R2 (e.g. 2048 – 4096 Dynamic Memory). (The minimum supported is 512 MB.)
  • 1024MB is recommended for Windows 7 (e.g. 1024 – 2048 Dynamic Memory). (The minimum supported is 512 MB.)
  • 1024MB is recommended for Windows Vista (e.g. 1024 – 2048 Dynamic Memory). (The minimum supported is 512 MB.)
  • 512MB is recommended for Windows Server 2003 R2 w/SP2 (e.g. 256 – 2048 Dynamic Memory). (The minimum supported is 128 MB.)
  • 512MB is recommended for Windows Server 2003 w/SP2 (e.g. 256 – 2048 Dynamic Memory). (The minimum supported is 128 MB.)
  • 512MB is recommended for Windows XP. Important: XP does not support Dynamic Memory. (The minimum supported is 64 MB.) Note: Support for Windows XP ends April 2014!
CLUSTER:
⎕ Set the preferred network for CSV communication, to ensure the correct network is used for this traffic. (Note: This only needs to be run on one of your Hyper-V nodes.)
  • The lowest metric in the output generated by the following PowerShell command will be used for CSV traffic:
    • Open a PowerShell command-prompt (using “Run as administrator”)
    • First, you’ll need to import the “FailoverClusters” module. Type the following at the PS command-prompt:
      • Import-Module FailoverClusters
    • Next, we’ll request a listing of networks used by the host, as well as the metric assigned. Type the following:
      • Get-ClusterNetwork | ft Name, Metric, AutoMetric, Role
    • In order to change which network interface is used for CSV traffic, use the following PowerShell command:
      • (Get-ClusterNetwork "CSV Network").Metric=900
        • This will set the network named "CSV Network" to a metric of 900
⎕ Set the preferred network for Live Migration, to ensure the correct network(s) are used for this traffic:
  • Open Failover Cluster Manager and expand the cluster
  • Next, right click on Networks and select Live Migration Settings
    • Use the Up / Down buttons to list the networks in order from most preferred (at the top) to least preferred (at the bottom)
    • Uncheck any networks you do not want used for Live Migration traffic
    • Select Apply and then press OK
  • Once you have made this change, it will be used for all VMs in the cluster
⎕ The Cluster Shutdown Time (ShutdownTimeoutInMinutes registry entry) should be set to an acceptable number
  • Default is set using the following calculation (which can be too high, depending on how much physical memory is installed):
    • (100 / 64) * physical RAM
      • For example, a 96GB system would have a 150 minute timeout: (100/64)*96 = 150
  • Suggest setting the timeout to 15, 20 or 30 minutes, depending on the number of VMs in your environment.
    • Registry Key: HKLM\Cluster\ShutdownTimeoutInMinutes
      • Enter minutes as a Decimal value.
      • Note: Requires a reboot to take effect
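A minimal sketch for making that registry change from PowerShell on a cluster node, assuming you settle on a 30 minute timeout:

# Mirror of the manual registry edit above; a reboot is still required for it to take effect
Set-ItemProperty -Path 'HKLM:\Cluster' -Name 'ShutdownTimeoutInMinutes' -Value 30 -Type DWord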
⎕ Run the Cluster Validation periodically to remediate any issues
  • NOTE: If all LUNs are part of the cluster, the validation test will skip all disk checks. It is recommended to set up a small test-only LUN and share it on all nodes, so full validation testing can be completed.
  • If you need to test a LUN running virtual machines, the LUN will need to be taken offline.
  • For more information: http://technet.microsoft.com/en-us/library/cc732035(WS.10).aspx#BKMK_how_to_run
⎕ Consider enabling CSV Cache if you have VMs that are used primarily for read requests and are less write intensive, such as pooled VDI VMs; it can also be leveraged to reduce VM boot storms.
HYPER-V REPLICA:
⎕ If utilizing Hyper-V Replica, update inbound traffic on the firewall to allow TCP port 80 and/or port 443 traffic. (In Windows Firewall, enable the “Hyper-V Replica HTTP Listener (TCP-In)” rule on each node of the cluster.)
To enable HTTP (port 80) replica traffic, you can run the following from an elevated command-prompt:
netsh advfirewall firewall set rule group="Hyper-V Replica HTTP" new enable=yes
To enable HTTPS (port 443) replica traffic, you can run the following from an elevated command-prompt:
netsh advfirewall firewall set rule group="Hyper-V Replica HTTPS" new enable=yes
⎕ Compression is recommended for replication traffic, to reduce bandwidth requirements.
⎕ Configure guest operating systems for VSS-based backups to enable application-consistent snapshots for Hyper-V Replica.
⎕ Integration services must be installed before primary or Replica virtual machines can use an alternate IP address after a failover
⎕ Virtual hard disks with paging files should be excluded from replication, unless the page file is on the OS disk.
⎕ Test failovers should be performed monthly, at a minimum, to verify that failover will succeed and that virtual machine workloads will operate as expected after failover
⎕ Hyper-V Replica requires the Failover Clustering Hyper-V Replica Broker role be configured if either the primary or the replica server is part of a cluster.
⎕ Feature and performance optimization of Hyper-V Replica can be further tuned by using the registry keys mentioned in the article below:
CLUSTER-AWARE UPDATING:
⎕ Place all Cluster-Aware Updating (CAU) Run Profiles on a single File Share accessible to all potential CAU Update Coordinators. (Run Profiles are configuration settings that can be saved as an XML file called an Updating Run Profile and reused for later Updating Runs: http://technet.microsoft.com/en-us/library/jj134224.aspx)
SMB 3.0 FILE SHARES:
⎕ An Active Directory infrastructure is required, so you can grant permissions to the computer account of the Hyper-V hosts.
⎕ Loopback configurations (where the computer that is running Hyper-V is used as the file server for virtual machine storage) are not supported. Similarly, running the file share in VM’s that are hosted on compute nodes that will serve other VM’s is not supported.
VIRTUAL DOMAIN CONTROLLERS (DCs):
⎕ Domain Controller VMs should have “Shut down the guest operating system” in the Automatic Stop Action setting applied (in the virtual machine settings on the Hyper-V Host)
INTEGRATION SERVICES:
⎕ Ensure Integration Services (IS) have been installed on all VMs. Integration Services significantly improve interaction between the VM and the physical host.
⎕ Be certain you are running the latest version of integration services – the same version as the host(s) – in all guest operating systems, as some Microsoft updates make changes/improvements to the Integration Services software. (When a new Integration Services version is updated on the host(s) it does not automatically update the guest operating systems.)
  • Note: If Integration Services are out of date, you will see 4010 events logged in the event viewer.
  • You can discover the version for each of your VMs on a host by running the following PowerShell command:
    • Get-VM | ft Name, IntegrationServicesVersion
  • If you’d like a PowerShell method to update Integration Services on VMs, check out this blog: http://gallery.technet.microsoft.com/scriptcenter/Automated-Install-of-Hyper-edc278ef
OFFLOADED DATA TRANSFER (ODX) Usage:
⎕ If your SAN supports ODX (see this post for help; also check with your hardware vendor), you should strongly consider enabling ODX on your Hyper-V hosts, as well as any VMs that connect directly to SAN storage LUNs.
  • To enable ODX, open PowerShell (using Run as Administrator) and type the following:
    • Set-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name "FilterSupportedFeaturesMode" -Value 0
    • Be sure to run this command on every Hyper-V host that connects to the SAN, as well as any VM that connects directly to the SAN.