This PC can’t run Windows 11 (Hyper-V)

Trying to install Windows 11, but not meeting requirements? (Hyper-V)

This error appears because the computer does not meet the minimum system requirements for Windows 11.



You can refer to the following table:

Processor: 1 gigahertz (GHz) or faster with 2 or more cores on a compatible 64-bit processor or System on a Chip (SoC).
TPM: Trusted Platform Module (TPM) version 2.0. Check here for instructions on how your PC might be enabled to meet this requirement.
Storage: 64 GB or larger storage device. Note: See below under “More information on storage space to keep Windows 11 up-to-date” for more details.
System firmware: UEFI, Secure Boot capable. Check here for information on how your PC might be able to meet this requirement.
RAM: 4 GB or more.
Graphics card: Compatible with DirectX 12 or later with WDDM 2.0 driver.
Display: High definition (720p) display that is greater than 9” diagonally, 8 bits per colour channel.
Internet connection and Microsoft account: Windows 11 Home edition requires internet connectivity and a Microsoft account.

If you are trying to run Windows 11 on Hyper-V, note that a default Generation 2 virtual machine does NOT meet these requirements.

You will need to modify the virtual machine’s Processor and TPM configuration to meet the following requirements:

Processor: 1 gigahertz (GHz) or faster with 2 or more cores on a compatible 64-bit processor or System on a Chip (SoC).
TPM: Trusted Platform Module (TPM) version 2.0. Check here for instructions on how your PC might be enabled to meet this requirement.

Meeting TPM requirement on a Virtual Machine, Generation 2:

  1. Right Click the virtual machine
  2. Click Settings
  3. Click Security
  4. Mark ‘Enable Trusted Platform Module’
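If you prefer PowerShell, the same can be done with the Hyper-V module on the host. A minimal sketch – the VM name ‘Win11’ is just an example, and a local key protector must be set before the virtual TPM can be enabled:

    # VM name 'Win11' is an example - replace with your VM's name
    Set-VMKeyProtector -VMName 'Win11' -NewLocalKeyProtector
    # Enable the virtual TPM for the VM
    Enable-VMTPM -VMName 'Win11'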

Meeting Processor requirement on a Virtual Machine, Generation 2:

  1. Right Click the virtual machine
  2. Click Settings
  3. Click Processor
  4. Change ‘Number of virtual processors’ to a minimum of ‘2’
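This step can also be scripted; a sketch, again assuming the example VM name ‘Win11’:

    # Give the VM the two virtual processors Windows 11 requires
    Set-VMProcessor -VMName 'Win11' -Count 2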

Now enjoy Windows 11 on Hyper-V.

Setting up the lab environment – DNS resolution puzzle

I would prefer to have access from my local VLAN and wireless VLAN to the servers,
but I didn’t want to route all DNS traffic into the VMs (and depend on a testing environment).

Basically, I want host resolution and the ability to utilize the domain services in the testing environment, without interrupting my other services.

The solution I went for was Conditional Forwarders.

First, the Hyper-V host:

I installed the DNS Server role within Windows Server 2016.
Then I set up forwarders to Google DNS:
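For reference, the same can be done with PowerShell; a sketch using the DnsServer module that ships with the role:

    # Install the DNS Server role with the management tools
    Install-WindowsFeature -Name DNS -IncludeManagementTools
    # Forward everything we are not authoritative for to Google DNS
    Add-DnsServerForwarder -IPAddress 8.8.8.8, 8.8.4.4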


After that, I will add the Conditional Forwarders for my testing domain.
In my previous post I created two domain controllers, both hosting DNS.
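The conditional forwarder can also be created with PowerShell; a sketch where the zone name and the DC addresses are placeholders for my testing domain and the two guest DCs:

    # Forward only the testing domain to the two guest domain controllers
    Add-DnsServerConditionalForwarderZone -Name 'lab.example.com' -MasterServers 192.168.254.11, 192.168.254.12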


I will then add my Hyper-V host’s IP to the DNS server settings of my router/DHCP on the needed VLANs.
When clients send requests for the testing domain, they will get forwarded to the Hyper-V guests (DCs) and all other requests will go to the Google DNS (8.8.8.8, 8.8.4.4) – more info: Getting started with Google Public DNS

I wanted a backup as well, so I installed Synology DNS on my Synology DS1511+.
Synology DNS supports forwarding zones, with up to two forwarders per zone.
That’s perfect for my setup, so I added the two Hyper-V guest DCs.
The Synology DNS also needs resolution services enabled, so it can forward all other requests to Google DNS (8.8.8.8, 8.8.4.4).


Then I will go ahead and update the DNS servers handed out by my DHCP on my normal client network and wireless clients.
This configuration offers failover/backup, because both the Hyper-V hosts and the Synology will be able to handle DNS requests and forwarding.

Setting up the lab environment – Hyper-V: Virtual Machines

Now to the good stuff

Usually when working with Hyper-V I use reference disks, mainly to save space on rather expensive disks. But is there much to gain when using deduplication? I was unsure, so I asked in Tech Konnect.

The response from Tech Konnect confirmed it: with deduplication in play, the issues that come with reference disks outweigh their disk-space savings, so full disks plus deduplication are the better choice.

Since it’s not possible to create folders or groups within the Hyper-V Management Console, I will be using a naming standard: <Group> – <Generation> – <OS> – <hostname>

The first Virtual Machine will be a Domain Controller, what better way to start?

Virtual Machine Configuration:
Generation: 2
Startup memory: 4096 MB
Dynamic Memory: Enabled
Network Connection: External
Disk size: 60 GB
Boot from the ISO File – Windows Server 2016 Standard (Desktop Experience)
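For the record, the same VM can be created with a few lines of PowerShell; a sketch where the VM name follows my naming standard but the paths and ISO location are examples:

    # Create the Generation 2 VM with a 60 GB disk on the external switch
    New-VM -Name 'DC - G2 - W2016 - DC01' -Generation 2 -MemoryStartupBytes 4GB `
        -NewVHDPath 'D:\VMs\DC01.vhdx' -NewVHDSizeBytes 60GB -SwitchName 'External'
    # Enable Dynamic Memory and attach the installation media
    Set-VM -Name 'DC - G2 - W2016 - DC01' -DynamicMemory
    Add-VMDvdDrive -VMName 'DC - G2 - W2016 - DC01' -Path 'D:\ISO\WS2016.iso'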

The quick wins for a Generation 2 Virtual Machine

  • PXE Boot by using a standard network adapter
  • Boot from a SCSI virtual hard disk
  • Boot from SCSI virtual DVD
  • Secure Boot (enabled by default)
  • UEFI firmware support
  • Shielded Virtual Machines
  • Storage Spaces Direct
  • Hot add/removal of virtual network adapters

Note: IDE drives and legacy network adapter support have been removed.
For more info: Generation 2 Virtual Machine Overview and Hyper-V feature compatibility by Generation and Guest

The memory assigned might be a bit overkill, but for now it will be OK.
When configuring the second DC, I will assign only 2048 MB.
The complete installation time to logon was 3 minutes and 9 seconds.

Both DCs can actually live with 2048 MB of RAM, so the allocation can always be cut down – but keep in mind we are using Dynamic Memory.
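Trimming the memory later is a one-liner; a sketch, with the second DC’s name and the memory bounds as examples:

    # Dynamic Memory: start at 2 GB, let Hyper-V balloon between 1 and 4 GB
    Set-VMMemory -VMName 'DC - G2 - W2016 - DC02' -DynamicMemoryEnabled $true `
        -StartupBytes 2GB -MinimumBytes 1GB -MaximumBytes 4GB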

I will of course be setting up MDT and ConfigMgr at a later point, to streamline and gain a bit of speed.


Setting up the lab environment – Hyper-V

The host was installed with Windows Server 2016.
This means Hyper-V is a feature that we just need to enable – yay!

  1. Open an elevated PowerShell prompt
  2. Run the command: Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

The command will automatically reboot the server once the installation completes.
NOTE: In some cases you will have to enable Intel VT in the BIOS.
You can read more about the system requirements here: System Requirements for Hyper-V on Windows Server 2016

For the actual setup of guest machines, I will be running mostly Windows Server 2016, Windows 10, and maybe a Linux guest or two.
Don’t forget to review: Supported Windows guest operating systems

Now to the Hyper-V Switch configuration:

I am going to add an external switch, as my client is already connected to the network on the correct VLAN.
Keep in mind I have a separate USB NIC with 2 ports (USB 3.0 to Dual Port Gigabit Ethernet Adapter NIC w/ USB Port).
This means I will be able to reserve my on-board primary NIC for management only and dedicate one of the free ports to VMs.

  1. Open Hyper-V Manager
  2. Mark your server
  3. Click Virtual Switch Manager in the actions pane
  4. Mark External
  5. Click Create Virtual Switch
  6. Name your switch – Example: External – 254 (254 indicating the VLAN)
  7. Remove the checkbox in Allow management operating system to share this network adapter
  8. Mark: Enable single-root I/O virtualization (SR-IOV)
    Not familiar with SR-IOV? Read this blog post by John Howard: Everything you wanted to know about SR-IOV
  9. Click Ok
    You might get a warning that pending network configuration will prevent remote access to this computer – if you’re connected to the server using another NIC, you will not be disconnected.
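The same switch can be created in PowerShell; a sketch – the adapter name ‘Ethernet 2’ is a placeholder for one of the USB NIC ports, and note that SR-IOV can only be enabled when the switch is created:

    # External switch on the dedicated NIC, not shared with the management OS
    New-VMSwitch -Name 'External - 254' -NetAdapterName 'Ethernet 2' `
        -AllowManagementOS $false -EnableIov $true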

This concludes the basic configuration of the Hyper-V host.
We installed Hyper-V and configured a switch with external access.

The next post will cover the actual Hyper-V guest installations in more detail.

The new LAB and home datacenter

Finally, I managed to set up the new lab and home datacenter.

Due to several limitations at home (cost of power, space, and noise), the decision was clear:

1 x Intel NUC Skull Canyon NUC6i7KYK

2 x G.Skill Ripjaws4 SO DDR4-2133 DC – 16GB

1 x USB 3.0 to Dual Port Gigabit Ethernet Adapter NIC w/ USB Port

2 x Samsung NVMe SSD 960 EVO 1 TB

1 x Samsung USB 3.0 Flash Drive FIT 32GB

The NUC can run RAID 0 and 1 on the internal NVMe drives; I’m going for RAID 0 (stripe).
This is where it gets a bit interesting… Mostly I’m going to run VMs within Hyper-V.
Hyper-V with deduplication, that is… of course.

I needed to move the OS to another disk to get maximum storage.
Keep in mind, deduplication will not run on the OS/system disk.

This is where the USB flash drive comes in handy: Windows Server 2016 can run directly from it, leaving me with two full NVMe drives in RAID 0 with deduplication – YAY!
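Once Windows is running from the USB drive, deduplication just needs to be installed and enabled on the data volume; a sketch, assuming the RAID 0 volume ends up as D::

    # Install the deduplication feature and enable it for Hyper-V (VDI) workloads
    Install-WindowsFeature -Name FS-Data-Deduplication
    # Drive letter 'D:' is an assumption - use your data volume
    Enable-DedupVolume -Volume 'D:' -UsageType HyperV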

That’s the hardware part 🙂


The follow up post is here: https://blog.thomasmarcussen.com/follow-up-on-the-home-datacenter-hardware/

Unified Extensible Firmware Interface (UEFI)


For many years BIOS has been the industry standard for booting a PC. BIOS has served us well, but it is time to replace it with something better. UEFI is the replacement for BIOS, so it is important to understand the differences between BIOS and UEFI. In this section, you learn the major differences between the two and how they affect operating system deployment.

Introduction to UEFI

BIOS has been in use for approximately 30 years. Even though it clearly has proven to work, it has some limitations, including:

  • 16-bit code
  • 1 MB address space
  • Poor performance on ROM initialization
  • MBR maximum bootable disk size of 2.2 TB

As the replacement to BIOS, UEFI has many features that Windows can and will use.

With UEFI, you can benefit from:

  • Support for large disks. UEFI requires a GUID Partition Table (GPT) based disk, which means a limitation of roughly 16.8 million TB in disk size and more than 100 primary disks.
  • Faster boot time. UEFI does not use INT 13, and that improves boot time, especially when it comes to resuming from hibernate.
  • Multicast deployment. UEFI firmware can use multicast directly when it boots up. In WDS, MDT, and Configuration Manager scenarios, you need to first boot up a normal Windows PE in unicast and then switch into multicast. With UEFI, you can run multicast from the start.
  • Compatibility with earlier BIOS. Most of the UEFI implementations include a compatibility support module (CSM) that emulates BIOS.
  • CPU-independent architecture. Even if BIOS can run both 32- and 64-bit versions of firmware, all firmware device drivers on BIOS systems must also be 16-bit, and this affects performance. One of the reasons is the limitation in addressable memory, which is only 64 KB with BIOS.
  • CPU-independent drivers. On BIOS systems, PCI add-on cards must include a ROM that contains a separate driver for all supported CPU architectures. That is not needed for UEFI because UEFI has the ability to use EFI Byte Code (EBC) images, which allow for a processor-independent device driver environment.
  • Flexible pre-operating system environment. UEFI can perform many functions for you. You just need a UEFI application, and you can perform diagnostics and automatic repairs, and call home to report errors.
  • Secure boot. Windows 8 and later can use the UEFI firmware validation process, called secure boot, which is defined in UEFI 2.3.1. Using this process, you can ensure that UEFI launches only a verified operating system loader and that malware cannot switch the boot loader.
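On a machine that is already running Windows, you can quickly check whether it booted via UEFI with Secure Boot on; a sketch using the built-in SecureBoot PowerShell module (the cmdlet throws an error on legacy BIOS systems):

    # Returns True when Secure Boot is enabled; errors out on non-UEFI firmware
    Confirm-SecureBootUEFI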

Versions

UEFI Version 2.3.1B is the version required for Windows 8 and later logo compliance. Later versions have been released to address issues; a small number of machines may need to upgrade their firmware to fully support the UEFI implementation in Windows 8 and later.

Hardware support for UEFI

In regard to UEFI, hardware is divided into four device classes:

  • Class 0 devices. This is the UEFI definition for a BIOS, or non-UEFI, device.
  • Class 1 devices. These devices behave like a standard BIOS machine, but they run EFI internally. They should be treated as normal BIOS-based machines. Class 1 devices use a CSM to emulate BIOS. These older devices are no longer manufactured.
  • Class 2 devices. These devices have the capability to behave as a BIOS- or a UEFI-based machine, and the boot process or the configuration in the firmware/BIOS determines the mode. Class 2 devices use a CSM to emulate BIOS. These are the most common type of devices currently available.
  • Class 3 devices. These are UEFI-only devices, which means you must run an operating system that supports only UEFI. Those operating systems include Windows 8, Windows 8.1, Windows Server 2012, and Windows Server 2012 R2. Windows 7 is not supported on these class 3 devices. Class 3 devices do not have a CSM to emulate BIOS.

Windows support for UEFI

Microsoft started with support for EFI 1.10 on servers and then added support for UEFI on both clients and servers.

With UEFI 2.3.1, there are both x86 and x64 versions of UEFI. Windows 10 supports both. However, UEFI does not support cross-platform boot. This means that a computer that has UEFI x64 can run only a 64-bit operating system, and a computer that has UEFI x86 can run only a 32-bit operating system.

How UEFI is changing operating system deployment

There are many things that affect operating system deployment as soon as you run on UEFI/EFI-based hardware. Here are considerations to keep in mind when working with UEFI devices:

  • Switching from BIOS to UEFI in the hardware is easy, but you also need to reinstall the operating system because you need to switch from MBR/NTFS to GPT/FAT32 and NTFS.
  • When you deploy to a Class 2 device, make sure the boot option you select matches the setting you want to have. It is common for old machines to have several boot options for BIOS but only a few for UEFI, or vice versa.
  • When deploying from media, remember the media has to be FAT32 for UEFI, and FAT32 has a file-size limitation of 4 GB.
  • UEFI does not support cross-platform booting; therefore, you need to have the correct boot media (32- or 64-bit).

Allowing non-Administrators to control Hyper-V

By default Hyper-V is configured such that only members of the administrators group can create and control virtual machines.  I am going to show you how to allow a non-administrative user to create and control virtual machines.

Hyper-V uses the new authorization management framework in Windows to allow you to configure what users can and cannot do with virtual machines.

Hyper-V can be configured to store its authorization configuration in Active Directory or in a local XML file. After initial installation it will always be configured to use a local XML file located at \programdata\Microsoft\Windows\Hyper-V\InitialStore.xml on the system partition. To edit this file you will need to:

  1. Open the Run dialog (launch it from the Start menu or press Windows Key + R).
  2. Start mmc.exe.
  3. Open the File menu and select Add/Remove Snap-in…
  4. From the Available snap-ins list, select Authorization Manager.
  5. Click Add > and then click OK.
  6. Click on the new Authorization Manager node in the left panel.
  7. Open the Action menu and select Open Authorization Store…
  8. Choose XML file for the Select the authorization store type: option and then use Browse… to open \programdata\Microsoft\Windows\Hyper-V\InitialStore.xml on the system partition (programdata is a hidden directory, so you will need to type it in first).
  9. Click OK.
  10. Expand InitialStore.xml, then Microsoft Hyper-V services, then Role Assignments, and finally select Administrator.
  11. Open the Action menu and select Assign Users and Groups, then From Windows and Active Directory…
  12. Enter the name of the user that you want to be able to control Hyper-V and click OK.
  13. Close the MMC window (you can save or discard your changes to Console 1 – this does not affect the Authorization Manager changes that you just made).

The user that you added will be able to completely control Hyper-V even if they are not an administrator on the physical computer.

Disk2vhd: New Utility to Create VHD Versions of Physical Disks

“Disk2vhd” is a new utility released by the Sysinternals team at Microsoft which allows you to create VHD versions of physical disks. If you are not familiar with these technical terms, here is a quick explanation.

VHD refers to “Virtual Hard Disk”, a file format used in Microsoft virtual machines. So by using “Disk2vhd” you can create VHD versions of your hard disks, which can be used in Microsoft Virtual PC or Microsoft Hyper-V virtual machines (VMs).


Disk2vhd supports Windows XP SP2 and higher including 64-bit editions.
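Disk2vhd can also be scripted from the command line; a sketch, where the output path is just an example (per the Sysinternals usage: disk2vhd <[drive: [drive:]…]|[*]> <vhdfile>):

    # Capture all volumes on the machine into a single VHD
    disk2vhd * c:\vhd\snapshot.vhd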

You can download it using the following link:

Download Link

Hypervisor not running

I’m running Windows Server 2008 R2 on my laptop, and after I installed the Hyper-V role the following error message showed when trying to start a Hyper-V machine. It seems the hypervisor entry was never added to the BCD store.

To add the hypervisor auto launch entry to the BCD store, you’ll need to run the following command from an elevated (administrator) prompt:

bcdedit /set hypervisorlaunchtype auto

Also, make sure the virtualization feature is enabled in the BIOS!
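After a reboot, you can confirm the change stuck by listing the current boot entry – hypervisorlaunchtype should show Auto:

    bcdedit /enum {current}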

wpeinit.exe Unable to Locate Component "wdi.dll"

So… the problem seems to be VMware Server 1.0.4 (ESX/GSX). It could give some very “funny” problems when running a cluster with 1.0.3 and 1.0.4 nodes… DOH!

Download the Intel e1000 drivers from http://downloadcenter.intel.com/Product_Filter.aspx?ProductID=1878&lang=eng for both XP and Vista. Put them in D:\Drivers\Intel.

Using the Windows AIK command prompt:

1.  In the command window, run copype x86 c:\winpe to initialize my WinPE environment.

2.  Run the imagex /mountrw c:\winpe\winpe.wim 1 c:\winpe\mount command to populate the image folder.

3.  Run the peimg /inf=D:\Drivers\VMWare\scsi*.inf /image=c:\winpe\mount for all drivers (e.g. SCSI, NIC, etc.).

4.  Run the peimg /inf=D:\Drivers\Intel\PRO2KXP*.inf /image=c:\winpe\mount for the Intel e1000 drivers.

5.  Add the ..\Windows\System32\wdi.dll from a Vista machine to the c:\winpe\mount\Windows\System32 folder.

6.  Run the imagex /unmount /commit c:\winpe\mount to update the c:\winpe\winpe.wim.

7.  Run the oscdimg -n -h -bc:\winpe\etfsboot.com c:\winpe\iso c:\winpe\winpe.iso to generate the boot ISO file.
Edit the *.vmx file with Notepad to add the entry Ethernet0.virtualDev = “e1000” after the Ethernet0.present = “TRUE” entry.
Then configure the VM to use the winpe.iso created in Step 7.