This PC can’t run Windows 11 (Hyper-V)

Trying to install Windows 11, but not meeting requirements? (Hyper-V)

The error appears because the computer does not meet the minimum system requirements for Windows 11.

You can refer to the following table:

Processor: 1 gigahertz (GHz) or faster with 2 or more cores on a compatible 64-bit processor or System on a Chip (SoC).
TPM: Trusted Platform Module (TPM) version 2.0. Check here for instructions on how your PC might be enabled to meet this requirement.
Storage: 64 GB or larger storage device. Note: See below under “More information on storage space to keep Windows 11 up-to-date” for more details.
System firmware: UEFI, Secure Boot capable. Check here for information on how your PC might be able to meet this requirement.
RAM: 4 GB or more.
Graphics card: Compatible with DirectX 12 or later with WDDM 2.0 driver.
Display: High definition (720p) display that is greater than 9” diagonally, 8 bits per colour channel.
Internet connection and Microsoft account: Windows 11 Home edition requires internet connectivity and a Microsoft account.

If you are trying to run Windows 11 on Hyper-V, note that a default Generation 2 Virtual Machine does NOT meet the requirements.

You will need to modify the virtual machine's Processor and TPM settings to meet the required configuration.

Processor: 1 gigahertz (GHz) or faster with 2 or more cores on a compatible 64-bit processor or System on a Chip (SoC).
TPM: Trusted Platform Module (TPM) version 2.0.

Meeting TPM requirement on a Virtual Machine, Generation 2:

  1. Right Click the virtual machine
  2. Click Settings
  3. Click Security
  4. Mark ‘Enable Trusted Platform Module’

Meeting Processor requirement on a Virtual Machine, Generation 2:

  1. Right Click the virtual machine
  2. Click Settings
  3. Click Processor
  4. Change ‘Number of virtual processors’ to a minimum of ‘2’
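
Both changes can also be made from PowerShell, with the VM powered off. A minimal sketch – the VM name "Win11" is a placeholder, not a value from this post:

  # Create a local key protector for the VM (required before the vTPM can be enabled)
  Set-VMKeyProtector -VMName "Win11" -NewLocalKeyProtector

  # Enable the virtual TPM 2.0 device
  Enable-VMTPM -VMName "Win11"

  # Assign the required minimum of 2 virtual processors
  Set-VMProcessor -VMName "Win11" -Count 2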

Now enjoy Windows 11 on Hyper-V.

Setting up the lab environment – DNS resolution puzzle

I would prefer to have access from my local VLAN and wireless VLAN to the servers, but I didn't want to route all DNS traffic into the VMs (and depend on a testing environment).

Basically, I want host name resolution and the ability to use the domain services in the testing environment, without interrupting my other services.

The solution I went for was Conditional Forwarders.

First the Hyper-V host:

I installed the DNS Server role on Windows Server 2016.
Then I set up forwarders to Google DNS:
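
This can be done from the DNS console or from PowerShell. A sketch using the DnsServer module that ships with the role:

  # Set Google Public DNS as the server-wide forwarders
  Set-DnsServerForwarder -IPAddress 8.8.8.8, 8.8.4.4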

After that, I will add the Conditional Forwarders for my testing domain.
In my previous post I created two Domain Controllers, both hosting DNS.
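
From PowerShell, a conditional forwarder looks like this. A sketch – the zone name "lab.local" and the addresses are placeholders for my testing domain and the two guest DCs:

  # Forward queries for the testing domain to the two domain controllers
  Add-DnsServerConditionalForwarderZone -Name "lab.local" -MasterServers 192.168.254.11, 192.168.254.12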

I will then add my Hyper-V host's IP as the DNS server in my router/DHCP configuration on the needed VLANs.
When clients send requests for the testing domain, the requests will be forwarded to the Hyper-V guests (the DCs), and all other requests will go to Google DNS (8.8.8.8, 8.8.4.4) – more info: Getting started with Google Public DNS

I wanted a backup as well, so I installed Synology DNS on my Synology DS1511+.
Synology DNS supports forwarding zones, with up to 2 forwarders per zone.
That's perfect for my setup, so I added the two Hyper-V guest DCs.
Synology DNS also needs resolution services enabled, so it can forward requests to Google DNS (8.8.8.8, 8.8.4.4).

Then I will go ahead and update the DNS servers handed out by my DHCP on my normal client network and wireless clients.
This configuration offers failover/backup, because both the Hyper-V host and the Synology will be able to handle DNS requests and forwarding.

Setting up the lab environment – Hyper-V: Virtual Machines

Now to the good stuff

Usually when working with Hyper-V I use reference disks, mainly to save space on rather expensive disks. But is there much to gain when deduplication is in play? I was not sure, so I asked in Tech Konnect.

The response from Tech Konnect confirmed that when deduplication is in use, the disk-space benefit of reference disks no longer outweighs their other issues – plain disks plus deduplication is the better choice.

Since it’s not possible to create folders or groups within the Hyper-V Management Console, I will be using a naming standard: <Group> – <Generation> – <OS> – <hostname>

The first Virtual Machine will be a Domain Controller, what better way to start?

Virtual Machine Configuration:
Generation: 2
Startup memory: 4096 MB
Dynamic Memory: Enabled
Network Connection: External
Disk size: 60 GB
Boot from the ISO File – Windows Server 2016 Standard (Desktop Experience)
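
For reference, the same VM can be created from PowerShell. A sketch – the VM name, paths, ISO location, and switch name below are examples, not values from this post:

  # Create the Generation 2 VM with a new 60 GB disk
  New-VM -Name "DC01" -Generation 2 -MemoryStartupBytes 4GB `
      -NewVHDPath "D:\VMs\DC01\DC01.vhdx" -NewVHDSizeBytes 60GB -SwitchName "External"

  # Enable Dynamic Memory
  Set-VMMemory -VMName "DC01" -DynamicMemoryEnabled $true

  # Attach the installation ISO and boot from it
  $dvd = Add-VMDvdDrive -VMName "DC01" -Path "D:\ISO\WS2016.iso" -Passthru
  Set-VMFirmware -VMName "DC01" -FirstBootDevice $dvd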

The quick wins for a Generation 2 Virtual Machine

  • PXE Boot by using a standard network adapter
  • Boot from a SCSI virtual hard disk
  • Boot from SCSI virtual DVD
  • Secure Boot (enabled by default)
  • UEFI firmware support
  • Shielded Virtual Machines
  • Storage Spaces Direct
  • Hot add/removal of virtual network adapters

Note: IDE drives and legacy network adapter support have been removed.
For more info: Generation 2 Virtual Machine Overview and Hyper-V feature compatibility by Generation and Guest

The memory assigned might be a bit overkill, but for now it will be OK.
When configuring the second DC I will only assign 2048 MB.
The complete installation time to logon was 3 minutes and 9 seconds.

Both DCs can actually live with 2048 MB of RAM, so the allocation can always be cut down – but keep in mind we are using Dynamic Memory.
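
Trimming the allocation later is a one-liner in PowerShell. A sketch – the VM name "DC02" and the minimum/maximum values are my own examples:

  # 2048 MB startup with Dynamic Memory enabled
  Set-VMMemory -VMName "DC02" -DynamicMemoryEnabled $true -StartupBytes 2GB -MinimumBytes 1GB -MaximumBytes 4GB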

I will of course be setting up MDT and ConfigMgr at a later point, to streamline and gain a bit of speed.

 

Setting up the lab environment – Hyper-V

The host was installed with Windows Server 2016.
This means Hyper-V is a feature that we just need to enable – yay!

  1. Open an elevated PowerShell prompt
  2. Run the command: Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

The server will reboot automatically once the feature is installed.
NOTE: In some cases you will have to enable Intel VT in the BIOS first.
You can read more about the system requirements here: System Requirements for Hyper-V on Windows Server 2016

For the actual setup of guest machines, I will be running mostly Windows Server 2016, Windows 10, and maybe a Linux guest or two.
Don’t forget to review: Supported Windows guest operating systems

Now to the Hyper-V Switch configuration:

I am going to add an external switch, as my client is already connected to the network on the correct VLAN.
Keep in mind I have a separate USB NIC with 2 ports (USB 3.0 to Dual Port Gigabit Ethernet Adapter NIC w/ USB Port).
This means I will be able to keep my on-board primary NIC for management only and use one of the other free ports only for VMs.

  1. Open Hyper-V Manager
  2. Mark your server
  3. Click Virtual Switch Manager in the actions pane
  4. Mark External
  5. Click Create Virtual Switch
  6. Name your switch – Example: External – 254 (254 indicating the vlan)
  7. Clear the checkbox Allow management operating system to share this network adapter
  8. Mark: Enable single-root I/O virtualization (SR-IOV)
    Not familiar with SR-IOV? Read this blog post by John Howard: Everything you wanted to know about SR-IOV
  9. Click Ok
    You might get a warning that pending network configuration will prevent remote access to this computer – if you're connected to the server using another NIC, you will not be disconnected.
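
The same switch can be created in PowerShell. A sketch – the adapter name "Ethernet 3" is a placeholder for the USB NIC port reserved for VMs:

  # External switch, management OS excluded, SR-IOV enabled (can only be set at creation time)
  New-VMSwitch -Name "External - 254" -NetAdapterName "Ethernet 3" -AllowManagementOS $false -EnableIov $true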

This concludes the basic configuration of the Hyper-V host.
We installed Hyper-V and configured a switch with external access.

The next post will go into more detail on the actual Hyper-V guest installations.

Setting up the lab environment – Deduplication

The next step for the lab or so-called home data center: Installing and Configuring Deduplication

I was going to use a USB stick for the Windows Server 2016 OS.
The main reason for this: DEDUPLICATION.

I did start out with a USB stick, but due to performance issues this was changed – read the follow-up post (https://blog.thomasmarcussen.com/follow-up-on-the-home-datacenter-hardware/)

The reason for having the OS on a separate volume: Deduplication is not supported on system or boot volumes. Read more about Deduplication here: About Data Deduplication

Let’s get started

Installing and Configuring Deduplication

  1. Open an elevated PowerShell prompt
  2. Execute: Import-Module ServerManager
  3. Execute: Add-WindowsFeature -Name FS-Data-Deduplication
  4. Execute: Import-Module Deduplication

Installing Deduplication

Now we have installed Data Deduplication and it's ready for configuration.

My RAID 0 volume is D:
The volume will primarily hold Virtual Machines (Hyper-V).
I'm going to execute the following command: Enable-DedupVolume D: -UsageType HyperV

Enable Deduplication for volume

You can read more about the different usage types here: Understanding Data Deduplication

Some quick info for the usage type Hyper-V:

  • Background optimization
  • Default optimization policy:
    • Minimum file age = 3 days
    • Optimize in-use files = Yes
    • Optimize partial files = Yes
  • “Under-the-hood” tweaks for Hyper-V interop

You can start the optimization job and limit (if needed) the amount of memory consumed by the process: Start-DedupJob -Volume "D:" -Type Optimization -Memory 50

You can get the deduplication status with the command: Get-DedupStatus
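
A quick way to read the numbers from PowerShell – Get-DedupStatus shows the per-volume details, and Get-DedupVolume includes the savings rate:

  # Detailed status for the volume
  Get-DedupStatus -Volume "D:" | Format-List

  # Summary including SavedSpace and SavingsRate
  Get-DedupVolume -Volume "D:"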

The currently saved space on my volume is 46.17 GB.
That is for two ISO files, a reference machine for Windows Server 2016, and the reference disks copied to a separate folder.

More useful PowerShell cmdlets here: Deduplication Cmdlets in Windows PowerShell

I do love deduplication, especially for virtual machines, since most of the basic data is the same.
The disks are also rather expensive, so getting the most out of them is preferred.

 

Follow up on the home datacenter hardware

It’s time for a small update – the previous post is available here: https://blog.thomasmarcussen.com/new-lab-home-datacenter/

The datacenter has been running for about a week now – quite good… but…

I’ve been using the Samsung USB stick as the OS drive – Samsung USB 3.0 Flash Drive FIT 32GB.
It has fast reads and not-that-slow writes; according to Samsung: up to 130 MB/s.

The week passed with setting up and installing VMs – and actually using the VMs, etc.
But when installing Windows Updates on the Hyper-V host, installing features/roles, or doing any kind of configuration, it would slow down to the point of being useless, or freeze.

Running a full Windows Update took about 2 days to reach a fully patched level.
During that time the host was useless, as in not responding.

I ran a WinSat drive test on the Samsung USB Flash Drive:

Random 16.0 Read: 8.87 MB/s
Random 16.0 Write: 5.45 MB/s

Random reads and writes seem pretty low.

The sequential numbers look a bit better:

Sequential 64.0 Read: 76.89 MB/s
Sequential 64.0 Write: 86.95 MB/s

The commands used with winsat:
Winsat disk -drive C: -ran -write (Random 16.0 Write)
Winsat disk -drive C: -ran -read (Random 16.0 Read)
Winsat disk -drive C: -seq -write (Sequential 64.0 Write)
Winsat disk -drive C: -seq -read (Sequential 64.0 Read)

So I decided to replace the Samsung USB 3.0 Flash Drive FIT as the OS drive.

The new hardware chosen ended up being:

1 x StarTech.com USB 3.0 to M.2 SATA External SSD Enclosure with UASP
1 x Samsung 850 EVO M.2 2280 SSD – 250GB

SM2NGFFMBU33 – StarTech.com USB 3.0 to M.2 SATA External SSD Enclosure with UASP
MZ-N5E250BW – Samsung 850 EVO M.2 2280 SSD – 250GB
NOTE: the StarTech.com enclosure does not support NVMe, so I chose an M.2 SATA SSD.

I know that StarTech also has USB 3.1 enclosures, but I really do want to keep the USB 3.1 port free for an additional RAID enclosure when/if needed. Probably a StarTech enclosure, but I’m not sure yet (USB 3.1 (10Gbps) External Enclosure for Dual 2.5″ SATA Drives) – still looking for a nice USB 3.1 enclosure that supports M.2 NVMe…

Samsung states the specs for the new disk as:

  • Up to 500 MB/s Sequential Write
  • Up to 540 MB/s Sequential Read

The actual performance test on the Samsung 850 EVO M.2 2280 SSD:

Random 16.0 Read: 276.51 MB/s
Random 16.0 Write: 271.37 MB/s

Sequential 64.0 Read: 388.85 MB/s
Sequential 64.0 Write: 383.71 MB/s

So in any case it’s quite a performance boost for the OS disk.

 

Unified Extensible Firmware Interface (UEFI)


For many years BIOS has been the industry standard for booting a PC. BIOS has served us well, but it is time to replace it with something better. UEFI is the replacement for BIOS, so it is important to understand the differences between BIOS and UEFI. In this section, you learn the major differences between the two and how they affect operating system deployment.

Introduction to UEFI

BIOS has been in use for approximately 30 years. Even though it clearly has proven to work, it has some limitations, including:

  • 16-bit code
  • 1 MB address space
  • Poor performance on ROM initialization
  • MBR maximum bootable disk size of 2.2 TB

As the replacement to BIOS, UEFI has many features that Windows can and will use.

With UEFI, you can benefit from:

  • Support for large disks. UEFI requires a GUID Partition Table (GPT) based disk, which means a limitation of roughly 16.8 million TB in disk size and support for more than 100 primary partitions.
  • Faster boot time. UEFI does not use INT 13, and that improves boot time, especially when it comes to resuming from hibernate.
  • Multicast deployment. UEFI firmware can use multicast directly when it boots up. In WDS, MDT, and Configuration Manager scenarios, you need to first boot up a normal Windows PE in unicast and then switch into multicast. With UEFI, you can run multicast from the start.
  • Compatibility with earlier BIOS. Most of the UEFI implementations include a compatibility support module (CSM) that emulates BIOS.
  • CPU-independent architecture. Even if BIOS can run both 32- and 64-bit versions of firmware, all firmware device drivers on BIOS systems must also be 16-bit, and this affects performance. One of the reasons is the limitation in addressable memory, which is only 64 KB with BIOS.
  • CPU-independent drivers. On BIOS systems, PCI add-on cards must include a ROM that contains a separate driver for all supported CPU architectures. That is not needed for UEFI because UEFI has the ability to use EFI Byte Code (EBC) images, which allow for a processor-independent device driver environment.
  • Flexible pre-operating system environment. UEFI can perform many functions for you. You just need a UEFI application, and you can perform diagnostics and automatic repairs, and call home to report errors.
  • Secure boot. Windows 8 and later can use the UEFI firmware validation process, called secure boot, which is defined in UEFI 2.3.1. Using this process, you can ensure that UEFI launches only a verified operating system loader and that malware cannot switch the boot loader.
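
If you want to check which mode a given machine booted in, PowerShell can tell you. A quick sketch (Get-ComputerInfo requires PowerShell 5.1 or later; Confirm-SecureBootUEFI needs an elevated prompt on a UEFI system):

  # Reports Uefi or Bios
  (Get-ComputerInfo).BiosFirmwareType

  # Returns True when Secure Boot is enabled; throws an error on legacy BIOS machines
  Confirm-SecureBootUEFI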

Versions

UEFI Version 2.3.1B is the version required for Windows 8 and later logo compliance. Later versions have been released to address issues; a small number of machines may need to upgrade their firmware to fully support the UEFI implementation in Windows 8 and later.

Hardware support for UEFI

In regard to UEFI, hardware is divided into four device classes:

  • Class 0 devices. This is the UEFI definition for a BIOS, or non-UEFI, device.
  • Class 1 devices. These devices behave like a standard BIOS machine, but they run EFI internally. They should be treated as normal BIOS-based machines. Class 1 devices use a CSM to emulate BIOS. These older devices are no longer manufactured.
  • Class 2 devices. These devices have the capability to behave as a BIOS- or a UEFI-based machine, and the boot process or the configuration in the firmware/BIOS determines the mode. Class 2 devices use a CSM to emulate BIOS. These are the most common type of devices currently available.
  • Class 3 devices. These are UEFI-only devices, which means you must run an operating system that supports only UEFI. Those operating systems include Windows 8, Windows 8.1, Windows Server 2012, and Windows Server 2012 R2. Windows 7 is not supported on these class 3 devices. Class 3 devices do not have a CSM to emulate BIOS.

Windows support for UEFI

Microsoft started with support for EFI 1.10 on servers and then added support for UEFI on both clients and servers.

With UEFI 2.3.1, there are both x86 and x64 versions of UEFI. Windows 10 supports both. However, UEFI does not support cross-platform boot. This means that a computer that has UEFI x64 can run only a 64-bit operating system, and a computer that has UEFI x86 can run only a 32-bit operating system.

How UEFI is changing operating system deployment

There are many things that affect operating system deployment as soon as you run on UEFI/EFI-based hardware. Here are considerations to keep in mind when working with UEFI devices:

  • Switching from BIOS to UEFI in the hardware is easy, but you also need to reinstall the operating system because you need to switch from MBR/NTFS to GPT/FAT32 and NTFS.
  • When you deploy to a Class 2 device, make sure the boot option you select matches the setting you want to have. It is common for old machines to have several boot options for BIOS but only a few for UEFI, or vice versa.
  • When deploying from media, remember the media has to be FAT32 for UEFI, and FAT32 has a file-size limitation of 4 GB.
  • UEFI does not support cross-platform booting; therefore, you need to have the correct boot media (32- or 64-bit).