About Thomas.Marcussen

Technology Architect & Evangelist, Microsoft Trainer and Everything System Center Professional with a passion for Technology

Using SCCM CI Baseline to check for expiring user certificates

The topic is almost self-explanatory.

You need to monitor specific user-based certificates to avoid a situation where they have already expired.

You can add this to your daily security compliance checklist.

Prerequisites for running CIs can be found here: Compliance Baseline prerequisites

  1. Create Configuration Item

Go to Assets and Compliance, Compliance Settings, Configuration Items, then right-click and select Create Configuration Item:

Create Configuration Item

Provide the name CI – Script – USER CERT Expiration check, leave the configuration item type as Windows and press Next:
Configuration Item Wizard

Optionally you can provide a description that gives an overview of the configuration item and other relevant information that helps to identify it in the Configuration Manager console.

Select the client operating systems that will assess this configuration item for compliance and click Next.
client operating systems that will assess this configuration item for compliance

To create the setting for the configuration item, click New:
Create Configuration Item Wizard

Type in the name CI – Script, select Script from the Setting type drop-down, and set the Data type to String.

There are two options for specifying a script:

– Discovery Script

– Remediation Script

Remediation is not handled in this post.

To add the discovery script used to evaluate compliance, click Add Script.

Please note that this script needs to run in the logged-on user context, so check “Run scripts by using the logged on user credentials”.

Create Setting

Select Windows PowerShell as the script language and type in the script (see the attached USER_CERT_Expiration_Discovery.ps1) in the Script field:
Edit Discovery Scripts

#
$Compliance = 'Compliant'

# Find the monitored certificate by thumbprint in the current user's store
# and check whether it expires within the next 30 days
$Check = Get-ChildItem -Path Cert:\CurrentUser -Recurse |
    Where-Object { $_.Thumbprint -eq '245c97df7514e7cf2df8be72ae957b9e04741e85' } |
    Where-Object { $_.NotAfter -le (Get-Date).AddDays(30) }

If ($Check) { $Compliance = 'NonCompliant' }

$Compliance
#
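
To find the thumbprint of the certificate you want to monitor (so you can replace the example thumbprint above with your own), you can list the certificates in the current user’s store, for example:

Get-ChildItem -Path Cert:\CurrentUser\My | Select-Object Thumbprint, Subject, NotAfter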


and click OK

Click Next

Specify settings for this operating system

After the script is in place, click the “Compliance Rules” tab. Now a compliance rule needs to be created. This rule determines how compliance is reported once the script runs on a computer (based on the value returned, a machine is reported as either Compliant or NonCompliant).

 

Click on New

Specify compliance rules for this operating system

Type in the compliance rule name and click on Browse:
Specify rules to define compliance conditions for this setting

Select the name of the configuration setting that was just created (if not already selected) and then click Select:
Select a setting for this rule

For Rule type, select Value and then set the condition so that the value returned by the script Equals Compliant.

Click OK

Click Next
Use compliance rules to specify the condition that make a configuration item setting compliant

The next screen presents a summary of the settings; if any changes are needed, you can go back and make them here. Click Next.

create an operating system configuration item with the following settings

The Configuration Item is now ready.
The Create Configuration Item Wizard completed successfully

The next step is to create a Configuration Baseline.

  2. Create Configuration Baseline

Right-click Configuration Baselines and select Create Configuration Baseline.
Create Configuration Baseline

Type in the name of the configuration baseline: CB – Script – USER CERT Expiration check. Click Add and select Configuration Items from the drop-down menu.
Specify general information about this configuration baseline

Please make sure that Purpose is set to Required!

Select the configuration item you just created and click OK. This finishes creating the configuration baseline.

Add Configuration Item

Now it is time to deploy this baseline to the relevant user collection(s).

  3. Deploy the Configuration Baseline

    Go to the configuration baseline, right-click and select Deploy.
    Deploy Configuration Baseline

Select the configuration baseline CB – Script – USER CERT Expiration check.

Browse and point it to the targeted user collection (it is recommended to run it against a limited collection for testing before deploying to production).

Change the evaluation schedule as per your requirements (if this check is critical for your environment, running this CB once a day in production is probably recommended).

Again, the key thing here is to be sure that you deploy this CB to users and not to your systems!

Select the configuration baseline that you want to deploy to a collection

Click OK

Note: When the configuration baseline is deployed, allow up to about two hours after the scheduled start time for it to be evaluated for compliance.

  4. Verify that a device has evaluated the Configuration Baseline

To check it on a Windows PC client (as a general recommendation, do this for all targeted OS client types):

On a device, go to Control Panel, System and Security and open the Configuration Manager applet. In the Configurations tab you’ll see which Configuration Baselines the client will evaluate on their specific schedules. Select the baseline and click “Evaluate”, “Refresh” and then “View Report”.
As shown in the pictures below, the Configuration Baseline was evaluated as either Compliant or Non-Compliant.
Configuration Manager Properties

Report view

Report view, non-compliant
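
If you want to trigger the evaluation from PowerShell instead of the Configuration Manager applet, one commonly used community approach is to call the client’s compliance WMI class. A minimal sketch, run on the client, using the baseline name created earlier (treat this as an illustration rather than an official interface):

# Find the deployed baseline on the client and trigger an evaluation
$Baseline = Get-WmiObject -Namespace 'root\ccm\dcm' -Class SMS_DesiredConfiguration |
    Where-Object { $_.DisplayName -eq 'CB – Script – USER CERT Expiration check' }
([wmiclass]'\\.\root\ccm\dcm:SMS_DesiredConfiguration').TriggerEvaluation($Baseline.Name, $Baseline.Version)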

 

Compliance baselines prerequisites for SCCM

PREREQUISITES

  1. Run Compliance Baselines on clients using SCCM

This setting is located in the Administration workspace under Overview ⇒ Client Settings ⇒ Default Client Agent Settings. Then right-click and select Properties. This will open the properties window for the client settings. By setting the “Enable compliance evaluation on clients” option to “Yes”, you enable this option in the default settings. The default schedule for evaluation is every 7 days.

You can adjust this schedule as necessary for your environment, including using a custom schedule that will allow you more control over when it runs.
The default schedule will typically be adequate for most environments.

You can also modify the default client settings, create new custom client settings, or modify existing custom client settings. You can create or modify custom client settings when you want to apply a group of client settings to specific collections.
Client Settings - Default settings

 

  2. Running PowerShell scripts on clients using SCCM:

    1. Sign the script with your company’s trusted certificate, or
    2. Set the PowerShell execution policy to “Bypass”.

    If you are not going to sign your scripts, go to Administration -> Client Settings, select the default (or create a new) Client Settings and set the PowerShell execution policy to “Bypass” in the “Computer Agent” section of the client settings. This allows unsigned PowerShell scripts to run properly when they are executed by the Computer Agent. If you don’t use the default client settings, make sure the custom client settings you created are also deployed to the collection you will be checking compliance on.
    Client Settings - Powershell execution Policy
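
    To see which execution policies are currently in effect on a client (per scope), you can run, for example:

    Get-ExecutionPolicy -List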

 

  3. Reporting Services Point SCCM role

    This role should be installed in your environment for reporting.
    It is assumed that it is already installed, as reporting is a core requirement for the majority of SCCM functions.

Smart Card device integration into Windows 10

All the joys of Windows 10….. now on 1709

Last week, after upgrading Windows 10, I came across this nice new integration for Smart Cards (tokens).

Windows 10 now has support for eTokens (SafeNet tokens).
I was very pleased with this update; it will save me yet another application to install.
I’ve been using the SafeNet application from Gemalto and it has served me well for several years. So, time for a change: the integrated Smart Card support in Windows 10 works perfectly for me.

I am using it with the following:

And my tokens? I ALWAYS use DigiCert for code-signing certificates :)

PS: A new version of Access Director Enterprise is on its way, signed and released to the web.

Stay tuned!

Bad Rabbit Ransomware

A new ransomware has seen the light.

Bad Rabbit ransomware is currently roaming Eastern European countries.

Bad Rabbit is mainly delivered using a fake Flash update.
This means we are looking at a regular drive-by attack, using fake updates/malicious software from websites to get it started.

Secure your clients now!
1. Blacklist the hashes
2. Block the files
3. Lock the registry entries
4. Remove local administrative privileges; if you can’t, limit them and monitor using: Access Director Enterprise

Bad Rabbit IOCs:

Hashes:

install_flash_player.exe: 630325cac09ac3fab908f903e3b00d0dadd5fdaa0875ed8496fcbb97a558d0da
infpub.dat: 579fd8a0385482fb4c789561a30b09f25671e86422f40ef5cca2036b28f99648
cscc.dat (dcrypt.sys): 0b2f863f4119dc88a22cc97c0a136c88a0127cb026751303b045f7322a8972f6 
dispci.exe: 8ebc97e05c8e1073bda2efb6f4d00ad7e789260afa2c276f0c72740b838a0a93

Files:

C:\Windows\infpub.dat
C:\Windows\System32\Tasks\drogon
C:\Windows\System32\Tasks\rhaegal
C:\Windows\cscc.dat
C:\Windows\dispci.exe

Registry entries:

HKLM\SYSTEM\CurrentControlSet\services\cscc
HKLM\SYSTEM\CurrentControlSet\services\cscc\Type	1
HKLM\SYSTEM\CurrentControlSet\services\cscc\Start	0
HKLM\SYSTEM\CurrentControlSet\services\cscc\ErrorControl	3
HKLM\SYSTEM\CurrentControlSet\services\cscc\ImagePath	cscc.dat
HKLM\SYSTEM\CurrentControlSet\services\cscc\DisplayName	Windows Client Side Caching DDriver
HKLM\SYSTEM\CurrentControlSet\services\cscc\Group	Filter
HKLM\SYSTEM\CurrentControlSet\services\cscc\DependOnService	FltMgr
HKLM\SYSTEM\CurrentControlSet\services\cscc\WOW64	1
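
As a starting point for your own checks, here is a minimal PowerShell sketch that looks for the file and registry IOCs listed above on the local machine (detection only, no blocking or removal):

# Check the local machine for the Bad Rabbit file and registry artifacts listed above (detection only)
$IocPaths = @(
    'C:\Windows\infpub.dat',
    'C:\Windows\System32\Tasks\drogon',
    'C:\Windows\System32\Tasks\rhaegal',
    'C:\Windows\cscc.dat',
    'C:\Windows\dispci.exe',
    'HKLM:\SYSTEM\CurrentControlSet\services\cscc'
)

$Found = $IocPaths | Where-Object { Test-Path -Path $_ }

if ($Found) { 'Possible Bad Rabbit artifacts found:'; $Found } else { 'No Bad Rabbit artifacts found' }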

Network Activity:

Local & Remote SMB Traffic on ports 137, 139, 445
caforssztxqzf2nm.onion

Files extensions targeted for encryption:

.3ds .7z .accdb .ai .asm .asp .aspx .avhd .back .bak .bmp .brw .c .cab .cc .cer .cfg .conf .cpp .crt .cs .ctl .cxx .dbf .der .dib .disk .djvu .doc .docx .dwg .eml .fdb .gz .h .hdd .hpp .hxx .iso .java .jfif .jpe .jpeg .jpg .js .kdbx .key .mail .mdb .msg .nrg .odc .odf .odg .odi .odm .odp .ods .odt .ora .ost .ova .ovf .p12 .p7b .p7c .pdf .pem .pfx .php .pmf .png .ppt .pptx .ps1 .pst .pvi .py .pyc .pyw .qcow .qcow2 .rar .rb .rtf .scm .sln .sql .tar .tib .tif .tiff .vb .vbox .vbs .vcb .vdi .vfd .vhd .vhdx .vmc .vmdk .vmsd .vmtm .vmx .vsdx .vsv .work .xls .xlsx .xml .xvd .zip

 

Enrique Lima Community Contribution Award – Submissions open until July 30, 2017

Submitting Nominations: If you know someone who is deserving of this Award, please send an email to [email protected] and cover the following criteria in your message:

  • Active MCT or MCT Alumni
  • Is the person actively teaching Microsoft technologies? How often?
  • Active in the MCT community
  • Demonstrates enthusiasm and a positive attitude with regards to the program and the community
  • Demonstrates passion for mentoring new and existing MCTs as well as others in the technical community
  • Willingness to volunteer within and outside of the MCT community. Examples could include the following:
    • Volunteering at a school for Hour of Code
    • Local code camps, user groups
    • Regional events
    • Large events, like Ignite and Build, Envision and WPC
    • Online engagements (blogs, forums, etc.)

About the Award:

The Enrique Lima Award is designed to recognize and celebrate the outstanding work of Microsoft Certified Trainers in the MCT Community, being awarded only to those who show knowledge, passion, and commitment to the Microsoft community as a whole, and specifically to the MCT program. This Award was established in memory of Enrique Lima: a husband, a father of two, a Microsoft Certified Trainer (MCT) and Regional Lead, Microsoft Most Valuable Professional (MVP), Krewe member, SharePoint Community leader, and member of the Learn on Demand Systems (LODS) team. Enrique always went above and beyond what was expected, building up local, regional and international communities, and every day showed us all what the spirit of the MCT community is all about.

Setting up the lab environment – DNS resolution puzzle

I would prefer to have access from my local VLAN and wireless VLAN to the servers,
but I didn’t want to send all DNS traffic into the VMs (and depend on a testing environment).

Basically I want host resolution and the ability to utilize the domain services in the testing environment, without interrupting my other services.

The solution I went for was using Conditional Forwarders.

First the Hyper-V host:

I installed the DNS Server role on Windows Server 2016
and set up forwarders to Google DNS:

After that I will add the Conditional Forwarders for my testing domain.
In my previous post I created 2 Domain Controllers, both hosting DNS.
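
For reference, the same configuration can be scripted with the DnsServer PowerShell cmdlets. A minimal sketch, where the lab domain name and DC addresses are placeholders rather than my actual values:

# Install the DNS Server role and management tools on the Hyper-V host (Windows Server 2016)
Install-WindowsFeature -Name DNS -IncludeManagementTools

# Forward everything that is not answered locally to Google Public DNS
Add-DnsServerForwarder -IPAddress 8.8.8.8, 8.8.4.4

# Conditional forwarder: send queries for the testing domain to the two lab DCs
# ('corp.lab.local' and the IP addresses are placeholder values)
Add-DnsServerConditionalForwarderZone -Name 'corp.lab.local' -MasterServers 192.168.100.11, 192.168.100.12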

I will then add my Hyper-V host’s IP to the DNS server settings of my router/DHCP on the needed VLANs.
When clients send requests for the testing domain, they will be forwarded to the Hyper-V guests (DCs), and all other requests will go to Google DNS (8.8.8.8, 8.8.4.4) – more info: Getting started with Google Public DNS

I did want a backup as well, so I installed Synology DNS on my Synology DS1511+.
Synology DNS supports forwarding zones, with up to 2 forwarders per zone.
That’s perfect for my setup, so I added the 2 Hyper-V guest DCs.
The Synology DNS would of course also need resolution services enabled, so it can forward requests to Google DNS (8.8.8.8, 8.8.4.4).

Then I will go ahead and update the DNS servers handed out by my DHCP on my normal client network and wireless clients.
This configuration offers failover/backup, because both the Hyper-V host and the Synology will be able to handle DNS requests and forwarding.

Setting up the lab environment – Hyper-V: Virtual Machines

Now to the good stuff

Usually when working with Hyper-V I use reference disks, mainly to save space on rather expensive disks. But is there much to gain when using deduplication? I was unsure, so I asked in Tech Konnect.

The response from Tech Konnect confirmed that with deduplication enabled, the space savings outweigh the benefits of reference disks, so you can skip the issues that come with them.

Since it’s not possible to create folders or groups within the Hyper-V Management Console, I will be using a naming standard: <Group> – <Generation> – <OS> – <hostname>

The first Virtual Machine will be a Domain Controller, what better way to start?

Virtual Machine Configuration:
Generation: 2
Startup memory: 4096 MB
Dynamic Memory: Enabled
Network Connection: External
Disk size: 60 GB
Boot from the ISO File – Windows Server 2016 Standard (Desktop Experience)
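
The same virtual machine can be created with PowerShell. A minimal sketch based on the configuration above (the VM name follows the naming standard, while the paths and ISO location are example values):

# Create the Generation 2 VM with the configuration listed above (paths and names are examples)
New-VM -Name 'DC - G2 - WS2016 - DC01' -Generation 2 -MemoryStartupBytes 4096MB `
    -NewVHDPath 'D:\Hyper-V\DC01\DC01.vhdx' -NewVHDSizeBytes 60GB -SwitchName 'External - 254'

# Enable dynamic memory
Set-VMMemory -VMName 'DC - G2 - WS2016 - DC01' -DynamicMemoryEnabled $true

# Attach the Windows Server 2016 ISO and boot from it
Add-VMDvdDrive -VMName 'DC - G2 - WS2016 - DC01' -Path 'D:\ISO\WindowsServer2016.iso'
Set-VMFirmware -VMName 'DC - G2 - WS2016 - DC01' -FirstBootDevice (Get-VMDvdDrive -VMName 'DC - G2 - WS2016 - DC01')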

The quick wins for a Generation 2 Virtual Machine

  • PXE Boot by using a standard network adapter
  • Boot from a SCSI virtual hard disk
  • Boot from SCSI virtual DVD
  • Secure Boot (enabled by default)
  • UEFI firmware support
  • Shielded Virtual Machines
  • Storage spaces direct
  • Hot add/removal of virtual network adapters

Note: IDE drives and legacy network adapter support have been removed.
For more info: Generation 2 Virtual Machine Overview and Hyper-V feature compatibility by Generation and Guest

The memory assigned might be a bit overkill, but for now it will be OK.
When configuring the second DC I will only assign 2048 MB.
The complete installation time to logon was 3 minutes and 9 seconds.

Both DCs can actually live with 2048 MB RAM, so it can always be cut down, but keep in mind we are using Dynamic Memory allocation.
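
If you want to cut the memory down later, a quick sketch of adjusting the dynamic memory range for the second DC (the values are just examples):

# Lower the startup memory and set a dynamic memory range for the second DC (example values)
Set-VMMemory -VMName 'DC - G2 - WS2016 - DC02' -DynamicMemoryEnabled $true `
    -StartupBytes 2048MB -MinimumBytes 1024MB -MaximumBytes 4096MB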

I will of course be setting up MDT and ConfigMgr at a later point, to streamline and gain a bit of speed.

 

Setting up the lab environment – Hyper-V

The host was installed with Windows Server 2016.
This means Hyper-V is a feature that we just need to enable – yay!

  1. Open an elevated PowerShell prompt
  2. Run the command: Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

The command will automatically reboot the server once the feature is installed.
NOTE: In some cases you will have to enable Intel-VT in BIOS.
You can read more about the system requirements here: Systems Requirements for Hyper-V on Windows Server 2016
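
If you are unsure whether the hardware prerequisites are met, you can query them from PowerShell before installing. A small sketch using the built-in Get-ComputerInfo cmdlet (PowerShell 5.1 on Windows Server 2016):

# Show the Hyper-V requirement checks (SLAT, VM Monitor Mode extensions, virtualization enabled in firmware, DEP)
Get-ComputerInfo -Property 'HyperV*'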

For the actual setup of guest machines, I will be running mostly Windows Server 2016, Windows 10 and maybe a Linux guest or two.
Don’t forget to review: Supported Windows guest operating systems

Now to the Hyper-V Switch configuration:

I am going to add an external switch, as my client is already connected to the network on the correct VLAN.
Keep in mind I have a separate USB NIC with 2 ports (USB 3.0 to Dual Port Gigabit Ethernet Adapter NIC w/ USB Port).
This means I will be able to keep my on-board primary NIC for management only and use one of the other free ports only for VMs. A PowerShell equivalent of the switch creation is sketched after the steps below.

  1. Open Hyper-V Manager
  2. Mark your server
  3. Click Virtual Switch Manager in the actions pane
  4. Mark External
  5. Click Create Virtual Switch
  6. Name your switch – Example: External – 254 (254 indicating the vlan)
  7. Remove the checkbox in Allow management operating system to share this network adapter
  8. Mark: Enable single-root I/O virtualization (SR-IOV)
    Not familiar with SR-IOV? Read this blog post by John Howard: Everything you wanted to know about SR-IOV
  9. Click Ok
    You might get a warning that pending network configuration will prevent remote access to this computer – if you’re connected to the server using another NIC, you will not be disconnected.
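
As mentioned above, the same external switch can also be created with PowerShell. A minimal sketch, where the adapter name 'Ethernet 3' is a placeholder for the USB NIC port dedicated to VMs:

# Create the external switch on the dedicated NIC, without sharing it with the management OS, and with SR-IOV enabled
New-VMSwitch -Name 'External - 254' -NetAdapterName 'Ethernet 3' -AllowManagementOS $false -EnableIov $true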

This concludes the basic configuration of the Hyper-V host.
We installed Hyper-V and configured a switch with external access.

The next post will go into more detail on the actual Hyper-V guest installations.

Where is my cloud key?

During VLAN configuration for my new lab (see previous post Home Data Center)
I had to change some VLANs, and for some reason my Hybrid Cloud Device Management controller got “lost in translation”.

The setup:
1 x Mikrotik CCR1036-12G-4S-EM
1 x UniFi switch 16 150w
1 x UniFi Cloud Key

It all starts with the adoption of devices onto the Cloud Key – no problems there.
But when your Cloud Key is lost in a VLAN with no connectivity or access to other devices, then it’s back to basics.

My problem was that I deleted the valid networks/VLANs added on ports – BIG mistake!
So nothing really worked and I couldn’t change anything, but tuning the VLANs on the router a bit seemed to open things up.

I was able to SSH into the switch (It’s running BusyBox)

From there we can SSH to localhost on port 2222.
Press any key and you will get the warning: “The changes may break controller settings and only be effective until reboot.”

It will not give a response and will be awaiting a keystroke before you’re ready to go.

Keep in mind all configuration will be lost once the switch is connected back and provisioned by the Cloud Key.

To enter privileged mode, type: Enable
To enter Global Config mode, type: Configure

And now we can configure the entire switch (even without the controller, including more advanced settings).

In this case:
Select an interface (port 2): interface 0/2
Add a VLAN to the interface (port 2): interface vlan participation include 22
Your lost Cloud Key should now be back on the correct VLAN.
If you just need to bring the switch back onto the management network, you can use: network mgmt_vlan 1
Note: 1 being the VLAN you want to participate in.

NOTE:
If you need multiple VLANs on 1 port – maybe with a UniFi AP AC Pro – you will see that the AP doesn’t have a configuration option for a management VLAN, so we need to configure the native VLAN for the device. It only requires 3 steps, but it can be a bit confusing to configure and adds a bit more complexity.

– Define Networks/VLANs in Controller Settings
– Manage or create Network Profiles for the switch in the Switch Configuration
– Assign Networks/VLANs or Profiles to the Port(s)

There is a nice explanation here: A-non-expert-Guide-to-VLAN-and-Trunks-in-Unifi-Switches