
Building a Storage Box – Part 3 – Completed

Hardware

I installed the NVMe-to-PCIe x4 adapter with the 256 GB drive, booted up the computer, and crossed my fingers. Fortunately, Windows discovered the drive, and I could initialize, partition, and format it. Its performance was spectacular compared to the other drives in the box.

The next goal was to set the NVMe drive as the primary boot device. After some time in the BIOS, I determined that the device wouldn't support the desired configuration and I would have to boot from the SATA bus. The primary drive in this box is an old Samsung SSD II drive, so its performance pales in comparison, but it is still superior to spinning drives. Good enough.

If you recall, I was planning on using the optical NIC and had purchased the requisite parts to connect it to my managed switch. Unfortunately, the motherboard only has one PCIe x4 slot. (Note: check your hardware requirements and capabilities before making purchases.) This meant I had a decision to make: have a random spare drive online, or have a fancy NIC that wouldn't provide any real enhancement?

I opted to keep the drive and shelve the optical NIC. I kept the parts I had purchased so I could someday use the card because it is cool. And the optical cables would look really slick in my box o’ cables.

Operating System

I tried. A little…

Having built the box on Windows Server first, I wanted to try out FreeNAS, as it is a highly regarded solution for network storage. Installing it was easy enough, since I had a handful of USB 3 keys kicking around.

Once I got logged in and did some poking around, I was impressed by the suite of features available. I also realized that I was a bit lost in the UI, given the depth of features FreeNAS provides. Seeing as I had already spent more time and money on the project than I had originally intended, I didn't want to invest additional time to learn a new system. That's why I decided to go back to the familiarity of Windows Server.

Backup and Final Configurations

When testing out Windows Server Backup, I recalled there was an option to use a local drive as the destination when scheduling recurring backups. As this method is the recommended setting in the wizard, and I wanted to feel like I was getting some additional functionality out of the NVMe drive, I decided to use the drive for this purpose.

Who am I to argue with a recommendation?

Be warned: if you select a drive as the backup target, its drive letter is removed, preventing the drive from being used for any other purpose.

No drive letter on the 238 GB drive. Also, dedup is sweet!
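For reference, the scheduled, dedicated-disk backup the wizard builds can also be created from an elevated prompt with wbadmin. This is just a sketch: the disk identifier below is a placeholder for the GUID the first command reports for the NVMe drive, and the schedule time is an example.

# List disks that can be used as backup targets; note the NVMe drive's disk identifier
wbadmin get disks

# Schedule a nightly 9:00 PM backup of C: plus the system state to the dedicated disk
wbadmin enable backup -addtarget:{DiskIdentifier} -schedule:21:00 -include:C: -systemState -quiet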

All in all, I am happy with this storage box. It doesn't have the bells and whistles I had intended, but it is providing plenty of redundant storage and backing itself up. I am watching the 4 TB drive on Amazon for a sale; if I can score another one for cheap, I wouldn't mind adding a third drive to the array. I may also enable Hyper-V and host a few small VMs.


Building a Storage Box – Part 2 – Testing Windows Server Backup to Restore a Storage Pool Configuration

Note: Don't do anything you are about to read in production without testing it extensively. I make no guarantees about the best-practice-ness or supportability of the procedures that follow.

After poking around the internet a bit, I found a Windows Server feature called "Windows Server Backup" that is the prescribed method for backing up system states and volumes on Windows Server. This feature is not installed by default, so I had to install it myself.

Where the Feature resides
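If you'd rather skip Server Manager, the same install is a one-liner in PowerShell:

# Install the Windows Server Backup feature (this also brings the wbadmin command-line tool)
Install-WindowsFeature -Name Windows-Server-Backup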

Once that was installed, I created a UNC share on a separate server and then created a One Time Backup from Windows Server Backup on the storage box.

This is all I backed up.

The reason I didn't perform a full backup is that I wanted to see if I could recover the storage pool, virtual disk, and configs from a System State restore in the event my primary volume dies.
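wbadmin exposes the same engine from the command line, so the equivalent one-time backup can be sketched like this; the server and share names are placeholders for my lab paths:

# One-time backup of the system state plus the C: volume to a UNC share
wbadmin start backup -backupTarget:\\BackupServer\StorageBoxBackup -include:C: -systemState -quiet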

To test the process, I used MEMCM (I’m still getting used to typing that) to deploy a Bare Metal task sequence to the server.

Partition and Format, here we come!

Post imaging, I only had the primary volume available. The storage volume was not present. I had expected this.

Nothing up my sleeve, especially a storage volume.

I went through the process of installing Windows Server Backup so I could apply my System State Backup.

I was greeted by various warnings that I was violating best practices for restores. It turns out that restoring directly from a network path isn't recommended, because if network issues occur mid-restore, the drive could get borked. Another warning didn't like that I was applying the system state to a "new" computer, or something along those lines. I forgot to screen grab it. Sorry.

Anyway, ignoring the warnings, I forged on ahead.

Restoring the System State
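For the curious, the command-line equivalent of that restore looks roughly like the following; the UNC path is again a placeholder, and the version identifier comes from the output of the first command:

# List the backup versions available on the share
wbadmin get versions -backupTarget:\\BackupServer\StorageBoxBackup

# Restore the system state from the chosen version (identifiers look like MM/DD/YYYY-HH:MM)
wbadmin start systemstaterecovery -version:01/15/2020-21:00 -backupTarget:\\BackupServer\StorageBoxBackup -quiet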

Admittedly, this process took a lot longer than I thought it would. I don't know exactly how long, because I eventually stepped away. Upon completion, it looked like the system hung after Explorer shut down. I had to hard power off the box. Yes, that made me really nervous.

After a few reboots, I was thankfully greeted with a login prompt. Unfortunately, the trust between the computer and the domain was broken. I suppose this was to be expected, since I applied an older state to a newly domain-joined computer. Anyway, that was easy enough to fix: I just left and rejoined the domain.
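As an aside, a full leave-and-rejoin isn't strictly required for a broken trust. PowerShell can repair the secure channel in place; a sketch, run on the affected box:

# Verify the secure channel with the domain; -Repair resets the machine account password in place
Test-ComputerSecureChannel -Repair -Credential (Get-Credential)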

Once I got to the desktop, I opened up Server Manager to see the results of the experiment!

Success! Or as they say in Klingon, Qapla’

The volume was present, mounted, and contained all of the files I copied to the array.

So I was able to prove that I could recover the storage pool and virtual disk, which tells me I should be able to use this technique to protect against data loss if the primary drive fails. That's good to know!

The next steps in the storage box odyssey are to install the NVMe drive in the PCIe adapter and see if the computer will acknowledge its existence – and perhaps boot from it! The adapter came in today, but I just haven't had a chance to play with it yet.

I should also acknowledge that I heard from a few folks on Twitter about my eschewing cloud backup. Everyone I heard from suggested Backblaze. Without having tried their service yet, I must say that $6 per month for unlimited storage is really cheap, and I'll very likely try them out once this experiment has concluded.

Until the next installment…


Building a Storage Box – Part 1

While browsing through the Black Friday 2019 deals online, I found a good deal on 4 TB NAS drives. Still in a turkey-induced haze from the night before, I added two drives to the cart and purchased them. Once clarity hit me, I realized I needed to pick which of the parts computers in my basement would become the new storage computer.

After running my inventory of parts computers through my mind, I settled on an old HP 600-something-or-other that was scavenged from a trash pile. It had a quad-core processor, at least 8 GB of RAM, and a dual fiber-optic NIC. The case would absolutely hold the two NAS drives, but since this box is a small form factor, I didn't have another bay open for a system drive, unless I wanted to rip out the DVD drive (which I didn't want to do).

The HP box. Might I just add how much I hate proprietary drive mounting brackets.

I ended up purchasing an NVMe drive and a PCIe adapter to provide my new system drive, knowing that the case could accommodate it, and hoping that I would be able to boot from it.

Since my main network switch has two SFP ports, and the NIC in the computer had two as well, I also ordered a bag of optical network cables, hoping that configuring this would be easy. Networking isn't my specialty, and I've never set up fiber before, so this is all "fingers crossed" territory for me. Assuming it works like I think it will (I know deep down it won't), I am hoping to team the NICs together.
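For the record, if the links ever come up, the teaming itself should be a one-liner. The team and adapter names below are made up:

# Team the two fiber NICs into a single switch-independent team
New-NetLbfoTeam -Name "FiberTeam" -TeamMembers "Ethernet 3","Ethernet 4" -TeamingMode SwitchIndependent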

I really wish I would have pulled the little black plugs out before I made assumptions…

Everything arrived the next day, except for the NVMe PCIe adapter. Knowing full well that I couldn't just leave new computer parts sitting idly in a box on my desk, I installed what I could. This would give me a chance to dork around with the configuration of Storage Spaces, etc.

The first of many purchases…

Still needing a hard drive for a system volume, I pulled a 128 GB SSD from another parts machine and, along with the two 4 TB NAS drives, installed them into the little HP box. Then I hooked up power, monitor, and keyboard.

It wouldn’t boot. No display.

After trying the various video ports (there were two), I couldn’t get it to boot and gave up on the HP.

Time for Plan B!

I had an AMD Phenom II with 16 GB of RAM that was my very first lab server when I started consulting eight years ago. I swapped out its old hard drives for the donor SSD and the new NAS drives. It booted.

Swapping drives out of my Plan B

The SSD I placed in there was from an old 2012 R2 server, and the OS diligently booted up, applied device settings, and allowed me to log in. To my delight, the optical NIC was recognized.

There they are!

My plan was to PXE boot into MEMCM and image the machine with my Server 2019 bare metal task sequence, but as best as I can tell, the motherboard doesn't support network booting. Further, the board is old enough that it has no TPM and doesn't support UEFI. I suppose I can look for a firmware update from Foxconn, but that battle is for another night.

Also, the CPU heat sink may be held on with a zip tie…

I booted back into Windows, copied down a Server 2019 ISO, mounted it, and ran Setup.exe. While far from ideal, I did an upgrade without keeping data. Should I have made a bootable USB key from an ISO? Maybe. Probably. Yeah, I was being lazy here.

I still may end up making boot media once the PCIe adapter gets here…TBD.

I used Storage Spaces to build a new storage pool from the two NAS drives. I then made a new virtual disk with the full capacity available, selected ReFS because "Resilient" is in its name, and enabled deduplication on the new volume.
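I did all of this through Server Manager, but the equivalent PowerShell is worth sketching out. The friendly names and drive letter are mine, and I'm assuming a two-way mirror since there are only two disks:

# Grab the disks that aren't already claimed by a pool
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool, a mirrored virtual disk using all available capacity, and a ReFS volume
New-StoragePool -FriendlyName "NASPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "NASPool" -FriendlyName "NASDisk" -ResiliencySettingName Mirror -UseMaximumSize
Get-VirtualDisk -FriendlyName "NASDisk" | Get-Disk | Initialize-Disk -PassThru | New-Partition -DriveLetter D -UseMaximumSize | Format-Volume -FileSystem ReFS

# Deduplication is its own feature; install it, then enable it on the new volume
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "D:" -UsageType Default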

Storage Pool, Physical Disks, and Virtual Disk config
Disks, Volumes, and Storage Pool, this time with Deduplication!

During my googling on Storage Spaces, I stumbled across a whitepaper about using an SSD as a cache for the storage space. My new plan is to use the SSD that is currently the temporary system drive as the cache drive, once the NVMe drive is installed and running Windows. But it is also looking like I will need to rebuild the virtual disk for this to work…dunno.
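From what I've read, the write-back cache is requested when the virtual disk is created, which is why the rebuild looks unavoidable. Something like the line below, though consider it untested on my part and the cache size a guess:

# With the SSD added to the pool, recreate the virtual disk with a write-back cache carved from it
# (a mirrored space may want more than one SSD so the cache itself is resilient)
New-VirtualDisk -StoragePoolFriendlyName "NASPool" -FriendlyName "NASDisk" -ResiliencySettingName Mirror -UseMaximumSize -WriteCacheSize 8GB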

What about the fancy-pants fiber optic network cabling? That didn’t quite work either.

The back plate of the card was designed for a small-form-factor machine, not a full-sized computer, so at first I couldn't install the card. All I had to do was bend the screw-down tab back, and voila!

Nothing a pair of pliers couldn’t solve. I also may secure the card with a zip tie.

As it turns out, the ports on my switch needed SFP transceivers, so I really didn't have anything to plug the cables into. I ordered a pair, and they should arrive with the PCIe adapter.

I ordered these things. I hope they work.

The last thing I started tackling today, but am now more confused about, is backups. As much as I would like to store the entire 4 TB volume in the cloud, I am not going to pay for that. I installed Windows Server Backup and took a System State and C: volume backup only. I am hoping that restoring the volume and its state will keep the storage pool readable after OS recovery, but I am not sure that will really be the case. I'll be doing some testing on this once the new drive gets in.

Until the next UPS shipment arrives…


The SCConfigMgr ConfigMgr Prerequisite Tool

Having built many SCCM servers and hierarchies, we know that there are a fair number of prerequisites that need to be installed and configured prior to installing SCCM. For years now, I have been implementing these prerequisites largely by hand, with the exception of a Server Role and Features script a former colleague of mine wrote for ConfigMgr 2012 SP1. While revamping my personal lab, I needed to install a new build of SCCM Technical Preview, so I figured it was about time I tried out Nickolaj Andersen’s SCCM Prerequisite tool (https://www.scconfigmgr.com/configmgrprerequisitestool/). SCConfigMgr has become synonymous with premier community tools, and the ConfigMgr Prerequisite Tool is no exception!

To fully utilize this tool, there are some prerequisites we’ll need to complete before we start laying down the SCCM prerequisites that this tool is built to handle. The nice thing is that the prerequisites we need for this tool are already needed for the SCCM Site installation, so they shouldn’t be problematic to accomplish. Here’s the list:

  • SQL Server installed
  • SSRS Installed and DB created
  • SCCM Installation Media or CD.Latest folder backup
  • Windows Server media for .Net 3.5 binaries
  • A local folder created for WSUS
  • A folder for the SCCM Prerequisite Files, which are needed during the installation wizard.
  • An AD Security Group for the systems that will be publishing to AD (a quick way to create one is sketched after this list)
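That last item is a one-liner if the ActiveDirectory module is handy; the group name and OU path are placeholders:

# Create a global security group for the site systems that will publish to AD
New-ADGroup -Name "SCCM Site Servers" -GroupScope Global -GroupCategory Security -Path "OU=Groups,DC=lab,DC=local"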

Once the above has been addressed, we can start using the tool. The first place to go is the “Settings” Tab which is where we will specify the “alternate source” to be used for .Net 3.5 installation, as well as the connection to our SQL Server. Fill out these fields to match the environment.

Settings -> Sources
Settings -> Connections

There is also a "Credentials" tab, where we can supply an account for remote installation of Roles and Features, as well as for the AD Schema extension. I did not need this in my lab, as my Primary Site Server hosts all the roles and I have already extended the Schema. If we are planning on using this for a production deployment, we're probably going to need this one filled out, perhaps multiple times.

Settings -> Credentials

Now that our groundwork has been laid, we can move to the "Sites" tab. In the "Site Type" tab, select which Site Type we are installing. Check the box for "Retry failed Windows Feature installations with alternative source"; we configured that source in the "Settings" tab in the previous step.

Sites -> Site Type

Once that’s done, click install and let the magic happen!

Sites -> Site Type – Installing Features

On the "Prerequisite Files" tab, point the first option to the requested file (SETUPDL.EXE) on the SCCM installation media, and then select the folder where we want the prerequisite files to be downloaded. Once both are set, click the Start button to download the files. When we get to installing SCCM, we will reference this folder when prompted.

Sites -> Prerequisite Files (Note: I used a backup of the CD.Latest folder)

The “Preferences” tab may just be my favorite. It will create the no_sms_on_drive.sms file on whatever drives we specify. I always forget this step, but now I won’t be able to!

Sites -> Preferences
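For anyone scripting a server build instead, the file itself is trivial to drop from PowerShell; the drive letters here are examples:

# Create an empty no_sms_on_drive.sms on each drive SCCM should leave alone
foreach ($drive in 'D','E') { New-Item -Path "$($drive):\no_sms_on_drive.sms" -ItemType File -Force }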

Click on the "Roles" tab. Here we can install the Windows Server roles and features required by the various SCCM server roles we will be deploying throughout the hierarchy. Note that we can install locally as well as remotely. There is a "use alternate credentials" check box, but it is hidden in my image by the drop-down. Also note that prerequisites for any SCCM server roles on the Site Server itself are not installed as part of the Sites -> Site Types installer, so they will need to be run from here as needed.

Roles -> Role Selection

The "Directory" tab will prepare Active Directory for publishing. Select the "Schema" tab and then click the "detect" button. This will determine which Domain Controller holds the Schema Master FSMO role.

In the second field, browse to the EXTADSCH.EXE file, which is the program that extends the Schema. There's a good chance we'll need to use the Alternative Credentials field, as extending the AD Schema requires Schema Master rights, which are not commonly assigned to user accounts. Once done, hit the "extend" button.

Directory -> Schema

The “Container” tab will create the “System Management” container, which is where the SCCM Site will publish the information about itself. Click the “detect” button to determine which server holds the PDC Emulator role. Again, we will probably need to specify an alternative credential here for the same reasons stated earlier. Once ready, click create!

Directory -> Container

In the “Permissions” tab, select the AD Security Group that will need rights to publish to AD. Click configure once the group has been selected.

Directory -> Permissions

The "ADK" tab allows us to select which version of the ADK we wish to install. If we are going to be using ADK 1809, there are two options we will need to install. We'll also need to specify where the stub installer will be downloaded to. There is also an Offline option that downloads the entire payload, for use when installing SCCM on a system without internet access.

ADK – Online
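If you ever need the same installs without a GUI, the ADK stub installers support a quiet mode. A sketch; the feature IDs are the commonly used ones, so verify them for your ADK version:

# ADK 1809 proper: Deployment Tools and USMT
adksetup.exe /quiet /features OptionId.DeploymentTools OptionId.UserStateMigrationTool

# As of 1809, WinPE ships as a separate add-on installer
adkwinpesetup.exe /quiet /features OptionId.WindowsPreinstallationEnvironment

# The Offline scenario: /layout downloads the full payload instead of installing
adksetup.exe /layout C:\ADKOffline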

The “SQL Server” tab allows us to configure and validate a variety of options.

Memory Minimum and Maximum:

SQL Server -> General

Validate Collation:

SQL Server -> Collation

Precreate the database:

SQL Server -> Database

And configure the maximum file sizes as pertains to SSRS:

SQL Server -> SSRS

Last, but certainly not least, go to the “WSUS” tab. This is where we will install the WSUS Role and configure the DB.

On the “Features” tab, select “SQL Server”. (Remember, friends don’t let friends use WID 😉) Click “install” to install the role.

WSUS -> Features

On the "Post-Install" tab, we will need to provide the FQDN of the SQL Server that will hold our SUSDB, the instance name if we are not using the default (MSSQLSERVER), and the folder where WSUS will store its content. Once supplied, click install.

WSUS -> Post-Install
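For comparison, the manual equivalent of this tab is the familiar WSUS post-install step; the server, instance, and path below are placeholders:

# Point WSUS at the SQL instance and content folder (run on the WSUS server itself)
& "C:\Program Files\Update Services\Tools\wsusutil.exe" postinstall SQL_INSTANCE_NAME="SQL01.lab.local" CONTENT_DIR="D:\WSUS"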

Once all of that has been completed, we’ll be ready to install SCCM and any other servers in the hierarchy!

Clearly this tool makes installing the SCCM prerequisites much easier by providing a "Single Pane of Glass". We no longer need to hop between ADSIEdit, a web browser, AD Users and Computers, Server Manager, File Explorer, and PowerShell. I'll be using this tool during my SCCM implementation engagements from now on! Thank you, Nickolaj, for such an excellent tool!


Using SQL Maintenance Plans to Backup SCCM CB

I had a client for whom we recently implemented SCCM CB. Everything was running smoothly, but they were getting an alert at 2:01 AM every day stating that their Management Point was unhealthy. After some basic diagnostics, I was able to determine that their MP was fine; it was the Backup Site Server maintenance task that was causing the problem.

Why would the backup break the MP?

It turns out that the native backup function stops all of SCCM's internal services when the backup process starts. The MP health detection saw that the MP services had stopped and fired off the alert before they started back up. Disabling the MP health alert stopped the emails from going out, but it didn't solve the underlying problem.

SQL-based backups are supported by Microsoft, but this wasn't always the case; support was added in SCCM 2012 SP1. The inherent problem with this mechanism is that it doesn't back up the other files SCCM needs to recover from backup, namely the CD.Latest folder.

Fortunately, with a little PowerShell and some knowledge of SQL Server Management Studio, we can resolve this shortcoming.

Before we get started, I need to acknowledge the work of Steve Thompson (@Steve_TSQL). The script used is his, slightly modified for UNC share usage, and it is really effective. Thanks for the help, Steve! (https://stevethompsonmvp.wordpress.com/2016/05/31/configuration-manager-sql-server-backup-guidelines/)

The first step in setting up the backup mechanism is to create the folders that will be written to. They can be local or UNC, but the key step here is to ensure that the service account running the SQL Server Agent has "Read" permission on the CD.Latest folder and "Full Control" of the target folder.

Next, we need to open SQL Server Management Studio and create the job that handles the CD.Latest copy: it prunes archives older than seven days, then compresses the current CD.Latest folder to the share:

powershell.exe -command "Add-Type -AssemblyName 'System.IO.Compression.FileSystem'; Get-ChildItem -Path '\\[ServerName]\SCCM_Content\backup\CDlatest\*.zip' | Where-Object { $_.CreationTime -lt (Get-Date).AddDays(-7) } | Remove-Item; [IO.Compression.ZipFile]::CreateFromDirectory('\\[PrimarySiteServer]\SMS_[SiteCode]\cd.latest', '\\[ServerName]\SCCM_Content\backup\CDlatest\CDLatestArchive' + (Get-Date -Format 'yyyyMMddHHmm') + '.zip')"

***Note*** You will need to change [ServerName], [PrimarySiteServer], and [SiteCode] to match your environment.

***Pro Tip***

Paste the command into PowerShell ISE so you can tweak the script to fit your own network or local paths. Once you have it dialed in, copy it over to SQL Server Management Studio.

-Start by right clicking on the Jobs folder under SQL Server Agent:

***Note*** If the SQL Server Agent icon is red rather than green, the SQL Server Agent service isn't running. Check the Services snap-in (services.msc) to see if it is running. If it isn't, start it.

-Give the job a name (in this case “BackupDemo”):

-Click on the Steps tab, then select “New”

-Give your step a name, select “Operating system (CmdExec)” in the Type menu, then paste your command line string into the Command window. Once all that is done, click “OK”. Once that has closed, click “OK” again to complete the wizard.

You can manually run the job once you have added the command line by right-clicking on the copy job and selecting "Start Job at Step…". It is advisable to try this before completing the process, to ensure your command has been crafted correctly and to validate permissions on the source and target folders.

Your results should look like this:
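If you'd rather kick the job off from PowerShell than the GUI, the SqlServer module can do it too; the instance name is a placeholder, and the job name matches the demo above:

# Start the CD.Latest copy job through the SQL Server Agent
Invoke-Sqlcmd -ServerInstance "SQL01" -Query "EXEC msdb.dbo.sp_start_job N'BackupDemo';"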

-Next, we will need to create our Maintenance Plan. Right click on the Maintenance Plans folder and select “Maintenance Plan Wizard”

-Select “Next”

-Give the plan a name, then hit the “Change..” button next to the Schedule field.

-Create a schedule that works for your organization. In this case, I am scheduling the plan to run once a day at midnight. Select OK when complete.

-The schedule created should now appear in the Schedule field. Click next to proceed.

-Select the following items:

  • Clean Up History
  • Execute SQL Server Agent Job
  • Back Up Database (Full)
  • Maintenance Cleanup Task

Click next once complete.

-Move the Execute SQL Server Agent Job to the last step. Click Next when complete.

-Change the “Remove historical data older than” options to 1 Week(s). Click next when complete.

-On the Define Back Up Database (Full) Task – General tab, in the Database(s) drop-down, select the "All user databases (excluding master, model, msdb, tempdb)" option. Click OK.

-On the Destination tab, put in your target folder for your SQL backups. Tick the “Create a sub-directory for each database” checkbox. Make sure the “Backup file extension” is set to “bak”

-On the options tab, select the “Compress backup” option in the “Set backup compression” drop down. Once that has been set, click Next.

-Type or paste in the backup target folder in the “Folder” field under “Search folder and delete files based on an extension”. Type in “bak” under “File extension”. Tick the check box for “Include first-level subfolders”. Change the option “Delete files older than the following” to 1 week(s). Click next when complete.

-Select the CD.Latest backup job we created earlier. Click next when complete.

-Enter a path that you wish to use for backup logs. Click next when complete.

-Review the Maintenance Plan. Go back and change anything that needs correction. Click Finish when complete.

-The Maintenance Plan will be created. If all is successful, you should be greeted by this window. Click Close when complete.

-And our new Maintenance Plan should now be listed under “Maintenance Plans”!

-You can manually kick off the Maintenance Plan by right clicking on it, then selecting “Execute”. This is a good thing to do just to make sure that everything has been configured correctly.

-If nothing impedes the backup process, the SQL backup target folder should start to populate with the compressed backups

And our CD.Latest backup target folder should look something like this:

There are ways to configure email alerting with SQL maintenance plans, but I have yet to implement the functionality. It may be advisable to configure alerts to ensure that your backups are running correctly.

Once you have validated that the functionality is there, you can safely rely on it. If there is sufficient storage for both the SQL backup and the native backup, it may be wise to implement both. Sometimes the "belt and suspenders" approach makes sense.
