Display Driver Stopped Responding and has Recovered – Dell Vostro 3750 – Windows 8.1 x64

Since updating to Windows 8.1 x64 I have encountered the following problem when my laptop wakes from sleep – the screen stays black for a couple of seconds and then I get this notification: Display driver stopped responding and has recovered.

This is the error in Event Viewer:

Display driver error in Event Viewer

Given that this was not an issue before updating to Windows 8.1 I started looking for alternative drivers to install.

I was unable to install the new Intel Windows 8.1 Beta driver because my ‘legacy’ hardware is no longer supported. After trying a different Intel driver I headed over to the Dell website and downloaded the two most recent Windows 8 x64 Intel Video drivers. The one that worked for me is here: http://www.dell.com/support/drivers/us/en/04/DriverDetails?driverId=T4PKN

If you are also experiencing this issue I would recommend simply downloading the Windows 8 drivers from your manufacturer's website.

FreeNAS 8 – Hang During Post on HP Microserver N40L

Microserver NAS

Today I decided to reboot FreeNAS 8 on my HP Microserver because the speed of transfers from my PC to FreeNAS had dropped to around 30 MB/s and were stalling regularly.

2.5 GB file copy from Windows 8 (SSD) to FreeNAS 8.x (Mirror)

I logged into FreeNAS to take a look and could not see anything obviously amiss and so rebooted.

Unfortunately I then had to deal with the following unexpected issues:

This is a FreeNAS data disk and can not boot system. System halted

and,

Auto-detecting USB Mass Storage Devices...
Device #01:

In the former instance FreeNAS was trying to boot from a zpool (data) drive, and in the latter the system could not successfully detect the USB jump drive that contained my FreeNAS installation.

With the latter error my Microserver would take an eternity to get into the BIOS (though the BIOS did correctly identify my jump drive as the device to boot from).

Clearly it was time to install FreeNAS on a new jump drive:

  • Download the latest FreeNAS 64-bit disk image and extract it using 7-Zip.
  • Use Win32DiskImager to copy the extracted FreeNAS image onto a new jump drive (minimum size 4 GB); a command-line alternative is sketched below.
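
If you are writing the image from a Linux (or Mac) machine instead, dd can do the same job as Win32DiskImager – a rough sketch, assuming the extracted image is named FreeNAS-8.img and the jump drive appears as /dev/sdX (both names are illustrative; double-check the device name, as dd will happily overwrite the wrong disk):

# identify the jump drive first (e.g. with lsblk) before writing
dd if=FreeNAS-8.img of=/dev/sdX bs=4M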

I used a spare Lexar Firefly jump drive that I had lying around because it is small and easy to insert into the USB port on the Microserver motherboard.

When I logged into FreeNAS again I had to change the admin password. I then uploaded my previously saved configuration and rebooted.

Always take a couple of minutes to save your FreeNAS config (you never know when you might need it).

With this done I just had to deal with the warning that my zpools were using an older version of ZFS (15) than the one currently running (28). Time to enable SSH so that I could do a little command-line work and upgrade my zpools.

Select Control Services under Services in the left-hand pane and then enable SSH. Then click the spanner icon to open the SSH Settings window, check Login as Root with password, and click OK.

FreeNAS SSH settings

Next open PuTTY (or a similar tool) and remote into FreeNAS as root (using your admin password).
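
If you are connecting from a machine with an SSH client installed, the PuTTY step is simply the equivalent of this (the IP address is illustrative – use your FreeNAS box's address):

ssh root@192.168.1.50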

I used the following commands to upgrade my zpools:

Note: it is recommended to back-up your data before performing an upgrade and it is not recommended to upgrade zpools if they are not healthy.

zpool status
zpool upgrade <pool-name>
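
Running zpool upgrade with no pool name is also a handy check both before and after the upgrade – it lists any pools still on an older ZFS version:

zpool upgrade        # list pools running an older ZFS version than the system supports
zpool upgrade -v     # show the supported ZFS versions and the features each adds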

With the pool upgrade complete I made sure that I turned off SSH access.

Now that all of that is out of the way my file transfers are back to normal. I still want to investigate the dips that I experience during file transfers, but their impact is not great enough to be a pressing concern.

My main take-away from this is to make sure that I keep my config backed up and always have a spare jump drive on hand to replace a failed one. It happened to me much sooner than I thought it would!

2013 Potential Hardware for vSphere Home Nanolab and NAS Refresh

My current VMware vSphere white-box will be 5 years old in August. It has an AMD Athlon X2 BE-2400 Brisbane @ 2.3 GHz and 8 GB of RAM – and these days 8 GB of RAM is just not enough.

The hardware for my NAS is more recent – an HP Microserver N40L with 6 GB of RAM, running FreeNAS 8.x.

The CPU benchmark score (from cpubenchmark.net) for my vSphere box is 1333 – the score for the N40L is 979.

While I still need to look at the performance of ZFS on the N40L (it is OK, but not quite where I would like it to be), I know that new vSphere hardware does not desperately need a lot more CPU (though it would be nice).

I have been considering the Intel NUC (Next Unit of Computing) as an alternative to a tower PC for running vSphere for a while now. It maxes out at 16 GB of RAM and really shines in terms of power efficiency (13-27 watts) and diminutive size (4″ x 4″). The i3-3217U DC3217IYE NUC (Ivy Bridge architecture) is the current NUC that I have my eye on.

The Intel i3 NUC

The issue with the NUC, though, is storage – I can either install an mSATA SSD in the NUC or use shared storage on my NAS (or both). I would like to use local storage on the NUC for speed and back up VMs to my NAS – though the cost of SSDs will limit my local storage capacity.

The next generation of NUCs includes Core i5 (Horse Canyon) and i7 (Skull Canyon) offerings. The i5-3427U option (CPU benchmark: 3580) is of interest to me here as it includes Intel vPro remote management capabilities.

This still leaves the 3rd generation of NUCs (Haswell), which have an on-board SATA connector and SATA power connector – these are slated to arrive in Q3 2013.

3rd Gen Intel NUC

The other option for a diminutive vSphere box is the Gigabyte take on the NUC, called Brix. It looks like Gigabyte plans to offer Intel (i3 – i7) CPUs and AMD Kabini (E1-2100, E1-2500 & E2-3000 dual-core, and A4-5000 quad-core) CPUs.

I think it will be worth keeping an eye on the Brix offerings to see where they differ from the NUC. The key areas for me will be efficiency, pricing and storage – what if Brix offers a 2.5″ or 3.5″ internal drive bay, for example? I imagine that the AMD offerings will be cheaper than the Intel NUC – but we will have to wait and see.

On the home NAS side of things HP very recently updated their Microserver (Gen 8) with Celeron and Pentium models:

  • Intel® Celeron® G1610T (2 core, 2.3 GHz, 2MB, 35W)
  • Intel® Pentium® G2020T (2 core, 2.5 GHz, 3MB, 35W)

This does potentially make the Microserver a better vSphere candidate too, especially as the supported RAM has been upped to 16 GB.

The other good news is the built-in iLO support, dual gigabit NICs and USB 3.0 ports (as seen on the beta unit, at least):

HP Microserver (Gen 8) rear panel – courtesy of blog.themonsta.id.au

So I’ll be keeping an eye on the new generation of Microserver too. The additional CPU and RAM are quite welcome (especially for ZFS). I am also keen to know the power consumption for these machines as a whole.

Either way, with both the NUC and the Microserver I can build a power-efficient and much smaller lab.

If I can score a couple of NUCs and another Microserver by the end of the year, I will be a happy man!

Windows 8 – Post SSD Installation Optimization

I just upgraded my laptop with a new SSD and Windows 8 Pro and found a nice guide for optimizing hard drives and solid-state drives (SSDs) here. For an SSD, optimization includes reducing the disk-space usage of the operating system and reducing the number of writes to the drive. Below are the steps that I chose to implement from the guide.

Note: The only step that I took prior to installing Windows 8 on my SSD was to check that AHCI SATA mode was enabled in the BIOS. If you are unsure how to do this, check the documentation for your motherboard.

Accessing the Control Panel in Windows 8:

There are many ways to get to the Control Panel to make system changes so I will only mention the two that I like. From the Desktop press the Windows key and X together and then click on Control Panel on the pop-up menu. From the Start Screen simply start to type Control Panel and then click on it when you see it in the search results.

Note: My Control Panel is not set to Category View – so my documentation will not follow that layout (I set my view to Large Icons instead).

Turn Off Hibernation:
This will save a few gigabytes of hard drive space (but will also turn off hybrid sleep).

  • From the Desktop press the Windows and X keys together and then click Command Prompt (Admin) from the pop-up menu.
  • Type powercfg -h off and then press Enter.
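
Should you want hibernation (and hybrid sleep) back later, it can be re-enabled the same way:

powercfg -h on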

Shrink Disk Space Usage for System Protection:
This will reduce the amount of drive space available for System Restore data.

  • From the Control Panel click the System icon.
  • In the left hand pane click System Protection.
  • Click the Configure button and then adjust the Max Usage slider for your desired allocation of disk space.

Turn off Drive Indexing for Local Disk (C:):

Update: Turning off indexing will impact Windows 8 apps such as Mail and media apps (as noted by Fred in the comments). I can confirm that the Mail app will not list / auto-complete email addresses while indexing is turned off.

This will reduce the number of writes to your SSD. The speed of SSDs negates the typical benefit of maintaining indexing for all files on drive C.

  • Open File Explorer from the Desktop (or type Computer from the Start Screen and click on Computer in the search results).
  • Right click on Local Disk (C:) and select Properties from the pop-up menu.
  • Un-check Allow files on this drive to have contents indexed in addition to file properties and then click the Apply button.
  • Click OK and then Continue to proceed.
  • At the Error Applying Attributes window click Ignore All and wait for the change to complete.

Shrink the Page File?:
This will save drive space. It is more pressing when your computer has a large amount of RAM, as the page file can become larger than the installed RAM.

The guide I referenced at the beginning of this post recommends manually reducing the page file to between 512 and 1024 MB. Initially I followed this advice, but I have since discovered a TechNet post about Windows 8 and the Automatic Memory Dump.

Automatic Memory Dump is the default for a Windows 8 install and produces a kernel memory dump. It was created to support the system-managed page file, which has been updated to reduce the page file size, primarily for small SSDs (and servers with large amounts of RAM). This allows the SMSS process to shrink the page file to less than the size of RAM.

There is, of course, the facility to increase the size of the page file as needed. When your computer experiences a bug check, a new registry value is created:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl\LastCrashTime

For the next four weeks the system-managed page file will have a minimum size equal to the installed RAM.

Once you have fixed the system instability that is causing the bug check, you can delete the aforementioned registry value if you want the page file to return to its reduced size more quickly.
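
From an elevated command prompt, deleting that value would look like this (as always, take care when editing the registry):

reg delete "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v LastCrashTime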

With this in mind, it is your call whether you manually set a minimum / maximum for your page file or let Windows manage it for you. You can change it as follows (a scripted alternative follows the list):

  • From the Control Panel click the System icon.
  • In the left hand pane click Advanced System Settings.
  • Under Performance click the Settings button.
  • Click the Advanced tab and under Virtual Memory click the Change button.
  • Un-check the Automatically manage paging file size for all drives check-box.
  • Select Drive C: and then click the Custom size radio button.
  • Manually set a minimum and maximum size in MB and then click OK.
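
If you would rather script this change than click through the dialogs, the same settings can be made with wmic from an elevated command prompt – a sketch, with the 512 / 1024 MB sizes purely illustrative (reboot afterwards):

rem stop Windows from managing the page file automatically
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
rem set the initial and maximum page file sizes (in MB) on drive C:
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=512,MaximumSize=1024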

Change Power Options:
Change the power plan settings – this is so that Idle Time Garbage Collection can run on your SSD when your system is idle (rather than going to sleep).

Note: This change is not recommended for laptops (where the Balanced plan will turn off hard drives after 10 minutes on battery and 20 minutes plugged in). Otherwise choose Never or another reasonable setting in minutes, such as 60 or 120, below:

  • From the Control Panel click Power Options.
  • Select the High Performance radio button (on a laptop click Show additional plans).
  • Click Change plan settings and then click Change advanced power settings.
  • Expand the Hard disk option and set the Turn off hard disk after setting to Never.
  • Expand the Sleep option and set the Sleep after setting to Never.
  • Click OK.

Run the Windows Experience Index Assessment:
According to the guide this makes system changes to Windows when it learns that you have an SSD (which reports a rotational speed of 0 rpm).

  • From the Control Panel click Performance Information and Tools.
  • Click the Rate this computer button.
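
Alternatively, the full assessment can be run from an elevated command prompt with the built-in WinSAT tool:

winsat formal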

Finally, reboot your computer!

My Transition From WHS (Windows Home Server) to ZFS: HP Microserver & FreeNAS 8

I’ve decided to begin my transition to a ZFS based system before my Windows Home Server (WHS) gives up the ghost. ZFS provides protection against data corruption – which is mostly what attracted me to it.

Hardware-wise I settled on the HP Microserver N40L for a number of reasons and had to accept the limitations that this (and other choices) entailed.

The main reasons that I chose the Microserver were the four (non hot-swap) hard drive bays and the price. Swapping drives in and out of my WHS tower system is a pain, so I wanted something with drive bays that slide out for installing and replacing drives. As my WHS is working fine I did not want to spend a lot of money on my transition to ZFS, and because I did not have a good experience installing Advanced Format drives in my WHS box I plan to gradually decommission it as its drives die.

The Microserver is not the most powerful machine around, but I figured that it should be fine for basic ZFS file duties as I do not plan on using advanced features such as de-duplication. To keep costs down I added 4 GB of ECC RAM to the 2 GB that the N40L came with. I also purchased two 2 TB Western Digital Green drives.

Upgrading the RAM requires disconnecting cables from the motherboard and sliding the motherboard out to access the RAM slots. To remove the Mini-SAS connector on the motherboard squeeze the clip and then push down before pulling the connector up.

Why did I only purchase two drives rather than planning a RAID-Z pool for my system? Partly cost – but also practicality. If I create small mirrored drive pools I have fairly good redundancy, and I only have to buy two drives to upgrade a pool if I need to in the future. Writing to a mirrored pool should not be any slower than it is with my WHS box (which has duplication turned on for all folders), and read speeds will easily be good enough for streaming media to my living room.
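
For reference, creating and checking a two-disk mirror from the shell looks roughly like this – a sketch only, as the pool name and FreeBSD device names are illustrative and the FreeNAS GUI volume manager does the same job:

zpool create tank mirror ada0 ada1   # create a pool named 'tank' mirrored across two disks
zpool status tank                    # both disks should report ONLINE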

My setup is in fact pretty basic, and I made some decisions that forced me down that path. Firstly, I wanted all of the drive bays to be dedicated solely to storage. Secondly, because I am adamant about ease of hard drive maintenance, I elected not to install any additional drives in the CD / DVD drive bay. This limits me to four storage drives and means that I will not be installing an SSD for caching (which would improve storage performance). It also limited me to a solution that would boot from a jump drive.

I first tried installing VMware vSphere on a jump drive and then installing Nexenta Community Edition on a small virtual hard disk (10 GB) on one of the Western Digital drives. I then created two 1.81 TB .vmdk files and mirrored them in Nexenta. Sadly the performance was not too great.

So for the moment I have settled on FreeNAS 8 (on a 4 GB jump drive). It was easy to install, so re-installing should the jump drive fail ought to be straightforward, and I should be able to upgrade easily enough should the need arise. The idea of having the Microserver be more like an appliance – something I set up and rarely have to touch – is quite appealing (no Windows updates to install and no Demigrator.exe to interrupt my media streams).

So far I have only done enough configuration to test write speeds to FreeNAS from my Windows box. Over a gigabit connection I average about 70 MB/s which is great, as that is pretty much what I am getting on my WHS box.

I’ll check the power consumption when I get a chance but I anticipate being able to run two Microservers with FreeNAS for more or less the same consumption as my single WHS box.

I’ve found that FreeNAS 8 has had some mixed reviews – which concerns me a little. My setup is probably as simple as it could be, though. Nevertheless, I do plan to do some testing before I migrate any data to it.

My to-do list is as follows:

  • Set up ZFS Data Sets, User groups and Users to control access.
  • Copy data to my ZFS mirror, then remove and format one drive from the pool and test adding the drive back into the pool (the command sketch after this list shows roughly what this involves).
  • Test importing my mirrored pool back into a new FreeNAS installation.
  • Configure FreeNAS to send alerts to my Gmail account.
  • Configure the SMART schedule to check my drives.
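
From the shell, the mirror-rebuild and import tests in that list map onto commands roughly like these (again, the pool and device names are illustrative):

zpool offline tank ada1   # take one side of the mirror offline
zpool replace tank ada1   # resilver the wiped disk back into the mirror
zpool status tank         # watch the resilver progress
zpool export tank         # cleanly detach the pool before a reinstall
zpool import tank         # re-import the pool into a fresh installation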

That should be enough to keep me busy for a while … and will hopefully leave me feeling quite happy about gradually moving my data from my WHS box!

Installing Nexenta Core Platform 3.0.1 With Nappit on VMware vSphere 4.x

I have been mulling over what exactly the eventual replacement for my Windows Home Server might be one day – and Nexenta is something that I have been pondering for a while.

The Nexenta Core Platform (NCP) is what the commercial (and community) versions of Nexenta (NexentaStor) are built upon.

NCP is based on Ubuntu, with an OpenSolaris kernel. NexentaStor (Community) has a Web Management User Interface (WMUI) and an 18TB storage limit. NCP has a community-developed WMUI called napp-it.

I decided to install NCP and napp-it to get a feel for NCP versus the NexentaStor Community edition, as I have not yet decided how much storage I might want to use Nexenta for. My plan is to use mirroring to provide basic redundancy rather than other forms of RAID, and for some storage pools I might mirror three drives together rather than two – so I can see this strategy eating into the 18TB limit of NexentaStor Community (although hopefully not too quickly). I guess I don’t want to feel limited with my next storage server.

I am still pondering the pros and cons of virtualizing NCP on VMware vSphere versus running two physical boxes, but for now let’s look at installing NCP in a vSphere virtual machine.

Note the following keys used during installation:

  • Up and Down arrow keys move the cursor up and down between input fields and check-boxes,
  • Spacebar marks your selection,
  • Tab cycles through the options,
  • Enter confirms your choice and proceeds to the next step.

First download the Nexenta .iso and copy it to your vSphere datastore.

Create a new virtual machine and specify the following Guest Operating System properties – Linux and Ubuntu (64-bit).

I configured 4 GB of RAM, the default LSI Logic Parallel SCSI controller, and a 12 GB virtual hard disk.

Finally point the virtual CD-ROM of the virtual machine to the uploaded Nexenta .iso and boot the virtual machine.

Enter a password for root, then press the down arrow key and re-enter your password. Press tab to highlight the OK button and then press Enter.

Log in as root (or log in as another user and enter su to get root permissions).

At this point I tried to install napp-it but discovered that I did not have an IP address. The fix was to disable the default (manual) network configuration and enable automatic network configuration (NWAM):

svcadm disable svc:/network/physical:default
svcadm enable svc:/network/physical:nwam

I entered the following command to check that I had an IP address:

ifconfig -a

Now we can install the napp-it web interface for Nexenta:

wget -O - www.napp-it.org/nappit | perl
reboot

Open your preferred browser and enter http://<server-ip>:81 to manage your Nexenta installation.

Sources:

http://www.nexenta.org/boards/1/topics/1118

http://www.nexenta.org/projects/site/wiki/Difference

http://www.nexenta.org/projects/site/wiki/GettingStarted

http://www.nexenta.org/projects/site/wiki/WhyNexenta

http://193.196.158.121/napp-it.pdf

Windows Home Server – Once Upon a Failing Hard Drive

A few days ago I began experiencing multiple error messages from a failing hard disk in my Windows Home Server (WHS). Dealing with a bad drive can be a pain – particularly given the downtime involved – but it is for scenarios like this that I like to have duplication turned on for all my WHS folders …

The first sign of trouble was that my Media Center could not access any files on WHS. I checked Event Viewer and found lots and lots of disk errors like this:

The device, \Device\Harddisk2, has a bad block.

The Event ID was 7.

At this point I was already anticipating that I would probably have to replace the drive. For good measure I ran chkdsk /r to check all my drives and rebooted.
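
Note that chkdsk works per volume, so checking all drives means one pass per drive letter – for example (the drive letters are illustrative):

rem system volume - the check is scheduled for the next reboot
chkdsk C: /r
rem data volume - checked immediately
chkdsk D: /r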

Then I started to see tons of errors that looked like this:

File record segment 10001 is unreadable
File record segment 10002 is unreadable

Once these had finished, chkdsk started to repair the issues, but that process hung, so I had to give up on it.

At this point I turned my WHS off until the replacement drive arrived (once it did, I added the new drive to the storage pool without any issues).

As I mentioned earlier, I have duplication turned on for all my folders on WHS – for me it is a small price to pay for having WHS rebuild my data from the duplicate files and get me back to where I was without too much fuss. It does, however, take a while to do this.

With the new drive installed I set about trying to remove the bad drive from the storage pool. Event Viewer told me that it was harddisk 2 that was having issues, and thanks to my previous organization this was drive 2 in my tower, connected to SATA cable number 2 on my motherboard.

I was also pretty sure that harddisk 2 was the second disk listed in the Storage tab of the WHS console – but I was not 100% confident, as I had other drives with the same name listed in my storage pool too. So I downloaded and installed the free version of HDTune to double-check. Sure enough, the second drive in the HDTune list did not respond when I tried to query it. HDTune let me get the serial numbers for all the working drives, and by a process of elimination I used these to confirm the problem drive (I have the serial numbers written on the rear of each drive so that I can see them when I open the case).

I hoped to be able to remove the problem drive with a few clicks in WHS, but I found that WHS could not remove the drive due to “file conflicts”. So I shut down and physically disconnected the drive.

With the drive disconnected I rebooted WHS and tried to remove the now-missing drive from the pool. Again I got an error message about file conflicts. I had a look around and saw that WHS was calculating sizes in the Storage tab (which I figured was to be expected). However, when I clicked on the Network Critical button I found that I was getting an alert for each folder that contained files from the ‘missing’ drive that I had removed. I had to wait for WHS to work through all the files and folders that it expected to see on the missing drive before it would begin removing that drive from the storage pool.

Even then the process failed due to file conflicts. The culprits, I found, were my Media Center and the online backup software that I had installed on WHS. I shut both of these down, rebooted WHS, and the missing drive could finally be removed successfully.

The drive that I removed was a 2 TB drive and it took a long time for WHS to repair itself. I probably had about 5 days of downtime in total, which is far from great.

Having WHS repair itself from folder duplication saved me a lot of hassle though as there is nothing like trying to organize a couple of TB of files from a backup.

The only things that I lost were the backups of my Windows computers. I plan to install an add-in called Windows Home Server Backup Database-Backup (BDBB) so that I can back up my backups to a network share on another machine and/or enable duplication for them on my WHS.

For now I am just happy that WHS did its job. Folder duplication can be a life saver when a drive fails – but I still have a backup (offsite) of my most critical data.