I tried a couple of methods of installing LibreELEC in a VMware virtual machine before finding the solution. The first was simply to pass a USB flash drive holding a bootable image through to the virtual machine. This didn’t work, as I discovered that vSphere virtual machines cannot boot from USB.
I also tried converting an image file to an .iso and booting from that – that didn’t work either.
I finally discovered the LibreELEC virtual appliance and simply deployed that to vSphere.
I would typically just post a link, but the LibreELEC mirrors seem to change, so it is best to know how to find a link to the .ova file yourself.
First, head to the LibreELEC download page. What we are looking for is a link named info, behind which is a mirror list.
On the mirror list page you can see that I have two mirrors available in the US:
Notice that I have selected the first portion of the path to a LibreELEC image. This URL takes me to the full listing of available LibreELEC images on that mirror.
Copy the link to the LibreELEC .ova file and then run the vSphere Client to start deploying the virtual appliance.
Click the File menu and then Deploy OVF Template.
Paste the LibreELEC .ova hyperlink and then click Next to continue through the rest of the deployment wizard.
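If you prefer the command line, VMware’s ovftool can deploy the OVA in one step instead of the wizard – a minimal sketch, where the mirror URL, ESXi hostname and datastore name are all placeholders rather than real values:

# Deploy the .ova straight from the mirror to an ESXi host
# (URL, credentials and datastore below are placeholders):
ovftool --acceptAllEulas --datastore=datastore1 --name=LibreELEC \
  "http://mirror.example.com/path/to/LibreELEC.ova" \
  "vi://root@esxi01.example.lan/"

ovftool prompts for the host password and uploads the appliance directly, which saves a manual download when the mirror link is all you have.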
Zentyal Server is an open source Linux small business server that can act as a Gateway, Infrastructure Manager, Unified Threat Manager, Office Server, Unified Communication Server, or a combination of the above.
I am performing this setup on a minimal virtual machine installation of Ubuntu Server 14.04. At the time of writing Zentyal 3.5 is the most current Zentyal release.
First make sure that repositories and software are up to date:
sudo apt-get update
sudo apt-get upgrade
Add the Zentyal 3.5 repository to /etc/apt/sources.list:
echo "deb http://archive.zentyal.org/zentyal 3.5 main extra" | sudo tee -a /etc/apt/sources.list
The D34010WYH1 NUC gives me the option of storing virtual machines on a 2.5-inch HDD or SSD inside the NUC (and the 1 TB WD Red drive gives me a good amount of local storage to play around with). The RAM is low voltage (1.35 V), which this NUC requires. The 32 GB USB 3.0 flash drive is overkill (only 4 GB is required for vSphere 5.5), but it is very small (and pretty fast too). I needed the HDMI adapter to connect the NUC to my HDTV during the vSphere installation.
The installation process is quite straightforward and you will need the following:
My current VMware vSphere white-box will be 5 years old in August. It has an AMD Athlon X2 BE-2400 Brisbane @ 2.3 GHz and 8 GB of RAM – and these days 8 GB of RAM is just not enough.
The hardware for my NAS is more recent – an HP Microserver N40L with 6 GB of RAM, running FreeNAS 8.x.
The cpubenchmark.net score for my vSphere box is 1333 – the score for the N40L is 979.
While I still need to look at ZFS performance on the N40L (it is OK, but not quite where I would like it to be), I know that new vSphere hardware does not desperately need much more CPU (though it would be nice).
I have been considering the Intel NUC (Next Unit of Computing) for a while now as an alternative to a tower PC for running vSphere. It maxes out at 16 GB of RAM and it really shines in terms of power efficiency (13-27 watts) and diminutive size (4″ x 4″). The i3-3217U DC3217IYE NUC (Ivy Bridge architecture) is the current NUC that I have my eye on.
The issue with the NUC, though, is storage – I can either install an mSATA SSD in the NUC or use shared storage on my NAS (or both). I would like to use local storage on the NUC for speed and back VMs up to my NAS – the cost of SSDs will limit my local storage capacity, though.
The next generation of NUCs includes Core i5 (Horse Canyon) and i7 (Skull Canyon) models. The i5-3427U offering (Ivy Bridge, cpubenchmark.net score: 3580) is of interest to me here as it includes Intel vPro remote management capabilities.
That still leaves the third generation of NUCs, based on the Haswell architecture, which have on-board SATA data and power connectors – these are slated to arrive in Q3 2013.
The other option for a diminutive vSphere box is the Gigabyte take on the NUC, called Brix. It looks like Gigabyte plans to offer Intel (i3 – i7) CPUs and AMD Kabini CPUs (the dual-core E1-2100, E1-2500 and E2-3000, and the quad-core A4-5000).
I think it will be worth keeping an eye on the Brix offerings to see where they differ from the NUC. The key areas for me will be efficiency, pricing and storage – what if Brix offers a 2.5″ or 3.5″ internal drive bay, for example? I imagine that the AMD offerings will be cheaper than the Intel NUC – but we will have to wait and see.
On the home NAS side of things, HP very recently updated their Microserver (Gen 8) with Celeron and Pentium models:
This potentially makes the Microserver a better vSphere candidate too, especially as the supported RAM has been upped to 16 GB.
The other good news is the built-in iLO support, dual gigabit NICs and USB 3.0 ports (as seen on the beta unit, at least):
So I’ll be keeping an eye on the new generation of Microserver too. The additional CPU and RAM are quite welcome (especially for ZFS). I am also keen to know the overall power consumption of these machines.
Either way, with both the NUC and the Microserver I can build a power-efficient and much smaller lab.
If I can score a couple of NUCs and another Microserver by the end of the year, I will be a happy man!
I’ve lost count of the number of times that I have installed Ubuntu Server on my VMware vSphere box – so I finally looked into performing an unattended install.
I could have set up DHCP and TFTP servers and PXE-booted the installer over the network – but I wanted something quicker than that (and I don’t have that much spare RAM on my vSphere box as it is).
So I settled on re-mastering an Ubuntu Server .iso image. The result is an unattended install, except for the initial boot screen (where I need to select a minimal virtual machine installation anyway).
The following steps were performed on Ubuntu Desktop.
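The copy commands themselves are not shown above, so here is a sketch of the likely steps – the ISO filename and mount point are placeholders for whichever release you downloaded:

# Mount the downloaded ISO and copy its contents to a working directory
# (ubuntu-server-amd64.iso is a placeholder filename):
sudo mkdir -p /mnt/serveriso
sudo mount -o loop ubuntu-server-amd64.iso /mnt/serveriso
sudo cp -rT /mnt/serveriso /opt/serveriso
sudo umount /mnt/serveriso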
The -r switch copies directories recursively, and -T treats the destination as the target itself rather than creating a subdirectory inside it.
Now we have a copy of our Ubuntu .iso to work on in /opt/serveriso – but we need to make these files writable:
sudo chmod -R 777 /opt/serveriso/
With this preparation done we can start customizing things.
If we look at the isolinux/langlist file we see all of the languages that Ubuntu supports, in an abbreviated format:
I am only interested in an English install, so from within /opt/serveriso I am going to overwrite the contents of isolinux/langlist with the single abbreviation for English, which is “en”.
echo en >isolinux/langlist
This stops the language selection menu from appearing during installation.
The next step of the process is to create a kickstart file – this will provide the server install with the answers to the various questions asked during installation, such as timezone, username, password, partition structure and so on.
Install Kickstart Configurator:
sudo apt-get install system-config-kickstart
Click the Dash button and type kickstart and then click on the kickstart application.
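If the Dash search does not find it, the configurator can also be started from a terminal by its package name:

system-config-kickstart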
Obviously you should customize your settings as you see fit – I have provided mine for reference.
Click File, Save File and save the kickstart file ks.cfg to /opt/serveriso.
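For reference, the kind of file the configurator produces looks roughly like the skeleton below – every value here is an illustrative placeholder, not my actual configuration:

# Illustrative ks.cfg skeleton – all values are placeholders:
lang en_US
langsupport en_US
keyboard us
timezone America/New_York
text
install
cdrom
rootpw --disabled
user ubuntu --fullname "Ubuntu User" --password changeme
auth --useshadow --enablemd5
bootloader --location=mbr
zerombr yes
clearpart --all --initlabel
part / --fstype ext4 --size 1 --grow
part swap --size 1024
firewall --disabled
skipx
reboot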
While using the Kickstart Configurator you may have noticed that the Package Selection screen did not work. Fortunately, we can manually edit the ks.cfg file so that the packages we want are installed during the Ubuntu Server installation.
At the end of ks.cfg, add a %packages section and then list the packages that you want installed. I chose to install nano, openssh-server and open-vm-tools:
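The resulting section at the bottom of ks.cfg looks like this:

%packages
nano
openssh-server
open-vm-tools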