I tried a couple of methods of installing LibreELEC in a VMware virtual machine before finding the solution. The first was to simply pass a jump drive containing a bootable image through to the virtual machine. This didn’t work, as I discovered that vSphere virtual machines cannot boot from USB.
I also tried converting an image file to an .iso and booting from that – that also didn’t work.
I finally discovered the LibreELEC virtual appliance and simply deployed that to vSphere.
I would typically just post a link, but the LibreELEC mirrors change over time, so it is best to know how to find a link to the .ova file yourself.
First head to the LibreELEC download page. What we are looking for is a link named info, behind which is a mirror list.
On the mirror list page you can see that I have two mirrors available in the US:
Notice that I have the first portion of the path to a LibreELEC image selected. This URL takes me to the full listing of available LibreELEC images on that mirror.
Copy the link to the LibreELEC .ova file and then run the vSphere Client to start deploying the virtual appliance.
Click the File menu and then Deploy OVF Template.
Paste the LibreELEC .ova hyperlink and then click Next to continue through the rest of the deployment wizard.
MediaDrop is an open source online video platform for managing and delivering video, audio and podcasts.
Sadly I found the official documentation to be lacking and had to cross reference it with other install guides to even get a basic setup running.
This guide will take you through a basic installation of MediaDrop that utilizes the built-in Paste server provided with Python. If you prefer a more permanent solution you can set up an Apache 2 or Nginx web server yourself. For the time being I am happy enough just to have MediaDrop running – having experienced a couple of bugs, I do not want to mess with my working configuration any further right now.
My working environment is a minimal installation of Ubuntu 14.04 Server on VMware vSphere 5.x.
Let’s begin our installation – first we will elevate ourselves to the root user and then install MySQL, system libraries, development headers, Python libraries and tools:
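On my Ubuntu 14.04 server that works out to something like the following – note that the exact package list is an assumption based on a typical MediaDrop install, so adjust it as needed:

```shell
# Install MySQL along with the system libraries, development headers
# and Python tooling that MediaDrop's dependencies build against
sudo apt-get update
sudo apt-get install -y mysql-server python-dev python-virtualenv \
    libjpeg-dev zlib1g-dev libfreetype6-dev libmysqlclient-dev git
```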
Download and install all the necessary dependencies for MediaDrop into your virtual environment:
python setup.py develop
Generate the deployment.ini file:
paster make-config MediaDrop deployment.ini
We will now bring up a mysql> prompt to administer the MySQL database:
mysql -u root -p
Enter your MySQL password when prompted.
Create the MySQL database mediadrop_db and the MySQL user mediadrop_user and a password for mediadrop_user:
mysql> create database mediadrop_db;
mysql> grant usage on mediadrop_db.* to mediadrop_user@localhost identified by 'mysecretpassword';
mysql> grant all privileges on mediadrop_db.* to mediadrop_user@localhost;
Note: Change ‘mysecretpassword’ to the password you want for mediadrop_user.
Edit the deployment.ini file:
Under the [app:main] heading, look for the sqlalchemy.url setting:
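Assuming the database name, user and password created at the mysql> prompt above, the setting should end up looking something like this (substitute your own password):

```ini
[app:main]
sqlalchemy.url = mysql://mediadrop_user:mysecretpassword@localhost/mediadrop_db?charset=utf8&use_unicode=0
```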
The D34010WYH1 NUC gives me the option of storing virtual machines on a 2.5 inch HDD or SSD inside the NUC (and the 1TB WD Red drive gives me a good amount of local storage to play around with). The RAM is low voltage (1.35 V), which the NUC requires. The 32 GB USB 3 flash drive is overkill (only 4 GB is required for vSphere 5.5) but it is very small (and pretty fast too). I needed the HDMI adapter to connect the NUC to my HDTV during vSphere installation.
The installation process is quite straightforward and you will need the following:
My current VMware vSphere whitebox will be 5 years old in August. It has an AMD Athlon X2 BE-2400 Brisbane @ 2.3GHz and 8 GB of RAM – and these days 8 GB of RAM is just not enough.
The hardware for my NAS is more recent – an HP Microserver N40L with 6 GB of RAM, running FreeNAS 8.x.
The cpubenchmark score for my vSphere box is 1333 – the score for the N40L is 979.
While I still need to look at the performance of ZFS on the N40L (it is OK but not exactly where I would like it to be) I know that a lot more CPU is not desperately needed for new vSphere hardware (but it would be nice).
I have been considering the Intel NUC (Next Unit of Computing) as an alternative to having a tower PC to run vSphere for a while now. It maxes out at 16 GB of RAM and it really shines in terms of its power efficiency (13-27 watts) and diminutive size (4″ x 4″). The i3-3217U DC3217IYE NUC (Ivy Bridge architecture) is the current NUC that I have my eye on.
The issue with the NUC though is storage – I can either install an mSATA SSD in the NUC or use shared storage on my NAS (or both). I would like to use local storage on the NUC for speed and back up VMs to my NAS – the cost of SSDs will limit my local storage capacity though.
The next generation of NUCs includes Core i5 (Horse Canyon) and i7 (Skull Canyon) models. The i5-3427U offering (CPU benchmark score: 3580) is of interest to me here as it includes Intel vPro remote management capabilities.
This still leaves us with the third generation of NUCs (based on Haswell), which have an on-board SATA connector and SATA power connector – these are slated to arrive in Q3 2013.
The other option for a diminutive vSphere box is Gigabyte’s take on the NUC, called Brix. It looks like Gigabyte plans to offer Intel (i3 – i7) CPUs and AMD Kabini (E1-2100, E1-2500 & E2-3000 dual core, and A4-5000 quad core) CPUs.
I think it will be worth keeping an eye on the Brix offerings to see where they differ from the NUC. The key areas for me will be efficiency, pricing and storage – what if Brix offers a 2.5 or 3.5″ internal drive bay, for example? I imagine that the AMD offerings will be cheaper than the Intel NUC – but we will have to wait and see.
On the home NAS side of things HP very recently updated their Microserver (Gen 8) with Celeron and Pentium models:
This does potentially make the Microserver a better vSphere candidate too, especially as the supported RAM has been upped to 16 GB.
The other good news is the built-in iLO support, dual gigabit NICs and USB 3.0 ports (as seen on the beta unit, at least):
So I’ll be keeping an eye on the new generation of Microserver too. The additional CPU and RAM are quite welcome (especially for ZFS). I am also keen to know the power consumption for these machines as a whole.
Either way with both the NUC and the Microserver I can build a power efficient and much smaller lab.
If I can score a couple of NUCs and another Microserver by the end of the year, I will be a happy man!
Enter a password for Tiny Tiny RSS to register with MySQL – a random password will be generated if left blank:
Confirm your application password:
Next we need to use nano to edit some configuration files.
First we need to edit our server address in /etc/tt-rss/config.php:
sudo nano /etc/tt-rss/config.php
Find the line define('SELF_URL_PATH', 'http://yourserver/tt-rss/'); and change it to define('SELF_URL_PATH', 'http://localhost/tt-rss/'); (as per the server address that we set previously):
Press Ctrl + O then Enter to save the changes to config.php and then Ctrl + X to exit nano.
To get Tiny Tiny RSS to update feeds we need to edit /etc/default/tt-rss:
sudo nano /etc/default/tt-rss
Change DISABLED=1 to DISABLED=0 to allow the Tiny Tiny RSS daemon to be started:
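After the edit, the relevant line in /etc/default/tt-rss should read:

```shell
DISABLED=0
```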
Press Ctrl + O then Enter to save the changes to /etc/default/tt-rss and then Ctrl + X to exit nano.
Start the Tiny Tiny RSS service:
sudo service tt-rss start
Obtain the IP address of your Ubuntu Server installation:
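Any of the usual commands will do here, for example:

```shell
# Print the addresses assigned to each network interface
ip addr show

# Or just list the host's IP addresses on one line
hostname -I
```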
Open a browser on another machine and navigate to your Tiny Tiny RSS URL:
Log in with the username admin and the password password.
Click Actions, Preferences and Users to change your admin password and add users. You can import feeds under the Feeds tab or click Exit Preferences and then Actions, Subscribe to feed to add feeds manually.
I’ve lost count of the number of times that I have installed Ubuntu Server on my VMware vSphere box – so I finally looked into performing an unattended install.
I could have set up DHCP and TFTP servers and performed a PXE boot from images over the network – but I wanted something quicker than that (and I don’t have that much spare RAM on my vSphere box as it is).
So I settled on re-mastering an Ubuntu Server .iso image. The result is an unattended install, except for the initial boot screen (where I need to select a minimal virtual machine installation anyway).
The following steps were performed on Ubuntu Desktop.
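The first step is to mount the downloaded Server .iso and copy its contents to a working directory – the ISO filename and mount point below are just examples:

```shell
# Mount the downloaded ISO read-only on a temporary mount point
sudo mkdir -p /mnt/iso /opt/serveriso
sudo mount -o loop ubuntu-14.04-server-amd64.iso /mnt/iso

# Copy the ISO contents into the working directory:
# -r copies recursively, -T copies into /opt/serveriso itself
# rather than creating a subdirectory inside it
sudo cp -rT /mnt/iso /opt/serveriso

sudo umount /mnt/iso
```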
The -r switch copies directories recursively, and -T treats the destination as the target itself rather than creating a (singular) target directory inside it.
Now we have a copy of our Ubuntu .iso to work on in /opt/serveriso – but we need to make these files writable:
sudo chmod -R 777 /opt/serveriso/
With this preparation done we can start customizing things.
If we look at the isolinux/langlist file we see all of the languages that Ubuntu supports (in an abbreviated format):
I am only interested in an English install so I am going to overwrite the contents of isolinux/langlist with the single abbreviation for English, which is “en”.
echo en >isolinux/langlist
This stops the language selection menu from appearing during installation.
The next step of the process is to create a kickstart file – this will provide the server install with the answers to the various questions asked during installation, such as timezone, username, password, partition structure and so on.
Install Kickstart Configurator:
sudo apt-get install system-config-kickstart
Click the Dash button and type kickstart and then click on the kickstart application.
Obviously you should customize your settings as you see fit – I have provided mine for reference.
Click File, Save File and save the kickstart file ks.cfg to /opt/serveriso.
While using the Kickstart Configurator you may have noticed that the Package Selection screen did not work. Fortunately we can manually edit the ks.cfg file so that the packages that we want are installed during Ubuntu Server installation.
At the end of ks.cfg add %packages and then list the packages that you want installed. I chose to install nano, openssh-server and open-vm-tools:
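With those packages, the end of ks.cfg looks like this:

```
%packages
nano
openssh-server
open-vm-tools
```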