Author Archives: Sam Nicholson

Run your dual-booted Ubuntu install under Windows with VMWare

First off, this isn’t what you might call a “supported” configuration, so as ever YMMV and don’t blame me if it goes horrifically wrong and you end up with two or three systems that won’t boot (your VM, your native install and, optionally, the VM host :P). Having said that, it worked fine for me, but it would be wise to make a disk image somewhere else and have a live CD ready in case it goes wrong and you need to do recovery. You can probably also do this with VirtualBox or VMWare Player, but I haven’t tried it. I started with Ubuntu 15.04 installed on my second hard disk in an extended partition alongside an NTFS data partition, and Windows 10 with VMWare Workstation 12 on the other disk, which is the primary (but is an SSD, so it’s small).

In short we’re going to create the VM, then add an extra boot disk and install GRUB into this from the Ubuntu DVD.

To start with, create a new VM, choose a Custom machine and go through all the usual steps of names and locations and cores and RAM. When you get to “Select a Disk Type”, choose SATA (if you choose SCSI it will whinge on first boot that performance will be poor). Then select “Use a physical disk”, set Device to the correct disk (PhysicalDrive1 in my case), choose “Use individual partitions”, then tick the relevant partitions for your Linux system (i.e. the root filesystem and the swap partition).

Finished dual boot VM

Click Next a few more times and then Finish (don’t hit Customize Hardware, or it won’t bother adding the disk). Then edit the machine to add a hard disk. This one can be SCSI since it’ll be an actual virtual disk; create a new one and set the size to something like 200MB (it only needs to hold the boot partition, so not much space is needed!). Finally, mount the Ubuntu 15.04 live disc in the CD drive, as you’ll need to boot from it first.

Now boot the machine. At this point you might be told “Insufficient permission to access file”; when I had this working in Workstation 9 it would just throw up a UAC prompt, but apparently 12 doesn’t, so restart VMWare Workstation as administrator to get the low-level disk access we need. Once you’ve got to the live desktop, open a terminal (Ctrl-Alt-T in Ubuntu). We’re going to format our new virtual disk, then chroot to it and install GRUB. To partition and format the new disk, run the following, but first heed this warning: for me the virtual disk was /dev/sda, but I suggest checking with ls /dev/sd* to make sure you don’t overwrite your dual-boot disk, as that would be a mess!

parted /dev/sda mklabel msdos
parted /dev/sda mkpart primary ext2 0% 100%
mkfs.ext2 /dev/sda1
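
To double-check you’ve just formatted the little 200MB virtual disk and not something precious, printing the partition table is a cheap sanity check (again assuming /dev/sda is the virtual disk):

parted /dev/sda print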

It goes without saying that these have to be run as root. The next step comes courtesy of AskUbuntu user Nathan Kidd, whose answer to this question explains how to chroot into an otherwise empty disk; it looks like this:

mkdir /mnt/chrootdir
mount /dev/sda1 /mnt/chrootdir
# bind-mount the live session's directories so the chroot has a working system
for dir in proc dev sys etc bin sbin var usr lib lib64 tmp; do
    mkdir /mnt/chrootdir/$dir && mount --bind /$dir /mnt/chrootdir/$dir
done
chroot /mnt/chrootdir

Finally we need to install GRUB and then run an update to generate the menus, and we’re done!

grub-install
update-grub
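
If grub-install complains that no install device was specified (behaviour seems to vary between GRUB versions), pointing it explicitly at the virtual boot disk should do the trick; this again assumes /dev/sda is the virtual disk:

grub-install /dev/sda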

That’s it! Exit the chroot, shut down the VM (it hung for me, so I had to hard-reset) and remove the ISO, and the next time it boots it’ll be running the real Ubuntu install on your hard disk. Remember to install the open-vm-tools and open-vm-tools-desktop packages (or VMWare Tools) to make auto-resize and stuff work. Also, I wouldn’t suspend the VM and then boot the real copy… that pretty effectively kills both the VM and the underlying install, in my experience.
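
For reference, the tools are a one-liner from a terminal once the real install is booted (those are the package names in 15.04):

sudo apt-get install open-vm-tools open-vm-tools-desktop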

Installing Ubuntu wireless drivers on fresh install

So here’s the problem I found myself faced with today: when I ran Ubuntu 15.04 live on my laptop I could pop open the Additional Drivers tool, enable the driver for my wireless card and get on the Internet. Lovely. Then I made the (I think very sensible) assumption that the same would work after I’d installed Ubuntu, until, to my surprise, it just sat there thinking about it.

Wireless Driver Working

It turns out that once it’s installed, Ubuntu expects an Internet connection and doesn’t use the CD (or USB stick in my case), which is not ideal when the missing driver is the network driver and you don’t have a wired connection in your house. In principle you should be able to go to Ubuntu Software under Software & Updates, select the CDROM option, then tell it to install the driver.

CDROM option enabled

No dice, it still just sits there. However, this AskUbuntu question (found using my phone!) points to the commands to do it from a terminal, which usually gives a bit more helpful information. First run ubuntu-drivers devices, which gives the name of the package to install, then attempt to install that package by the usual means:

root@buccaneer-linux:/home/sam# ubuntu-drivers  devices
== /sys/devices/pci0000:00/0000:00:1c.1/0000:03:00.0 ==
modalias : pci:v000014E4d00004358sv0000105Bsd0000E040bc02sc80i00
model    : BCM43227 802.11b/g/n
vendor   : Broadcom Corporation
driver   : bcmwl-kernel-source - distro non-free

== cpu-microcode.py ==
driver   : intel-microcode - distro non-free

root@buccaneer-linux:/home/sam# apt-get install bcmwl-kernel-source
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
  dkms
The following NEW packages will be installed
  bcmwl-kernel-source dkms
0 to upgrade, 2 to newly install, 0 to remove and 217 not to upgrade.
Need to get 0 B/1,574 kB of archives.
After this operation, 8,390 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Media Change: Please insert the disc labelled
 'Ubuntu 15.04 _Vivid Vervet_ - Release amd64 (20150422)'
in the drive ‘/media/cdrom/’ and press enter

At this point I put my USB stick back in and symlinked it (ln -s /media/sam/UUI /media/cdrom if you don’t know) and pressed Enter, which presented me with exactly the same message again. Apparently doing that just makes Ubuntu helpfully unmount the USB stick!

Adding new software source

The solution is to add the USB stick as a separate software source, although identifying the correct syntax took some trial and error. Under the Other Software tab, hit Add, enter a line something like the one below, hit Add Source, run apt-get update (there will be a lot of “failed to fetch” errors!) and then try the install again.

deb file:///media/sam/UUI vivid main restricted

However (and this is where I had trouble and had to run the update quite a few times), the three words after the path depend on the distribution. On your USB stick, inside the folder you specify (UUI in my case), there should be a dists folder, and in that another folder named for the distribution: vivid in my case. The final two words are the components available, and should match the subfolders of vivid.
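
You can check what to put there with a couple of ls commands; the paths here assume my mount point (UUI) and distribution (vivid), so adjust to taste:

ls /media/sam/UUI/dists/          # shows the distribution folder, e.g. vivid
ls /media/sam/UUI/dists/vivid/    # shows the components, e.g. main restricted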

Folder structure of Ubuntu install package archive

After you’ve run the update, the install should finally succeed (apt-get install bcmwl-kernel-source in my case), and then all that’s left is to remove your new software source.

tl;dr

Add a new software source like deb file:///media/usbstickmountpoint vivid main restricted, run ubuntu-drivers devices, then apt-get install the relevant package listed under “driver”.

Hope that helps someone!

How to auto-reload binaries in GNU ARM Eclipse

I’ve been working on my final year project using the GNU ARM Eclipse plugin and the STM32F4 Discovery board to write some simple signal generation software. Eclipse has a toolbar button to reset the target and restart debugging, and I figured it would also download a new binary if there was one. Unfortunately it doesn’t, and I couldn’t find any explanation of how to start debugging, find a problem, make a change, recompile and reload the software without stopping and restarting the debugger, which isn’t all that quick.

Turns out it’s really easy: at the bottom of the Startup tab in the Eclipse Debug Configurations dialog is a space for commands to run after a reset, and it looks like these are run when you hit the reset-and-restart button too. The commands I used were:

symbol-file C:\\Users\\Sam\\...someotherstuff...\\Debug\\projectname.elf
load C:\\Users\\Sam\\...someotherstuff...\\Debug\\projectname.elf

It looks like the double backslash is important to escape the path properly, and obviously I’ve changed the path and removed the project name to protect the guilty!

Hope that helps someone, or possibly me if I forget how to do this!

Linker trouble in GCC 4.8

Thought I’d share this one as it stumped me for a while. I discovered recently that Tarantula, a project I’ve been working on for YSTV, wouldn’t build using GCC 4.8 despite building fine with 4.7. It complained about not being able to resolve a few pthread symbols, specifically:

../../bin/libCaspar.so: undefined reference to `pthread_create'
../../bin/libCaspar.so: undefined reference to `pthread_detach'
../../bin/libCaspar.so: undefined reference to `pthread_join'

It turns out that in 4.8 the --as-needed linker flag is enabled by default, which drops any library whose symbols the linker doesn’t think are needed at the point it sees them. Because the linker scans the command line left to right, a library specified before the file that depends on it will not be linked.
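
As a quick illustration of the ordering gotcha (file names made up, and the flag passed explicitly so it behaves the same on older toolchains):

gcc -c main.c                                   # toy program that calls pthread_create
gcc -Wl,--as-needed -lpthread main.o -o demo    # fails: nothing needed pthread yet, so it was dropped
gcc -Wl,--as-needed main.o -lpthread -o demo    # links fine: the undefined symbols came first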

Additionally, if a shared library has a dependency of its own, it must now be specified on that library’s link line rather than the main executable’s, whereas it used to be enough to specify it just for the executable. For me the solution was to add the $(LIBS) makefile variable to the end of the link line for the .so file.
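
In Makefile terms that ended up as something like this (a sketch only: the object-list variable and recipe are illustrative, not Tarantula’s real ones):

# $(LIBS), which includes -lpthread, now goes at the end of the shared library's own link line
../../bin/libCaspar.so: $(CASPAR_OBJS)
	$(CXX) -shared -o $@ $(CASPAR_OBJS) $(LIBS)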

Alternatively, for the “big hammer” solution, the as-needed behaviour of GCC can be turned off using -Wl,--no-as-needed as a global compiler flag in the Makefile.

Inexplicably slow hard disks? Try disabling PIO mode

A baffling problem I came across recently was a server where copying between internal disks ran painfully slowly, getting around 3.3MB/s between internal SATA drives. Of the machine’s three disks, two were failing with huge numbers of ATA errors and bad sectors, so I pulled those out and ran test copies to and from the remaining system disk, with no improvement. Incidentally, to find out that the drives were failing I used a brilliant little tool called HDD Guardian, which reads SMART monitoring values the same way smartctl does, just with a nice graphical interface.

It turns out the server had defaulted to IDE mode on the disk controller, as Server 2003 doesn’t really contain any SATA AHCI drivers and you’d need a floppy disk to install them. In and of itself this wouldn’t cause an issue, but when Windows detects multiple CRC errors on a disk transfer, it reduces the transfer rate. Eventually the disk becomes stuck in Programmed Input/Output (PIO) mode, which essentially means the CPU has to shuttle every byte over its narrow I/O bus itself rather than letting the controller transfer data straight into system memory, which chews up a lot of CPU time. There’s a lot more info on this topic at http://wiki.osdev.org/ATA_PIO_Mode.

To fix this, there are some instructions in KB817472 in the Microsoft Knowledgebase and some other instructions in this TechLogon article, neither of which fixed the problem for me. Since the server in question is a VM host and all the VMs had been copied onto other systems anyway when the drives failed, I just reinstalled it with Server 2008 and the disks set to AHCI mode, which is cheating, but it made the problem go away.
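
For the record, in case it helps someone else, as far as I remember the KB817472 tweak boils down to telling Windows to reset its error counters after successful transfers, one registry value per IDE channel; treat this as a sketch, since the channel subkey numbers (0001, 0002, …) vary between machines:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E96A-E325-11CE-BFC1-08002BE10318}\0001" /v ResetErrorCountersOnSuccess /t REG_DWORD /d 1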

Now my transfer speeds are back to normal, so I just have to replace the two 500GB drives with 4000 reallocated sectors. Drive magnets anyone?

Neat Video noise reduction test

A quick comparative test of the Neat Video noise reduction plugin, mostly because YouTube does its own noise reduction so I can’t host it there. You probably want this full-screen to see the effect.

Notice the slight noise increase on the left-hand side and, more significantly, the herringbone pattern caused by RF interference with the composite video over the long(ish) cable run, plus the not-great analogue performance of the cameras anyway.

The source isn’t really going to improve at all as the equipment it runs through is very old, but the plugin has pretty much completely removed the noise!

My flight simulator experience

Today I got the opportunity to fly a Boeing 747-400! Cue cheesy grin in shoddy phone photograph:

Me in a 747 cockpit

OK, so it wasn’t a real one. British Airways ran a giveaway over Christmas and I was lucky enough to win one of three simulator sessions at the Cranebank training centre where BA train and certify their pilots. After a couple of emails and fighting my way through the rush hour traffic heading into London I dutifully showed up at a painfully early time this morning (8:15 is a dirty word among students!).

I was introduced to Andy, who would be my pilot and instructor for the session, and ushered through a maze of corridors to the simulator itself. The simulators at Cranebank are full-motion, so they are basically a cockpit wrapped in screens, all supported on big hydraulic jacks that make it feel like real flight. The inevitable safety briefing followed, and as with everything in aviation, redundancy was everywhere: in an emergency the simulator would ‘land’ and the access bridge would lower; if that failed there was a ladder, and if the simulator angle stopped the ladder coming down there was a scramble net! Andy pointed out they’d never even needed the ladder, as the bridge had always worked.

Primary Flight Display (PFD) with pink Flight Director lines (image by Denelson83, GFDL/CC-BY-SA-3.0, via Wikimedia Commons)

Some years ago I learned to fly a Grob Vigilant motor glider and even made a solo circuit, so we went through the controls and what the instruments did fairly quickly, paying particular attention to some of the automatic systems that make these aircraft (relatively) easy to fly. One such system is the Flight Director, which means altitudes, headings and climb rates can be programmed into the autopilot panel and pink lines on the PFD tell the pilot where to point the plane.

We set up for a takeoff on runway 27L at Heathrow, and I immediately got confused as the conditions were set for early evening but I’d arrived in brilliant sunshine at 8AM, which just goes to show how realistic the simulator is. You could even see headlights from the cars driving along the A30 and M25 near the runway! I made a rather wobbly takeoff that just about stayed on the runway, then flew a couple of turns, climbs and descents to try and get used to the aircraft; it feels a lot more sluggish than the little gliders did!

The simulator set up for takeoff

Up next was the big one: landing again. Andy talked me through it and handled all the aircraft setup like flaps and landing gear, and after nearly missing the runway and going quite a long way down it before touching the ground, it just about worked out. Oddly, I landed quite a long way down the runway on my solo glider flight too, after completely stuffing the approach angle and speed. Still, what is it they say? A good landing is any one you can walk away from, and even better if you can reuse the plane!

We reset the simulator to 12 miles out to try again, which is where things went a bit wrong. The autopilot was engaged and set up to capture the ILS (autoland… ish) at Heathrow, and the flight freeze was released. The throttles immediately advanced to full power and the whole cockpit shuddered a lot while Andy said “I’m not really sure I like that vibration” and hastily went for the freeze button again. We reset the simulator, reset some of the controls and changed some configuration, shut off the autopilot and autothrottle this time, and tried again with much the same result. I noticed that the engine power gauges were still sat at idle despite the throttles being in the middle, and to all appearances we were dropping straight down towards Clapham when Andy hit the freeze button again.

It turned out that above the power gauges the “REV” indicator was still on, showing the thrust reversers were still extended after the earlier landing. This shouldn’t be possible in flight and the lever won’t even move unless the plane is on the ground – for good reason too as it makes the plane fall out of the sky! To get them to close again we called another Andy who had set up our simulator session and he pulled all the flight management computer circuit breakers, positioned us back on the ground and let the reversers close, then reset everything again to clear the problem.

As a minor technical note, particularly topical at the time I’m writing this; there is a huge panel of circuit breakers above the pilot’s head in the 747, including two clearly labeled ACARS. For all the news media reporting on the tragic disappearance of MH370 and how detailed technical knowledge is needed to turn ACARS off in the 777, I think they need to check their facts as once in the cockpit you just need to be able to read and pull two breakers!

After a slightly more successful landing (well I landed at the right end of the runway) Andy showed me what a category 3 autoland looks like in thick fog. Imagine staring at a blank white sheet, until at 50 feet above the ground a runway materialises out of nowhere and a second later you hit it, hoping that the autopilot has set everything up properly!

Finally we went for a change of scenery and took off out of Geneva in Switzerland, climbing out towards the Alps before turning back for my neatest landing yet, in that I stopped in the middle of the runway even if I touched down quite a long way to the left!

We’d run out of time by then, but the experience was absolutely fantastic, and I’d like to say a big thank you to British Airways for setting up the competition, and to Andy C and Andy S for a great morning out.

How to fix a loose battery on Acer Aspire 5750

I’ve had my trusty Acer Aspire 5750 for a few years now, and with the addition of an SSD (from PC World of all people; there was a sale) I expect it to last a while yet. On and off I’ve noticed it failing to resume, or just powering off at odd times, and worked out that the battery is loose, but up until now I haven’t been able to fix it.

Inside the battery compartment is a pair of plastic loops which form the catch for the release mechanism, and there are fixed plastic wedges on the battery for the loops to snap into. It looks like one or both of them is on the edge of tolerance, so occasionally the battery would drop far enough to lose contact and the laptop would power off.

A little bit of plastic to the rescue

The solution: cut a tiny piece from a plastic card (I used an expired gift card), leave it on top of the battery when it goes back in, and then pull open the battery release and push the battery down fairly hard. Who’d have thought such an annoying problem would be so easy to fix!

Anyway, I couldn’t find anything on Google for it, so hopefully this helps somebody.

EDIT 28/08/2014:
Well, it turns out that wasn’t a very permanent solution; the problem kept coming back whenever the bit of plastic fell out. In the end I bought a new battery from Amazon (one of these), which has actually fixed it: the battery doesn’t wobble any more.

The astonishing connected world

Ever looked at a vision of the future from, say, 30 years ago? According to those 1980s authors, our future contains such excitements as flying cars, living in space and colonies on the Moon. Unfortunately, none of these things have really materialised, although we’ve made progress on quite a few of them. However, one thing almost never predicted is every individual wandering around always connected, always linked to the sum of human knowledge, able to share thoughts and experiences with the entire world as a matter of course.

Naturally I’m talking about the Internet, the evolution of the smartphone and the widening spread of mobile data, all of which add up to rich, connected applications pervading our daily lives. Many of us can’t imagine life without always-on connectivity, and are completely stuck when our phones/tablets/whatever break or run out of power. Going beyond the mundane desire to tweet pictures of what I had for dinner, connectivity has even played a part in overthrowing dictatorships and corrupt regimes in the Arab Spring. If I want to know the situation in Mali, learn how to fix my car, or just watch a cat falling down the stairs repeatedly, it’s available near-instantly, wherever and whenever I want it.

Today I’ve worked with people I’ll never meet on a product they’ll never see, I’ve pair-programmed with a friend 80 miles away as if he were sat next to me, and I’ve watched a music video performed by 40 people from radically different backgrounds scattered all over the world. The likes of YouTube have made unlikely celebrities (Numa Numa guy, anyone?), launched countless careers, and forced the music and broadcasting industries to turn established ideas on their heads to stay relevant in today’s connected world.

Looking to the future, and given the start of this post, I’m hesitant to speculate about what will come; there are a great many terrifyingly plausible visions of our future, but I think it’s safe to say that it’ll be interesting.

Tools Abuse

Watching people use complex software is often rather entertaining. As engineers we have a vast number of extremely powerful tools at our disposal, such as EDA systems, version control and IDEs, to name but a few. In most cases they come with a relatively low barrier to entry and a lot of flexibility to adapt to the way you work. On the face of it this should be a good thing: it gets supposedly grown-up enterprises out of the dark ages (Excel is not a bug tracker!) and onto real grown-up tools.
The problems arise when you take advantage of those low barriers and dive straight in without taking the time to learn how to use the tools correctly and what the recommended, standard approaches are, so that your skills are transferable. For students this is a known problem: graduates are leaving university without the skills (soft or hard) that accurately match what industry needs, having picked up bad habits along the way. The seasoned engineers and managers, however, should know better and be able to mentor the students back onto the straight and narrow. Instead, these engineers tend either to have made it up as they went along back in the early days, or to have read half the book and concluded they knew enough. Combine this with years of finding DIY solutions to little quirks, and the blind leading the blind ensues.
It’s little things like “You have to close down the IDE in order to change project”, when a quick empirical test shows you don’t, or “If I’ve committed to the SVN repository you need to undo all your changes, update and then redo them” (ever heard of the merge tool?). Or doing ten copy-paste operations by selecting and going up to Edit->Copy and Edit->Paste, even after being told about the perfectly good keyboard shortcuts, which is infuriating when you have to sit and watch it.
This kind of abuse of tools, and the perpetrators’ stubborn refusal to do anything about it, leads to the existence of sites such as TDWTF, which on the flip side does make for some very entertaining (if sometimes slightly scary) reading.