Building a cheap gaming PC – Part 2

In part one of this series, I ran through the build of the first machine, the Optiplex 9010. This was the more ‘high end’ of the builds, and the machine I set out to build in the first place. So what of the 7010 I never intended to order? Well I set myself a challenge to see what I could throw together for ~£100.

With the £36 Optiplex 7010 as a base, I decided that 8GB of memory would be enough for it. I already had 2x2GB sticks (one from the 9010 and one that the 7010 came with), so I purchased another two from CEX for £3.50 each to bring me up to 8GB, which should be just about enough for a gaming system.

Next was the CPU, the Pentium was agonisingly slow and would bottleneck any half decent graphics card, so it had to go! I sprung for a modest, but much better Core i3-3220. It cost me a grand total of £12 from CEX, so the price was right. Next up I needed storage, so at £8 a pop from CEX I grabbed another 500GB mechanical drive. It’s no SSD but it will do for what I am trying to achieve with this build.

Keeping in mind the £100 budget, but still wanting decent performance, didn't leave me with many options. I was already £63 deep at this point and I still needed a power supply and a graphics card! Again, I wanted to go new for the PSU as it's something that can bring down the entire system if it goes wrong. Luckily I found a factory refurbished Corsair VS450 on SCAN for £27, so I grabbed it, bringing us up to £90… dangerously close to the £100 budget. I was wondering how I was ever going to get a decent GPU for anywhere even close to £10…

But then I stumbled upon a GTX580 on eBay. It was listed as 'make an offer' and marked as 'not working', with the description citing a driver issue. I figured it was worth the risk and made an offer of £20, which was £10 over my budget, but I knew the seller wouldn't have taken £10! I waited, and a few hours later my offer was accepted. The card was a full size one, so I knew some modifications would need to be made to the case to make it fit, if I could even get it working properly.

So I got all the parts together and installed Windows 10, and all seemed well except there were no video drivers. On first boot, Windows downloaded the GeForce drivers and installed them, and then I got the same driver error that the seller had mentioned in the listing. I thought this was very strange, given this was a completely fresh install. I rebooted the machine, this time noticing that there were two 'out of place' pipe symbols ( | ) on the VGA BIOS screen, which concerned me. After googling around and trying a few re-installs I figured it was most likely a hardware issue and I'd wasted £20.

Before I resigned myself to binning the card and jumping back on eBay, I thought I'd try one last thing… the oven! This is a pretty well documented tactic for fixing broken electronics. The basic idea is that you heat the board up just enough to melt and re-flow the solder, re-forming any bad joints and (hopefully) fixing the card.

I stripped the card down to a bare PCB, put it on a baking tray raised up by little foil balls, pre-heated the oven to 200°C (~390°F) and baked until golden brown for 8 minutes. I shut off the oven and opened the door, waiting a couple of minutes before carefully removing the card. I applied new thermal grease to the chips and re-installed the heat sink, fans, shrouding, etc. that I had cleaned thoroughly while the card was baking.

Getting ready to test the oven fresh GTX580

In order to fit the large card into the 7010's small chassis I had to make some modifications. I needed to drill out the rivets holding in the lower drive bay (see the picture of the 9010 in my earlier post, bottom right) and, given the heft of the card, I used one of the old PCI slot covers to make a card support bracket. Since I now didn't have anywhere to mount the hard disk, I made some adapter brackets out of a couple more of the old PCI slot covers so that I could mount the 3.5in drive in the unused 5.25in bay. Given the 'tool-less' design of the case, this was a bit more painful than it could have been, and I may yet order an adapter online to use, but for now it works fine.

I installed the card in the Optiplex and crossed my fingers… powering the machine on, I saw the VGA BIOS screen (sans |'s) and the machine booted into Windows. I was pretty excited, and then all of a sudden the screen went blank. Crap, my little cooking adventure had been for nothing! But then it came back on, at the right resolution; it was just Windows installing the drivers! Cautiously optimistic, I began installing some benchmark software to see if my freshly baked GPU was a) stable, and b) any good.

The completed 7010 build

So here is the final spec of the machine, and the cost of each item.

Item Price From
Dell Optiplex 7010 £36 eBay
Intel Core i3 3220 £12 CEX
2x2GB DDR3 1600MHz RAM £7 CEX
2x2GB DDR3 1600MHz RAM £0 eBay
500GB SATA HDD £8 CEX
Sparkle GeForce GTX580 1.5GB £20 eBay
Corsair VS450 450W PSU £27 SCAN
Total £110

In part three, we will benchmark both systems and see how they stack up performance/cost wise to some pre-built machines, and see what it would have cost to build similar spec machines with new parts!


Building a cheap gaming PC – Part 1

Recently, I decided I might want to get back into PC gaming. The last time I built a PC or really played any (at the time…) modern titles was in 2007/8, when I was rocking an Athlon 64 X2 5600+ with a GeForce 8800GT; needless to say, my MacBook Pro with its shitty Intel graphics would probably be faster than that machine ever was! Unfortunately the MacBook Pro sucks for playing anything modern by 2018 standards, so I would need something with a little more oomph in the graphics department!

With this little foray back into PC gaming, I decided I didn't want to spend too much money, in case I got bored of it again. So, inspired by 'Scrapyard Wars' and from doing some research (i.e. watching lots of YouTube videos), I figured anything supporting Intel's LGA1155 CPUs would be a good, cheap base. LGA1155 covers Intel's second and third generation Core CPUs, up to the i7, which are still decent even today. With that in mind, the cheapest and easiest option seemed to be: find an old office machine or workstation and throw a decent graphics card in it. So off to eBay I went!

I ended up placing bids on two Dell machines, an Optiplex 9010 and an Optiplex 7010. Both were pretty low spec, with 2GB RAM, Pentium G645 CPUs and no hard disks. As is the way when you bid on more than one item, you win both. I got the 9010 for £35 and the 7010 for £36, which actually was a pretty decent price! But now I had two machines… so I figured I'd build two rigs, one 'high end' (9010) and one 'low end' (7010), see how they go for price/performance and probably get rid of the 'low end' machine when I was finished with it.

Both systems are very similar, both using the Q77 Express chipset and having two PCIe x16 slots (one wired at x4), with the 9010 also having RAID support. The next thing I needed was graphics cards for the machines. My research told me that probably one of the best price/performance cards on the used market was the GeForce GTX970 4GB (well… 3.5GB). Thanks to crypto miners, graphics cards are still fairly expensive, though not as bad as they were earlier in the year. I ended up getting a GeForce GTX580 for a good price for the 7010 (more on that in the next post!)

Getting ready to build

So let's run through the Optiplex 9010 build.

I ended up finding a Gigabyte GTX970 designed for Mini-ITX machines (perfect for me, as the Dells' cases aren't huge) for £120. A bit pricey, but not really more than these cards go for on average. Next up I needed memory, as 2GB was not going to cut it! I had a look at new memory but I was looking at £39/stick for 8GB DDR3 1600MHz, so back to eBay I went. Prices on there weren't much better, so out of curiosity I checked out CEX (a second hand 'entertainment' store in the UK) and found they had the memory I wanted for £25/stick, so I ordered two of them. No, they aren't matched pairs and yes, they are used, but given the 'budget' theme of this, who cares. CEX also offer a two year warranty on nearly everything they sell, which I unfortunately had to make use of.

One of the sticks arrived DOA, so I took it into the local store and they swapped it out, no questions asked! The new stick worked just fine, so it was time to turn my attention to the CPU. The fastest CPU that the 9010 officially supports is the Intel Core i7 3770, so I had a look for one of those. Once again I tried eBay, but the best price I found was actually at CEX (£85) again, so I ordered it from them, once again getting a warranty on a used part.

The machine was coming together now, but I still needed a few items: storage and a power supply, as the 240W power supply that came with the Optiplex would definitely not cut it trying to run an i7 and a dedicated GPU, and the hard disk was non-existent! The power supply was a part that I wanted to buy new, as if it goes wrong it can wreck the entire system! I ended up going with a Coolermaster Masterwatt 650W Modular PSU (that I bought at a stupid price from Currys, simply because I had some money left on a voucher to use up). This PSU goes for £64 from SCAN; however, if I didn't have the voucher I would have gone with something cheaper, like this refurbished VS650 from Corsair.

For a boot drive I wanted to have an SSD, luckily I already had a 240GB PNY SSD that I was using in an old laptop, however these are worth about £34 new. For game/file storage I decided to go for the super cheap option, so I bought two 500GB mechanical drives from CEX for the princely sum of £16 and ran them in RAID0, because why the hell not!

I installed Windows 10 Pro on the machine, and much to my happiness it activated automatically. The machine didn't come with a COA, but it would have come with a digital license for Windows 8 Pro when it was new, which seemed to carry across to Windows 10, so I was sorted for an OS.

The complete 9010 build

Unfortunately cable management options are limited in this case, as barring one, the panels are riveted on, so I couldn't hide cables behind the motherboard as I would have liked. I think it's an OK job given the limitations and shouldn't impede airflow. I am also limited in the size of GPU I can use, with about 22cm (9in) of space to play with, unless I want to mod the case… see below for more on that.

So here is the final spec of the machine, and the cost of each item.

Item Price From
Dell Optiplex 9010 £35 eBay
Intel Core i7 3770 £85 CEX
2x8GB DDR3 1600MHz RAM £50 CEX
Gigabyte Mini ITX GTX970 4GB GDDR5 £120 eBay
PNY 240GB SSD (£34) Owned
2x 500GB SATA HDD £16 CEX
650W PSU (£68) Currys
Total £306 (£408)

Not including the items I either owned or had a voucher for, the total cost for the 9010 build was £306; if you include the SSD and PSU it comes up to £408. However, I probably would have gone for a cheaper, non-modular PSU if I was buying it with cash.

So far I have been running this setup for about a month and I am very happy with it. It has had no issues whatsoever and runs modern games on high settings at 1080p very well, actually much better than I thought it would. Stay tuned for the next instalments, when we take a look at what I did with the Optiplex 7010 and then how these machines stack up in terms of price/performance!



Continuous Integration with GitHub, SFDX and CircleCI… Easier than you think!

This post is a follow up/companion to the talk I did at Dreamforce 2018. If you didn’t get to see it in person, you can check out the slides here, and I will update this post when the recording becomes available, but for now, read on.

In the Salesforce ecosystem, the traditional way of moving code from development environments (sandboxes, etc.) to production has been either change sets or the migration tool (ANT). Neither method is perfect, but until recently they were all we had. Change sets are an easy, but time-intensive, process. They can be created in the user interface and are well within the reach of most people. The migration tool was arguably more powerful (a CLI-based tool, able to be scripted, etc.) but a lot more difficult to use. Neither tool was particularly well suited to an agile environment that required continuous integration or delivery.

So what do I mean when I say continuous integration? I am referring to both the development practice and the tooling required to facilitate it.

A good explanation of CI, taken from Microsoft’s Azure docs is as follows: “Continuous Integration (CI) is the process of automating the build and testing of code every time a team member commits changes to version control. CI encourages developers to share their code and unit tests by merging their changes into a shared version control repository after every small task completion. Committing code triggers an automated build system to grab the latest code from the shared repository and to build, test, and validate the full master branch (also known as the trunk or main).”

This has traditionally been difficult with Salesforce, because we lacked the tooling to do it effectively. In the old world, Salesforce development took an 'org-centric' view of the world, with your production org serving as the 'source of truth' and sandboxes containing work in progress. This 'org-centric' model has a number of problems (e.g. prod can be changed by anyone, developing dependent features in separate environments, merge conflicts, etc.).

Since the advent of Salesforce DX (SFDX) we have been handed the tools to move towards a 'source-centric' world, with our source/version control system (e.g. Git) becoming the source of truth, and our scratch orgs essentially becoming 'runtimes' rather than orgs in their own right. Scratch orgs are ephemeral things that can be created and destroyed at will, only needing to live as long as the development cycle for whatever feature you are working on, or as long as it takes for code to be pushed to them and tested (if used in a CI pipeline).

Because of SFDX, we now get access to all of the power of modern source/version control systems, such as Git. Git providers like GitHub give us powerful user interfaces, the ability to perform code reviews (pull requests) with ease, and the ability to track every single change with information like who changed the file, what was changed and when. As code is versioned, we also gain the ability to revert to previous versions of the source, making it much easier to recover if something goes wrong. Alongside this, we know that our code is stored safely outside of Salesforce and we can set up access control to prevent code being overwritten by unauthorised users.

Another huge advantage we gain from source/version control is the ability to create branches of our source code. For example, you may have your 'production'-ready application in the 'master' branch, with the version you are currently working on in the 'develop' branch (think of your UAT/pre-prod environment). Whenever you work on a new feature, you can make a copy of your 'master' branch into its own new branch (e.g. 'feature/new-feature') and do the work there. Once you are happy with it, a pull request can be made to merge it into 'develop' for testing. Once this has been completed and all of the code in 'develop' is ready for release, it can then be merged into 'master' for your release.
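This branching flow can be sketched with plain Git commands. Here is a throwaway example using the branch names above; everything else is illustrative, and in practice the merges would happen via pull requests on GitHub rather than local merges:

```shell
# Create a throwaway repo to demonstrate the flow
git init demo && cd demo
git config user.email "demo@example.com" && git config user.name "Demo"
git checkout -b master
git commit --allow-empty -m "Initial commit"
git checkout -b develop

# Start a new feature from master and do the work there
git checkout master
git checkout -b feature/new-feature
git commit --allow-empty -m "Work on the new feature"

# Merge the feature into develop for testing (via a pull request in practice)
git checkout develop
git merge --no-ff feature/new-feature -m "Merge feature/new-feature"

# When develop is ready for release, merge it into master
git checkout master
git merge --no-ff develop -m "Release"
```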

Source/version control is only half of the equation. It is all well and good to have your code in Git, and this in itself is valuable, but the real power comes from automation and continuous integration. With CI set up, every time we make a commit to our feature branch ('feature/new-feature') we are pulling it from Git, pushing it to a scratch org and running all of our tests. This lets us know very quickly if a) our code even deploys, and b) we've broken any tests. We also use a scratch org 'locally' for running and testing our code on our local machine (of course the scratch org actually runs in the cloud), in a similar way to how we would have used a developer sandbox in the past.

Once our code is ready for the next environment (e.g. UAT), we can have our CI setup automatically push our code to our UAT sandbox whenever a commit is made to 'develop'. Finally, once we are happy and ready to move to our production org, we can have our CI deploy to production upon a merge into the 'master' branch.

Example of CI Development Process

So let's talk about how we actually achieve this with Salesforce. In this example I will be using GitHub as my source/version control system (alternatives include BitBucket and GitLab) and CircleCI as my CI automation tool (alternatives include TravisCI and Jenkins). The 'glue' that ties all of this together is SFDX.

I've created a GitHub repository with everything you need to get started. I would suggest you clone it from here, as the instructions below reference scripts from it. If you want to find out what the scripts are doing, simply open them in a text editor. These instructions require a *nix-like environment (e.g. macOS, Linux, Bash on Windows) and have been tested on macOS only.

NOTE: The first time this build runs it will deploy whatever is in your force-app/ folder to production. As with anything you do, be sure to try this in a developer edition org, or a sandbox first before using this with your production org!

In this example there is only a simple 'Hello World' Apex class, test and Lightning component. You should remove these before you begin and replace them with your own source code. Salesforce provide steps on how to migrate your existing code to SFDX here. After you've cloned the repository to your machine, you should follow the instructions here (inside the folder you cloned to).

Now that we have our code ready to use with CI, let's get going.

Let's tackle authentication first. To authenticate to our production org and to create scratch orgs, we use the JWT OAuth flow.

  1. You first need to create a certificate and key to authenticate with. To do this you can run the script in build/
    • Follow the prompts when creating the certificate files
    • Take note of the Base64 output (big long chunk of text), as you will need this to set up CircleCI later

      Output from the key generation script

  2. You will need to create a connected app in your production (and any sandboxes you wish to use CI with)
    • First, from Setup, enter App in the Quick Find box, then select App Manager. Click New Connected App.
    • Give your application a name such as ‘CircleCI’
    • Make sure you check Enable OAuth Settings in the connected app
    • Set the OAuth callback to http://localhost:1717/OauthRedirect
    • Check Use Digital Signatures and add your certificate file (server.crt), which will be in the build/ folder. Once you have done this, delete the file
    • Select the required OAuth scopes
    • Make sure that refresh is enabled – otherwise you’ll get this error: user hasn't approved this consumer
    • Ensure that Admin approved users are pre-authorized under Permitted Users is selected
    • Ensure that the System Administrator profile is selected under the Profiles related list
    • Take note of the Consumer Key, as you will need it to set up CircleCI

Connected App Settings
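For the curious, the key generation script in build/ boils down to something like the following. This is a sketch only, so the exact flags in the real script may differ; the Base64 output is what ends up in CircleCI, and the key is ultimately consumed by the sfdx force:auth:jwt:grant command:

```shell
# Generate a self-signed certificate and private key (a sketch of what the
# build/ script does; the real script's flags may differ)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=CircleCI" \
  -keyout server.key -out server.crt

# Base64-encode the private key; this becomes the SFDC_SERVER_KEY variable
base64 server.key

# CircleCI later authenticates with it, roughly like so:
#   sfdx force:auth:jwt:grant --clientid "$SFDC_PROD_CLIENTID" \
#     --jwtkeyfile server.key --username "$SFDC_PROD_USER"
```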

Now that we have authentication set up, we can configure CircleCI. I have provided a basic config.yml file; this is already within the .circleci/ directory, along with some shell scripts for Circle to use for deployment and validation. Circle has extensive documentation on these config files here, and stay tuned for a future post covering this area in more detail.
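To give you a flavour before that post arrives, a heavily stripped-down config.yml might look something like this. Treat it as illustrative only; the Docker image, step names and script path here are assumptions, so refer to the actual file in the repository:

```yaml
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8  # any image you can install the Salesforce CLI on
    steps:
      - checkout
      - run:
          name: Install Salesforce CLI
          command: npm install --global sfdx-cli
      - run:
          name: Create a scratch org, push source and run tests
          command: bash .circleci/deploy.sh  # illustrative script name
```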

  1. You can now set up your CircleCI build
    • Ensure you have connected your GitHub account to CircleCI. To do this, click 'Sign Up' and then 'Sign Up with GitHub'

      Adding a project in CircleCI


    • Once logged in, click on Add Projects, choose the GitHub repository to use and click Set Up Project, then click Start Building. There is an example config.yml in this repository already; you can edit this to suit your needs.
    • Cancel the first build, as it will fail without any environment variables set
    • Click the gear icon next to the repository name on the left hand side of the screen
    • In the settings screen, choose Environment Variables. You will need to add three variables by clicking Add Variable
      • SFDC_SERVER_KEY is the Base64 output generated in Step 1
      • SFDC_PROD_CLIENTID is the Consumer Key from Step 2
      • SFDC_PROD_USER is the username to use with CircleCI (This should be an Integration user, with the System Administrator profile)

Setting Environment Variables in CircleCI

    • You can now re-run the first build.

Once you have all of this configured and working, you can use the CI build process from here on out, and hopefully never have to worry about a damn change-set again!

Now, every time you push to any branch other than 'master', a scratch org will be created, your code deployed to it and all tests run. If you then merge into 'master', a production build will be run, validating and deploying your code.

Remember, this is just an introduction. In future posts I will explain in further detail the scripts and config files used in this repository, so you can customise them to suit your exact requirements.


Building a smart ‘anything’ with salesforce

This blog post is an expanded version of a talk I gave at the July 2018 London Salesforce Developer meetup with slides available here

I am sure by now we've all heard of the 'internet of things', and maybe some of us even have some 'smart devices' in our homes or offices. Maybe you've heard of Salesforce IoT. This post will show you the things you need to build a smart 'anything' on Salesforce.

Before we go too much further, we need to understand event driven architecture, which underpins many IoT (and general) integrations. What I mean by event driven architecture is this: rather than a standard point to point integration (one system connects directly to another to exchange data) or middleware (software sits in the middle of the systems to manage the exchange of data), event driven integrations rely on three things: publishers, subscribers and the event bus. Simply put, systems or devices publish or subscribe to messages to send and receive data, and the event bus manages this. A major advantage of this approach is that systems or devices are decoupled from one another, so if things need to change, or a system needs replacing entirely, the other systems are not affected by the change.

Now that we understand event driven architecture (hopefully; here is a good Trailhead for more info), on to our smart 'anything', in this case a smart power strip. I chose this example because it is easy to demonstrate and relate to, as many people have smart devices in their homes or businesses. The concepts here are not limited to domestic use; this could just as easily be applied to a smart refrigerator, projector, water tank, conveyor belt, etc. Let's work out how we are going to make our smart power strip. We will need some hardware in the form of relays, sensors and microcontrollers (more on this in a minute) and we will need an event bus, so say hello to platform events (and MQTT). We will also need some automation and a user interface, so enter Salesforce IoT and Lightning respectively.

Let's start with the hardware. Inside this fetching white box we have the following items:

  1. A modified two gang powerpoint (modified so that each outlet can be controlled individually)
  2. Two cheap 5v relay boards from amazon
  3. A current sensor (ACS712A, 5 amp version)
  4. An Arduino nano
  5. A NodeMCU 1.0 ESP8266 Dev Board
  6. A 240VAC to 5VDC power supply (aka. sacrificial phone charger)

The hardware is a tad more complex than necessary because I used what I had on hand to build this device. The relay boards and the current sensor are both designed to operate on 5V, whereas the ESP8266 is a 3.3V device. So I am essentially using the Arduino Nano to control all of the sensors and relays, and the ESP8266 is acting as a glorified serial to WiFi converter. If I were to build this again I would do away with the Arduino Nano, as the ESP8266 does have everything needed to interface with sensors and relays directly.

The Arduino Nano's code is simple: every 5 seconds it prints the current draw (in watts) to the serial port. It also listens for some JSON from the ESP8266 telling it which pin to set and to what value, to turn the relays on or off. You can check this code out on my GitHub here.
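For illustration, the serial command might look something like the snippet below. The field names here are hypothetical; the real ones are in the Arduino sketch on GitHub:

```javascript
// Hypothetical shape of the JSON command the ESP8266 sends to the Nano.
// The real field names live in the Arduino sketch on GitHub.
function buildRelayCommand(pin, on) {
  return JSON.stringify({ pin: pin, value: on ? 1 : 0 });
}

// The Nano would parse this from its serial port and drive the matching relay
console.log(buildRelayCommand(7, true)); // → {"pin":7,"value":1}
```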

The ESP8266's code is a bit more complex. It acts as an interface between the Arduino Nano and the outside world: on an emulated serial port it listens for the current draw information and sends it via MQTT and Platform Events into Salesforce. It also listens for a Platform Event telling it which device to turn on or off. You can check out the code for it here.

So now we need a way for our hardware to talk to our event bus. While it is possible for it to talk directly to Salesforce, as I discovered this isn't a great idea (more on that later). I built a very quick and dirty 'proxy' between Platform Events and MQTT (MQTT is another event driven protocol, very similar to platform events but designed for low power applications). This proxy is written in NodeJS and runs on Heroku; it simply takes a Platform Event and publishes it to MQTT, and it listens for MQTT events from the smart power strip and publishes them as Platform Events. The code for this is available here.

Now comes the fun part, Salesforce! The first part is, of course, platform events. For this to work, we have three platform events defined:

  1. Smart Meter Reading – This event is published to from the device and contains the current power consumption in watts
  2. Smart Meter Event – This event is published to from Salesforce IoT and contains the type of chatter alert to generate
  3. Smart Device Event – This event is published from Lightning and subscribed to by the device to turn sockets on and off

We also have some custom Salesforce objects defined; these are:

  1. Smart Home – A collection of devices under one roof
  2. Smart Device – The smart device itself, child of ‘Smart Home’
  3. Smart Device Pin – An individual pin (relay in this case) on the smart device, child of ‘Smart Device’

These objects are used to store basic information such as Device ID, Pin Number and current state. We could also hang reporting from these objects to measure changes in state over time, such as power consumption on a daily/weekly/monthly basis or how often devices are on or off.

When the device is powered on, a 'Smart Meter Reading' event is generated from the hardware every 5 seconds. This is subscribed to in Salesforce in two ways. The first is Salesforce IoT, which in turn creates another platform event ('Smart Meter Event') if power usage goes over a certain level.

The IoT orchestration itself fires another platform event, which is subscribed to via an Apex trigger. When an event is received, the Apex trigger creates an appropriate Chatter post to warn the user that their usage is high.

The second way is directly in a Lightning component. This component displays a gauge (built using gauge.js) of the current power usage to the user. It is only subscribed while the page is open, not all of the time. There is a good Trailhead about how to use platform events in Lightning here, and the code for my Lightning components is available here.

From this user interface, we can generate 'Smart Device Events' that are sent off to the device to tell it what to turn on or off. These are generated via an Apex controller that also lists the 'Smart Device Pins' related to a given 'Smart Home'.

And that's it! Of course this is just a basic example, but as you can see, it's not too difficult to control physical devices from Salesforce!


A RetroPie (or similar) controller for £5?!

I recently found myself in Poundland seeing what a humble pound coin could get me, aside from the usual cables, chargers and similar accessories I buy… seriously, they work fine and are only £1/£2, not to mention their chargers are far better than the cheap ones you'd find on eBay! Check out this video from bigclivedotcom on the subject.

As I was browsing, I happened upon the £5(!) electronics/games section. There were a few XBOX360 games and such, but what caught my eye was this;

I forgot to take a photo at the shop

It is a Gioteck 'Turbo Controller' for the Nintendo Classic Mini. It looks basically like a NES controller with turbo buttons. Considering my RetroPie setup at home, but having no idea of the protocol/connector it used, I decided it was worth the sacrifice of £5 to find out if I could make it work. You can also get these controllers from the likes of Argos/eBay (for £5.99, the horror!).

I got it home, opened it up and saw the Nintendo 'nunchuck' style plug on the (surprisingly long) cable. This was a good start, and I figured that it probably uses the same protocol as the Nunchuck or the Wii Classic Controllers and similar devices that use the same plug. Both of these use the I2C protocol, and there are various libraries out there to allow them to be used with Arduinos and compatible micro-controllers.

I was hoping there would be something similar for the Raspberry Pi, given it has an I2C bus built in, but unfortunately the only information I could find was on drivers for the Wii controller (with a Nunchuck or Classic Controller connected to it) connected to the Raspberry Pi over Bluetooth, which was no use to me as I don't own a Wii controller.

So I decided to write my own 'driver' for it (more of a daemon, actually!) and here is how I did it.

The first thing I had to do was crack it open to see if I could find the pinout. Mercifully, it was printed right on the board, along with several test points I plan to investigate later. I2C devices generally use four wires: VIN (power, 3.3V), GND (ground), SDA (data) and SCL/CLK (clock).

In this case, VIN is red, GND is black, SDA is green and CLK is white.

Controller PCB with connections labeled

Given that this experiment was so cheap, I simply cut off the nunchuck style plug to expose the wires, and I then attached my own pin sockets/plug for easy connection to a Raspberry Pi or other devices.

Controller with new Raspberry Pi compatible plug

Adding the plug made it very easy to connect and disconnect it from the various Raspberry Pis I used for testing, namely a Raspberry Pi 3 I use to run RetroPie in my lounge room, and a Raspberry Pi Zero W that I used for headless testing/development.

Connected to the Raspberry Pi

If you didn't want to cut it up, you could grab a 'Nunchucky' from Adafruit and solder wires and an appropriate plug to that, or scavenge sockets from a broken system.

Once I had the new plug on it, I connected it to an Arduino Nano to do some testing. I initially tried the WiiClassicController library to see if it used the same protocol as the Nunchuck/Classic Controller and, luckily for me, it did. So now I had to work out a way to get that data into a usable form on the Raspberry Pi using its I2C bus.

Ideally you would write a kernel module in C for this, but given my very limited knowledge of C and a desire to get it running quickly, I had to pick something else. I am most comfortable with Java, so my first attempt was to write a simple app that used the Pi4J and Robot libraries to take the data from the I2C bus and turn it into keyboard commands. This was very quick and easy to write, but unfortunately was a failure, as Robot on Linux requires X11 to be running, and RetroPie does not use X11.

I looked around, and a good way to achieve keyboard emulation at a lower level was with the ioctl call, and there happens to be a wrapper for it in NodeJS. I am not brilliant with JS, but I have written Node apps before and figured it was going to be easier than learning C (which I do want to do at some stage!)

My first attempt used the virtual-input library, but nothing I did would make it work with the Raspberry Pi. I could get it to send keystrokes fine on an Ubuntu VM, but never on the Pi. I then saw another project that achieved the same thing, node-virtual-gamepad (which is a really cool project), so I tried it, and it worked fine on the Pi.

I then had a look through its source to see if I could extract the virtual keyboard code for use in my own project and, after much wrangling, I got it to work! I used evtest to verify the key codes as they were sent by the virtual keyboard.

evtest running

The next thing to do was integrate the keyboard code with the I2C library to come up with some sort of daemon that would interpret the commands sent from the controller over I2C into keypresses on the virtual keyboard, thus controlling the game.

There was also code for emulating joysticks/gamepads, which I do plan to build into the daemon, so that you can choose to emulate a keyboard or a gamepad depending on your needs. But the first order of business was to get it working as a virtual keyboard.

Once I had both portions working, the I2C reading and the virtual keyboard, I was able to combine them to build the daemon that runs in the background and interprets the data from the controller into keyboard presses to control the Raspberry Pi.

Testing it out with a bit of Mario

The code is available on my GitHub here, along with instructions on how to set it up and use it. If you want more detail on how I built it, read on.

Once I had both the virtual keyboard and the I2C code working, combining them was relatively straightforward, but there were a few gotchas:

  1. As I learned from the Arduino library, the gamepad sends data in ‘packets’ of 6 bytes
  2. When no buttons are pressed, the result always begins with a 0x0 (0), with the packet looking like this (decimal):
    [ 0, 0, 128, 128, 255, 255 ]
  3. The gamepad sends a ‘heartbeat’ packet of 6x 0xFF (255) byte values every ~8 seconds, plus a randomly timed packet that begins with 0x1 (1); these look like this (decimal):
    [ 1, 0, 164, 32, 1, 1 ]
    [ 255, 255, 255, 255, 255, 255 ]
  4. In the Linux event subsystem, when a key is pressed a 1 is sent, and it remains pressed until a 0 is sent for the same key; you can send multiple 1s and 0s at once
  5. All 8 buttons are handled by the last two bytes of the array (bytes 4 and 5, counting from zero), and some buttons, when pressed together, send a new code if they are on the same byte. I had to test and map these out.
  6. I needed to ensure that at least two buttons could be pressed at a time in order for the controller to be useful
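Put together, those gotchas suggest filtering packets before decoding any buttons. A minimal sketch in Node (the function name is mine; the byte values are from the list above):

```javascript
// Classify a raw 6-byte packet from the controller before decoding buttons.
function classify(packet) {
  if (packet.every((b) => b === 0xff)) return 'heartbeat'; // sent every ~8 seconds
  if (packet[0] === 0x01) return 'status';                 // randomly timed, safe to ignore
  if (packet[0] === 0x00) return 'buttons';                // normal button report
  return 'unknown';
}
```

Only ‘buttons’ packets need to go any further; heartbeat and status packets would otherwise look like phantom keypresses.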

Below is a table mapping each button to the byte value I use to detect its keypress:

Button       Position  Hex   Dec
D-pad Up     Byte 5    0xFE  254
D-pad Down   Byte 4    0xBF  191
D-pad Left   Byte 5    0xFD  253
D-pad Right  Byte 4    0x7F  127
Start        Byte 4    0xEF  239
Select       Byte 4    0xFB  251
A            Byte 5    0xEF  239
B            Byte 5    0xBF  191

Given that some buttons share the same byte (such as A & B), they give different results if pressed at the same time. Below is a table of the ‘combination’ bytes and positions:

Combination               Position  Hex   Dec
A & D-pad Up              Byte 5    0xEE  238
B & D-pad Up              Byte 5    0xBE  190
Select & Start            Byte 4    0xEB  235
A & D-pad Left            Byte 5    0xED  237
B & D-pad Left            Byte 5    0xBD  189
D-pad Up & D-pad Left     Byte 5    0xFC  252
D-pad Down & D-pad Right  Byte 4    0x3F  63
D-pad Down & Start        Byte 4    0xAF  175
D-pad Down & Select       Byte 4    0xBB  187
D-pad Right & Select      Byte 4    0x7B  123
D-pad Right & Start       Byte 4    0x6F  111
A & B                     Byte 5    0xAF  175
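Almost all of the measured values are consistent with a simple active-low bitmask: each button clears one bit of its byte, and a combination is just the bitwise AND of the individual button values (e.g. A 0xEF & Up 0xFE = 0xEE). A decoder built on that assumption handles the combinations for free, without a lookup table. A sketch in Node (the masks are inferred from the tables; the names are mine):

```javascript
// Active-low masks inferred from the tables: a button is pressed when its bit is 0.
const BYTE4_MASKS = { down: 0x40, right: 0x80, start: 0x10, select: 0x04 };
const BYTE5_MASKS = { up: 0x01, left: 0x02, a: 0x10, b: 0x40 };

// Turn a 6-byte packet into a map of button name -> 0/1.
function decodeButtons(packet) {
  const state = {};
  for (const [name, mask] of Object.entries(BYTE4_MASKS)) {
    state[name] = (packet[4] & mask) === 0 ? 1 : 0;
  }
  for (const [name, mask] of Object.entries(BYTE5_MASKS)) {
    state[name] = (packet[5] & mask) === 0 ? 1 : 0;
  }
  return state;
}
```

With this approach any 2-button (or more) combination on the same byte decodes correctly, which covers gotcha 6 above.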

Once I had this information, the code itself is fairly simple.

It polls the controller every 10ms (this can be changed) for the 6-byte array. From that, I build a JSON object containing each button and its state (0 or 1). I then compare this against the previous iteration to detect any change in the state of a button; if a button has changed, I set the corresponding key high or low using the virtual keyboard library. At the end of the iteration, I copy the current button states into the ‘old’ iteration variable and start again. A key event is only sent to the event subsystem when a button has changed state from one iteration to the next.
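The compare-and-emit step of that loop can be sketched like this (a sketch only; `sendKey` stands in for the virtual keyboard wrapper, and the button names are illustrative):

```javascript
// Emit key events only for buttons whose state differs from the previous poll.
function emitChanges(prevState, currState, sendKey) {
  for (const name of Object.keys(currState)) {
    if (prevState[name] !== currState[name]) {
      sendKey(name, currState[name]); // 1 = key down, 0 = key up
    }
  }
  return currState; // becomes prevState for the next iteration
}
```

Returning the current state makes the polling loop a simple `prev = emitChanges(prev, curr, sendKey)` each tick.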

The daemon is designed to be run in the background upon boot of the system to register events from the controller and pass them to the virtual keyboard. I also noted that the controller can be connected and disconnected while the daemon is running with no ill effects.

Let me know if you found this useful or interesting, or if you have any suggestions on improving it!


Dreamforce 2017, What an experience! Part 2: Dreamforce itself

The dust is still settling from Dreamforce 2017, having only gotten back to the UK Monday afternoon, but I wanted to share my thoughts while they were still fresh in my mind. This is part two of this blog; the first part, about my experience speaking at Dreamforce, is here. This blog is about Dreamforce itself.

Dreamforce 2017!

So Dreamforce is over for another year, and it was just as huge and insane as ever. This is my second Dreamforce, my first being in 2011, and it is certainly a lot bigger than I remember! As anyone who has been to Dreamforce knows, it is an overwhelming experience and unlike any other tech conference in existence.

For those that don’t know, Dreamforce is Salesforce’s annual user/partner/customer conference, held each year in San Francisco, CA. This year it ran for four days, from 6th – 9th November, with speakers like Michelle Obama and performances from Alicia Keys and Lenny Kravitz (see, not your usual tech conference!), plus over 2700 sessions from Salesforce employees, partners and customers (one of which was mine!)

Trailhead was very much at the forefront this year, with an entire ‘Trailhead Area’ in Moscone West, decked out with fiberglass rocks, trees, grass and even a waterfall. The road between Moscone South and North was closed, with ‘Dream Valley’ created in its place, completely covered in astroturf and home to food stands/cafés, lots of seating, a music stage and even a rock climbing wall. There was a Trailhead quest to complete, a Dreamforce-specific badge and plenty of Trailhead swag on offer (including the coveted Trailhead hoody).

As I experienced the first time at Dreamforce back in 2011, it is one of those conferences where you make plans to see 100 sessions and catch up with everyone you know in the community. In reality, you end up seeing 10% of the sessions you planned to, and seeing more people than you ever expected to. This may sound like a bad thing, but so much of the value you get from Dreamforce is from the people you meet, the sessions you never thought to attend and the demos you see from Salesforce and from other partners/vendors. A key to enjoying Dreamforce is not worrying too much about what you have planned, and just going with the flow of the week.

Dreamforce is where Salesforce makes its big product announcements for the year and holds the Developer, Admin, Trailhead and many more keynote sessions. The theme of the main keynote was ‘We are all trailblazers’, highlighting the economic impact Salesforce has had, the fourth industrial revolution and the impact it continues to have on the world, and how the 1:1:1 model allows Salesforce, and other companies, to ‘do well, and do good’. Also highlighted were the stories of ‘trailblazers’ such as Stephanie Herrera, most famous for #SalesforceSaturday.

The focus of the product announcements was customisation, personalisation, deeper AI integration and IoT, with the announcement of myTrailhead, myLightning, myEinstein, mySalesforce and myIoT. Of these, I particularly liked myTrailhead: Trailhead is a great learning management system, so rolling it out to customers, allowing them to create their own internal trails, track metrics and so on, is a great move. Hopefully this means the end of super boring and clunky internal training systems.

As usual, there were customer demos, this time from T-Mobile, Adidas and 21st Century Fox, to highlight these new product announcements. The T-Mobile and 21st Century Fox presentations had the usual level of Salesforce polish and smiling people in trailblazer hoodies; however, the Adidas one felt a bit odd to me, especially seeing Marc rocking a full Adidas tracksuit and trainers! They all centred around how Salesforce provides a better understanding of customers.

I attended the developer keynote and was excited about some of the announcements, and a bit disappointed by the lack of others. The major focus of the keynote was platform events, a publish/subscribe architecture allowing you to build event-driven applications (similar conceptually to things like MQTT); having used similar tech before, I was impressed with this and can’t wait to play with it. Improvements to the Lightning Data Service and new standard Lightning components were also announced, bringing it closer to making Visualforce completely obsolete. There didn’t seem to be much in the way of enhancements to the Apex language itself (still no case statement…), which was a bit disappointing.

I managed to attend some sessions as well, including Keir’s session on building offline mobile apps with the Salesforce Mobile SDK, Philippe’s on platform events and Chris Eales’ on helping not-for-profits succeed. As always, the quality of these sessions was very high and it was great to learn new things from others in the community. When the recorded sessions are released, I will do another post about them in more depth.

One of the awesome things about Dreamforce is the opportunity to catch up with people in the community you’ve not seen for a while, and to meet new people you perhaps only know from Twitter/online. I was thankfully able to catch up with many people I worked with in Australia and had not seen in a few years. I met some great people from the Good Day Sir podcast community, and I met some new people who were fans of SchemaPuker!

As always, the ‘customer success expo’ was full of Salesforce partners and ISVs showing off their products (and giving out some cool swag). Fidget spinners and socks seemed to be all the rage this year. It is always interesting to see what is available for use with Salesforce, and working at a Salesforce partner, it’s good to have knowledge of what may be out there to provide solutions to customers’ needs.

Dreamforce is always a huge week, and it never feels like you get to do everything you want to do. While some people think the whole trailblazer/Trailhead/character thing is a bit over the top, the underlying message is solid, and it’s good to be part of such a supportive community and to be able to attend events like Dreamforce.

As always, a tonne of talks and keynotes were recorded and will be available online, with some already available here, so even if you didn’t make it to Dreamforce this year, you can get some idea of what it was like.


Dreamforce 2017, What an experience! Part 1: Speaking

The dust is still settling from Dreamforce 2017, having only gotten back to the UK yesterday, but I wanted to share my thoughts while they were still fresh in my mind. This will be a two part blog, first about my experience speaking there, and the second about Dreamforce itself.

Update: Video of my talk has been posted on YouTube, check it out here!

Speaking at Dreamforce

As you may be aware, this was my first time speaking at Dreamforce, and my first time speaking at such a huge event. As a fresh graduate of Speaker Academy, I had decided to submit an abstract for Dreamforce (actually, I had not even graduated at that stage, as the call for papers closed before our graduation). I thought to myself that it would be a good opportunity to practice writing an abstract.

To my great surprise, my talk got waitlisted, meaning it would be accepted if other accepted talks were not able to go ahead. I was happy to have gotten that far; I figured it meant my abstract was at least half decent. Much to my surprise, a week or so later I was told my talk had been accepted! To be honest, I was absolutely terrified: I still hadn’t graduated Speaker Academy at this stage, so I’d not really done any proper public speaking before, and now, all of a sudden, I was talking at the largest tech conference in the world!

I started work on my talk immediately. I had the same topic as my graduation speech, so I already had some content; however, my graduation speech was a lightning talk (5 minutes max) and my Dreamforce slot was 20 minutes. Graduation rolled around and I gave my lightning talk; I think it went fairly well, and I got good feedback from my peers and from Jodi and Keir.

I was feeling a little more confident at this stage, but still quite scared about Dreamforce. I was assigned Philippe Ozil as my session owner from Salesforce, and he was amazing: he helped me hone my title/abstract and the presentation itself, listened to my dry runs and gave helpful feedback. Having a good session owner made the whole process that much easier.

With a couple of dry runs under my belt (on my own and with Philippe), I was asked if I wanted to present at the London Salesforce Developer User Group as additional practice. I jumped at the opportunity and was very glad I did, as I uncovered a bug in my presentation (Lucidchart’s import process had changed) and was able both to talk around it on stage and to fix it before Dreamforce. It also gave me a chance to make some slight changes to my presentation and hone it that extra little bit.

The week before Dreamforce I gave a dry run to my partner, which, to be honest, was probably the hardest one to give. No one likes to look like an idiot in front of a group of people (which is a big fear I had about talking), but you REALLY don’t want to look like an idiot in front of someone you love! However, I am glad I did it, as a) she had some suggestions that, as an outsider, I would never have thought of, and b) it was good practice for dealing with nerves.

Finally the big week came, I hopped on a plane to San Francisco and Dreamforce began! My talk was on the second day of Dreamforce (Tuesday 7th Nov) at 12.30pm, so I had a good chance to get over jet lag and a little bit more time to prepare myself. On Monday, I made sure to explore the Trailhead area at Dreamforce, where my talk was to take place, so that I had an idea of how long it took to get there, the setup, the layout of the stage, etc.

I also had a chance to test my laptop and make sure everything would work as expected on the big day. Another advantage of scoping out the venue first was that I could see where related areas were that I could reference in my talk, e.g. I was able to refer people to the Heroku area, or the SLDS area, if they wanted more information about those things.

Tuesday finally came, and I would be lying if I said I wasn’t nervous. I woke quite early and did yet another dry run in front of the mirror; I also updated the speaker notes in my slides to include the things I had noted the day before. I got to my stage in time to watch the talk before mine, and by this time the nerves had well and truly set in! Funnily enough, the previous talk actually included screenshots of Schema Builder, something I specifically reference in my own talk. Several of my colleagues from BrightGen, and even some guys I used to work with in Australia, were already in the audience in anticipation.

The previous talk ended, and it was my turn to take the stage. I got mic’d up and set up my laptop; by the time this was all done I had only four minutes to wait until my talk would begin… these felt like the longest four minutes of my life. Nerves were in full force as I waited for the sound guys to indicate my mic was on and it was time to start. Finally I got the thumbs up, the mic was on and the counter started.

I began my talk, introduced myself and SchemaPuker, and something quite strange happened: all of the nerves I had before melted away. I was focussed, and it almost felt like the audience wasn’t there; I had found my stride and my talk was flowing well. I finished talking about how SchemaPuker came to be, and it was time for the live demo… the nerves were back. Even though I had run through it that morning, and again just before my talk, I was still worried it would somehow go wrong.

Luckily, it worked flawlessly, and I transitioned into the more technical half of my talk, explaining the tech I used to build and host SchemaPuker. Before I knew it, I was at about the 18-minute mark, and with only one slide to go I was ready to conclude and ask for questions. I had hit the timings I was hoping for, and the talk was over.

I was feeling quite pleased with how it had gone, and I had quite a few people come and ask questions afterwards, about the tech, about SchemaPuker itself, and even about how I could integrate or take the tool further, which was awesome. My BrightGen colleagues told me that they thought it went very well, that I came across as confident and that my pace and articulation were spot on.

So what did I learn from all of this? Well, first of all, speaking at Dreamforce is an amazing experience, and the folks at Salesforce want you to succeed and provide you with all of the help and support you need. Not only that, but you won’t be booed off stage or heckled by the audience; the Salesforce ohana is a supportive place. Secondly, I am incredibly thankful for the hard work Jodi and Keir put into Speaker Academy to help prepare me for something like this! Finally, public speaking is terrifying, but strangely addictive… I definitely want to do more of it, and I plan to submit talks for other conferences and events in the future.

I want to thank Salesforce for accepting my talk and providing me with this opportunity, and BrightGen for sponsoring my trip and being 1000% supportive of me and my talk before, during and after Dreamforce; it’s amazing to work at such a supportive company! Finally, I want to thank everyone who came to my talk and my dry runs, asked questions and gave feedback. Knowing that people use the things you create, want to listen to what you have to say, and want to help you improve makes it all worthwhile.

So my advice to anyone is: we all have something interesting to share. It could be something we have built, something we have learnt, or even our journey and perspective on things, so I would encourage you to get out there and talk about it. Even if public speaking isn’t for you, you can always blog, tweet, podcast or contribute to the Success Community/Stack Exchange! It’s not as scary as you think.


Speaker Academy: Better than gouging your eyes out with a rusty spoon!

Like many people, when given the choice between speaking in public, and gouging my eyes out with a rusty spoon, I’d opt for the spoon.

However, public speaking happens to be a very useful skill, and very good for your career. Luckily for me, there was a third option… two members of the Salesforce community here in London run an excellent programme to help people like me learn how to speak in front of others.

Jodi and Keir first ran their Speaker Academy course last year, with a second course running early this year. I was unable to attend the first two, but as they say, third time’s the charm, so I signed up and hoped for the best.

So what is Speaker Academy?

It’s a course run by Salesforce MVPs Jodi Wagner and Keir Bowden (aka Bob Buzzard) with the intention of helping people in the Salesforce community learn how to speak in public, and encouraging a more diverse range of people to participate in user groups, community events and even World Tour/Dreamforce.

The course runs for 6 weeks and covers topics like choosing a topic, writing an abstract, developing a presentation, body language and overcoming fears. Each session runs for about an hour, and at the end we are given homework. Over the course of the 6 weeks, we each develop a 5-minute lightning talk, with graduation being to present this talk at a user group, in front of real live people!

To add a bit of encouragement and competition, there is a prize for the speaker the audience thought was best. Last time it was a speaking slot at London’s Calling, and this time up for grabs was a speaking slot at Surf Force! (Which I wasn’t in the running to win, since I am co-organising Surf Force.)

If you want to read more about it from the facilitators’ perspective, check out Jodi’s blogs (here, here and here) or Keir’s blog here.

How did it go?

Our graduation was held at the August London Salesforce Developer Meetup. I’d be lying if I said I wasn’t nervous, and I’m sure my classmates were too. We had all developed and practiced our talks in the relative safety of a small group, and by the end of the course we were all pretty comfortable presenting in front of one another.

This was different; this was getting up in front of 50+ people and giving a talk, something I’d not done since being ‘forced’ to in high school! I think I made it both easier and harder for myself by talking about SchemaPuker… easier in the sense that I wrote the tool and of course have a deep understanding of it, harder in the sense that because the topic was quite ‘personal’ to me, I really didn’t want to make any mistakes!

Overall, I think my talk went well. I feel I probably spoke a bit too fast; I’m not sure how close I was to the 5-minute mark, but to me it felt like it was over in 30 seconds! I definitely need to work on body language and movement (I didn’t do much of either). However, I did get positive feedback from people in the audience, with a few coming up to ask more about SchemaPuker afterwards.

My fellow classmates’ presentations all went without a hitch, at least from where I was sitting.

Connor went first, talking about the advantages of using middleware.

Followed by Oliver, showing us how to supercharge our sandbox refreshes.

Next was Jin, on how to make life easier for your sales team with automation

I was up after Jin, presenting SchemaPuker (slides here)

After me was Kyra, telling us how we can help the Scouts

and last but not least was Sean, who spoke to us about securing Salesforce communities.

The winner of the Surf Force slot was Sean Dukes, and I look forward to seeing him at Surf Force!

So what did I learn?

One of the things I struggle with (and this applies to my blog too) is having something interesting to say. I’ve often found myself thinking “I could talk about that” or “I should write a post about that” and then going “nah, it’s been done” or “nah, no one would be interested in that”. What I had not considered is that everyone has unique experience and a unique take on things, so while someone may have written or spoken about something before, what they have to say and what I have to say may be different.

The fear of getting it wrong/looking stupid/being seen as a fraud goes hand in hand with this, aka impostor syndrome. Jodi and Keir helped us to, at least somewhat, overcome this and to stop comparing ourselves to others… in reality we are all in the same boat!

I learnt that it’s important to talk about something you actually care about/are interested in. When it came time to write abstracts, we had to prepare three and read them to the class. That really drove home how obvious a person’s preferred topic is, and how much it comes across in your talk.

I also learnt that reading from a script or from your slides is NOT a good approach; your slides/presentation should be there to complement and support your talk, not contain it! As Keir said many times, “less is more”.


I want to sincerely thank both Jodi and Keir for running the class. They put a lot of effort into preparing materials, organising, giving feedback and actually teaching the course, and it is of great benefit to the graduates and the community at large. Many people who have done the course have gone on to speak regularly at user groups and at other events like Salesforce World Tour. Jodi and Keir should be proud of what they are doing, and I’m grateful for the opportunity to have attended.

Will this be the start of an illustrious speaking career? …Maybe not. Do I still think I’d rather gouge my eyes out with a rusty spoon than give a talk? Not at all, I hope to be able to talk again at another meetup and improve my skills!


Supercharging a cheap GPS Tracker – Part 1: Hardware

Recently my motorcycle was stolen, which is not an experience I would like to repeat. Motorcycle crime in London is quite prevalent, and unfortunately, I didn’t secure the bike as well as I should have.

Luckily I was able to find and recover it with minimal damage, as it was dumped quite close to my home.

I decided I needed to beef up security, so aside from a chain and disk lock, I also wanted an alarm and a GPS tracker.

Having a look on Amazon, I came across the ‘XCSOURCE Vehicle Tracker‘ for £14.99. This seemed very cheap for a GPS tracker, as I’d seen them advertised for hundreds of pounds elsewhere, but at £14.99, what did I really have to lose?

It seemed to have the basic features I wanted: real-time tracking, SMS capability and TCP/IP reporting (e.g. sending data to a tracking service). I did notice that in the pictures there were many unpopulated pads for other functions, so I thought maybe I could do more with it than advertised.

Turns out my hunch was correct; it was very easy to expand the capabilities of the device, as well as get the data into a tracking service!

Initial Setup

You will need a SIM card for this device to work. The device uses the 2G (GPRS) network, so make sure the provider you choose still runs one.

In the UK, EE, O2 and Vodafone all provide 2G networks. Three does NOT.

I purchased a SIM from giffgaff, as they use O2’s network and are very cheap. I pay £5/month for 100MB of data and 500 texts. The device tends to use ~50MB of data and ~100 texts per month, but of course YMMV.

If you just want to use the SMS functionality, all you need to do is pop the SIM card in and connect up the device; however, some configuration is required if you wish to use it with a tracking service (which I will cover in my next post).

It is a good idea to change the password for the device; the default is 123456. You can do this by sending password<oldpassword> <newpassword> (e.g. “password123456 111111”) to the device via SMS.

Hardware Modifications

As shown in the pictures on Amazon, you need to open the device up to put in the SIM card. You can also see that there are several unpopulated pads on the board. This particular GPS tracker (often known as a GT06/GT02) has quite a few more features than listed, and these can be accessed by simply soldering wires to the pads.

I removed the standard wiring and soldered new wiring to the following pads:

  1. ACC – This is for sensing if the vehicle is on, connect it to power that is switched by the ignition
  2. OIL – This is used for disabling the vehicle remotely (which I’m not currently using)
  3. MIC+ and MIC- – These are used for an optional microphone (you can call the device and hear what is going on in the vehicle; no use on a bike, but probably handy in a car)
  4. + and – on the back of the device – This is for a speaker (you can yell at the person who is stealing your vehicle!)
  5. VBATT and GND – This is for your backup battery (more details on that below)
  6. GPSTX and GPSRX – This is for the data stream coming from the onboard GPS (more details on that below)
  7. TX and RX – This is for the data coming from the device itself (more details on that below)

With that done, you should have something like this.


Once you’ve done that, you can re-assemble the device. With the original wiring removed, the new wires have no issues fitting through the opening for them at the end.

Battery Backup

Probably the most useful thing you can enable is the battery backup. Not only will this help prevent the GPS tracker from draining your vehicle’s main battery, the device also sends you an SMS with its location as soon as it detects a power cut. So even if thieves find the main wiring and cut it, it will keep transmitting its location for as long as the backup battery remains connected/charged.

If you get hold of a mobile phone power bank, they normally contain an 18650 3.7V cell; you can use one of these (or any other 3.7V battery, really) as the backup for the GPS.

Alternatively, you can buy a pair of 18650s here for £10.49 if you don’t have some laying around.

A word of caution though: soldering to batteries is not ideal. You may be better off either getting hold of batteries with tabs/wires, or using a holder. If you must solder, be very quick about it, as heat is bad for these batteries.

Once you’ve got your battery, simply connect it to the wires you soldered to the VBATT and GND pads earlier. I have a plug on either end of mine so I can remove it easily if required.

Serial Output

You can gain access to both the device’s serial output and the GPS module’s serial output from the pads on the board. The serial output is useful for debugging, as it reports overall status (e.g. battery, GPS, GPRS) every few seconds.

The GPS output is useful if you need data from the GPS for another purpose, e.g. a custom navigation system, a digital dashboard, etc.

The ‘GPSRX’ and ‘GPSTX’ pins on the back side of the board are for the GPS signal, and the ‘TX’ and ‘RX’ pins near the USB port are for the console. Remember that for serial to work you also need a ground; the power ground (‘GND’ at the bottom of the board with the other pads) works fine here. The serial voltage is +5V (so be careful when using it with 3.3V devices like Raspberry Pis). The GPS operates at 9600bps, 8-n-1, and outputs standard NMEA sentences. The console operates at 9600bps, 8-n-1, and outputs text.
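To give an idea of what comes out of the GPS pins, here is a rough sketch (in Node) of pulling a position out of a standard $GPRMC sentence. It is illustrative only: it ignores the checksum and bails out unless the fix is flagged as valid.

```javascript
// Minimal $GPRMC parser: returns time and lat/lon in decimal degrees, or null
// if the sentence is not an RMC sentence or the fix is not valid ('A').
function parseRMC(sentence) {
  const f = sentence.split(',');
  if (!f[0].endsWith('RMC') || f[2] !== 'A') return null;
  // NMEA encodes position as ddmm.mmmm (lat) / dddmm.mmmm (lon).
  const toDeg = (v, hemi) => {
    const dLen = hemi === 'N' || hemi === 'S' ? 2 : 3;
    const deg = parseInt(v.slice(0, dLen), 10) + parseFloat(v.slice(dLen)) / 60;
    return hemi === 'S' || hemi === 'W' ? -deg : deg;
  };
  return { time: f[1], lat: toDeg(f[3], f[4]), lon: toDeg(f[5], f[6]) };
}
```

Feeding it the textbook sentence `$GPRMC,123519,A,4807.038,N,01131.000,E,…` yields roughly 48.117°N, 11.517°E.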

Next Steps

Once all this is done, find somewhere inconspicuous to mount the device on your bike or car, and connect all of the lines you wish to use (+12V, ACC, backup battery, etc.)

In part two, I will detail how you can use this in conjunction with software called Traccar for logging and tracking your vehicle, which really unleashes the potential of this device!


…and now for no reason: Emoji in your Wi-Fi name!

A while ago, I came across someone using emoji as a Wi-Fi network name (SSID). I tried to do the same on my Wi-Fi router (I wanted the delightful smiling poo emoji 💩), but my router sadly wouldn’t let me.

I saw it again the other day and thought I’d have another try; after all, that was years ago, and I now have a much newer router running a newer version of DD-WRT.

But, I was rudely told that what I was trying to do was illegal.

Not to be deterred, I thought I’d try changing it via SSH… but that was not to be either.

Inserting the emoji returned “p)”, which was not accepted. I also tried the Unicode code point for it, “U+1F4A9”, but that didn’t work either.
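For what it’s worth, the “p)” garbage is most likely the shell mangling the emoji’s multibyte encoding: U+1F4A9 is four bytes in UTF-8, and an 802.11 SSID is just a field of up to 32 arbitrary octets, so the bytes themselves are perfectly legal even when the firmware’s validation rejects them. A quick check in Node:

```javascript
// The poo emoji (U+1F4A9) encoded as UTF-8: four octets, all valid in an SSID.
const ssidBytes = Buffer.from('\u{1F4A9}', 'utf8');
console.log(ssidBytes.length);          // 4
console.log(ssidBytes.toString('hex')); // f09f92a9
```

Knowing the raw bytes is handy if you end up injecting the SSID by hand rather than through a form field.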

Turning to Google, I wondered if anyone had done this successfully before, but all I could find was this article. It was done on the same model of wireless router as mine, but using the stock firmware.

But it did give me a good idea… so, taking the same approach as in the article but skipping straight to the server-side method, I used Chrome dev tools to inspect the request:

That is all well and good, but I needed to replicate the request with new parameters, so I turned to Postman…

Fingers crossed, I hit send on the request… and lo and behold!


So far I’ve not had any issues with modern-ish devices finding/connecting to it; however, I did leave the 2.4GHz radio of my wireless router alone, so that older devices can use it if need be.