How many 1st class stamps to use on heavy letters?

In our office, we buy Royal Mail stamps in ready-to-use denominations of “1st Class”, “2nd Class”, “Large 1st Class” and “Large 2nd Class”. However, these are only valid for letters up to 100g.

For heavier letters, we can use multiples of these stamps. I noticed I was routinely doing the calculation in my head to find the optimum combination of stamps. For example, what to stick on a 500g 2nd class large letter. I know that costs £1.58, and I know the stamps represent the 100g values, so 1st = 65p, 2nd = 56p, Large 1st = 98p and Large 2nd = 76p.

It’s not too hard to work out, but it takes time and gets harder with bigger letters and parcels.

For a coding challenge, I wrote “Stampulator”. It’s a single web page that tells us which combination of stamps to use. So for the 500g 2nd Class large letter, £1.58 example, we need 1 x 1st class and 1 x Large 1st class. That’s over by 5p, but it’s the nearest value to the cost.
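Under the hood this is a small “coin change” style search: find the combination of stamps whose total is the smallest value at or above the postage cost. Stampulator’s actual source isn’t shown here, but a brute-force sketch in Ruby (stamp values from above; the function name is my own) might look like:

```ruby
# Stamp denominations in pence (the 100g values quoted above)
STAMPS = { "1st" => 65, "2nd" => 56, "Large 1st" => 98, "Large 2nd" => 76 }

# Return [counts_per_stamp, total] for the cheapest combination that
# covers the target cost (in pence), trying every sensible count of each stamp.
def best_combination(target, stamps = STAMPS)
  best = nil
  best_total = Float::INFINITY
  # never need more of one stamp than would cover the target on its own
  per_stamp = stamps.map { |name, v| (0..(target.to_f / v).ceil).map { |n| [name, n] } }
  per_stamp[0].product(*per_stamp[1..-1]).each do |combo|
    total = combo.sum { |name, n| stamps[name] * n }
    if total >= target && total < best_total
      best = combo
      best_total = total
    end
  end
  [best.reject { |_, n| n.zero? }.to_h, best_total]
end

p best_combination(158)  # the 500g 2nd Class large letter example
```

For the £1.58 example this finds 1 x 1st plus 1 x Large 1st (163p, 5p over), matching the answer worked out by hand.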

I also made it so that if I have different-value stamps or a different target value to reach (say, a special offer, or I’ve been slow to update the values when Royal Mail prices change) I can input those and get an instant result.

I then printed the page and stuck it by our post box. Stampulator is on my web server and free to use – it works well from a mobile phone too.

Post a comment here if it’s useful and that’ll encourage me to keep it up to date.

Google Chrome “Failed – Network Error.” on downloading files greater than 5MB (ish) – solution for me = Disable QUIC

For the last few weeks I’ve had trouble downloading files from Google Photos. It only happened on large files: videos and zip files of multiple images. The problem only affected Chrome – the files would download fine using Firefox. Single-image downloads worked, so I decided it must be something related to file size. A lot of googling suggested it only affected me, so I removed and fully reinstalled Chrome; that didn’t fix it.

I found similar problems in the help forum but none of them had solutions (technically, one did, by changing the download folder location, but that didn’t work for me).

I used the developer console to see what happened on the page and to try to get a more detailed error message. I found “load resource: net::ERR_QUIC_PROTOCOL_ERROR”. I then googled that and found a page on Stack Overflow suggesting that disabling QUIC would help, and to disable it here: chrome://flags/#enable-quic

What is QUIC? It appears to be a protocol that improves page-load performance over the network using UDP. There were mentions that some networks/routers/devices don’t handle it properly. I don’t know which part of the network between me and Google Photos is at fault, but once I disabled QUIC the downloads completed perfectly. If you want to know more, you’ll need to research it from here yourself. Sorry, I have work to do 🙂

Blog traffic

According to Cloudflare, my blog traffic this month.

According to Google Analytics, my blog traffic this month.

The difference?
a) Cloudflare counts requests to my server (each image, each file, each style sheet) whereas Google Analytics joins requests from the same visitor into a session.
b) Hackers don’t always ask for a page, and don’t load the JavaScript that Google Analytics needs to track the visitor.

What can we infer from this?
Computers in Ukraine that visit the blog aren’t reading this text, but they visit a lot.
We cannot state with certainty that the users instigating this are in Ukraine, though, just that the requests can be traced as far as there. Likewise, computers in the USA requesting items from my server are less likely to be people than computers in the UK.
My guess at the reason: hacking attempts and search engines.
You cannot tell from the source (e.g. Ukraine) that the people controlling the computers requesting files [be that for hacking, search engines or other uses good and bad] are in Ukraine. For example, for a long time my web server lived in Paris, but I’m not in France to control it.

Upgrading my server to PHP7 broke wordpress admin interface (cloudflare)

wp-cli, a command line interface for WordPress, just saved me a big headache.

It all began when I updated my server to the latest Ubuntu LTS. That removed PHP5 support and replaced it with PHP7, which meant several of my sites stopped working until the nginx configuration was changed. With that done, I thought the blog was fine as the pages were still displaying OK, but for some reason I could no longer access the admin interface to make new posts or moderate comments. Instead, when I logged in I would see the message: “Sorry, you are not allowed to access this page”.

I decided the most likely cause was a plugin. However, I couldn’t use the admin interface to disable all the plugins, so I needed another way. I read that I could edit the database directly, but then I stumbled upon wp-cli. Installing that allowed me to use an SSH shell to check, disable, enable and update the plugins. I quickly discovered there was an update to the Cloudflare plugin that hadn’t been applied. I applied it, and it failed. I’m not entirely sure if it failed due to the background change from PHP5 to PHP7, or if my earlier fault-finding had changed a file owner (I had tried removing the plugins by renaming their directory), but once I used wp-cli to update Cloudflare everything started working perfectly, including the ability to preview theme changes, which had stopped working a long time ago.
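From memory of the wp-cli docs, the commands involved look something like this (run from the WordPress install directory; ‘cloudflare’ is the plugin slug in my case):

```shell
# list plugins and their update status
wp plugin list

# disable everything while fault-finding, then re-enable
wp plugin deactivate --all
wp plugin activate --all

# update the one plugin that was behind
wp plugin update cloudflare
```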

If I hadn’t found wp-cli I would have been checking my backups, creating a test server in my office to install and test them, and – assuming they worked – removing and reinstalling WordPress.

Hot topic – our backup server (overheating!)

Ahh, computer backups. I’ve said before, and learnt first hand over the years, how important they are.

My business backup routine is to copy the files from the server to a backup machine in another building. I used to have a ReadyNAS NV+ (image on wikipedia) and for many years it served us well. Actually, 8 years, which is long enough for any piece of hardware you rely on. Last Christmas (yeah… 6 months working on this between other tasks) I spent some time reading up on hardware and software solutions to replace our ReadyNAS. I did consider getting the latest ReadyNAS or equivalent but in the end decided to build my own solution. An excellent blog post by Brian Moses about FreeNAS set me on the right track, and I pretty much followed the suggestions there along with reading the FreeNAS forums. Brian chose the Silverstone DS380 case and so did I. I was really tempted by the 8 hot-swap trays.

I’ll not repeat all the logic of Brian’s post, but the one thing that didn’t work for me was the case. The airflow was so poor the disks were overheating. Reading forum posts, it seems the biggest difference between users is how many disks they have. With 4 or 5 disks, with space between each one, things are fine. Use all 8 slots for disks and they all get a touch on the hot side.

2pm Wed 11th May – with Stock Fan

Using the fans that came with the case, all 8 bays populated with disks, and an additional piece of cardboard to help guide the air flow over the disks (which did help a little), these were the temperatures:

(nb: I wasn’t worried about the CPU temperatures, but I’ll share them anyway)

CPU 0: 47 C
CPU 1: 48 C
CPU 2: 48 C
CPU 3: 50 C
CPU 4: 49 C
CPU 5: 49 C
CPU 6: 48 C
CPU 7: 48 C

ada0 WD-WMC300111734: 37 C
ada1 WD-WMC300109616: 35 C
ada2 WD-WCC4N3KK2FF4: 43 C
ada3 W6A0JJ85 : 42 C
ada4 PK2234P9J6RM5Y : 45 C
ada5 WD-WCC4N4VA3V61: 40 C
ada6 W6A0FZ8V : 36 C
ada7 PK2234P9J95JKY : 43 C

Monday 17th May 17:30 – with PWM Fan

The stock fans were 3-pin, but the motherboard supports 4-pin ‘PWM’ fans, which allows it to control the fan speed: if the system heats up, it speeds up the fans. I thought maybe all I needed to do was change the fans.

CPU 0: 51 C
CPU 1: 51 C
CPU 2: 52 C
CPU 3: 52 C
CPU 4: 51 C
CPU 5: 50 C
CPU 6: 53 C
CPU 7: 53 C

ada0 WD-WMC300111734: 41 C
ada1 WD-WMC300109616: 39 C
ada2 WD-WCC4N3KK2FF4: 47 C
ada3 W6A0JJ85 : 48 C
ada4 PK2234P9J6RM5Y : 52 C
ada5 WD-WCC4N4VA3V61: 44 C
ada6 W6A0FZ8V : 40 C
ada7 PK2234P9J95JKY : 46 C

So, £50 of fans later (I didn’t scrimp) and… oh, it’s hotter. I guess the original fans were always running at full speed.

Wed 1st June, in the Fractal case – with fans that Fractal included.

I settled on buying a new case, a “Fractal Design ARC Midi R2 Black Mid Tower Quiet Performance Case with Side Window USB 3.0 w/o PSU”. The window wasn’t important, but that’s what the supplier had in stock. It’s a much bigger case and, reading reviews and looking at pictures, cooling seemed better, with more of an air gap between each drive. The one thing I’ve given up by choosing this case is a hot-swap facility for the drives. In truth, I’ve only ever swapped hard drives out about once a year or less, so I decided I really don’t need hot swap.

So, what difference did it make?

CPU 0: 30 C
CPU 1: 31 C
CPU 2: 31 C
CPU 3: 31 C
CPU 4: 32 C
CPU 5: 31 C
CPU 6: 32 C
CPU 7: 31 C

ada0 WD-WMC300111734: 28 C
ada1 WD-WMC300109616: 28 C
ada2 WD-WCC4N3KK2FF4: 26 C
ada3 W6A0JJ85 : 26 C
ada4 PK2234P9J6RM5Y : 31 C
ada5 WD-WCC4N4VA3V61: 25 C
ada6 W6A0FZ8V : 27 C
ada7 PK2234P9J95JKY : 30 C

It made a lot of difference!
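To put a number on “a lot”, here is a quick Ruby comparison of the drive temperatures from the second set of readings (DS380 with PWM fans) and the third (Fractal case):

```ruby
# Drive temperatures (°C) copied from the two sets of readings above
ds380_pwm = [41, 39, 47, 48, 52, 44, 40, 46]
fractal   = [28, 28, 26, 26, 31, 25, 27, 30]

avg = ->(temps) { temps.sum.to_f / temps.size }
drop = avg.(ds380_pwm) - avg.(fractal)
printf("DS380: %.1f C average, Fractal: %.1f C average, drop: %.1f C\n",
       avg.(ds380_pwm), avg.(fractal), drop)
```

An average drop of 17 °C across all eight drives, just from the change of case.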

Royalty free music and a time lapse video for work

Here’s a great way to start 2016: winning an award for “Best Domestic Bathroom Installer 2015”.
My brother David entered the Geberit Awards – Geberit being a large multinational manufacturer of bathroom products – and, out of all the entries from all of the UK, he won. We’re very proud 🙂

That prompted us to finish editing a timelapse video of the winning bathroom. Rather than a silent movie, I went searching for suitable music to accompany it and found the track “Pamgaea” by Kevin MacLeod. Best of all, the licence to use this sound track was ‘Royalty Free’, as well as being free of cost on condition it was clearly attributed to the author. That’s very much like the software code I’ve written and shared, although Kevin is a master of his craft, whereas I’m just an amateur coding for fun.

As well as being free when attributed, the music can also be licensed for a fee when attribution is not possible or wanted – for example, background music when you’re on hold. I always assumed licensing that type of music was expensive; it turns out to cost a lot less than I expected.

Migrating from phpBB to Google Groups

For many years I’ve run a tiny web site for the village we live and work in. Eight years ago (or maybe more) I added a forum to the site using phpBB – as they say about themselves, ‘THE #1 FREE, OPEN SOURCE BULLETIN BOARD SOFTWARE’.

It’s been very good software, regularly updated and very easy to maintain. However, the most interaction I have with the forum now is blocking spam registrations and migrating it to new servers every couple of years. There are only a couple of posts a year now, so I wanted to find a way of reducing my administration workload.

I decided to migrate it to a “google groups” group, which is just like a forum with fewer customisation options. I couldn’t find any guides to migrating away from phpBB, so I worked out my own method; here’s how I did it, in case you’re trying to do the same.

Steps in short form:
1) Get data from phpBB database tables as CSV file
2) Write script to process CSV file into multiple emails to the group

1) Get data from phpBB database tables as CSV file
I only needed to migrate each topic and all its replies. None of the other database content was important to me.
To do this, I wrote a SQL query:

SELECT po.post_subject, po.post_text, po.post_id, po.topic_id, po.post_time, us.username_clean, top.topic_title, top.topic_time
FROM phpbb_users as us, phpbb_posts as po, phpbb_topics as top
WHERE us.user_id = po.poster_id and po.topic_id = top.topic_id
ORDER BY po.topic_id ASC, post_time ASC

Essentially, this takes selected columns from the tables ‘phpbb_users’, ‘phpbb_posts’ and ‘phpbb_topics’. I’m not sure using ‘WHERE’ is very efficient and perhaps ‘INNER JOIN’/’OUTER JOIN’ would be technically better, but mine was a small database and this was more than fast enough for me (58ms for 114 rows).
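For reference, the same query written with explicit INNER JOINs returns the same rows (I haven’t run this version against a live phpBB database, so treat it as a sketch):

```sql
SELECT po.post_subject, po.post_text, po.post_id, po.topic_id, po.post_time,
       us.username_clean, top.topic_title, top.topic_time
FROM phpbb_posts AS po
INNER JOIN phpbb_users AS us ON us.user_id = po.poster_id
INNER JOIN phpbb_topics AS top ON top.topic_id = po.topic_id
ORDER BY po.topic_id ASC, po.post_time ASC
```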

Then I saved the result as a CSV file and opened it in LibreOffice to check. Several of the fields needed some hand editing: removing the first line (headers), replacing some HTML characters, escaping quote marks, etc. I may have been able to fix those when saving the query result as CSV, but I didn’t have many to do, so hand-fixing and moving on was fastest.

2) Write script to process CSV file into multiple emails to the group

My scripting language of choice is ruby. Not because it’s any better than anything else; it’s just what I happen to be using lately. I could have done the same in PHP if I’d spent a little more time on it.

This is the script:

# I saved file as: process.rb
# to run, "ruby process.rb" ... assuming you have ruby installed ;-)
# I had to install Pony from github, which i did using the specific install gem
# gem install specific_install
# gem specific_install -l
# If you're reading this later and forget where it came from,
# Share any tips and fixes in the comments there to help others please!

require 'csv'
require 'date'
require 'pony'

# SMTP settings, used for every email sent to the group
SMTP_OPTIONS = {
  :address => '',
  :port => '587',
  :enable_starttls_auto => true,
  :user_name => 'YOUR-EMAIL-ADDRESS',
  :password => 'YOUR-PASSWORD',
  :authentication => :plain, # :plain, :login, :cram_md5, no auth by default
  :domain => "YOUR-SENDING-DOMAIN" # the HELO domain provided by the client to the server
}

# These are the fields in order in the CSV. Here for easy reference whilst I coded
# numbers start from zero (so post_subject = row[0])
# "post_subject", "post_text", "post_id", "topic_id", "post_time", "username_clean", "topic_title", "topic_time"

# initialise the topic counters and some default text for the first email
# you will need to delete this first 'initialise' email manually in the google group!
currenttopic = 0
lasttopic = 0
body = "initialise"
subject = "initialise"

CSV.foreach('phpbb_data.csv') do |row|

  # get current topic
  currenttopic = row[3]

  if currenttopic == lasttopic
    # This is a reply to the topic, add to the existing body
    body = body + "\n"
    body = body + "-----------------------------------------------------" + "\n"
    body = body + "reply_by_username: " + row[5] + "\n"
    body = body + "reply_date: " + DateTime.strptime(row[4], '%s').strftime("%d/%^b/%Y") + "\n"
    body = body + "\n"
    body = body + row[1] + "\n"
  else
    # This is a new topic. SEND the last group of messages
    Pony.mail(
      :to => '', # the group's email address
      :subject => subject,
      :body => body,
      :via => :smtp,
      :via_options => SMTP_OPTIONS
    )

    # A message to terminal on every send, nice to know that something is happening!
    puts "Sent " + subject

    # Reset the body (subject is set only once, no need to clear)
    body = ""

    # Set the subject as the topic name
    subject = row[6]

    # Put some generic header text in place
    body = body + "-----------------------------------------------------" + "\n"
    body = body + "This post was transfered to the google group when the phpbb based forum was shutdown" + "\n"
    body = body + "You might find relevant information at YOUR-DOMAIN" + "\n"
    body = body + "This entry includes all replies to the original topic" + "\n"
    body = body + "-----------------------------------------------------" + "\n"
    body = body + "\n"
    body = body + "Topic: " + row[6] + "\n"
    body = body + "created_by_username: " + row[5] + "\n"
    body = body + "topic_date: " + DateTime.strptime(row[7], '%s').strftime("%d/%^b/%Y") + "\n"
    body = body + "\n"
    body = body + row[1] + "\n"
  end

  # set the value of last topic ready for the next loop
  lasttopic = currenttopic
end

# the loop above only sends when it hits a new topic,
# so send the final topic too or the last one would never be posted
Pony.mail(
  :to => '', # the group's email address
  :subject => subject,
  :body => body,
  :via => :smtp,
  :via_options => SMTP_OPTIONS
)
puts "Sent " + subject

Being very lazy, I didn’t write code to recognise that the first pass should *NOT* be emailed to the group, so the first email to the group, titled ‘initialise’, will need to be deleted manually.

You will need to enter your own values for the group’s email address, your email address and your sending domain. You’ll need a password too; be aware that if you use 2-factor authentication you’ll need to get an app-specific password from your Google Apps account.

You will want to customise the text that is added to every email, perhaps correct the spelling of ‘transfered’ too 😉

The script isn’t particularly fast as it connects and sends each email individually. We use Google Apps and, as there weren’t many topics to send, it was well within my daily gmail sending limit; had there been more, I could have sent them directly via SMTP. There are instructions for using different email methods on the ‘Pony’ github pages. The other problem I had was errors in the CSV causing the script to stop. For example, some replies had no topic name, and the script errored when it encountered them. In my case, I fixed the CSV, deleted the posts already made to the group, and ran the whole script again. You might prefer to set up a dummy group to send your messages to first, make sure everything works, then delete the dummy group and re-run the script against the real group.

To test the email messages, I suggest you take a few rows of your CSV file and send them to your own email to check formatting and content.
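One way to do that dry run without sending anything is to print the first few rows as they would appear (my suggestion, not part of the original script; `phpbb_data.csv` and the column layout are as described above):

```ruby
require 'csv'

# Preview how the first few CSV rows will come out before any email is sent.
# Column layout as in the migration script: row[1] = post_text,
# row[5] = username_clean, row[6] = topic_title.
def preview_rows(csv_file, limit = 3)
  CSV.read(csv_file).first(limit).map do |row|
    "Topic: #{row[6]}\nBy: #{row[5]}\n#{row[1]}"
  end
end

# puts preview_rows('phpbb_data.csv')
```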

If you’re wondering what my results looked like, here’s one of the topics with a reply once posted to the google group

Birthday Calculator – in case you don't want to wait a whole year to celebrate being alive

We have a tradition where I live: we celebrate being alive with a party, and that party generally coincides with being alive for another 31,557,600 seconds. 31,557,600 seconds happens to be just about equal to a solar year, which is a happy coincidence as it’s not so easy to remember otherwise.

I decided I could really do with a good excuse to party before that arbitrary unit of time, though. The solution? Write a web application where I can put in my date of birth and it tells me other dates I can celebrate on.

Try it for yourself at and it will tell you amazing things like;

  • How old you would be if you were born on Mercury, Venus, Mars and the other planets in our solar system
  • When your next MegaSecond birthday is (so you can have a party when you survive another 1 million seconds of existence)
  • Or, for a really big bash, celebrate the GigaSecond birthdays that come around very infrequently in a lifetime.
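The arithmetic behind those dates is simple enough to sketch; this is my reconstruction, not the calculator’s actual code:

```ruby
# When does your next MegaSecond (1,000,000 s) birthday fall?
MEGASECOND = 1_000_000

def next_megasecond_birthday(born, now)
  seconds_alive = now - born                  # Time - Time gives seconds
  n = (seconds_alive / MEGASECOND.to_f).ceil  # number of the next MegaSecond birthday
  [n, born + n * MEGASECOND]
end

n, at = next_megasecond_birthday(Time.utc(1980, 6, 15), Time.utc(2016, 1, 1))
puts "MegaSecond birthday number #{n} falls at #{at}"
```

The GigaSecond version is the same with 1,000,000,000 seconds, and the planet ages just divide seconds alive by each planet’s orbital period.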

If you’d like me to add another arbitrary repeating unit of time post a comment.

Virtual PDF Printer for our small office network – a step by step how to

Alternative title: How I got multiple cups-pdf printers on the same server. (I didn’t, but postprocessing let me work around the problem).


I have a small business. For years we’ve been creating PDFs from any computer on our network through a “virtual appliance” called YAFPC (“Yet Another Free PDF Composer”).

The appliance originally ran on an old PC, then on a server that ran several other virtual machines. It had a neat web interface and would allow PDF printers to be created that would appear on the network for all of our users to use. It had one printer for plain A4 paper, one for A4 paper with a letterhead background, another one for an obscure use of mine, and so on. If you printed to it, it would email you the PDF (for any user, without any extra setup needed per user). It could also put the PDFs on one of our file servers or make them available from its built-in file server.

If I remember correctly it cost £30 and ran since 2006 right through until today, November 2014. One of my best software investments!

However, Windows 8 came along and it no longer worked; getting Windows 8 to print to it directly turned out to be impossible. The program was not going to be updated or replaced with a new version. I managed a short-term workaround having Windows 8 print to a Samba printer queue which converted and forwarded to the YAFPC virtual appliance. There were problems – page sizes not being exact and so on – but it worked in a fashion.

Roll forward to today, when I’ve just got a new network PDF virtual printer working. It wasn’t so easy to do (some 20 hours, I guess) so here are my setup notes for others to follow. The final run-through of these notes had it installed and working in about an hour.

These steps assume you know quite a bit about setting up linux servers. Please feel free to use the comments to point out errors or corrections, or add more complete instructions, and I’ll edit this post with the updates. Also please suggest alternative methods that you needed to use to meet your needs.

Overview – We are going to create:

  • a new Ubuntu based linux server as a virtual machine
  • Install CUPS, the Common Unix Printing System
  • Install CUPS-PDF, an extension that allows files to be created from the print queue
  • Create a postprocessing script that will run every time CUPS-PDF is used, customising our PDFs and sending them where we want them (to our users).

Sounds simple, right 🙂

Continue reading “Virtual PDF Printer for our small office network – a step by step how to”

sunspot solr slow in production (fixed by using IP address instead of domain name)

Short version:
In my sunspot.yml I used an FQDN ( ). Solr was slow.
When I used the server IP ( ), Solr was fast.

Setting the scene (you can skip this bit):
I’ve been slowly working on some improvements to our business system at work. Whilst most of it currently runs on MS Access and MySQL, I’m slowly moving bits into Ruby on Rails. One of the most important things our current system does is store prices and descriptions for over 200,000 products. Searching that database is a crucial task.

Searching in Rails turned out to be very easy. Sunspot had it working very quickly on my development machine. I also had it running on my production server using the sunspot_solr gem, which is meant for development only (but mine’s a small business, so that’s fine). However, when the server was restarted sunspot_solr needed to be manually restarted, which was a pain. I thought I should probably get around to setting up a real Solr server and point my application there. So far, so good; simply: copy the config from my rails app to my new Solr service, set the server’s hostname in sunspot.yml, commit, deploy – it worked!

The problem – Solr was terribly slow!
Re-indexing was slow; I could tell something wasn’t right. Neither my rails server nor my new solr server was under load.
I created a new product (so it would appear in the solr index).
That was slow, but it worked. Displaying search results was also slow.

Check the logs – wow! Yep, Solr is the slow bit

Started GET "/short_codes?utf8=%E2%9C%93&search=test" for at 2014-10-01 14:28:03 +0100
Processing by ShortCodesController#index as HTML
Parameters: {"utf8"=>"✓", "search"=>"test"}
Rendered short_codes/_navigation.html.erb (1.0ms)
Rendered short_codes/index.html.erb within layouts/application (6.7ms)
Rendered layouts/_navigation.html.erb (1.3ms)
Completed 200 OK in 20337ms (Views: 10.3ms | ActiveRecord: 1.7ms | Solr: 20321.1ms)

No way should Solr take 20321ms to respond.

I tried the search on the solr admin interface and the response was instant, so I knew that solr itself wasn’t the problem. It must be my code (as always!).

As solr replies over http, I tried querying it from my rails server’s command line. Also slow. So… maybe it’s not my code… Then I tried pinging the solr server from my rails server:


It said replies were coming back in less than 1ms… but then I realised there were about 3 or 4 seconds between each report.
I tried pinging another server… same effect…
Then I tried pinging my office router: reports every second, just as fast as I’m used to seeing. But this was the first time I’d used an IP address and not an FQDN.
Then I tried pinging my solr server by its IP address… reports every second!

So, maybe all I have to do is configure my application to talk to solr via the server IP instead of FQDN…

I tried…

Started GET "/short_codes?utf8=%E2%9C%93&search=test" for at 2014-10-02 11:51:49 +0100
Processing by ShortCodesController#index as HTML
Parameters: {"utf8"=>"✓", "search"=>"test"}
Rendered short_codes/_navigation.html.erb (0.9ms)
Rendered short_codes/index.html.erb within layouts/application (8.4ms)
Rendered layouts/_navigation.html.erb (0.8ms)
Completed 200 OK in 27ms (Views: 12.2ms | ActiveRecord: 1.1ms | Solr: 8.3ms)

… and I fixed it 🙂
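For reference, the change lives in config/sunspot.yml; a hypothetical example with placeholder values (the real hostname and IP aren’t shown in this post):

```yaml
production:
  solr:
    # was: hostname: solr.example.internal   (FQDN - slow on my network)
    hostname: 192.0.2.10   # placeholder - use your solr server's IP
    port: 8983
    path: /solr/production
```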

Well, solr is working great. Now I need to figure out what’s wrong with using FQDNs on my network.