The login process involves submitting a form consisting of a username, a PIN and a password over a secure HTTPS connection. The PIN and password are sent in plain text, but that is fine because the connection is secure.
An initial investigation of what was now happening when the login form was submitted showed that the PIN and password appeared to have been obfuscated in some way, so I grabbed what appeared to be the relevant JavaScript and ran it through a beautifier, revealing this:
$("#obfuscatedPin").val(CryptoJS.MD5(Value("originalPin")).toString(CryptoJS.enc.Base64).toUpperCase());
$("#obfuscatedPwd").val(CryptoJS.MD5(Value("originalPwd").toUpperCase()).toString(CryptoJS.enc.Base64));
So it seems they are just being run through MD5 on the client before being sent to the server, which seems a little odd given that the connection is already secure. It does, however, give rise to a whole series of questions:
Why hash passwords on the client? They are secure in transit so the only threat this seems to protect against is somebody at the server end reading them.
Given that this is new code, why on earth is it using MD5, which is widely considered to be, at best, unsuitable for new applications?
Why reduce entropy unnecessarily by uppercasing the password before hashing it?
The answer to those last two questions probably lies in what this whole scheme implies about how both the PIN and password are stored on the server: this code is presumably mirroring how they have historically been stored, which is to say as unsalted, unstretched MD5 hashes, in violation of pretty much every modern password security guideline.
We pretty much know that must be the case, because the client-side hashing means the server is never able to recover the original password in order to hash it more securely. So unless they are rehashing the MD5 hash with a salt, which would be very odd, they must simply be comparing what the client sends directly against their database.
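If that inference is correct, the server-side check can amount to nothing more than a direct string comparison. This is purely a sketch of the implied scheme (the function name and arguments are my invention):

```javascript
// Hypothetical sketch of the login check implied above. With no salt
// and no stretching on the server, all it can do is compare the MD5
// the client computed against the MD5 stored in the database.
function storedHashMatches(storedMd5Hex, clientSuppliedMd5Hex) {
  // Case-insensitive comparison, since hex encodings vary in case.
  return storedMd5Hex.toLowerCase() === clientSuppliedMd5Hex.toLowerCase();
}

// It also explains why they cannot transparently upgrade: the server
// never sees the original password, only its MD5, so the best it
// could do is wrap the existing hash in something stronger -- and
// nothing here suggests they have done that.
```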
So the end result is that what was presumably intended as a security upgrade (hashing passwords on the client) has wound up revealing just how bad the backend security is without actually improving anything!
One odd thing I noticed is that although the code shown above appears to be base64 encoding the hash, what I was seeing on the wire appeared to be a hex string rather than a base64 string.
A little investigation revealed that although they were using a standard JavaScript cryptography library, it had apparently been run through a JavaScript minifier that had wrongly decided that CryptoJS.enc.Base64 was unused and removed it, meaning that it evaluated to undefined and caused toString to default to hex encoding instead.
It then turned out that the server didn’t actually accept base64 encoding anyway, so either the server was changed to accept what the client was actually sending, or the attempt to do base64 encoding was never right in the first place!
Back in 2007 I was in dispute with them because I had moved supplier, but they had decided to ignore my final meter reading and try to bill me for gas that I should have been paying my new supplier for. In the course of that dispute I received this letter from “Central Recoveries” saying that my account had been “passed to them”:
If you look closely at the small print ringed in red you will see that they are in fact a “Centrica business” and appear to be just another name for British Gas Trading Limited.
As much as I would have loved to see them in court and ask them to explain why I should pay them for gas that somebody else had supplied me, my new supplier decided that they would credit me what British Gas was wrongly demanding so I unfortunately had to let the whole thing drop.
All I need now then is an option to have them hold the CD until I have something else I want that will put me over the free delivery limit!
It was always my suspicion that such a demand was behind the long impasse that prevented Universal films being available on LOVEFiLM for the last couple of years, although I have no actual evidence to support this.
That impasse was recently resolved, and at much the same time some films started appearing on LOVEFiLM with a statement at the top that reads:
The studio have licensed us to make this title available to rent on the release date below.
When that message appears the rental release date is normally shown as roughly two months after the sale release date - in other words just the sort of delay the studios have been demanding.
That doesn’t bother me too much though, because you can normally reserve a film as soon as it is listed and it will be sent to you once it is released for rental, but it has been noticeable that at least some films displaying that message cannot be reserved.
I was interested therefore when I noticed the @LOVEFiLM twitter account replying to a query about why such a film couldn’t be reserved as follows:
@sunshine_sarah6 The studio have licensed us to rent this title from the 08 Feb 2013. You'll be able to add this to your list on this date.
— Amazon Video UK (@AmazonVideoUK) September 4, 2012
so I replied, asking why the fact that it wasn’t released yet should affect the ability to reserve it:
@LOVEFiLM @sunshine_sarah6 So what happened to allowing us to reserve before things are released?
— Tom Hughes (@thughes) September 4, 2012
in return I got this rather surprising response:
@thughes This will be down to the license we have with the studios, sorry for any disappointment this may cause.
— Amazon Video UK (@AmazonVideoUK) September 5, 2012
So it seems that the studios now want to control not only when we can rent a film, but when we should be allowed to add it to our queue of things we would like to see… Presumably they think that the more annoying they can make the process of renting a film, the more likely we are to buy it instead.
I guess what we need now is an external site that runs a pre-reservation list and tracks what we would like to be able to reserve but can’t, so that it can notify us when they become available for reservation in the normal way…
The latest organisation to suddenly conclude it has the right to send me such unwanted communications is LOVEFiLM which has recently decided it should send me regular SMS messages full of some banal nonsense.
The first such message I received was two weeks ago, on 18th December 2011, and when I then checked my account settings on LOVEFiLM I was surprised to find that all the various “LOVEFiLM Marketing” preferences were unchecked apart from one labelled “by SMS”, which I am quite sure I would never have checked, and certainly not while I was refusing much less annoying things like email marketing.
I immediately unchecked the box and then expressed my annoyance on twitter:
Wondering if "please SMS spam me" box on @LOVEFiLM preferences is new as mine is set and I wouldn't have done that voluntarily...
— Tom Hughes (@thughes) December 18, 2011
Needless to say my tweet was not one that the @LOVEFiLM account chose to respond to…
I assumed, however, that having unchecked the box allowing SMS marketing, that would be an end to it. It appears that I couldn’t have been more wrong, as today, two weeks later, another spam SMS arrived from them. So twitter has once again been deployed:
So @LOVEFiLM can you explain why you are still sending me SMS spam two weeks after I turned off the SMS marketing flag on my account?
— Tom Hughes (@thughes) January 1, 2012
In addition to which I have contacted them directly via their web site to let them know what I think:
Two weeks ago, on Sunday 18th December, I was somewhat surprised to receive a marketing text message from you. I was surprised because I had never knowingly agreed to receive such messages and I would normally refuse any such invitation as a matter of routine.
I was even more surprised when I checked my account details on your website and found that all the marketing permission boxes were unchecked except the SMS one. I don’t believe I would ever have chosen those settings, so I wonder if you added that option and defaulted it to on without requesting my permission?
In any case I unchecked that box on Sunday 18th December.
Perhaps you can therefore explain why, two weeks later, I have just received another unwanted spam SMS from you in defiance of my clearly stated preferences, and hence illegally, being in breach of the Privacy and Electronic Communications (EC Directive) Regulations 2003.
PLEASE CEASE AND DESIST WITH THIS BEHAVIOUR IMMEDIATELY.
For the avoidance of all doubt please take this message as notice pursuant to regulation 22 of the Privacy and Electronic Communications (EC Directive) Regulations 2003 that you are not to send me unsolicited SMS messages again.
Any repeat will be reported to the Information Commissioner’s office for possible enforcement action with no further notice to yourselves.
Now I will sit back and see if I have managed to get their attention this time and if they are willing to learn that this sort of behaviour is completely unacceptable.
Today I got the final statement which, because of the refund of the fee for the second year, showed a credit balance. Rather than enclosing a cheque for the balance though, or indicating that they had repaid it to the account that I had been using to pay the card bills, the statement had this curious message:
What I want to know is, how many people exactly don’t want a refund, and would instead prefer to hand their credit balance over to American Express?!?
Of course their secure messaging system insists on me selecting a card before I can send a message, and won’t let me select a cancelled card, so sending them a message to ask for my refund turned into a bit of a palaver as well…
I wonder how many million of those they’ve just sent out…
String-based interpolation of association conditions is deprecated. Please use a proc instead. So, for example, has_many :older_friends, :conditions => 'age > #{age}' should be changed to has_many :older_friends, :conditions => proc { "age > #{age}" }.
Now call me confused if you like, but isn’t the suggested replacement still doing just as much string interpolation as the original?
The issues he highlights can be broadly divided into two categories: problems with our stylesheets and rendering technology; and problems with our data, and in particular with our US data.
The issue which I intend to address here is the one he tackles first – that of label density which is something that stems largely from data quality and, more importantly, consistency issues. Specifically, although the post talks about cities, the real question is about what is tagged as a city and what is tagged as some lesser type of place.
By way of explanation I should probably start by explaining that in OpenStreetMap tagging there are four commonly used values for the place tag which designate a populated place. In order, from largest to smallest, those are: city, town, village and hamlet. The question which then arises is: how do we decide which of those values to use for a given settlement?
Like so many tags the specific names used come, because of OpenStreetMap’s origins, from typical British usage. It is therefore generally not a good idea to interpret the names too literally in other jurisdictions – indeed some tag values like highway=trunk aren’t even interpreted literally in England!
To the British the question of which places should be cities is fairly clear – there are a few alternative definitions (places with royal charters vs places with cathedrals) but those only relate to a few edge cases and in general there is little debate and only a relatively small number of large and/or important towns will qualify.
At the other end of the spectrum a hamlet would normally only be used for very small places that amount to little more than a handful of houses.
In between lies the distinction between villages and towns which is much less well defined but in my opinion would generally lie around the few thousand mark in population terms – once you reach 2-3 thousand residents you are probably a town rather than a village.
Interestingly the OpenStreetMap wiki disagrees a little here and suggests hamlet for populations up to one thousand and village up to ten thousand. I would argue that both of those values are too high for normal British usage and certainly larger than I would use when tagging places.
All of which brings us back to the variations in density in the US map…
The first thing to understand about the US is that most populated places there appear to have been initially imported from the USGS GNIS data set. I haven’t found any documentation as to how places were categorised, but I suspect it was done based on population, most likely using the values in the OpenStreetMap wiki or something close to them.
Justin’s first example starts with the apparent high density of places in Florida, so I took a look at a randomly selected place in his example which appeared to be fairly small – the town(?) of Frostproof. The OpenStreetMap history for Frostproof reveals that it was originally imported from GNIS as a village (probably because of its population of 2922) but has recently been retagged as a city.
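That import decision is easy to reproduce. Here is a sketch of the two population-to-tag mappings discussed earlier, purely illustrative: the wiki thresholds are as quoted above, the 300-person hamlet cutoff on the British side is my own placeholder for “a handful of houses”, and real tagging should of course weigh local importance as well as raw population:

```javascript
// Two illustrative population-to-place-tag mappings: the thresholds
// quoted from the OpenStreetMap wiki, and lower ones reflecting the
// British usage argued for in the text. The 300 hamlet cutoff is a
// placeholder; city is deliberately excluded, since the text treats
// it as a judgement about importance rather than population.
function placeTagPerWiki(population) {
  if (population <= 1000) return "hamlet";
  if (population <= 10000) return "village";
  return "town";
}

function placeTagPerText(population) {
  if (population <= 300) return "hamlet";   // placeholder cutoff
  if (population <= 2500) return "village"; // the "2-3 thousand" boundary
  return "town";
}

// Frostproof (population 2922) comes out as a village under the wiki
// thresholds but as a (small) town under the stricter British ones.
console.log(placeTagPerWiki(2922), placeTagPerText(2922));
```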
My suspicion is that this is the result of an overly literal interpretation of the place=city tag – as I understand things many relatively small places in the US officially style themselves as cities – certainly Wikipedia describes Frostproof in this way. Nobody in Britain, or indeed probably in Europe as a whole, would consider somewhere that small to be a city however and tagging it as such certainly goes against normal OpenStreetMap tagging practice.
In most of the rest of the US no such retagging of small towns as cities appears to have taken place, making place names there appear much less dense at low zoom levels. The sort of places which Justin’s article suggests should be appearing in those areas mostly seem to be in the 25-100 thousand population range and hence were tagged as towns during the GNIS import. The solution here, if more place names are considered cartographically desirable, would be either to adjust the threshold at which places are tagged as cities instead of towns, or to alter the stylesheets to render towns at lower zoom levels.
The relatively high density around Los Angeles which the article mentions appears to be the result of a fairly large number of places with populations just over the 100 thousand mark. Despite their large populations, and the fact they are likely independent cities legally, I suspect that many of them would be tagged as suburbs in Britain rather than as cities or towns and hence would be given lower priority when rendering.
The real lesson to be drawn from all this however is that the US OpenStreetMap community probably needs to reach a consensus on how to map populated places to tag values so that a better level of consistency can be achieved with less variation from area to area across the map.
The most significant item in the collection was a personal diary for the year 1897 which carried an inscription on the flyleaf of “John Unwin, Fanny Street, Saltaire”.
In itself the diary is a fascinating piece of social history and that is certainly the main reason for including it in the archive for the benefit of future generations. The diary is of interest to my family for a second reason however, which is the mysterious way in which it came to be in the possession of my grandfather.
The story is that in the 1930s my great-grandfather, David Unwin, was the general foreman on a building site in Cricklewood when one of the workers on the site came to him with the diary and asked, given that as foreman my great-grandfather would simply have been “Mr Unwin” and that the diary belonged to John Unwin, if the diary was his.
Obviously David Unwin knew that the diary was not his, but equally he knew that his father and his older siblings had been born in Shipley and that he had many relatives in the Saltaire area, and he therefore assumed that the diary must belong to a relative of his.
Despite a number of attempts by members of the family over the last eighty years to figure out who John Unwin was and how, or if, he was related to our family, no real progress was made until I started on some genealogical investigations a few years ago in an effort to draw up a family tree.
Looking at the tree I had assembled there was one obvious candidate for a John Unwin who would have been a suitable age in 1897 and reading the diary confirmed beyond any doubt that he was in fact the author – among other things the diary records the departure of his brother Robert on his way to a new life in Bridgeport, Connecticut; an event that I had already discovered evidence of during my research.
The result of my research was therefore to discover that the author was in fact the first cousin of my great-grandfather, and my first cousin three times removed. Quite how the diary came to travel from Shipley in West Yorkshire to Cricklewood in North London between 1897 and 1930 remains a mystery, especially given the excellent condition in which it survived despite being found on a building site!
The diary is now, as I indicated at the start, in the Saltaire Archive, along with my scans of the diary and my uncle’s transcription, having been handed over to representatives of the Saltaire History Club and members of the Salt family (the great-grandson of Sir Titus Salt, Denys Salt, and his nephew, Jonathan) during this year’s Saltaire Festival.