Sometimes it seems that while we were not looking, things changed.
Not too many years ago:
- Hardware was the largest part of any software project budget. Now, unless you are working at a massive scale, the cost of the computing hardware is a rounding error on the bottom line.
- Scripting languages were too slow for use on real projects, but the web has well and truly demonstrated that this is false.
I am not sure how this happens, but it seems that when we first learn about something, those ideas stick, and it is hard to update what we know to match the current reality. When I started commercial software development, it was common to build systems on a PDP-11 with under 512KB of RAM. These days a laptop comes with at least 2GB of RAM, an increase in main memory by a factor of about 4,000, but I still sometimes catch myself trying to save a few bytes when designing some aspect of a system.
The open question for now is how to detect this type of slow change (even if the pace of technological change is not all that slow compared to other changes). This is an important question because many societies and groups have been hit by surprises that in hindsight were obvious, and the consequences were catastrophic:
- When cutting down trees in an area, when does the population realize that there is a serious problem with deforestation?
- When does a drought become a climate shift that means the area is no longer amenable to the current mode of agriculture?
- When does the exploitation of fish in a fishery result in the collapse of the stocks in that fishery?
On the technology side, when do desktop application developers get overtaken by web applications running in a browser? Functionality-wise, we can deliver nearly equivalent functionality over the web provided we have the bandwidth, so maybe it is time to recreate departmental applications as web applications.
Chip and PIN is old news to Europeans, but Canada has just started to move to this technology, and it looks like the same system that is deployed in Europe. With that in mind, here are a few links to known problems with the European model.
Chip and Spin is a site that looks at the overall context of the Chip and PIN model, but most interesting of all, the University of Cambridge, of all places to be doing this type of research, is investigating Banking security.
The main issue is that with a credit card containing a chip and the customer providing the PIN, it is going to be a lot harder for the account holder to prove that a transaction is fraudulent. But as the study shows, cloning a card containing a chip is not that hard, and obtaining the PIN is not much harder (even before we get into the social-engineering possibilities).
Money quote from the Banking security study:
We demonstrate how fraudsters could collect card details and PINs, despite the victims taking all due care to protect their information. This means that customers should not automatically be considered liable for fraud, simply because the PIN was used. Even though a customer’s PIN might have been compromised, this is not conclusive evidence that he or she has been negligent.
Update from the same source - How Not to Design Authentication talks about the problems of using credit cards for online transactions (card not present transactions).
Yet another update from the same team: Chip and PIN is broken
The flaw is that when you put a card into a terminal, a negotiation takes place about how the cardholder should be authenticated: using a PIN, using a signature or not at all. This particular subprotocol is not authenticated, so you can trick the card into thinking it’s doing a chip-and-signature transaction while the terminal thinks it’s chip-and-PIN. The upshot is that you can buy stuff using a stolen card and a PIN of 0000 (or anything you want). We did so, on camera, using various journalists’ cards. The transactions went through fine and the receipts say “Verified by PIN”.
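The attack the quote describes can be pictured as a man-in-the-middle "wedge" sitting between the stolen card and the terminal. The sketch below is a toy model in Python, not the real EMV message formats; all class and method names are illustrative. The wedge answers the terminal's PIN check itself, so the card never sees a PIN attempt and carries on as if a chip-and-signature transaction were in progress, while the terminal believes the PIN was verified.

```python
# Toy model of the unauthenticated cardholder-verification step.
# Names and messages are illustrative, not the actual EMV protocol.

class Card:
    """A stolen chip card. It only approves a PIN if it matches."""
    def __init__(self, real_pin):
        self.real_pin = real_pin

    def verify_pin(self, pin):
        return pin == self.real_pin

    def authorize(self):
        # The card approves the transaction without any authenticated
        # record of *how* the cardholder was verified.
        return "TRANSACTION_AUTHORIZED"


class WedgeCard:
    """Man-in-the-middle between the terminal and the stolen card."""
    def __init__(self, card):
        self.card = card

    def verify_pin(self, pin):
        # Intercept the PIN check and answer 'ok' without ever asking
        # the card, so any PIN - 0000, anything - is accepted.
        return True

    def authorize(self):
        # Everything else is passed through to the real card.
        return self.card.authorize()


def terminal_transaction(card, entered_pin):
    if not card.verify_pin(entered_pin):
        return "DECLINED"
    # The terminal believes the PIN was checked by the card.
    return card.authorize() + " (receipt: Verified by PIN)"


stolen = Card(real_pin="4921")
wedge = WedgeCard(stolen)
print(terminal_transaction(wedge, entered_pin="0000"))
# -> TRANSACTION_AUTHORIZED (receipt: Verified by PIN)
```

The point of the sketch is that nothing in the exchange lets the card and the terminal confirm they agree on the verification method, which is exactly the gap the Cambridge team exploited.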
Since I was on the team that developed it, I thought it was about time to install Tynt Insight on this blog. I can now see what gets copied, and the links will be a bit different when you copy from the site.
For example, copying a sentence from the CO2 badge post produces this:
Based on this trend we will probably reach 400ppm in April or May 2015.
Read more: http://www.improvingwetware.com/#ixzz0dDTBA0Gp
Under Creative Commons License: Attribution Share Alike
If Tynt Insight is working correctly, clicking on that link will take you to the CO2 blog post and highlight what was copied from that post.
This link http://www.improvingwetware.com/2010/01/09/why-this-site-has-the-co2-badge#ixzz0dDTvtIq1 goes to the article's permanent page and should always work, even after newer blog posts have pushed the CO2 article off the home page.
In Optimised to fail the authors start with a great quote…
The late Roger Needham once remarked that ‘optimisation is the process of taking something that works and replacing it with something that almost works, but is cheaper’. [emphasis added]
Although the technical details of the protocol are not public, the authors seem to have managed to replicate what happens, but the key part of their paper is the set of vulnerabilities they reveal. These vulnerabilities, coupled with the transfer of liability for fraudulent transactions from the banks to the customers, mean that this protocol and the associated hardware and banking cards should be withdrawn from use.
Justin Etheredge has an interesting rant about browsers and the compatibility with standards. The paragraph below should have rounded corners from CSS, but as he says…
And how about this? If you’re looking at this in Safari, Opera, Firefox, or Chrome, then you are seeing nice rounded corners which are created using only CSS. Guess what you’ll see in IE8… nothing. A square box.
Looks like jQuery might be the way to go rather than trying to deal with these browser issues.
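For reference, the kind of rule the quote is describing looked roughly like this at the time (the selector name is illustrative). The vendor-prefixed forms covered pre-standard Firefox and WebKit builds, while IE8 simply ignored all three declarations and drew a square box:

```css
/* Rounded corners in CSS only; IE8 ignores every line here. */
.rounded-box {
  border-radius: 8px;          /* CSS3 standard property */
  -moz-border-radius: 8px;     /* older Firefox */
  -webkit-border-radius: 8px;  /* older Safari and Chrome */
}
```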
Decentralizing social media is likely to become a hot topic.
Dave Winer has created RssCloud to enable more-or-less real-time RSS updates and notifications, helping to decentralize the notification system. Twitter was an interesting model for a while, but it has demonstrated that it does not scale to a real flash mob. Sure, it works well for large traffic volumes, but when there is a massive spike in traffic, the centralized model is always going to be in danger of slowing down.
At some level, high traffic is indistinguishable from a denial-of-service attack. Sure, the traffic is wanted, but if the servers cannot handle it, then the system exhibits the same behavior it would under a real denial-of-service attack - no new traffic gets through in a timely manner.
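The decentralized notification model can be sketched in a few lines: instead of every reader polling one central service, the publisher pings each subscriber's registered callback when the feed changes, and subscribers then fetch the feed themselves. This is a toy sketch of the idea, not the actual RssCloud wire protocol; all names are illustrative.

```python
# Toy sketch of push-style feed notification: subscribers register a
# callback (in RssCloud terms, via the feed's <cloud> element), and the
# publisher pings them when a new item appears. Names are illustrative.

class FeedPublisher:
    def __init__(self):
        self.subscribers = []  # registered "feed changed" callbacks
        self.items = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, item):
        self.items.append(item)
        # Push a lightweight "feed changed" ping to every subscriber;
        # each subscriber then re-fetches the feed on its own, so the
        # load is spread rather than concentrated on one service.
        for notify in self.subscribers:
            notify()


received = []
pub = FeedPublisher()
pub.subscribe(lambda: received.append(pub.items[-1]))
pub.publish("New post")
print(received)  # -> ['New post']
```

The design point is that the publisher only sends a tiny ping; the heavy lifting of fetching content is done by each subscriber against its own copy of the feed, which is what makes the model decentralized.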