10 Jan 2011 at 10:14
The web is inherently a social medium because it is easy to share URLs. The common name for this is that a site becomes Slashdotted, named after the popular site Slashdot, one of the first sites where posted links could generate massive traffic.
Website traffic is social in that while it has the usual day/night differences that follow a relatively predictable curve, if an article becomes popular, traffic can spike very rapidly to previously unseen levels. A site that typically gets 1 million page views/day may find that the usual daytime peak of 100,000 page views/hour (less than 30/second) has suddenly spiked to over 500/second (which, if sustained, would be 1.8 million page views in the hour). All it takes is an influential site linking to the smaller site, or lots of different social sites jumping on the link.
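As a quick sanity check on those figures (all the numbers here are the illustrative examples from the paragraph above, not measurements):

```python
# Quick check of the traffic spike arithmetic; these are the post's
# illustrative example figures, not real measurements.
peak_hourly_views = 100_000
peak_per_second = peak_hourly_views / 3600
print(f"Normal daytime peak: {peak_per_second:.0f} views/second")  # ~28

spike_per_second = 500
sustained_hourly = spike_per_second * 3600
print(f"Spike sustained for an hour: {sustained_hourly:,} views")  # 1,800,000
```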
In contrast, iPad and similar applications do not exhibit this type of traffic. Partly this is because the apps need to be installed, but also because the apps do not lend themselves to the social sharing of content. Yes, most apps have a way of sending out a URL, but that just feeds the social web; it does not add to the traffic on the servers feeding the application.
The nice thing about this is that it makes it easy to size the servers that the application uses. It also makes me think that the application developers are missing an opportunity …
07 Jan 2011 at 16:32
Recently I had to replace a Volkswagen TDI Golf (after 300,000 km it was well used), but I was appalled at the lack of improvement in fuel efficiency over the past 10+ years.
Overall I normally averaged 5.1 l/100km in the TDI, usually managing 1000 km between 51 liter fill-ups. In Canada the Ford Fiesta is advertised as having "best in class fuel efficiency". Well, it might be, but only because nobody seems to be importing the really fuel efficient cars. Based on the Canadian figures, the Fiesta will probably end up somewhere around 6.0 to 6.5 l/100km. On the European figures, it is listed as 5.9 l/100km for the 1.6L 120 HP version - the only engine spec that is available in Canada.
Read this and weep
The 1.6 Duratorq TDCi ECO version of the same vehicle that is NOT available in Canada gets 3.7 l/100km and still pumps out 90 HP; there is another version listed at 95 HP that gets similar fuel efficiency. For people who do not like diesel, there is a 1.25L petrol version that still does 5.5 l/100km, and another 1.25L petrol engine that produces 85 HP and does 5.6 l/100km.
Canadian figures for the Fiesta are 7.1 l/100km city, 5.3 highway. There is supposedly going to be an ECO version out later, but for now an average that we might be able to expect is 6.2 l/100km.
After much looking around I ended up with a Honda Fit (Jazz in Europe). It claims 7.2 city, 5.7 highway for a combined 6.4 l/100km, but in practice I'm averaging 6.6 l/100km, more than 2 l/100km worse than I would be if I could have got one of the fuel efficient cars that are available in Europe.
A new TDI Golf was not on the cards since it is only available in the high "Comfortline" spec, for CDN$28,000, and it is not very fuel efficient: the version available in Canada is 140 HP, so 6.7 l/100km city, 4.6 highway for a combined 5.65 l/100km. So in 10 years the car has gained power but lost fuel economy compared to the previous model.
Time to keep on watching the CO2 level.
05 Jan 2011 at 14:49
The jQuery syntax for this is relatively simple:
$('#footer').load('index.html #footer ul.menu');
This solves the problem of wanting to make sure that the footer is identical on all pages without having the problem of making sure that you edit all 20+ pages in a static site. Sure it would be a whole lot easier to just use a dynamic site and include whatever code was needed in the page, but some sites are still made out of static (X)HTML so this is a neat fix.
The $('#footer').load('index.html #footer ul.menu'); line is the key one: it loads the index.html page, extracts the content matching the CSS selector #footer ul.menu, and replaces the contents of the existing footer div on the current page with the specified content from the index page.
Yes, the obvious complaint is that it slows down the page load time, but for most static sites this is less of an issue than the maintenance hassle of ensuring that every page is updated whenever a change occurs. It also has the side effect of cutting down the total size of the pages for sites that have lots of boilerplate code in the footer or sidebars.
For completeness I should also show the footer from the index page
<div><a title="Home" href="index.html">Home</a></div>
... lots of other links missed off here
05 Jan 2011 at 09:01
Interesting article in the Sloan Review on outsourcing too much. Admittedly it is about the design process in the car world, and it is short on details, but the overall implications are clear.
It seems that the business world is now waking up to the fact that it is overall system performance that matters, not just local optimization of a single point function or module. The problem seems to be that when you separate the design from the implementation, the local knowledge you lose is worth much more than the small gain you make in the financial efficiency of the outsourcing.
29 Dec 2010 at 13:41
Having just been reminded that there is a Manifesto for Software Craftsmanship I have to point out that although I wrote the Software Craftsmanship book I have nothing to do with the manifesto.
Part of my disagreement with it is that it is a not very well disguised take-off of the Agile Manifesto. I can leave aside the fact that they did not even keep the sequence the same, but to suggest that Software Craftsmanship is something beyond Agile is taking the idea to places where I do not think it belongs. Craftsmanship is about our individual relationship to the work that we do; it is not tied to a particular way of doing the work.
For me, Software Craftsmanship is about putting the individual back into the activity of delivering software. I have no interest at all in a community of professionals; the passionate amateur is much more likely to create interesting and valuable software. Professionals are too serious, while amateurs get the idea that software development is meant to be fun. One now very famous amateur has since written about something being Just For Fun.
In part my book was a rant against Software Engineering, mainly because several institutions were trying to take ideas from mechanical engineering and manufacturing practices and apply them to software development. But it was also a rant against the idea of professionalism. Rather than try to emulate the buttoned down professionalism that kills productivity and creativity, I wanted software development to become more skill and experience based. Yes, there are some practices that help in certain circumstances, but not in all. The professionals who spout about Best Practices and Certification do us all a disservice since they lock us into things that worked one time in one circumstance.
In the end, Software Craftsmanship is about producing great software. In the old traditions of craftsmanship, to be accepted a journeyman had to produce a masterpiece. Something that their fellow craftsmen would acknowledge as being worthy of the craft. For me, this is what Software Craftsmanship means, the ability to create Great Software and have fun while doing so.
10 Dec 2010 at 21:55
I had a need to look at a Python textbook recently and got confused several times by examples that were needlessly obfuscated (slightly paraphrased code):
>>> def actions():
...     acts = []
...     for i in range(5):
...         acts.append(lambda x, i=i: i ** x)
...     return acts
>>> acts = actions()
- For a newbie, it would not be obvious that acts inside the function is NOT the same variable as acts outside the function.
- The i=i is designed to be confusing: the first i is a new name in the scope of the lambda, the second comes from the for loop. The first should have had a different name to make it obvious what was happening.
- The name of the function, actions, just does not communicate anything.
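The closure gotcha deserves a head-on demonstration. Here is a minimal sketch (the function and variable names are mine, not the textbook's) showing both the default-argument fix and the late-binding behaviour it works around:

```python
# The default argument freezes the loop variable at definition time.
def make_powers():
    funcs = []
    for base in range(5):
        # power=base copies the current loop value into the lambda's own scope
        funcs.append(lambda x, power=base: power ** x)
    return funcs

powers = make_powers()
print(powers[2](3))  # 2 ** 3 = 8

# Without the default argument, every lambda closes over the same variable,
# which holds its final value by the time the lambdas are called.
late = [lambda x: n ** x for n in range(5)]
print(late[2](3))  # 4 ** 3 = 64, because n is looked up at call time
```

Using distinct names for the loop variable and the lambda parameter makes it obvious that a copy is being taken, which is exactly the clarity the textbook's i=i obscures.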
I’m going to have to think on this more, but maybe my next project is going to involve creating some material for newbies.
16 Nov 2010 at 10:30
Lots of interesting articles about the life of a chemist. Particularly interesting is the series of posts about Things I Won't Work With.
Tetrazole derivatives have featured several times here in “Things I Won’t Work With”, which might give you the impression that they’re invariably explosive. Not so - most of them are perfectly reasonable things. A tetrazole-for-carboxyl switch is one of the standard med-chem tricks, standard enough to have appeared in several marketed drugs. And that should be recommendation enough, since the FDA takes a dim view of exploding pharmaceuticals (nitroglycerine notwithstanding; that one was grandfathered in). No, tetrazoles are good citizens. Most of the time.
Well, the authors prepared a whole series of salts of the parent compound, using the bangiest counterions they could think of. And it makes for quite a party tray: the resulting compounds range from the merely explosive (the guanidinium salts) to the very explosive indeed (the ammonium and hydroxyammonium ones). They checked out the thermal stabilities with a differential scanning calorimeter (DSC), and the latter two blew up so violently that they ruptured the pans on the apparatus while testing 1.5 milligrams of sample. No, I'm going to have to reluctantly give this class of compounds a miss, as appealing as they do sound.
14 Nov 2010 at 19:22
Playing Devil's advocate to win
25 Oct 2010 at 21:26
Sometimes we do not need to be exact in order to understand what is feasible; sometimes ballpark estimates and back of the envelope calculations are sufficient.
For a set of calculations and supporting data about climate and energy issues, look at the Without Hot Air site, which has the full PDF of a book that is full of the calculations and approximate data, allowing the ethical considerations to be addressed after we understand the physical constraints.
24 Oct 2010 at 17:56
Introducing Climate Hawks as an alternative way of framing the conversation.
it becomes about values, about how hard to fight and how much to sacrifice to defend [the] future.
07 Oct 2010 at 22:22
PHPUnit has the concept of Incomplete and Skipped Tests, something that is missing in the Ruby TestUnit framework. Yes, technically TestUnit has flunk(), but that is not the same: TestUnit only reports failures and errors, whereas PHPUnit reports failures, errors, skipped and incomplete tests.
In Ruby, the idea of skipped and incomplete tests is not really supported, but since much of the Ruby world supports Test Driven Development, it is not really necessary. From what I have seen of the PHP world, Test Driven Development is not normally practiced, so it is useful to have the ability to mark tests as skipped or incomplete, and have PHPUnit report the test suite as passing, with a number of skipped or incomplete tests. It gives a visual reminder that more testing is needed, but it does not cause the test suite to report an error, so the next stage in the deployment process can continue without manual intervention (assuming that there are no failures or errors from the rest of the test suite.)
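As an aside, Python's unittest draws a similar distinction between skipped tests and known-incomplete tests, which makes for a compact sketch of the idea; the test names and skip reason below are made up for illustration:

```python
# Sketch of skipped vs. incomplete tests using Python's unittest,
# analogous to PHPUnit's skipped and incomplete test states.
import unittest

class ExampleTests(unittest.TestCase):
    def test_passing(self):
        self.assertEqual(2 + 2, 4)

    @unittest.skip("backend not available in this environment")
    def test_skipped(self):
        self.fail("never runs")

    @unittest.expectedFailure
    def test_incomplete(self):
        # roughly PHPUnit's "incomplete": known not to work yet
        self.assertEqual("todo", "done")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful(), len(result.skipped), len(result.expectedFailures))
```

The suite still reports as successful, with the skipped and expected-failure counts kept visible, which is exactly the property that lets a deployment pipeline continue without manual intervention.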
27 Sep 2010 at 21:09
It has been a while since I read The Inmates Are Running The Asylum, but I was still intrigued to see that Zed has an article on Products For People Who Make Products For People.
While it is true that Alan Cooper blames the programmer for the appalling state of many (most?) applications, there is a disconnect between how I read the Inmates book and the way Zed reports it. We agree that programmers do not really have much control over the way that applications are designed - the "Product Manager" types in organizations control that these days - but in my view programmers still have a lot of influence.
The problem as I see it is that most programmers build applications with the toolkit that is ready to hand, as opposed to taking the time to create a better toolkit for the job. So when it comes to designing the UI, even if an interaction designer has come up with a great idea for a widget, and the product manager wants to use the widget, the programmers influence this by claiming that it will be hard to do with the existing toolset. So what gets built in most cases is what can easily be achieved by the toolset.
Zed lives at the opposite extreme, having written both Mongrel and Mongrel2, so his experience is totally different - if the tools don’t do what they need to do, Zed writes a new tool. The rise of open source proves that Zed is not alone, and there has been a massive improvement in the usability of software being written by the better programmers, so things are changing.
The long-bearded programmers are coming into their own and designers like Alan are going to have to adjust their expectations, because experienced programmers can do a good job with design, so the interaction designers are going to have to earn their money in the future.
13 Aug 2010 at 18:17
I ran across a really annoying error message from Adobe Acrobat in a corporate setting this week. It was refusing to print anything and the explanation message it gave was
there were no pages selected to print
Nothing much out there on the net was helpful in fixing this, but eventually I discovered that the print queue I was using had been deleted from the print server. So technically there were no pages selected to print, because Acrobat could not find the printer and therefore did not know what it was supposed to be printing on.
It would have been much easier and faster to resolve this issue if the developers had considered this as a possible failure mode and, rather than putting out the not very descriptive "no pages selected to print", had reported the fact that Acrobat could not contact the printer because it no longer existed on the target print server.
So if I ever see this again I'm going to confirm that the printer queue still exists before trying anything else. The fix in the end was trivial: add the printer again under its new name and delete the old printer.
06 Aug 2010 at 22:00
Craig Venter has an interesting view of the state of the Genome Project.
I am, for example, a fan of the work that was done a short time ago that led to the decoding of the Neanderthal genome. But we don’t need any more Neanderthals on the planet, right? We already have enough of them.
Some serious thoughts as well:
The real problem is that the understanding of science in our society is so shallow. In the future, if we want to have enough water, enough food and enough energy without totally destroying our planet, then we will have to be dependent on good science.
The same could be said about the effects of computing power. Craig Venter managed to decode the human genome by harnessing the power of computers 10 years ago. We still do not appreciate the potential that computers have to change society, and maybe few realize how much computers have already changed society.
02 Jul 2010 at 13:05
There has been a lot written about the hazards of outsourcing - I even covered it in Software Craftsmanship - but it is nice to see that others are thinking about the long term implications of outsourcing manufacturing. Andy Grove has noticed that scaling is hard, and outsourcing scaling means a loss of expertise and jobs.
A new industry needs an effective ecosystem in which technology knowhow accumulates, experience builds on experience, and close relationships develop between supplier and customer. The U.S. lost its lead in batteries 30 years ago when it stopped making consumer electronics devices. Whoever made batteries then gained the exposure and relationships needed to learn to supply batteries for the more demanding laptop PC market, and after that, for the even more demanding automobile market. U.S. companies did not participate in the first phase and consequently were not in the running for all that followed.
As an addition from the US viewpoint, Harold Meyerson in the Washington Post points out that the outsourcing was not necessary, since Germany has managed to prosper in high tech manufacturing at the same time that the US shuttered factories:
So even as Germany and China have been busily building, and selling us, high-speed trains, photovoltaic cells and lithium-ion batteries, we’ve spent the past decade, at the direction of our CEOs and bankers, shuttering 50,000 factories and springing credit-default swaps on an unsuspecting world.
28 Jun 2010 at 18:14
It turns out that I was mistaken, the Onion did not write the Rugged Software Manifesto, or at least if they did InfoQ got taken in as well.
But it still sounds like a parody even if they are serious…
Rugged takes it a step further. The idea is that before the code can be made secure, the developers themselves must be toughened up.
The InfoQ article is not a complete waste of time though; there has been some conversation sparked by the parody, but Andrew Fried seems to have taken the idea in a new direction with a condensed version of just three points:
- The software should do what it’s advertised to do.
- The software shouldn’t create a portal into my system via every Chinese and Russian malware package that hits the Internet virtually every minute of every day.
- The software should protect the users from themselves.
The first point is obvious and does not really require stating except to those developers who are not aware of authors like Gerald Weinberg.
His second point is again obvious: if you are building any software, you should know what the libraries you are including do. Well, duh!
His third point is just plain wrong, and shows the usual arrogance of developers. "Protect the users from themselves" is not an attitude I would like to see on my teams. Yes, protect users from stupid programmer mistakes, but the kind of arrogance shown in "protect the users from themselves" leads to the kind of problems that fly-by-wire aircraft have had: notably the Airbus that did a low level pass and continued in level flight into the trees, and more recently the shutting off of a warning alarm that released the brakes, causing the plane to slam into a wall.
Overall, it still sounds like a parody, but it is getting to sound like a very sick parody, more like graveyard humor.
27 May 2010 at 18:48
In Fast or Secure Software Development Jim Bird points out that many software organizations are going down the road of releasing the software very frequently.
Some people are going so far as to eliminate review gates and release overhead as waste: "iterations are muda". The idea of Continuous Deployment takes this to the extreme: developers push software changes and fixes out immediately to production, with all the good and bad that you would expect from doing this.
There is a growing gap between these ‘Agile Approaches’ and mission critical software that has to be secure. Jim highlights many aspects of it in his article, but there is another wider aspect to consider.
Humans are bad at planning for the long term
The really bad news with this is that in this context the long term can be days or weeks, but as a society we need to be planning in terms of lifetimes.
For ‘Agile Approaches’ it makes sense to automate the release process so that it is easy to do with minimal impact. There are lots of technologies that help us achieve painless releases, and most of them are useful and effective. But just because we have a tool does not mean we should rely on it. As Jim points out, the frequent release mantra is based on the idea that it is easy to see if a release is broken, but
it’s not always that simple. Security problems don’t show up like that, they show up later as successful exploits and attacks and bad press.
Some problems require careful planning, appropriate investment and preparation. One domain I am familiar with - credit card processing - has been fraught with problems because developers have not dealt with all of the corner cases that make software development so interesting and painful. Reportedly there have been exploits from server logs that included the HTTP POST parameters, which just so happened to be credit card numbers. Of course no organization will repeat any of the known mistakes - but without the long term focus on planning, investment and preparation that mission critical software requires, some other problem is bound to occur.
Failing to address the long term
Several years ago now, the Club of Rome study on the Limits to Growth was met with disbelief in most quarters, but less than half a lifetime later we are hitting up against the limits that were identified back then. Peak Oil, Climate Change and the current financial meltdown are all related to these limits. Admittedly we are not good at dealing with exponentials, but reality has a way of making sure we cannot forget them. As the demand for oil reached the existing supply levels, the lack of successful investment in other supplies or alternatives meant that many economies crunched when we got near to peak oil supply volumes.
To think, a relatively simplistic computer model from nearly 40 years ago was able to produce a crude simulation of where we are now. Yes, many details were wrong, and the actual dates they were predicting were out, but what they were predicting was inevitable without policy changes, and we failed to make the necessary changes. It was always easier to just keep on doing what we had been doing, until suddenly oil is over US$100/barrel and we act all surprised and horrified about the price of everything.
Software is Infrastructure
Software lasts a long time unless you make really bad mistakes, so we need to start treating it as infrastructure. But not like we are treating our current physical infrastructure. It is nice to be able to release software easily with minimal impact and costs, but we need to make sure that in doing so we are not exposing ourselves to longer term pain. Yes, make it easy to release, but only release when you know through intense and rigorous evaluation that what you release is better than what you have out there already.
Core systems do not get replaced quickly. Our 32 bit IP addresses are starting to hurt, but rolling out IPv6 is a slow process. Just like our roads and bridges, we have patched the existing system up lots of times, but it needs to be replaced, and soon. Unfortunately, just like our roads and bridges, the necessary investment is daunting and we still want to put it off as long as possible.
As we enter the post cheap energy era, we need to reevaluate our planning horizon. We need to rediscover how to plan for the long term. Somehow or other, forty years after NASA put a man on the moon, NASA is now reduced to hitching rides with the Russian Space Program …
16 May 2010 at 12:52
While reading Eaarth I came across a number that needed further thought. Supposedly a barrel of oil contains as much energy as a working man can put out in 11 years.
OK, so that needs some cross checking. A barrel of oil is 42 US gallons, or 158.987 liters (used to link to a Thailand website that was near the top of the google search for that conversion, now linking to wikipedia). That barrel of oil has about 6.1 GJ, or 1,700 kWh; that works out to 38 MJ per liter, roughly 10.5 kWh per liter.
Maximum sustained energy output for a human is around 100 W over an 8 hour period, so 0.8 kWh/day, but that would be classed as extremely hard labor, so a better number to use would be about half that: a good estimate is 0.5 kWh/day. Allowing for 250 working days/year, that means a strong, fit human can put out about 125 kWh/year.
Comparing this to the 1,700kWh per barrel of oil, a human capable of a sustained output of 50W would take about 13.6 years to generate that much energy, so the figure of 11 years energy per barrel is reasonable. Other numbers in Eaarth suggest that in North America, the annual consumption of oil per person is about 60 barrels, so in a year, a human uses the equivalent of 660 humans worth of work.
Since that is hard to imagine, a smaller scale way to look at it is that 1 liter of oil, enough to drive an efficient car maybe 15 km or 10 miles, contains as much energy as an average human could put out in a month of working 8 hours a day, 5 days a week (21 days of work at 0.5 kWh/day for the 10.5 kWh in a liter of oil).
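The whole back-of-the-envelope calculation is easy to script; every input below is one of the rough figures from this post:

```python
# Back-of-the-envelope check of the "11 years of labour per barrel" claim.
BARREL_LITERS = 158.987            # 42 US gallons
MJ_PER_LITER = 38                  # approximate energy density of oil
KWH_PER_MJ = 1 / 3.6               # 1 kWh = 3.6 MJ

barrel_kwh = BARREL_LITERS * MJ_PER_LITER * KWH_PER_MJ
print(f"Energy per barrel: {barrel_kwh:.0f} kWh")          # ~1700 kWh

human_kwh_per_day = 0.5            # sustainable hard labour, 8 hour day
human_kwh_per_year = human_kwh_per_day * 250               # 250 working days
years_per_barrel = barrel_kwh / human_kwh_per_year
print(f"Human-years per barrel: {years_per_barrel:.1f}")   # ~13.4

liter_kwh = MJ_PER_LITER * KWH_PER_MJ                      # ~10.5 kWh
days_per_liter = liter_kwh / human_kwh_per_day
print(f"Working days per liter: {days_per_liter:.0f}")     # ~21
```

The exact answer depends on which rounded figure you start from, but it lands in the same ballpark as the book's 11 years either way.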
Small wonder then that the thoughts of Peak Oil and Global Warming from excess CO2 have everyone concerned in one way or another…
15 May 2010 at 20:31
One difference between fossil and the other version control tools I use (git and svn) is that by default it ignores files that are not in the repository. So rather than having to set up a .gitignore to have it ignore stuff, fossil does that automatically.
To see the other files, you need to look for the extra files using the fossil extras command.
And the really nice thing about extras is that it does not matter where you are in the project directory structure when you issue the command; it searches the entire local root looking for the unexpected files. This also means that fossil has a way of cleaning out any unwanted temporary files: fossil clean.
Clean prompts you to confirm every delete, so it is safe to run, but it is a nice feature when the tools you are using generate a lot of temporary or intermediate files as part of their processing.
15 May 2010 at 10:23
After a few years of using Git as my version control system, it is now time to move on to a distributed version control system that is more suited to projects with a smaller number of contributors: Fossil.
The main advantage I see that fossil has over git is the ease of setup for remote servers. A 2 line CGI script is sufficient for fossil to operate as a server over HTTP. Git has something similar, but after several years of using git, setting it up on a server is still not an easy task, which is why many people choose to use a service like Github or Gitorious. But the existence of these services points to a problem: why choose a distributed version control system and then adopt a centralized hosting service to use it? Surely the point of using a distributed system is to make it distributed - but we have created a system where the key repositories are all stored in a central place with a single point of failure.
Yes, I know that with a distributed version control system the clones have all the information that the original has, but the concept of centralizing the infrastructure seems strange to me. I much prefer the ease with which I can create a new fossil repository and then get it served out from an available web server relatively easily. Admittedly fossil does not integrate with any other identity management system, so the creator of the fossil repository has to manage the users (other than the anonymous user), but for small teams this is not an issue.
The other key feature for me is autosync on commit: whenever a local commit is made, the changes are pushed to the original repository. Automatic, offsite backup without having to remember to do anything beyond the normal checkin of completed work.