Improving Wetware

Because technology is never the issue

Some History Of Programming

Posted by Pete McBreen 12 Mar 2011 at 12:05

A fascinating one-hour video of a talk by Douglas Crockford on Open Source Heresy. The talk includes some humor around the JSON license:

The Software shall be used for Good, not Evil.

Interesting to hear the idea that open source dates back to the very early days of Univac …

Celebrating 15 years of the Sokal expose...

Posted by Pete McBreen 04 Mar 2011 at 09:58

In the Spring of 1996 Alan Sokal had his article Transgressing the Boundaries published in the Social Text journal. To coincide with the article’s publication, Sokal arranged for another article to be published, A Physicist Experiments with Cultural Studies.

The displacement of the idea that facts and evidence matter by the idea that everything boils down to subjective interests and perspectives is – second only to American political campaigns – the most prominent and pernicious manifestation of anti-intellectualism in our time. – Larry Laudan, Science and Relativism (1990)

For some years I’ve been troubled by an apparent decline in the standards of intellectual rigor in certain precincts of the American academic humanities. But I’m a mere physicist: if I find myself unable to make head or tail of jouissance and différance, perhaps that just reflects my own inadequacy.

So, to test the prevailing intellectual standards, I decided to try a modest (though admittedly uncontrolled) experiment: Would a leading North American journal of cultural studies – whose editorial collective includes such luminaries as Fredric Jameson and Andrew Ross – publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors’ ideological preconceptions?

The overall result of the experiment was that the parody article was published as if it were a valid work of scholarship in the field.

Hoax or Expose?

Sometimes it is not enough to just question something; sometimes you have to go further. Yes, Sokal’s experiment is often labelled a hoax, but my take is that it was an expose of many things that are wrong with our current social and political discourse.

TL;DR Soundbites Are Going To Kill Us

Posted by Pete McBreen 02 Mar 2011 at 22:14

TL;DR If you let other people tell you what you should think, don’t be surprised if you end up doing things that are not in your own long term interest.

Neil Postman was right, we are Amusing Ourselves to Death.

Soundbites cannot communicate nuances of ideas

For whatever reason, few people take the time to really find out what is going on in the world, being happy to be fed a soundbite by a politician or demagogue. Since I have the CO2 level shown in the sidebar, a good soundbite to use as an example is “CO2 is Plant Food”. Yes, CO2 is required for photosynthesis in plants, but the role of CO2 is much, much more complicated than that.

Television and Radio news rely on soundbites, and as such are destroying public discourse about important matters that as a society we need to deal with. And yes, I know that TV and Radio news have some value, but that value needs to be considered in the light of what it also does to our understanding of science, technology, economics and the political choices facing us in the 21st Century.

When the Low Cost Supplier Stops Supplying

Posted by Pete McBreen 22 Feb 2011 at 18:59

A parallel to the outsourcing debacles is what is currently happening in the supply of Rare Earths (mainly the lanthanides, if you remember your periodic table). The rare earth metals are commonly used in high tech components, often for super-strong magnets, but also for lots of other electronic components.

China used to be such a low cost provider that it managed to capture most of the market, with all of the other suppliers basically closing their mines. Now China, which produces 97% of the world’s supply of rare earths, has slashed its exports to a trickle to feed its growing domestic needs. The link goes to a feel-good story about a mine that is trying to ramp up production quickly, but there is still a measure of reality in the storyline:

“Bottom line, we fell asleep as a country and as an industry,” Smith said. “We got very used to these really low prices coming out of Asia and never really thought about it from a supply chain standpoint.”

In more and more areas we seem to be bumping up against problems when demand starts to exceed easily available supply. It is not that we are running out; it is that demand is larger than expected, so there is not the production capacity, and the market price becomes unstable, with large spikes and then resulting drops as some customers leave the market for alternatives.

Mainstream media is catching up with outsourcing

Posted by Pete McBreen 16 Feb 2011 at 09:41

The LA Times is the latest to report on Boeing’s costly lesson on outsourcing. They have an interesting lead to the story:

The biggest mistake people make when talking about the outsourcing of U.S. jobs by U.S. companies is to treat it as a moral issue.

Sure, it’s immoral to abandon your loyal American workers in search of cheap labor overseas. But the real problem with outsourcing, if you don’t think it through, is that it can wreck your business and cost you a bundle.

I don’t agree that many people thought of outsourcing as a “moral issue.” The conversation was more about the balance between short term economics and the long term implications of outsourcing. Short term the numbers can look better, but long term organizations lose the ability to do the work. The end result is that the supplier becomes dominant, captures most of the available profit, and the outsourcer ends up being responsible for the downside risk.

See also Outsourcing too much part II

Time for Intolerance?

Posted by Pete McBreen 16 Feb 2011 at 07:50

Jim Bird recently pointed out that there’s a cost to building good software, so calls for Zero Bug Tolerance are sometimes misguided.

There’s a cost to put in place the necessary controls and practices, the checks and balances, and to build the team’s focus and commitment and discipline, and keep this up over time. To build the right culture, the right skills, and the right level of oversight. And there’s a cost to saying no to the customer: to cutting back on features, or asking for more time upfront, or delaying a release because of technical risks.

He goes on to say …

That’s why I am concerned by right-sounding technical demands for zero bug tolerance. This isn’t a technical decision that can be made by developers or testers or project managers … or consultants. It’s bigger than all of them. It’s not just a technical decision – it’s also a business decision.

Alistair Cockburn has pointed this out in his Crystal family of methodologies: the cost of an error varies across different kinds of applications, and an appropriate methodology takes this into account.

  • Life and Safety Critical - things like pacemakers and flight control software
  • Mission Critical - systems that the business depends on, if the system fails you make the evening news (for example a stock exchange)
  • Departmental applications - yes the business depends on it, but a certain amount of downtime can be accepted
  • Comfort - yes, twitter and facebook, I’m looking at you

Processes and verification procedures that are appropriate for a mission critical application would be detrimental to a departmental time booking application. A good test suite would be appropriate for both, but only the mission critical application needs a formal review of the test code to make sure it is covering the appropriate cases.

This is one of the reasons why I support Jim’s idea of being intolerant of the Zero Bug Tolerance mantra.

Wider Intolerance

Some ideas just fail the laugh test, but they are afforded too much deference by people who should know better. The Rugged Software Manifesto fails the laugh test as well.

After reading about the attempts to brand Alberta’s Tar Sands as “Ethical Oil”, maybe it is time to start laughing at politicians that espouse crazy ideas as well.

The UK Government Chief Scientific Adviser John Beddington recently called for scientists to be “grossly intolerant” if science is misused:

“I really would urge you to be grossly intolerant,” he said. “We should not tolerate what is potentially something that can seriously undermine our ability to address important problems.”

Beddington also had harsh words for journalists who treat the opinions of non-scientist commentators as being equivalent to the opinions of what he called “properly trained, properly assessed” scientists. “The media see the discussions about really important scientific events as if it’s a bloody football match. It is ridiculous.”

Limits Of Expertise

Posted by Pete McBreen 15 Feb 2011 at 18:09

A common problem in software development and science is that people are often not aware of the limits of their knowledge.

The electric car provides a great example of this. Richard Muller in Physics for Future Presidents makes several statements about the feasibility of electric cars that have turned out to be incorrect.

This quote is from Confusing Future Presidents part 2

High-performance batteries are very expensive and need to be replaced after typically 700 charges. Here is a simple way to calculate the numbers. The computer battery for my laptop (on which I am writing this) stores 60 watt-hours of electric energy. It can be recharged about 700 times. That means it will deliver a total of 42,000 watt-hours, or 42 kilowatt-hours, before it has to be replaced for $130.
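As a quick sanity check, the arithmetic in that quote can be restated in a few lines. The figures are Muller’s; the cost-per-kWh step is my own extension of his calculation:

```ruby
# Muller's laptop battery figures, restated
watt_hours_per_charge = 60     # storage per full charge
charge_cycles         = 700    # recharges before replacement
replacement_cost      = 130.0  # dollars for a new battery

# Total energy delivered over the battery's life
lifetime_kwh = watt_hours_per_charge * charge_cycles / 1000.0

# The number that drives the economics: dollars per delivered kWh
cost_per_kwh = replacement_cost / lifetime_kwh

puts lifetime_kwh          # 42.0
puts cost_per_kwh.round(2) # 3.1
```

Note that cost_per_kwh scales inversely with charge_cycles: a battery good for 2,000 cycles instead of 700 delivers the same storage at roughly a third of the cost, which is why cycle life dominates the conclusion.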

Muller implies in this post that a car would need to replace the battery after only 700 cycles, but although this might have been the case at the time, car battery suppliers are continually extending the life of a battery.

Better Place, the pioneer in rapid battery swapping for electric cars, has also pointed out that the end of life is not a fixed point. In their blog post batteries can have a second life they state:

In the world of batteries, what does “end of life” really mean? According to the industry, end of life is defined as that point in time when a battery has lost 20% of its original energy storage capacity or 25% of its peak power capacity. This implies that an EV battery, with an initial range of 100 miles per charge, will reach its end of life when, years later, it only delivers 80 miles per charge.

That time is likely to be reached only after the battery has carried an electric car about 200,000 miles or 2,000 cycles. But that’s not really the end for an EV battery – it’s just the beginning of a second life that not many people know about.

All this goes to show that Muller’s expertise in a branch of physics does not necessarily extend to expertise in economics, engineering and battery chemistry. Muller had tried to dismiss the Tesla Roadster before it was released by claiming that it would cost too much and the batteries would be too heavy (Confusing Future Presidents part 1). But the Tesla was launched successfully as a niche sports car. Now the GM Volt and Nissan Leaf are also proving that it is possible to build electric cars.

The Better Place business model of renting the battery through quick change stations that can swap a battery faster than you can refuel at a filling station is likely to be a game changer as well.

So the next time someone states categorically that something is not possible, first check to see if they have the relevant expertise in the appropriate areas before listening too long or hard.

Outsourcing Too Much Part II

Posted by Pete McBreen 08 Feb 2011 at 13:02

To follow on the outsourcing saga, the Seattle Times reported that 10 years later Boeing is finally listening to John Hart-Smith. Note that the PDF link is covered in BOEING PROPRIETARY labels.

One interesting item from my viewpoint is that it took 10 years for the warnings to be heeded, so there are echoes of the climate change issue there.

TL;DR Fourteen pages later …

Although John Hart-Smith’s paper deserves to be read in full, I know few people will, so here are some key excerpts.

Almost all potential suppliers indicated a preference for being subcontractors rather than risk-sharing “partners”. Could they have known more about maximizing profits, minimizing risk, etc. than the prime manufacturer who sought their help?

One must ask the question as to where the skills for writing such specifications will come from if there is no continued in-house production from which to learn.

Outsourcing the generation and distribution of a company’s proprietary intellectual data would seem fraught with opportunities for potential customers to acquire the knowledge they need elsewhere.

The inherent difficulty with such an approach to business is the need to retain and develop the technical skills needed to develop future products. … One must be able to contribute in some way to products one sells to avoid becoming merely a retailer of other people’s products.

The paper does discuss some situations whereby outsourcing is useful. On a personal level this is useful because I do a lot of contract software development, so whenever I am working, my client is in effect outsourcing the work.

It is accepted that prime manufacturers cannot afford to have expensive facilities that are under utilized, even if it would save on downstream rework. It really is better to out-source such work, IF AND ONLY IF the selected supplier has excess capacity on such equipment.

In the software world, this equates to contracting for work that is done infrequently, when the skills necessary are not needed on a constant basis. It is also useful to contract in skills that are to be passed on to the in-house employees, so that the contractor comes in, sets up particular processes/machines and then trains the other staff in how to maintain and use them on a regular basis.

Using Climate Change to study Software Development

Posted by Pete McBreen 04 Feb 2011 at 20:51

Although lots of software is used in studying climate change, for example a python re-implementation of a climate model, there are many issues in climate change that relate to software development.

Parallels between Climate Change and Software Development

  • Things change slowly and then suddenly things are different

    An extra 2ppm of CO2 might not seem much, but over a working lifetime, an extra 60-70ppm has changed lots of things. In software development Moore’s Law was slowly making computers faster and cheaper. Suddenly things changed when we realized that hardware was no longer an expensive item in projects. When I started software development, the developer costs on a project were swamped by the cost of the hardware. Now using cloud machines, the hardware costs of a project can be less than the cost of gas to commute to the office.

  • What we are sure we know is not necessarily true

    In climate change, the distinction between weather and worldwide climate is not well understood. It is also hard to figure out how a 2C change could matter that much when locally the weather can range over 60C from the depths of winter to the height of summer. In software development, historically it really mattered that the code was as efficient as possible because the machines were slow. So “everyone knows” that scripting languages are not useful for large scale development. Yet while enterprise systems are built in traditional languages, most web companies are using some form of a scripting language, or one of the newer functional languages.

  • Fear, Uncertainty and Doubt

    Software development probably led on this one. IBM was justifiably famous for FUD to ensure that customers followed IBM’s lead and did what they were told. IBM had the market and customers were supposed to do what IBM told them was good for them. With Climate Change, large organizations that will have to change to preserve the climate that is suitable for our current civilization are spreading as much doubt as possible to delay the realization that business as usual is no longer possible. In Software Development the threat of Open Source is currently the target of lots of FUD, and large corporations that are seeing changes on the horizon are doing all they can to preserve their business as usual.

  • Nobody wants to listen to the people who know what is going on

    Software Developers are used to being overruled by people who do not really know what is going on. Sure sometimes it is genuinely a business decision, but often the business people are busy making technical decisions that they are not competent to make. In Climate Change, the scientists doing the work are being challenged by the political decision makers who do not have a clue about science. Realistically the political decision makers should accept the science and then start talking about the political choices that we need to make as a result of that science.

  • The results really matter

    There are two things that our civilization depends on, working software and a livable, stable climate. The news is occasionally enlivened by the story of a major software project that costs hundreds of millions of dollars that fails to deliver the expected benefits. The smaller day to day losses from smaller failures are hidden. Similarly the big storms that are increasing in frequency and severity due to climate change are making headlines, but the smaller impacts are never visible in the news.

Making sense of the parallels

Still working on this one, but I have a sense that the political power structures have a big impact. The techno geeks that do software or science are not part of the larger political power structures that govern our corporate dominated societies. As such the techno geeks are marginalized and can be safely ignored … at least for now … obligatory link to Tim Bray - Doing it Wrong.

Ruby on Rails Gotcha — cannot always omit parentheses on a method call

Posted by Pete McBreen 03 Feb 2011 at 10:45

When trying to test out some legacy routes I got a really obscure error from rake while running the following test case

  test "legacy routing OK" do
    assert_recognizes {:controller => 'index', :action => 'contact'},  '/contact.html'
  end

All I was trying to do was make sure that any URLs saved in search engines from the old static site still worked in the new Rails based site, so no issue, just use assert_recognizes and that will make sure the routes are protected by test cases. It is not possible to use assert_routing since I did not want the new code to generate the .html style URLs. Anyway, this is what rake complained about

   `load_without_new_constant_marking': ./test/functional/index_controller_test.rb:25: 
            syntax error, unexpected tASSOC, expecting '}' (SyntaxError)
          assert_recognizes {:controller => 'index', :action => 'contact'},  '/contact.html'

         ./test/functional/index_controller_test.rb:25: syntax error, unexpected ',', expecting '}'
            assert_recognizes {:controller => 'index', :action => 'contact'},  '/contact.html'

It turns out that although the code documentation uses the normal Rails convention of omitting the () around method arguments, you cannot do that when the first argument is a brace-delimited hash literal, because Ruby tries to parse the opening brace as the start of a block. So the fix is to put in the parentheses and the test runs as expected.

  test "legacy routing OK" do
    assert_recognizes( {:controller => 'index', :action => 'contact'},  '/contact.html' )
  end
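The same ambiguity can be reproduced outside Rails with any method whose first argument is a brace-delimited hash; the method name below is made up purely for illustration:

```ruby
# Hypothetical method standing in for assert_recognizes:
# first parameter is a hash, second a string.
def route_label(options, path)
  "#{options[:action]} at #{path}"
end

# With parentheses the braces are unambiguously a hash literal:
route_label({ :controller => 'index', :action => 'contact' }, '/contact.html')
# => "contact at /contact.html"

# Without parentheses Ruby parses the braces as a block, hits the =>
# inside, and raises the same "unexpected tASSOC" SyntaxError:
begin
  eval "route_label { :action => 'contact' }, '/contact.html'"
rescue SyntaxError => e
  puts e.message
end
```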

Fun with Ruby on Rails

Posted by Pete McBreen 01 Feb 2011 at 16:31

Rails can be for fun as well. Rather than going the usual Wordpress/Joomla or PHP route for my local Cochrane Red Rock Running and Tri Club website, I decided to have some fun with Rails and jQuery. Nothing really fancy but it does what it needs to without anyone having to learn about the joys of a CMS control panel.

Overall the site took less than 10 hours of spare time, time that would easily get eaten up by the usual requests for updates to a static site. There are still a few static parts to the site, but those are ones that do not change frequently, so not a big deal to maintain those parts.

Edited Dec-2018, site is now retired, had a long run but now replaced by a wordpress site maintained by someone else. Over the years spent about 40 hours maintaining the site, so not too bad, and zero breaches/security issues in that time

Kanban for Software Development?

Posted by Pete McBreen 31 Jan 2011 at 09:02

Gotta smile at this one, but recently I was referred to an article on Kanban for Software Development and my immediate thought was “but Kanban is all about pull…” I could also make a comment that they seem to have re-invented the linear waterfall process, but I will not go there.

All of the diagrams in that article show the scrum style backlog and input queue at the start of the process. Is that not the very definition of PUSH? Admittedly I last looked in detail at Kanban in the early 1990’s (not much call for Manufacturing Systems experience where I currently live), but I’m sure that Kanban did not get redefined in the intervening period to be a PUSH system; the whole point of Kanban is that it PULLS work through the system in response to customer demand.

I was hoping that the Limited WIP Society, which claims to be The home of Kanban Systems for Software Engineering would correct the mistakes in the article, but all it seems to be is a place that links to a lot of different blogs. The overall consensus seems to be that Kanban for Software Development is just Scrum with limits between stages rather than the normal scrum which just limits the work per iteration.

Once you put limits between stages, interesting things happen, so there are lots of things to address and Goldratt’s Theory of Constraints seems to get thrown into the mix. But overall it sure looks like Scrum.

Understanding Kanban in a Manufacturing context

Kanban originally addressed the problem of too many old rusting parts scattered about the factory floor, with nothing much of value being shipped. In manufacturing, context switching incurs setup costs, so it pays to run large batches, but that means that you might take forever to get to making the one small volume part that is vital to being able to ship the final assembled goods.

So rather than allowing any work station to produce too much of any one part, the simple rule is implemented that a station cannot produce anything until there is a demand for the item from one of its consumers. So suddenly the performance metric goes from “how much utilization did you get from that expensive machine?” to “were you able to satisfy all your consumer demands in a timely manner?”

This works because kanban was set up for batch production of standardized items. You know how many exhaust valves you need per engine, and there are whole systems dedicated to working out the full list of parts that go into a completed engine (parts explosion is a fun problem when looking at customer orders for an assembly plant). For every part there is a standard time to manufacture the item, and the setup times are also standardized, so an industrial engineer can work out if it makes sense to produce a week’s worth of exhaust valves in a single batch, or to just produce one day’s worth at a time of each of the sizes that are needed (because the setup time to change over is low).
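That parts explosion can be sketched as a small recursive walk over a bill of materials; the part names and per-unit quantities below are invented for illustration:

```ruby
# Toy bill of materials: each assembly maps to its direct components.
BOM = {
  :engine => { :inlet_valve => 8, :exhaust_valve => 8, :piston => 4 },
  :piston => { :piston_ring => 3 },
}

# Recursively expand an order into total counts of leaf-level parts.
def explode(part, quantity, totals = Hash.new(0))
  components = BOM[part]
  if components.nil?
    totals[part] += quantity  # leaf part: nothing further to expand
  else
    components.each do |child, per_unit|
      explode(child, quantity * per_unit, totals)
    end
  end
  totals
end

explode(:engine, 100)
# => {:inlet_valve=>800, :exhaust_valve=>800, :piston_ring=>1200}
```

Real MRP systems also fold in lead times and on-hand inventory, but the core expansion is just this traversal.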

So in a manufacturing context, you can balance the overall plant by simple signals, engine assembly sends out a pull request for 800 inlet valves and 800 exhaust valves. Once your machine has pumped out the 800 exhaust valves you have to switch over to producing the inlet valves because you have no more demand for exhaust valves. Assembly can now start on building the engines since it has both types of valves it needs. Under the old machine utilization regime, changing the setup to produce the other type of valve would have meant downtime and lower utilization, so switchovers were rare. It was common to run out of necessary parts because someone upstream was busy making other parts.
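The valve example can be sketched as a toy pull system (the class and part names are my own invention): a station produces only against outstanding kanban cards, and with no cards it sits idle rather than building inventory.

```ruby
# A workstation that produces only against downstream demand.
class Station
  attr_reader :produced

  def initialize(name)
    @name = name
    @demand   = Hash.new(0)  # part => quantity pulled by consumers
    @produced = Hash.new(0)
  end

  # A downstream consumer hands over a kanban card for a part.
  def pull(part, quantity)
    @demand[part] += quantity
  end

  # Work off one outstanding demand; with no cards, stay idle.
  def work
    part, quantity = @demand.find { |_, q| q > 0 }
    return :idle unless part
    @demand[part] = 0
    @produced[part] += quantity
    [part, quantity]
  end
end

valves = Station.new("valve line")
valves.pull(:exhaust_valve, 800)  # engine assembly signals demand
valves.work  # => [:exhaust_valve, 800]
valves.work  # => :idle (no cards, so no production)
```

The point of the sketch is the second call to work: under a utilization metric an idle machine looks like failure, while under pull it is the correct behaviour.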

Kanban in a Software Development context

Software development does not have any standardized items, so everything is a batch of one. This plays hell with the overall model, because the rate at which each station can process the items is not knowable in advance. You can make guesses or estimated averages, but some requirements are hard to elicit yet easy to develop and test for, while other changes are easy to state and easy to implement but really complex to test for.

Another issue is that a key part of Kanban was that it worked at the workstation level, but the “Kanban for software development” seems to be working at a team or subteam level. This might work for Scrum, but applying it that way for software development seems to make it much more complicated than it needs to be. To make Kanban work in software development the pull requests need to go to the software equivalent of workstations, so either to individuals or programming pairs. Scheduling at any larger level leads directly back to the problems that Kanban was designed to eliminate, the fact that there is half finished stuff lying all over the place.

Kanban for Software

If we are going to try to apply Kanban we have to first change the mindset.

Scrum and XP both have the idea that the backlog of stories is prioritized into the iterations (though they use their own terminology for this process).

Kanban for Software has requests for changes and new features in an Order Book, just like any factory has an order book, and the factory only builds to order. So having a large order book is a good thing.

Scrum/XP show the backlog at the start of the process, pushing on the development team.

Kanban for Software shows the Order Book on the user end of the process, emphasizing the pull of the orders placing a demand on the team.

Kanban for Software schedules from the Acceptance Test back into the development process. The rest of the changes flow naturally from there.

If the most valuable order in the order book is “Add social features to the website”, then the person responsible for Acceptance Test has to find out what it would mean to successfully test that item. Feature definition would occur through what Brian Marick calls Example Driven Development.

Once identified, the feature requests would be passed back up the chain of people who are necessary to deliver the feature, starting with the integration tester who hands off the individual sub-features and tasks to specific developers.

Changing the Mindset to Pull

Rather than allowing developers to pull items into the current iteration when they are blocked, Kanban for Software suggests that developers should ask their downstream consumer for work. If at the end of the chain the Acceptance Test person is too busy to look at the order book for new work, then it is OK for the developers to be idle while waiting for the inevitable bugfix requests. After all, the thing that is in Acceptance Test right now is the most valuable thing that can be produced, so why make that wait for developers who are working on less important tasks?

The idea of Stopping the line was one of the hardest ideas to get across in manufacturing assembly lines. And the idea that it is OK for a developer to realize that they do not have any customer requested work to do right now is going to be just as hard to accept when we start applying the idea of Kanban for Software.

If you wait until it’s obvious, it’s too late

Posted by Pete McBreen 28 Jan 2011 at 08:51

Matt Simmons wrote that Timing is Key

If you wait until it’s obvious, it’s too late… has become my defacto motto when it comes to a lot of things. I think the first time it occurred to me was when I started researching IPv6. The depletion of IPv4 is no surprise, and hasn’t been for quite a while, but it seems like most people are holding off even researching it until it becomes obvious that they need it. Again, by that time, if you’re in any kind of competitive company vying for market position, it’ll be too late. It won’t be obvious that it’s necessary until you see people financially punished for not taking those steps.

Lots of applicability to this idea, I need to ponder on it for a while. There are lots of changes coming, but which ones are going to matter and which ones do we need to take action on? Typically what happens is that larger corporations take longer to ponder on the new ideas, and then get hit hard by the market.

Fuel efficient cars are a good example of this. The profit margin for trucks and SUVs was so high that it did not make sense to switch to smaller-engined cars, even though Peak Oil was going to push up the price of gasoline and Climate Change was eventually going to push up the price of emitting CO2. Back in 2008 when the price of oil spiked for a few months, truck dealers where I live practically could not give the trucks away. Even with discounts that ate up most of the previous profit margin, the trucks were not moving off the dealers’ lots. Yes, the price of oil dropped again and most people tried to convince themselves that it was just speculators, but the car companies slowly seem to be waking up to the issue that fuel economy matters. The only problem is that the companies that held on to the idea of profitable trucks and SUVs the longest are being punished by the market now, since the lead time for getting a new design into the hands of the dealers is much longer than the time the companies have before the CDN$1.00/liter price of gasoline seems like the good old days.

Lots of applicability of this to software development as well, but that is for a later post.

Productivity and Compensation

Posted by Pete McBreen 27 Jan 2011 at 10:23

Steve McConnell responded to (In)Validating the 10X Productivity Difference Claim by saying 10x Productivity Myths: Where’s the 10x Difference in Compensation?

My overall conclusion is that paying for productivity on any more than a very-rough-approximation basis is a panacea that cannot practically be achieved.

As I’ve commented previously, the discrepancy between capability differences and compensation differences does create opportunities for companies that are willing to hire from the top of the talent pool to receive disproportionately greater levels of output in exchange for only modestly higher compensation.

This is not the answer I expected to find when I began asking the question almost 25 years ago, but I can see the reasons for it. Gerald Weinberg describes a pattern he calls “Things are the way they are because they got that way.” I think this is one of those cases.

To a large degree I agree with McConnell on this: it is hard to measure the productivity of software developers, so corporations tend to just have a narrow range of salaries for developers. The body of his article has good arguments for this, so there is no point repeating them here.

There is however another dimension to this conversation that has not yet been addressed.

Salary is not the only option

McConnell exemplifies the alternatives; after all, he is more or less synonymous with Construx and several well-read books like Code Complete. There is also the startup route, and as McConnell mentions, some developers end up contracting.

So overall the question of salary difference may be moot, except that maybe it means that within a single organization the 10X difference in productivity does not exist. The lower band will probably not pass the hiring filters, and the higher end of the band may self select out, either by not applying in the first place, or by choosing to leave for greener pastures.

Compensation is not the issue

Personally I’m more interested in Understanding Productivity and what drives it. My bias for explaining productivity is in the area of Software Craftsmanship, but there are many aspects of productivity that have not been explored …

Software Craftsmanship Revisited

Posted by Pete McBreen 20 Jan 2011 at 09:14

I’m not a fan of the manifesto, but have been watching the recent threads stirred up by Dan North saying Programming is not a craft.

TL;DR version Software Craftsmanship risks putting the software at the centre rather than the benefit the software is supposed to deliver, mostly because we are romantics with big egos. Programming is about automating work like crunching data, processing and presenting information, or controlling and automating machines.

Liz Keogh highlighted the key aspect of Software Craftsmanship that I consider crucial: although you can aspire to be a craftsman,

… Software Craftsman is a status you should be awarded by someone else.

The reason that the old trade crafts focused so much on creating a masterpiece was so that a person could be recognized by their peers as having become a master of the craft. The proof was in what was created, not in someone simply saying that they are a craftsman.

Uncle Bob seems to be trying to conflate Agile and Software Craftsmanship, but I still see the two things as distinct. He has also drawn parallels between his Clean Code idea and Software Craftsmanship

Why is there a software craftsmanship movement? What motivated it? What drives it now? One thing; and one thing only.

We are tired of writing crap.

That’s it. The fat lady sang. Good nite Gracy. Over and out.

Again, for me this is too simplistic a view. The idea of Clean Code as an approach is a good start, but Software Craftsmanship goes far beyond the idea of just Clean Code.

Software Craftsmanship requires a complete reappraisal of what it means to develop software. As opposed to stopping writing crap as Clean Code suggests, Software Craftsmanship asks us to start creating great applications for our users and to stand behind that code and support it so that users who come to depend on it can trust that it will be available for them to use.

Software Craftsmanship also includes the idea of software longevity: code that is maintainable and can be maintained for long periods, so that the investment our users put into learning the application, and the hours they spend getting their data into it, are not lost when a capricious decision is made to abandon the software.

Rails Gotcha on Shared Hosting

Posted by Pete McBreen 18 Jan 2011 at 21:42

In a shared hosting environment the system does not always have all of the gems you need, but when a gem is installed locally under your own account, mongrel or passenger cannot see where the gems are and you get the great error message:

Missing these required gems

You get this message from mongrel even after running rake gems:install and having it report that the gem is successfully installed.

The Fix for Missing these required gems

Set GEM_PATH to point to your local gems as well as the system gems. You cannot just set it in .bashrc, since apache runs under a different user and never reads your shell startup scripts; hence when mongrel runs, it does not see the locally installed gems.

The Rails preinitializer.rb that lives in the config directory can set the GEM_PATH and then Mongrel can see the locally installed gems.

The file config/preinitializer.rb needs to contain the following

ENV['GEM_PATH'] = '/home/pete/ruby/gems:/usr/lib/ruby/gems/1.8'

Obviously you will need to replace the path to my local rubygems, /home/pete/ruby/gems, with your own local gem path.
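For a slightly more defensive version of that one-liner, the preinitializer can merge the local path with whatever GEM_PATH is already set and then ask RubyGems to re-read it. A sketch, assuming the same /home/pete/ruby/gems local directory and 1.8-era system path as above:

```ruby
# config/preinitializer.rb: runs before Rails loads (a sketch; replace
# the two paths with your own local and system gem directories).
local_gems  = '/home/pete/ruby/gems'
system_gems = '/usr/lib/ruby/gems/1.8'

# Prepend the local path so locally installed gems win over system ones,
# and keep any GEM_PATH that was already set in the environment.
paths = [local_gems, system_gems, ENV['GEM_PATH']].compact.reject(&:empty?)
ENV['GEM_PATH'] = paths.uniq.join(':')

# Tell RubyGems to re-read the (now updated) environment.
require 'rubygems'
Gem.clear_paths
```

The prepend-and-uniq dance matters on shared hosts where the web server may already export a GEM_PATH of its own.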

A hat tip to Damien White for pointing me to this fix. Here’s hoping that this extra link moves his site up in the google listing for the query mongrel “Missing these required gems”

Resizing Images for the Web

Posted by Pete McBreen 15 Jan 2011 at 12:55

After losing count of the number of sites that use the browser to shrink massive images down to the thumbnail size that the page needs (and waiting forever for those large images to load), it is time to say enough.

Shrinking images is easy, especially from the command line using ImageMagick. It is an open source tool, and precompiled binaries are available for most platforms, so there is no excuse not to use it.

The most common use case for images is a small thumbnail that serves as an image link to another page holding the full size image, or a page that only has space for a small image when the original is too large to fit. Doing this with ImageMagick is easy, even if the aspect ratio of the original image is not right.

convert -thumbnail 200x100^ -gravity center -extent 200x100 original.jpg  thumbnail.jpg

convert is the name of the ImageMagick tool. -thumbnail 200x100^ creates an image that is at least 200 pixels wide and 100 pixels tall while preserving the aspect ratio of the original picture; this means that the resulting thumbnail can be larger than 200x100, but the image will not be distorted. The second part of the command, -gravity center -extent 200x100, specifies that the resulting image should be exactly 200x100, cropped from the center of the intermediate image. The gravity option can also be any compass direction, with NorthWest being the top left of the image.

Processing multiple images is trivially easy: specify a set of images using a wildcard like *.jpg, and the resulting images will be written out to a numbered set of files using thumb-%d.jpg, giving filenames like thumb-1.jpg, thumb-2.jpg and so on.

convert -thumbnail 200x100^ -gravity center -extent 200x100 *.jpg  thumb-%d.jpg
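One wrinkle with the %d form is that the numbering loses the connection to the original filenames. If that matters, a small Ruby wrapper (a sketch, assuming ImageMagick’s convert is on the PATH when the commands are eventually run) can generate one command per file and keep the source name in the thumbnail name:

```ruby
# Build one ImageMagick convert command per JPEG so each thumbnail keeps
# its source name (photo.jpg becomes thumb-photo.jpg) instead of a number.
def thumbnail_command(source, size = '200x100')
  target = "thumb-#{File.basename(source)}"
  "convert -thumbnail #{size}^ -gravity center -extent #{size} #{source} #{target}"
end

Dir.glob('*.jpg').each do |jpg|
  cmd = thumbnail_command(jpg)
  puts cmd        # inspect the commands first...
  # system(cmd)   # ...then uncomment to actually run ImageMagick
end
```

Printing before executing is a cheap safety net when a wildcard might match more files than you expect.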

So no more excuses for distorted or oversize images in web pages.

Setting up a Rails 3 development environment

Posted by Pete McBreen 13 Jan 2011 at 10:52

Setup instructions for using Vagrant, Oracle’s VirtualBox and Rails 3.

Getting Vagrant installed is easy, since it is just a gem; the only other requirement is downloading Oracle’s VirtualBox, which is a simple OSX package. The bare bones instructions on the Vagrant site homepage actually work, but neglect to mention that you should run them from a new directory that you want to have shared with the VM.

gem install vagrant
vagrant box add lucid32
vagrant init

After the vagrant init there is a file called Vagrantfile that is just Ruby code for configuring the VM. Vagrant itself is set up to use Chef or Puppet to configure the VM with all of the software you need, but other than the basic apache install, I used apt-get to do the configuration.

The following changes to the bottom of the Vagrantfile use Chef Solo to install Apache2 into the VM. They set up both port forwarding (so that localhost:8080 on the laptop points to apache inside the VM) and a local network address (only visible from within the laptop) so that I can also make a direct http connection to apache.

config.vm.box = "lucid32"
config.vm.provisioner = :chef_solo
# Grab the cookbooks from the Vagrant files
config.chef.recipe_url = ""
# Tell chef what recipe to run. In this case, the `vagrant_main` recipe
# does all the magic.
config.vm.forward_port("http", 80, 8080)
config.vm.network("")

With that in place the next thing to do is to start it up and then connect to the VM.

vagrant up
vagrant ssh

The last line creates an ssh session to the newly booted Ubuntu VM running inside VirtualBox. There are no passwords or firewalls; the VM is running wide open within OSX (but headless, so ssh is necessary to connect and do anything useful). The /vagrant directory in the VM is shared with the directory you were in when the init was done, so it is easy to move files into and out of the VM.

Once on the VM, it was easy to use apt-get to install everything else that is needed for Rails 3.

sudo apt-get install mysql-server mysql-client libmysqlclient-dev
sudo gem install rails
sudo gem install mysql2
rails new /vagrant/R3/weblog -d=mysql -J

The -J in the rails command omits Prototype so that jQuery support can be added instead. Note also that the files are created in the /vagrant directory, so they are available for editing directly on the laptop as well.
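To actually wire jQuery in under Rails 3.0, the usual route at the time was the jquery-rails gem; a sketch of the Gemfile change (the gem name is real, but check its README, since the generator invocation has varied between versions):

```ruby
# Gemfile: swap the omitted Prototype support for jQuery
gem 'jquery-rails'

# then, from the application directory:
#   bundle install
#   rails generate jquery:install   # copies the jQuery files into public/javascripts
```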

Then all that is needed to validate that it is working is to start rails up in development mode:

rake db:create:all
script/rails server

This shows the usual startup messages

=> Booting WEBrick
=> Rails 3.0.3 application starting in development on
=> Call with -d to detach 
=> Ctrl-C to shutdown server
[2011-01-13 11:27:27] INFO  WEBrick 1.3.1
[2011-01-13 11:27:27] INFO  ruby 1.8.7 (2010-01-10) [i486-linux]
[2011-01-13 11:27:27] INFO  WEBrick::HTTPServer#start: pid=7784 port=3000

Then using the laptop browser visit and everything should be working correctly. vagrant suspend will pause the VM and vagrant resume will restart it. vagrant halt will cleanly shut the machine down and vagrant up will boot it again, while vagrant destroy will shut down and delete the VM, so that the next vagrant up starts with a blank slate and you can test your provisioning process.

Edited to add: The /vagrant directory is set as the root directory for Apache by the chef_solo recipe, so any files in the /vagrant directory on the running VM are visible at

Understanding Productivity

Posted by Pete McBreen 12 Jan 2011 at 07:47

What investigations would be useful to understand the claims about the productivity of software developers? The existing studies are now old and come from an era when the technology was completely different from that available now, an era when one of the significant studies was into the difference between online and offline programming and debugging. McConnell’s 10X Software Development article refers to a study titled “Exploratory Experimental Studies Comparing Online and Offline Programming Performance.” (emphasis mine):

The original study that found huge variations in individual programming productivity was conducted in the late 1960s by Sackman, Erikson, and Grant (1968). They studied professional programmers with an average of 7 years’ experience and found that the ratio of initial coding time between the best and worst programmers was about 20 to 1; the ratio of debugging times over 25 to 1; of program size 5 to 1; and of program execution speed about 10 to 1. They found no relationship between a programmer’s amount of experience and code quality or productivity.

An interesting point from which to start a modern investigation would be the last observation: they found no relationship between a programmer’s amount of experience and code quality or productivity. As a developer with 27+ years of experience in the field I have a vested interest in that observation being incorrect, but it would be interesting to see whether it can be repeated. One reason for repeating the experiment is that back in 1968 people with more than 10 years of experience would have been exceedingly rare, but now, 40+ years later, it should be easy to find people with 10, 20, 30 or more years of experience and see if there is any trend over the longer term.

Interestingly employers seem to have attached themselves to the idea that productivity is not that related to experience when they ask for team leads with 3 years of experience and consider senior developers to have 5 years of experience.

Other factors to investigate that seem to have some anecdotal evidence suggesting they may affect productivity:

  • Breadth of experience — number of different programming languages that a developer has worked in.
  • Cross paradigm experience — does it make a difference how many different paradigms the developer has worked in?
  • Specific Paradigms — is there a paradigm that makes a difference - are the claims that functional programming improves general programming ability supportable from the data?
  • Specific experience — does it make a difference if a developer has spent a lot of time focused on one particular language? This might seem obvious, but we have been surprised by obvious ideas that turn out not to be true.
  • Similar experience — does it make a difference if the developer has experience in similar but not quite the same languages? Moving between C++, Java and C# could make a developer more aware of the subtle syntax differences between these languages and hence less likely to make mistakes that will slow down development.
  • Toolset experience — does the amount of time working in the toolset matter, independent of the experience with the particular language? There are several multi-language development environments that have been around for enough time for this to be a feasible investigation.
  • Domain experience — does experience in the problem domain make a difference? Employers seem to think so based on job postings, but does the data back the idea up and how significant is the difference?

There are probably more factors that could be investigated, but these will make for a good starting point.

(In)Validating the 10X Productivity Difference Claim

Posted by Pete McBreen 11 Jan 2011 at 09:04

Recently Laurent Bossavit wrote about Fact And Folklore In Software Engineering (link is to the English translation). He pointed out that there are few hard numbers available about the productivity of software developers. Yes, there are plenty of anecdotes, but the studies that were done are old and methodologically weak compared to what we expect of current studies on human performance.

Bossavit then goes on to tackle the claims of 10X productivity

We can now circle back to this widely circulated “fact” of the software profession, according to which “programmer productivity varies by a factor of 10 (or 5, or 20) between the best and worst individuals”. This is a remarkable statement, not least because of its implications: for instance, programmer compensation does not vary accordingly.

Steve McConnell then gets dragged in for his review of the 10X literature. Bossavit claims that the research that has been done is not sufficient to validate the claim for 10X differences.

But “work” is the right term here: tracking down articles and sometimes entire books, some of them out of print, just to scan them for conclusions which turn out not to appear anywhere. It is a sad fact that critical examination of ill-supported assertions takes a lot more time than making the assertions in the first place; this asymmetry accounts for a lot of what is wrong with public perceptions of science in general, and possibly for the current state of our profession.

Although Bossavit’s article can read a bit like a personal attack, the problem of unsupported claims is something I have seen a lot in the “debate” over Global Warming. Indeed, there is now a term for the generation of unsupported claims, the Gish Gallop, which was first used in attacks on Evolution and is now used against Climate Change.

As Bossavit rightly points out, it is much easier to make a claim than it is to refute the claim. Especially when the claim feels right and has had a long history of being accepted as common knowledge.

Steve McConnell replied to the article by Bossavit with another blog entry - Origins of 10X — How Valid is the Underlying Research? in which he revisits the same papers and summarizes with the conclusion that there is Strong Research Support for the 10x Conclusion.

McConnell acknowledges that there could be some methodological weaknesses with the original studies, but states that the body of research that supports the 10x claim is as solid as any research that’s been done in software engineering. Personally I think that falls into the category of damning with faint praise.

But is the difference as large as we think?

Bossavit did raise one point in his article that McConnell did not address - programmer compensation does not vary accordingly.

This is a telling point: if the difference in productivity can be 10X, why is it that salaries rarely fall outside a 2X range for experienced developers? Ignoring the lower starting pay, once a person has 3-5 years in the industry, salaries in North America are of the order of $50,000/year. Apart from a few outlier major cities with crazy cost of living expenses, it is hard to find anyone actively involved in software development (not a manager) who is earning more than $100,000/year.

It could be that the research is old, which was a criticism made of my use of the early studies in my Software Craftsmanship book; after all, it is suspect when a book written in 2000 refers to studies done back in 1972 or even 1988.

Unfortunately there have been no real studies of programmer productivity in the recent era. Yes, there have been lots of claims made for the Agile approaches, but there are no methodologically sound studies that are easily found. True, there may be studies that I cannot find, but I would guess that if any such study had been done the authors would be making money off it by now and it would have become known.

Overall it would seem that the software engineering community does not have any solid evidence to back up the 10X claim using current development tools and techniques. The anecdotal evidence we have suggests that maybe there is a 3X difference between currently practicing software developers, and there may be some outliers who are much better than that, but those individuals are few and far between.

But that is just another unsupported claim. Yes, there is an obvious difference in capability between experienced developers, but there is no easy way to measure it, and the studies that were published on the topic come from research done a long time ago, practically in the prehistory of software development.

All of the above is why I promote the idea of Software Craftsmanship over Software Engineering.