Improving Wetware

Because technology is never the issue

Seeking Simpler Explanations

Posted by Pete McBreen Wed, 03 Mar 2010 04:25:00 GMT

Yes, there is a fancy name for simpler explanations - Occam’s Razor - or Ockham if you prefer the old spelling, but I prefer to use plain English.

A common problem for beginners in software development is that when an error occurs, they manage to convince themselves that they have found an error in the compiler or computer. Yes, sometimes this actually happens, compilers do have errors, but a much simpler, and more likely, explanation is that they have made a normal beginner mistake in their code. Of the two, it makes more sense to investigate the simpler cause first, and only then, if no problems can be found, is it worthwhile investigating alternate explanations. Most of the time the simple explanation is the one to go with. If a program suddenly starts failing, and there have been some recent edits to the code, then the simplest explanation is that the error was surfaced by those changes, so that is the best place to start looking.

Climate science as a good example of simpler explanations

One explanation of the problem of climate change and rising CO2 levels is that there has been a conspiracy of scientists to raise the specter of anthropogenic global warming so that they get fame and fortune.

A simpler explanation is that the conspiracy is on the other side: that some large corporations with vested interests are leading a campaign to convince the general public that there is nothing strange going on with the climate.

One way of testing which of these is the more likely explanation is to look at past behavior. Can we find any evidence of scientists acting in concert to deceive anyone? No, sorry, nothing there. Sure, there have been cases where an individual scientist or group of scientists has been enthusiastic about an idea that turned out to be incorrect, but these cases have never lasted for long, and even early on there was the normal skepticism of other scientists asking questions.

Looking to the other side, can we find any evidence of corporations acting in concert to deceive people? Yes, several times, often with significant deleterious effects on people and the environment. Car and oil companies managed to keep lead in petrol for a long time after low levels of lead exposure were known to harm humans. Lead was only removed when catalytic converters were required to reduce smog and the lead was poisoning the catalytic converters.

Another example: early on, the car companies bought up and dismantled many of the electric trolley companies, forcing people to buy cars in order to get around in cities. Very few cities have effective light rail transit these days, even though in the 1930s most major cities had these electric trolley lines. San Francisco is one of the few cities that still has remnants of the old system running.

Another example is the tobacco industry, which managed to spread enough doubt about the effects of smoking that for over forty years there was insufficient effort put into preventing people from becoming addicted to the nicotine in cigarettes. The end result was a massive lawsuit and damages awarded against the industry, but even now, the public attitude is such that the tobacco companies can still sell very addictive substances and keep on addicting new generations of customers (aka addicts).

With these examples, the simplest explanation of the public debate over global warming is that there is a conspiracy among the major corporations with a vested interest in the coal and oil sectors to spread doubt and uncertainty. Every year the doubt persists, the corporations generate billions in profit. Following the money is always a simpler explanation.

How Can We Detect Slow Changes?

Posted by Pete McBreen Mon, 08 Feb 2010 01:26:00 GMT

Sometimes it seems that while we were not looking, things changed.

Not too many years ago -

  • Hardware was the largest part of any software project budget. Now, unless you are working at a massive scale, the cost of the computing hardware is a rounding error on the bottom line.
  • Scripting languages were too slow for use on real projects, but the web has well and truly demonstrated that this is false.
  • JavaScript was only used for annoying, irritating effects on web pages, but now AJAX and Web 2.0 have brought drag and drop functionality to browser applications (admittedly not everyone is using these capabilities, but they exist).

Not too sure how this is happening, but it seems that when we first learn about something, those ideas stick and it is hard to change what we know to match the current reality. When I started commercial software development, it was common to build systems on a PDP-11 with under 512KB of RAM. These days a laptop comes with at least 2GB of RAM, an increase in main memory by a factor of roughly 4,000, but sometimes I still catch myself trying to save a few bytes when designing some aspect of a system.
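
As a quick sanity check on that factor, here is the back-of-the-envelope arithmetic (a minimal sketch using only the numbers mentioned above, nothing specific to any particular machine):

    # Rough check of the memory growth mentioned above.
    pdp11_ram_kb = 512                 # under 512KB on the old PDP-11
    laptop_ram_kb = 2 * 1024 * 1024    # 2GB expressed in KB
    print(laptop_ram_kb // pdp11_ram_kb)  # 4096, roughly the factor of 4,000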

The open question for now is how to detect this type of slow change (even if the pace of technological change is not all that slow compared to other changes). This is an important question because many societies and groups have been hit by surprises that in hindsight are obvious, and the consequences were catastrophic:

  • When cutting down trees in an area, when does the population realize that there is a serious problem with deforestation?
  • When does a drought become a climate shift that means the area is no longer amenable to the current mode of agriculture?
  • When does the exploitation of fish in a fishery result in the collapse of the stocks in that fishery?

On the technology side, when do the desktop application developers get overtaken by web applications running in a browser? Functionality-wise, we can deliver nearly equivalent functionality over the web provided we have the bandwidth, so maybe it is time to recreate departmental applications as web applications?

Good process vs. Bad process

Posted by Pete McBreen Sun, 17 Jan 2010 19:08:00 GMT

Interesting slide set on SlideShare about the Netflix company culture. The process slide is number 61 - not quite figured out how to link directly to that slide - and the following slides…

Lesson: You don’t need detailed policies for everything

An interesting Python project

Posted by Pete McBreen Thu, 14 Jan 2010 06:12:00 GMT

After all the fun and games in the press over the climate models, some developers decided to rewrite the climate models in Python. So far their graphs seem to track the original Fortran code pretty well, but these are early days in this implementation of the code.

Looks like I’m going to have to update my Python installation as it is too old to run their code… I’m back at 2.5.1 and the code needs 2.5.4.
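
As a minimal sketch of the kind of guard that saves some head scratching, a script can refuse to run on an interpreter older than it needs (the 2.5.4 figure below is just the requirement mentioned above):

    import sys

    # Bail out early if the interpreter is older than the version the code needs.
    REQUIRED = (2, 5, 4)
    if sys.version_info[:3] < REQUIRED:
        sys.exit("Python %d.%d.%d or later required, found %s" %
                 (REQUIRED + (sys.version.split()[0],)))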

Just because Zed is so awesome

Posted by Pete McBreen Thu, 14 Jan 2010 02:48:00 GMT

One of Zed’s earlier rants about why Programmers Need To Learn Statistics.

Finally, you should check out the R Project for the programming language used in this article. It is a great language for this, with some of the best plotting abilities in the world. Learning to use R will help you also learn statistics better.

Still Questioning Extreme Programming

Posted by Pete McBreen Sun, 03 Jan 2010 22:37:00 GMT

After reading Mark Wilden’s tale of Why he doesn’t like Pair Programming I have spent some time reconsidering my Questions about Extreme Programming.

In the book I let Pair Programming off lightly, not fully addressing the dark side of pair programming that Mark describes. Yes, chapter 9 is called Working at this intensity is hard, but I did not point out that after a full day of pair programming most developers are not in a fit state to do more work. XP requires a 40-hour work week so that the developers can recover from the pair programming.

Other problems I have noticed with Pair Programming

  • Exploration of ideas is not encouraged; pairing makes a developer focus on writing the code, so unless there is time in the day for solo exploration the team gets a very superficial level of understanding of the code.
  • Developers can come to rely too much on the unit tests, assuming that if the tests pass then the code is OK. (This follows on from the lack of exploration.)
  • Corner cases and edge cases are not investigated in detail, especially if they are hard to write tests for (see the sketch after this list).
  • Code that requires detailed thinking about the design is hard to write when pairing unless one partner completely dominates the session. With the usual tradeoff between partners, it is hard to build technically complex designs unless they have already been worked out in a solo session.
  • Personal styles matter when pairing, and not all pairings are as productive as others.
  • Pairs with different typing skills and proficiencies often result in the better typist doing all of the coding with the other partner being purely passive.
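
To make the over-reliance on passing tests concrete, here is a minimal sketch (the median function and its single test are hypothetical, invented purely for illustration): the suite passes, so the pair moves on, yet the obvious edge cases were never explored.

    # Hypothetical example: a passing test that quietly misses the edge cases.
    def median(values):
        ordered = sorted(values)
        return ordered[len(ordered) // 2]   # wrong for even-length lists, crashes on []

    def test_median():
        assert median([3, 1, 2]) == 2       # the only case anyone thought to write

    test_median()  # passes, so the code looks OK
    # Yet median([1, 2, 3, 4]) returns 3 rather than 2.5, and median([]) raises IndexError.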

Having said all of that, pairing is still a good way to show someone else around a codebase, and pairing is definitely useful when debugging code - the second pair of eyes definitely helps.

Questions about Extreme Programming that are still open

Overall I still see XP as a fairly niche methodology, as there are few projects that it is really suited for. The constraints of XP are fairly rigid, and although many projects have tried to replace the onsite customer with systems analysts, the decision cycle is definitely longer when the analyst is in the loop.

The key problem that XP projects face is that there is no real compelling case for using XP. Yes, some developers like it, but more for the unit testing than for any other part, and the testing aspects can be replicated in any other approach to software development. I still think that the most useful parts of XP can be applied in other contexts.

Overall, although my questioning of XP was not well received at the time, I think it has stood the test of time well in that eight years on, XP is approaching the status of a footnote in software development history. Yes, it helped to motivate the Agile Alliance, but these days I see more projects trying Scrum than XP, and while some of the XP practices are here to stay, it is hard to point to any software that was developed using XP and state that it will stand the test of time.

Yes, many of the tools that supported XP practices will stand the test of time, but most (all?) were not developed as part of an XP project; instead, they were solo projects undertaken to assist an XP team in one way or another.

In summary, although Questioning Extreme Programming is now outdated in that it refers to an earlier incarnation of XP, I still stand behind the claim that while it was an interesting approach to software development, it is not applicable to many projects, and the current versions of XP have similar problems. The ultra-lightweight approach to software development that tries to put developers first does not work all that well in a corporate or entrepreneurial setting.

Make Lists. Not Too Much. Mostly Do.

Posted by Pete McBreen Sat, 12 Dec 2009 17:59:00 GMT

Michael Pollan’s Eater’s Manifesto - Eat food. Not too much. Mostly plants. - has spawned another that looks at time management - Make lists. Not too much. Mostly do.

Time management is another totally overdone subject; wouldn’t it be great to have a similar credo to simplify all this hackneyed advice on to do lists, productivity, time management systems, and the like? Then, sent from the productivity heavens, it came to me:

Make lists. Not too much. Mostly do.

Nice parallels to software development here: planning is useful, but the key thing is actually creating the software.

An amusing speculation about the Waterfall

Posted by Pete McBreen Fri, 11 Dec 2009 04:36:00 GMT

Tarmo Toikkanen has an interesting speculation about Why people still believe in the Waterfall model, putting the blame on the Royce paper that was trying to say that waterfall was not the way to do software development.

OK, so why do people still advocate the waterfall? If you look at the scientific articles on software engineering that discuss the waterfall, they all cite Royce’s article. In other words, they’re saying something like “The waterfall is a proven method (Royce, 1970).” So they base their claims on an article that actually says the opposite: that the model does not work.

Tarmo was not the first to run across this idea, but his interpretation of the problem is different.

Don’t draw figures or diagrams of wrong models, because people will remember them. And in the worst case that could cause hundreds of thousands of failed projects and billions of euros and dollars wasted for nothing.

Other people have written about The Waterfall Accident and Waterfall Model; not what was intended.

Time to move beyond Agile?

Posted by Pete McBreen Mon, 30 Nov 2009 00:35:00 GMT

Just found a very interesting post from Luke Halliwell on The Agile Disease that looks at the fit between Scrum and the game industry, but many of his points are relevant to other development domains.

[Agile] was designed by consultants, aimed squarely at the world of in-house enterprise IT: a world where mediocre developers work for large corporations that don’t understand software development, but can afford to buy in expensive consultants to “save” their runaway projects.

Having daily stand-up meetings is ludicrous; it exists simply to protect against the dysfunction of team members that never talk to one another. … In anything resembling a normal, common-sense team, people will surely raise blockages with their manager as soon as they occur and not need to wait for a daily meeting!

After that rant Luke went on to describe what worked in his field, and even had a post after attending a Scrum Master course.

Software development as a scientific activity

Posted by Pete McBreen Mon, 23 Nov 2009 06:21:00 GMT

Given that I do not agree with the characterization of software development as software engineering, it was somewhat of a surprise to find that there are a lot of parallels between software development and science.

Debugging and testing are probably the most scientific activities, in that developers have to make guesses about what is happening and then devise experiments to prove those guesses wrong. Developers also have to make predictions about how the systems they are developing will behave, and then defend those predictions against speculations made by uninformed observers and, occasionally, against misinformation conveyed by financially interested parties.
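
As a small illustration of that guess-and-experiment loop (the parse_price function is hypothetical, made up just for this sketch): state the hypothesis, then run the cheapest experiment that could prove it wrong.

    # Hypothesis: parse_price only fails when the price uses a comma as the
    # decimal separator. Experiment: feed it both separators and see which breaks.
    def parse_price(text):
        return float(text.replace("$", ""))

    for sample in ["$19.99", "$19,99"]:
        try:
            print(sample, "->", parse_price(sample))
        except ValueError as err:
            print(sample, "-> failed:", err)
    # If only "$19,99" fails, the hypothesis survives the experiment; if both
    # fail, the guess was wrong and a new hypothesis is needed.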

One conclusion I could draw from this is that the politics of software development are very similar to the politics of science. Practitioners try to pretend that there is no politics involved because it is all perfectly rational and understandable, but because people are involved it is all about politics. As soon as we start making predictions, how we interpret those possible futures has a big effect on the actions we might take. This then becomes the realm of politics, and it is that part that many software developers forget about (and many scientists as well) - Politics Matters.

We probably could have saved ourselves, but we were too damned lazy to try very hard … and too damned cheap. – Kurt Vonnegut

To celebrate the politics of science I’ve included a graphic reminder in the sidebar that small changes over a long period can result in a very interesting future.