Improving Wetware

Because technology is never the issue

Productivity and Compensation

Posted by Pete McBreen 27 Jan 2011 at 10:23

Steve McConnell responded to (In)Validating the 10X Productivity Difference Claim with 10x Productivity Myths: Where’s the 10x Difference in Compensation?

My overall conclusion is that paying for productivity on any more than a very-rough-approximation basis is a panacea that cannot practically be achieved.

As I’ve commented previously, the discrepancy between capability differences and compensation differences does create opportunities for companies that are willing to hire from the top of the talent pool to receive disproportionately greater levels of output in exchange for only modestly higher compensation.

This is not the answer I expected to find when I began asking the question almost 25 years ago, but I can see the reasons for it. Gerald Weinberg describes a pattern he calls “Things are the way they are because they got that way.” I think this is one of those cases.

To a large degree I agree with McConnell on this: it is hard to measure the productivity of software developers, so corporations tend to just have a narrow range of salaries for developers. The body of his article has good arguments for this, so there is no point repeating them here.

There is however another dimension to this conversation that has not yet been addressed.

Salary is not the only option

McConnell himself exemplifies the alternatives; after all, he is more or less synonymous with Construx and several well-read books like Code Complete. There is also the startup route, and, as McConnell mentions, some developers end up contracting.

So overall the question of salary difference may be moot, except that maybe it means that within a single organization the 10X difference in productivity does not exist. The lower band will probably not pass the hiring filters, and the higher end of the band may self-select out, either by not applying in the first place, or by choosing to leave for greener pastures.

Compensation is not the issue

Personally I’m more interested in Understanding Productivity and what drives it. My bias for explaining productivity is in the area of Software Craftsmanship, but there are many aspects of productivity that have not been explored …

Software Craftsmanship Revisited

Posted by Pete McBreen 20 Jan 2011 at 09:14

I’m not a fan of the manifesto, but have been watching the recent threads stirred by Dan North saying Programming is not a craft.

TL;DR version: Software Craftsmanship risks putting the software at the centre rather than the benefit the software is supposed to deliver, mostly because we are romantics with big egos. Programming is about automating work like crunching data, processing and presenting information, or controlling and automating machines.

Liz Keogh highlighted the key aspect of Software Craftsmanship that I consider crucial: although you can aspire to being a craftsman

… Software Craftsman is a status you should be awarded by someone else.

The reason that the old craft trades focused so much on creating a masterpiece was so that a person could be recognized by their peers as having become a master of their craft. The proof was in what was created, not just in someone saying that they are a craftsman.

Uncle Bob seems to be trying to conflate Agile and Software Craftsmanship, but I still see the two things as distinct. He has also drawn parallels between his Clean Code idea and Software Craftsmanship

Why is there a software craftsmanship movement? What motivated it? What drives it now? One thing; and one thing only.

We are tired of writing crap.

That’s it. The fat lady sang. Good nite Gracy. Over and out.

Again, for me this is too simplistic a view. The idea of Clean Code as an approach is a good start, but Software Craftsmanship goes far beyond the idea of just Clean Code.

Software Craftsmanship requires a complete reappraisal of what it means to develop software. Rather than just stopping writing crap, as Clean Code suggests, Software Craftsmanship asks us to start creating great applications for our users, and to stand behind that code and support it so that users who come to depend on it can trust that it will be available for them to use.

Software Craftsmanship includes the idea of software longevity: code that is maintainable and can be maintained for long periods, so that the investment our users make in learning to use the application and the hours they spend getting their data into it are not lost when a capricious decision is made to abandon the software.

Rails Gotcha on Shared Hosting

Posted by Pete McBreen 18 Jan 2011 at 21:42

In a shared hosting environment, the system does not always have all of the gems you need, but when a gem is installed locally under your own account, Mongrel or Passenger cannot see where the gems are and you get this great error message:

Missing these required gems
    bluecloth

You get this message from Mongrel even after running rake gems:install and having it report that the gem was successfully installed.

The Fix for Missing these required gems

Set the GEM_PATH to point to your local gems as well as the system gems. You cannot just set it in .bashrc, since Apache runs under a different user and never reads your shell startup scripts; hence when Mongrel runs, it does not see the locally installed gems.

The Rails preinitializer.rb that lives in the config directory can set the GEM_PATH and then Mongrel can see the locally installed gems.

The file config/preinitializer.rb needs to contain the following

# Look for gems in the local gem directory as well as the
# system-wide one, then rebuild the cached list of gem paths.
ENV['GEM_PATH'] = '/home/pete/ruby/gems:/usr/lib/ruby/gems/1.8'
Gem.clear_paths

Obviously you will need to replace the path to my local rubygems /home/pete/ruby/gems with your own local gem path.

A hat tip to Damien White for pointing me to this fix. Here’s hoping that this extra link moves his site up in the google listing for the query mongrel “Missing these required gems”

Resizing Images for the Web

Posted by Pete McBreen 15 Jan 2011 at 12:55

After losing count of the number of sites that use the browser to shrink massive images into the size of the thumbnails that the page needs (and waiting forever for these large images to load), it is time to say enough.

Shrinking images is easy, especially from the command line using ImageMagick. It is an open source tool, so there is no excuse not to use it; there are even precompiled binaries available for most platforms.

The most common use case people have for images is small thumbnails that are used as an image link to another page that has the full size image on it, or the page only has space for a small image and the original is too large to fit. Doing this with ImageMagick is easy, even if the original aspect ratio of the image was not right.

convert -thumbnail 200x100^ -gravity center -extent 200x100 original.jpg  thumbnail.jpg

convert is the name of the ImageMagick tool. -thumbnail 200x100^ creates an image that is at least 200 pixels wide and 100 pixels tall while preserving the aspect ratio of the original picture; this means that the intermediate image can be larger than 200x100, but it will not be distorted. The second part of the command, -gravity center -extent 200x100, specifies that the resulting image should only be 200x100 and that it should be cropped from the center of the image. The gravity option can also be any compass direction, with NorthWest being the top left of the image.
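
If the centre is not the part of the image you want to keep, the gravity option just described selects a different region. For example, this variant (my example rather than one from the original post) keeps the top of the image:

convert -thumbnail 200x100^ -gravity North -extent 200x100 original.jpg  thumbnail.jpg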

Processing multiple images is trivially easy: just specify a set of images using a wildcard like *.jpg, and the resulting images will be written out to a numbered set of files using thumb-%d.jpg, giving filenames like thumb-1.jpg, thumb-2.jpg and so on.

convert -thumbnail 200x100^ -gravity center -extent 200x100 *.jpg  thumb-%d.jpg

So no more excuses for distorted or oversized images in web pages.

Setting up a Rails 3 development environment

Posted by Pete McBreen 13 Jan 2011 at 10:52

Setup instructions for using Vagrant, Oracle’s VirtualBox and Rails 3.

Getting Vagrant to install is easy, since it is just a gem, and it only requires the download of Oracle’s VirtualBox, which is a simple OSX package. The bare-bones instructions on the Vagrant site homepage actually work, but forget to tell you that you should do this from a new directory that you want to have shared with the VM.

gem install vagrant
vagrant box add lucid32 http://files.vagrantup.com/lucid32.box
vagrant init

After the vagrant init there is a file called Vagrantfile that is just Ruby code for configuring the VM. Vagrant itself is set up to use Chef or Puppet to configure the VM with all of the software you need, but other than the basic apache install, I used apt-get to do the config.

The following changes to the bottom of the Vagrantfile use Chef Solo to install Apache2 into the VM. They set up both port forwarding, so that localhost:8080 points to Apache inside the VM, and a local network address (192.168.10.200, only visible from within the laptop) so that I can also make a direct http connection to Apache.

config.vm.box = "lucid32"
config.vm.provisioner = :chef_solo
# Grab the cookbooks from the Vagrant files
config.chef.recipe_url = "http://files.vagrantup.com/getting_started/cookbooks.tar.gz"
# Tell chef what recipe to run. In this case, the `vagrant_main` recipe
# does all the magic.
config.chef.add_recipe("vagrant_main")
config.vm.forward_port("http", 80, 8080)
config.vm.network("192.168.10.200")

With that in place the next thing to do is to start it up and then connect to the VM.

vagrant up
vagrant ssh

The last line creates an ssh session to the newly booted Ubuntu VM running inside VirtualBox. There are no passwords or firewalls: the VM is running wide open within OSX (but headless, so ssh is necessary to connect and do anything useful). The /vagrant directory in the VM is shared with the directory that you were in when the init was done, so it is easy to move files into and out of the VM.

Once on the VM, it was easy to use apt-get to install everything else that is needed for Rails 3.

sudo apt-get install mysql-server mysql-client libmysqlclient-dev
sudo gem install rails
sudo gem install mysql2
rails new /vagrant/R3/weblog -d=mysql -J

The -J in the rails command omits prototype so that the jQuery support can be added instead. Note also that the files are created in the /vagrant directory so they are available for editing directly on the laptop as well.

Then all that is needed to validate that it is working is to start rails up in development mode, so

cd /vagrant/R3/weblog

rake db:create:all
script/rails server

This shows the usual startup messages

=> Booting WEBrick
=> Rails 3.0.3 application starting in development on http://0.0.0.0:3000
=> Call with -d to detach 
=> Ctrl-C to shutdown server
[2011-01-13 11:27:27] INFO  WEBrick 1.3.1
[2011-01-13 11:27:27] INFO  ruby 1.8.7 (2010-01-10) [i486-linux]
[2011-01-13 11:27:27] INFO  WEBrick::HTTPServer#start: pid=7784 port=3000

Then using the laptop browser visit http://192.168.10.200:3000/ and everything should be working correctly. vagrant suspend will then pause the VM and vagrant resume will restart it. vagrant halt will cleanly shut the machine down and vagrant up will restart it, while vagrant destroy will shut down and delete the VM, so the next vagrant up restarts with a blank slate and you can test your provisioning process.
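
For quick reference, those lifecycle commands are all standard vagrant subcommands:

vagrant suspend   # pause the VM, preserving its state
vagrant resume    # wake a suspended VM
vagrant halt      # cleanly shut the VM down
vagrant up        # boot the VM again
vagrant destroy   # delete the VM so the next vagrant up starts from a blank slate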

Edited to add: The /vagrant directory is set as the root directory for Apache by the chef_solo recipe, so any files in the /vagrant directory on the running VM are visible at http://192.168.10.200/.

Understanding Productivity

Posted by Pete McBreen 12 Jan 2011 at 07:47

What investigations would be useful to understand the claims about the productivity of software developers? The existing studies are now old and come from an era when the technology was completely different from what is available now, an era when one of the significant studies was into the difference between online and offline programming and debugging. McConnell’s 10X Software Development article refers to a study titled “Exploratory Experimental Studies Comparing Online and Offline Programming Performance” (emphasis mine):

The original study that found huge variations in individual programming productivity was conducted in the late 1960s by Sackman, Erikson, and Grant (1968). They studied professional programmers with an average of 7 years’ experience and found that the ratio of initial coding time between the best and worst programmers was about 20 to 1; the ratio of debugging times over 25 to 1; of program size 5 to 1; and of program execution speed about 10 to 1. They found no relationship between a programmer’s amount of experience and code quality or productivity.

An interesting point at which to start a modern investigation would be the last observation, that they found no relationship between a programmer’s amount of experience and code quality or productivity. As a developer with 27+ years of experience in the field I have a vested interest in that observation being incorrect, but it would be interesting to see if the observation can be repeated. One reason for trying to repeat the experiment is that back in 1968 people with more than 10 years of experience would have been exceedingly rare, but now, 40+ years later, it should be easy to find people who have 10, 20, 30 or more years of experience to see if there is any trend over the longer term.

Interestingly, employers seem to have attached themselves to the idea that productivity is not related to experience, given that they ask for team leads with 3 years of experience and consider senior developers to have 5 years of experience.

Other factors worth investigating, with some anecdotal evidence suggesting they may affect productivity

  • Breadth of experience — number of different programming languages that a developer has worked in.
  • Cross paradigm experience — does it make a difference how many different paradigms the developer has worked in?
  • Specific Paradigms — is there a paradigm that makes a difference - are the claims that functional programming improves general programming ability supportable from the data?
  • Specific experience — does it make a difference if a developer has spent a lot of time focused on one particular language? This might seem obvious, but we have been surprised by obvious ideas that turn out not to be true.
  • Similar experience — does it make a difference if the developer has experience in similar but not quite the same languages? Moving between C++, Java and C# could make a developer more aware of the subtle syntax differences between these languages and hence less likely to make mistakes that will slow down development.
  • Toolset experience — does the amount of time working in the toolset matter, independent of the experience with the particular language? There are several multi-language development environments that have been around for enough time for this to be a feasible investigation.
  • Domain experience — does experience in the problem domain make a difference? Employers seem to think so based on job postings, but does the data back the idea up and how significant is the difference?

There are probably more factors that could be investigated, but these will make for a good starting point.

(In)Validating the 10X Productivity Difference Claim

Posted by Pete McBreen 11 Jan 2011 at 09:04

Recently Laurent Bossavit wrote about Fact And Folklore In Software Engineering (link is to the English translation). He pointed out that there are few hard numbers available about the productivity of software developers. Yes, there are plenty of anecdotes, but the studies that were done are old and methodologically weak compared to what we expect of current studies on human performance.

Bossavit then goes on to tackle the claims of 10X productivity

We can now circle back to this widely circulated “fact” of the software profession, according to which “programmer productivity varies by a factor of 10 (or 5, or 20) between the best and worst individuals”. This is a remarkable statement, not least because of its implications: for instance, programmer compensation does not vary accordingly.

Steve McConnell then gets dragged in for his review of the 10X literature. Bossavit claims that the research that has been done is not sufficient to validate the claim for 10X differences.

But “work” is the right term here: tracking down articles and sometimes entire books, some of them out of print, just to scan them for conclusions which turn out not to appear anywhere. It is a sad fact that critical examination of ill-supported assertions takes a lot more time than making the assertions in the first place; this asymmetry accounts for a lot of what is wrong with public perceptions of science in general, and possibly for the current state of our profession.

Although Bossavit’s article can read a bit like a personal attack, the problem of unsupported claims is something I have seen a lot in the “debate” over Global Warming. Indeed, there is now a term for the generation of unsupported claims, the Gish Gallop, which was first used in attacks on Evolution and is now used against Climate Change.

As Bossavit rightly points out, it is much easier to make a claim than it is to refute the claim. Especially when the claim feels right and has had a long history of being accepted as common knowledge.

Steve McConnell replied to the article by Bossavit with another blog entry - Origins of 10X — How Valid is the Underlying Research? in which he revisits the same papers and summarizes with the conclusion that there is Strong Research Support for the 10x Conclusion.

McConnell acknowledges that there could be some methodological weaknesses with the original studies, but states that the body of research that supports the 10x claim is as solid as any research that’s been done in software engineering. Personally I think that falls into the category of damning with faint praise.

But is the difference as large as we think?

Bossavit did raise one point in his article that McConnell did not address - programmer compensation does not vary accordingly.

This is a telling point: if the difference in productivity can be 10X, why is it that salaries rarely fall outside a 2X range for experienced developers? Ignoring the lower starting pay issues, once a person has 3-5 years in the industry, salaries in North America are of the order of $50,000/year. Apart from a few outlier major cities with crazy cost of living expenses, it is hard to find anyone actively involved in software development (not a manager) who is earning more than $100,000/year.

It could be that the research is old - a criticism made of my use of the early studies in my Software Craftsmanship book; after all, it is suspect when a book written in 2000 refers to studies done back in 1972 or even 1988.

Unfortunately there have been no real studies of programmer productivity in the recent era. Yes, there have been lots of claims made for the Agile approaches, but there are no methodologically sound studies that are easily found. True, there may have been studies that I cannot find, but I would guess that if any such study had been done then the authors would be making money off it by now and it would have become known.

Overall it would seem that the software engineering community does not have any solid evidence to back up the 10X claim using current development tools and techniques. The anecdotal evidence we have would suggest that maybe there is a 3X difference between currently practicing software developers, and there may be some outliers who are much better than that, but those individuals are few and far between.

But that is just another unsupported claim. Yes, there is an obvious difference in capability between experienced developers, but there is no easy way to measure it, and what studies were published on the topic were from research done a long time ago, practically in the prehistory of software development.

All of the above is why I promote the idea of Software Craftsmanship over Software Engineering.

The iPad does not appear to be a social medium

Posted by Pete McBreen 10 Jan 2011 at 10:14

The web is inherently a social medium because it is easy to share URLs. The common name for this is that sites become Slashdotted, named after the popular site Slashdot, which was the first site where posted links could generate massive traffic.

Website traffic is social in that while it has the usual day/night differences in traffic that follows a relatively predictable curve, if an article becomes popular, traffic can spike very rapidly to previously unseen levels. A site that typically gets 1 million page views/day may find that suddenly the usual daytime peak of 100,000 page views/hour (less than 30/second) has suddenly spiked to over 500/second (which if sustained would be 1.8 million page views in the hour). All it needs is an influential site to link to the smaller site, or for lots of different social sites to jump on the link.
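
As a quick back-of-the-envelope check of those numbers (a sketch using the illustrative figures above):

# ~28 page views/second corresponds to the normal 100,000/hour peak
100000 / 3600.0   # => 27.8
# 500 page views/second sustained for an hour is 1.8 million page views
500 * 3600        # => 1800000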

In contrast, iPad and similar applications do not exhibit this type of traffic. Partly this is because the apps need to be installed, but also because the apps do not lend themselves to the social sharing of content. Yes, most apps have a way of sending out a URL, but that just feeds the social web, it does not add to the traffic on the servers feeding the application.

The nice thing about this is that it makes it easy to size the servers that the application uses. It also makes me think that the application developers are missing an opportunity …

Best in class efficiency — because we didn't import the fuel efficient vehicles

Posted by Pete McBreen 07 Jan 2011 at 16:32

Recently I had to replace a Volkswagen TDI Golf (after 300,000km it was well used), but was appalled at the lack of improvement in fuel efficiency over the past 10+ years.

Overall I averaged 5.1 l/100km in the TDI, normally managing 1000km between 51 liter fill-ups. In Canada the Ford Fiesta is advertised as Best in class fuel efficiency. Well it might be, but only because nobody seems to be importing the really fuel efficient cars. Based on the Canadian figures, the Fiesta will probably end up somewhere around 6.0 to 6.5l/100km. On the European figures, it is listed as 5.9l/100km for the 1.6L 120 HP version - the only engine spec that is available in Canada.

Read this and weep

The 1.6 Duratorq TDCi ECO version of the same vehicle that is NOT available in Canada gets 3.7l/100km and still pumps out 90HP; there is another version listed at 95HP that gets similar fuel efficiency. For people who do not like diesel, there is a 1.25L version that still does 5.5l/100km, and another 1.25L petrol engine with 85HP that does 5.6l/100km.

Canadian figures for the Fiesta are 7.1 city, 5.3 highway. There is supposedly going to be an ECO version out later, but for now an average that we might be able to expect is 6.2 l/100km.

Current vehicle

After much looking around I ended up with a Honda Fit (Jazz in Europe). It claims 7.2 city, 5.7 highway for a combined 6.4, but in practice I’m averaging 6.6l/100km, more than 2l/100km worse than I would be if I could have got one of the fuel efficient cars that are available in Europe.

A new TDI Golf was not on the cards since it is only available in the high “comfortline” spec for CDN$28,000, and it is not very fuel efficient, as the version available in Canada is 140HP: 6.7l/100km city, 4.6l/100km highway for a combined 5.65l/100km. So in 10 years the car has gained more power and worse fuel economy than the previous model.

Time to keep on watching the CO2 level.

jQuery for easier static site maintenance

Posted by Pete McBreen 05 Jan 2011 at 14:49

The jQuery Javascript library has a neat feature that makes the maintenance of static sites a lot easier. It is possible to load a different page into a div (or any other defined area on a page).

The syntax for this is relatively simple

<div id="footer">
    <p>Page requires javascript to load the footer links into this area.</p>
</div>
<script src="scripts/jquery-1.4.4.min.js" type="text/javascript" ></script>
<script type="text/javascript">
    $('#footer').load('index.html #footer ul.menu');
</script>

This solves the problem of keeping the footer identical on all pages without having to edit all 20+ pages in a static site whenever it changes. Sure, it would be a whole lot easier to just use a dynamic site and include whatever code was needed in the page, but some sites are still made out of static (X)HTML so this is a neat fix.

The $('#footer').load('index.html #footer ul.menu'); line is the key one: it loads the index.html page, extracts the content matching the CSS selector #footer ul.menu, and replaces the contents of the footer div on the current page with that content from the index page.

Yes, the obvious complaint is that it slows down the page load time, but for most static sites this is less of an issue than the maintenance hassle of ensuring that every page is updated whenever a change occurs. It also has the side effect of cutting down the total size of the pages for sites that have lots of boilerplate code in the footer or sidebars.

For completeness I should also show the footer from the index page

<div id="footer">
    <ul class="menu">
        <li class="menulinks">
            <div><a title="Home" href="index.html">Home</a></div>
        </li>
        ... lots of other links missed off here
    </ul>
</div>

Yes, you can outsource too much

Posted by Pete McBreen 05 Jan 2011 at 09:01

Interesting article in the Sloan Review on outsourcing too much. Admittedly it is about the design process in the car world, and it is short on details, but the overall implications are clear.

It seems that the business world is now waking up to the fact that it is overall system performance that matters, not just local optimization of a single function or module. The problem seems to be that as you Separate the Design from the Implementation, the local knowledge you lose costs much more than the small gain you make in the financial efficiency of the outsourcing.

No, I don't agree with the manifesto

Posted by Pete McBreen 29 Dec 2010 at 13:41

Having just been reminded that there is a Manifesto for Software Craftsmanship, I have to point out that although I wrote the Software Craftsmanship book, I have nothing to do with the manifesto.

Part of my disagreement with it is that it is a not very well disguised take-off of the Agile Manifesto. I can leave aside the fact that they did not even keep the sequence the same, but to suggest that Software Craftsmanship is something beyond Agile is taking the idea to places where I do not think it belongs. Craftsmanship is about our individual relationship to the work that we do; it is not tied to a particular way of doing the work.

For me, Software Craftsmanship is about putting the individual back into the activity of delivering software. I have no interest at all in a community of professionals, the passionate amateur is much more likely to create interesting and valuable software. Professionals are too serious, amateurs get the idea that Software development is meant to be fun. One now very famous amateur has since written about something being Just For Fun.

In part my book was a rant against Software Engineering, mainly because several institutions were trying to take ideas from mechanical engineering and manufacturing practices and apply them to software development. But it was also a rant against the idea of professionalism. Rather than try to emulate the buttoned-down professionalism that kills productivity and creativity, I wanted software development to become more skill and experience based. Yes, there are some practices that help in certain circumstances, but not in all. The professionals who spout about Best Practices and Certification do us all a disservice, since they lock us into things that worked one time in one circumstance.

In the end, Software Craftsmanship is about producing great software. In the old traditions of craftsmanship, to be accepted a journeyman had to produce a masterpiece. Something that their fellow craftsmen would acknowledge as being worthy of the craft. For me, this is what Software Craftsmanship means, the ability to create Great Software and have fun while doing so.

Introductory Programming Texts Need Work

Posted by Pete McBreen 10 Dec 2010 at 21:55

Had a need to look at a Python textbook recently and was confused several times by examples that were needlessly obfuscated (slightly paraphrased code):

>>> def actions():
...     acts = []
...     for i in range(5):
...         acts.append(lambda x, i=i: i **x)
...     return acts
... 
>>> acts = actions() 
>>> acts[3](2)
9
  1. For a newbie, it would not be obvious that the acts inside the function is NOT the same variable as the acts outside of the function.
  2. The i=i is designed to be confusing: the first i is a new name in the scope of the lambda, while the second is the variable from the for loop. The first should have had a different name to make it obvious what was happening (a clearer naming is sketched after this list).
  3. The name of the function actions just does not communicate anything.
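
A less obfuscated version of the same example might look like the following sketch; the names here are mine, not the textbook’s:

>>> def make_powers(count):
...     """Return functions that each raise a fixed base to a given power."""
...     funcs = []
...     for base in range(count):
...         # base=base freezes the current loop value; without the default
...         # argument, every lambda would see only the final value of base.
...         funcs.append(lambda exponent, base=base: base ** exponent)
...     return funcs
... 
>>> powers = make_powers(5)
>>> powers[3](2)   # 3 ** 2
9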

I’m going to have to think on this more, but maybe my next project is going to involve creating some material for newbies.

Very interesting Chemist blog

Posted by Pete McBreen 16 Nov 2010 at 10:30

Lots of interesting articles about the life of a chemist. Particularly interesting is the series of posts about Things I Won’t Work With.

Tetrazole derivatives have featured several times here in “Things I Won’t Work With”, which might give you the impression that they’re invariably explosive. Not so - most of them are perfectly reasonable things. A tetrazole-for-carboxyl switch is one of the standard med-chem tricks, standard enough to have appeared in several marketed drugs. And that should be recommendation enough, since the FDA takes a dim view of exploding pharmaceuticals (nitroglycerine notwithstanding; that one was grandfathered in). No, tetrazoles are good citizens. Most of the time.

Well, the authors prepared a whole series of salts of the parent compound, using the bangiest counterions they could think of. And it makes for quite a party tray: the resulting compounds range from the merely explosive (the guanidinium salts) to the very explosive indeed (the ammonium and hydroxyammonium ones). They checked out the thermal stabilities with a differential scanning calorimeter (DSC), and the latter two blew up so violently that they ruptured the pans on the apparatus while testing 1.5 milligrams of sample. No, I’m going to have to reluctantly give this class of compounds a miss, as appealing as they do sound.

Global warming humor

Posted by Pete McBreen 14 Nov 2010 at 19:22

Playing Devil’s advocate to win

Approximate calculations

Posted by Pete McBreen 25 Oct 2010 at 21:26

Sometimes we do not need to be exact in order to understand what is feasible; sometimes ballpark estimates and back-of-the-envelope calculations are sufficient.

For a set of calculations and supporting data about climate and energy issues, look at the Without Hot Air site, which has the full PDF of a book that is full of such calculations and approximate data, allowing the ethical considerations to be addressed once we understand the physical constraints.

Redefining the conversation

Posted by Pete McBreen 24 Oct 2010 at 17:56

Introducing Climate Hawks as an alternative way of framing the conversation.

it becomes about values, about how hard to fight and how much to sacrifice to defend [the] future.

Affordances in Unit Testing

Posted by Pete McBreen 07 Oct 2010 at 22:22

PHPUnit has the concept of Incomplete and Skipped Tests, something that is missing in the Ruby TestUnit framework. Yes, technically TestUnit has flunk(), but that is not the same: TestUnit only reports failures and errors, whereas PHPUnit reports failures, errors, skipped and incomplete tests.

In Ruby, the idea of skipped and incomplete tests is not really supported, but since much of the Ruby world practices Test Driven Development, it is not really necessary. From what I have seen of the PHP world, Test Driven Development is not normally practiced, so it is useful to have the ability to mark tests as skipped or incomplete, and have PHPUnit report the test suite as passing with a number of skipped or incomplete tests. It gives a visual reminder that more testing is needed, but it does not cause the test suite to report an error, so the next stage in the deployment process can continue without manual intervention (assuming that there are no failures or errors from the rest of the test suite).
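
As a sketch of what those affordances look like in PHPUnit (the test names and the LDAP condition are my own invention, but markTestIncomplete() and markTestSkipped() are the real PHPUnit methods):

<?php
class SomethingTest extends PHPUnit_Framework_TestCase
{
    public function testNewFeature()
    {
        // Reported as Incomplete rather than as a failure, so the
        // suite still passes while flagging the missing work.
        $this->markTestIncomplete('This test has not been implemented yet.');
    }

    public function testLdapAuthentication()
    {
        if (!extension_loaded('ldap')) {
            // Reported as Skipped when the environment cannot run the test.
            $this->markTestSkipped('The LDAP extension is not available.');
        }
        // ... real assertions would go here ...
    }
}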

Inmates are running the Asylum?

Posted by Pete McBreen 27 Sep 2010 at 21:09

It has been a while since I read The Inmates Are Running The Asylum, but I was still intrigued to see that Zed has an article on Products For People Who Make Products For People.

While it is true that Alan Cooper blames the programmer for the appalling state of many (most?) applications, there is a disconnect between how I read the Inmates book and the way that Zed reports it. We agree that programmers do not really have much control over the way that applications are designed, since the “Product Manager” types in organizations control that these days, but in my view programmers still have a lot of influence.

The problem as I see it is that most programmers build applications with the toolkit that is ready to hand, as opposed to taking the time to create a better toolkit for the job. So when it comes to designing the UI, even if an interaction designer has come up with a great idea for a widget, and the product manager wants to use the widget, the programmers influence the outcome by claiming that it will be hard to do with the existing toolset. So what gets built in most cases is what can easily be achieved with the toolset.

Zed lives at the opposite extreme, having written both Mongrel and Mongrel2, so his experience is totally different - if the tools don’t do what they need to do, Zed writes a new tool. The rise of open source proves that Zed is not alone, and there has been a massive improvement in the usability of software being written by the better programmers, so things are changing.

The long-bearded programmers are coming into their own, and designers like Alan are going to have to adjust their expectations, because experienced programmers can do a good job with design, so the interaction designers are going to have to earn their money in the future.

Not All Error Messages Are Informative

Posted by Pete McBreen 13 Aug 2010 at 18:17

I ran across a really annoying error message from Adobe Acrobat in a corporate setting this week. It was refusing to print anything and the explanation message it gave was

there were no pages selected to print

Nothing much out there on the net was helpful in fixing this, but eventually I discovered that the print queue I was using had been deleted from the print server. So technically there were no pages selected to print because Acrobat could not find the printer so did not know what it was supposed to be printing on.

It would have been much easier and faster to resolve this issue if the developers had considered this as a possible failure mode and, rather than putting out the not very descriptive “no pages selected to print”, had reported the fact that Acrobat could not contact the printer because it no longer existed on the target print server.

So if I ever see this again I’m going to confirm that the printer queue still exists before trying anything else. The fix in the end was trivial: add the printer again under its new name and delete the old printer.