Posted by
Pete McBreen
04 Feb 2011 at 20:51
Although lots of software is used in studying climate change, for example a python re-implementation of a climate model, there are many issues in climate change that relate to software development.
Parallels between Climate Change and Software Development
Things change slowly and then suddenly things are different
An extra 2ppm of CO2 might not seem much, but over a working lifetime, an extra 60-70ppm has changed lots of things. In software development Moore’s Law was slowly making computers faster and cheaper. Suddenly things changed when we realized that hardware was no longer an expensive item in projects. When I started software development, the developer costs on a project were swamped by the cost of the hardware. Now using cloud machines, the hardware costs of a project can be less than the cost of gas to commute to the office.
What we are sure we know is not necessarily true
In climate change, the distinction between weather and worldwide climate is not well understood. It is also hard to see how a 2C change could matter that much when locally the weather can range over 60C from the depths of winter to the height of summer. In software development, it historically really mattered that the code was as efficient as possible because the machines were slow. So “everyone knows” that scripting languages are not useful for large scale development and that enterprise systems must be built in traditional languages, yet most web companies are using some form of scripting language, or one of the newer functional languages.
Fear, Uncertainty and Doubt
Software development probably led on this one. IBM was justifiably famous for using FUD to ensure that customers followed IBM’s lead and did what they were told. IBM had the market and customers were supposed to do what IBM told them was good for them. With Climate Change, large organizations that will have to change to preserve the climate that is suitable for our current civilization are spreading as much doubt as possible to delay the realization that business as usual is no longer possible. In Software Development, the threat of Open Source is currently the target of lots of FUD, and large corporations that are seeing changes on the horizon are doing all they can to preserve their business as usual.
Nobody wants to listen to the people who know what is going on
Software Developers are used to being overruled by people who do not really know what is going on. Sure sometimes it is genuinely a business decision, but often the business people are busy making technical decisions that they are not competent to make. In Climate Change, the scientists doing the work are being challenged by the political decision makers who do not have a clue about science. Realistically the political decision makers should accept the science and then start talking about the political choices that we need to make as a result of that science.
The results really matter
There are two things that our civilization depends on: working software and a livable, stable climate. The news is occasionally enlivened by the story of a major software project that costs hundreds of millions of dollars and fails to deliver the expected benefits, but the smaller day to day losses from smaller failures stay hidden. Similarly, the big storms that are increasing in frequency and severity due to climate change make headlines, but the smaller impacts are never visible in the news.
Making sense of the parallels
Still working on this one, but I have a sense that the political power structures have a big impact. The techno geeks that do software or science are not part of the larger political power structures that govern our corporate dominated societies. As such the techno geeks are marginalized and can be safely ignored … at least for now … obligatory link to Tim Bray - Doing it Wrong.
Posted by
Pete McBreen
03 Feb 2011 at 10:45
When trying to test out some legacy routes I got a really obscure error from rake while running the following test case:
test "legacy routing OK" do
assert_recognizes {:controller => 'index', :action => 'contact'}, '/contact.html'
end
All I was trying to do was make sure that any URLs saved in search engines from the old static site still worked in the new Rails based site, so no issue, just use assert_recognizes and that will make sure the routes are protected by test cases. It is not possible to use assert_routing since I did not want the new code to generate the .html style URLs. Anyway, this is what rake complained about:
/usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:147:in
`load_without_new_constant_marking': ./test/functional/index_controller_test.rb:25:
syntax error, unexpected tASSOC, expecting '}' (SyntaxError)
assert_recognizes {:controller => 'index', :action => 'contact'}, '/contact.html'
./test/functional/index_controller_test.rb:25: syntax error, unexpected ',', expecting '}'
assert_recognizes {:controller => 'index', :action => 'contact'}, '/contact.html'
It turns out that although the code documentation uses the normal Rails convention of omitting the () around the method arguments, you cannot do that if the first parameter to a method is a hash, because Ruby parses the opening { as the start of a block rather than a hash literal. So the fix is to put in the parentheses and the test runs as expected.
test "legacy routing OK" do
assert_recognizes( {:controller => 'index', :action => 'contact'}, '/contact.html' )
end
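The same gotcha shows up in plain Ruby, outside of Rails entirely. A minimal illustration (my own example, not from the original test suite):

def show(options, path)
  puts "#{options.inspect} => #{path}"
end

# show {:a => 1}, '/x'    # SyntaxError: Ruby parses the { as a block
show({:a => 1}, '/x')     # fine: the parentheses make the hash unambiguous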
Posted by
Pete McBreen
01 Feb 2011 at 16:31
Rails can be for fun as well. Rather than going the usual Wordpress/Joomla or PHP route for my local Cochrane Red Rock Running and Tri Club website, I decided to have some fun with Rails and jQuery. Nothing really fancy but it does what it needs to without anyone having to learn about the joys of a CMS control panel.
Overall the site took less than 10 hours of spare time, time that would easily have been eaten up by the usual requests for updates to a static site. There are still a few static parts to the site, but those are ones that do not change frequently, so they are not a big deal to maintain.
Edited Dec-2018: the site is now retired. It had a long run but has been replaced by a WordPress site maintained by someone else. Over the years I spent about 40 hours maintaining the site, so not too bad, and there were zero breaches/security issues in that time.
Posted by
Pete McBreen
31 Jan 2011 at 09:02
Gotta smile at this one, but recently I was referred to an article on Kanban for Software Development and my immediate thought was: but Kanban is all about pull… I could also make a comment that they seem to have re-invented the linear waterfall process, but I will not go there.
All of the diagrams in that article show the scrum style backlog and input queue at the start of the process. Is that not the very definition of PUSH? Admittedly I last looked in detail at Kanban in the early 1990s (not much call for Manufacturing Systems experience where I currently live), but I’m sure that Kanban did not get redefined in the intervening period to be a PUSH system; the whole point of Kanban is that it PULLS work through the system in response to customer demand.
I was hoping that the Limited WIP Society, which claims to be The home of Kanban Systems for Software Engineering would correct the mistakes in the article, but all it seems to be is a place that links to a lot of different blogs. The overall consensus seems to be that Kanban for Software Development is just Scrum with limits between stages rather than the normal scrum which just limits the work per iteration.
Once you put limits between stages, interesting things happen, so there are lots of things to address and Goldratt’s Theory of Constraints seems to get thrown into the mix. But overall it sure looks like Scrum.
Understanding Kanban in a Manufacturing context
Kanban originally addressed the problem of too many old rusting parts scattered about the factory floor, with nothing much of value being shipped. In manufacturing, context switching incurs setup costs, so it pays to run large batches, but that means that you might take forever to get to making the one small volume part that is vital to being able to ship the final assembled goods.
So rather than allowing any work station to produce too much of any one part, the simple rule is implemented that a station cannot produce anything until there is a demand for the item from one of its consumers. So suddenly the performance metric goes from how much utilization you got from that expensive machine to whether you were able to satisfy all your consumer demands in a timely manner.
This works because kanban was set up for batch production of standardized items. You know how many exhaust valves you need per engine, and there are whole systems dedicated to working out the full list of parts that go to make a completed engine (parts explosion is a fun problem when looking at customer orders for an assembly plant). For every part there is a standard time to manufacture the item, and the setup times are also standardized, so an industrial engineer can work out if it makes sense to produce a week’s worth of exhaust valves in a single batch, or to just produce one day’s worth at a time of each of the sizes that are needed (because the setup time to change over is low).
So in a manufacturing context, you can balance the overall plant by simple signals, engine assembly sends out a pull request for 800 inlet valves and 800 exhaust valves. Once your machine has pumped out the 800 exhaust valves you have to switch over to producing the inlet valves because you have no more demand for exhaust valves. Assembly can now start on building the engines since it has both types of valves it needs. Under the old machine utilization regime, changing the setup to produce the other type of valve would have meant downtime and lower utilization, so switchovers were rare. It was common to run out of necessary parts because someone upstream was busy making other parts.
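To make that pull rule concrete, here is a toy sketch in Ruby (my illustration, not taken from any real scheduling system): a station produces only against outstanding downstream demand, and goes idle when there is none.

demand = Hash.new(0)              # part => outstanding downstream demand

demand["exhaust valve"] += 800    # engine assembly pulls 800 of each valve
demand["inlet valve"]   += 800

until demand.values.all?(&:zero?)
  part, _ = demand.max_by { |_, qty| qty }  # work on the most-demanded part
  batch = [demand[part], 200].min           # at most 200 per run, never more than demand
  demand[part] -= batch
  puts "produced #{batch} x #{part}"
end
puts "idle: no outstanding demand"          # idle is acceptable under pull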
Kanban in a Software Development context
Software development does not have any standardized items, so everything is a batch of one. This plays hell with the overall model, because the rate at which each station can process the items is never reliably knowable. You can make guesses or estimate averages, but some requirements are hard to elicit yet easy to develop and test for, while other changes are easy to state and easy to implement but really complex to test for.
Another issue is that a key part of Kanban was that it worked at the workstation level, but “Kanban for software development” seems to work at a team or subteam level. This might work for Scrum, but applying it that way for software development seems to make it much more complicated than it needs to be. To make Kanban work in software development the pull requests need to go to the software equivalent of workstations, so either to individuals or programming pairs. Scheduling at any larger level leads directly back to the problem that Kanban was designed to eliminate: half finished stuff lying all over the place.
Kanban for Software
If we are going to try to apply Kanban we have to first change the mindset.
Scrum and XP both have the idea that the backlog of stories is prioritized into the iterations (though they use their own terminology for this process).
Kanban for Software has requests for changes and new features in an Order Book, just like any factory has an order book, and the factory only builds to order. So having a large order book is a good thing.
Scrum/XP show the backlog at the start of the process, pushing on the development team.
Kanban for Software shows the Order Book on the user end of the process, emphasizing the pull of the orders placing a demand on the team.
Kanban for Software schedules from the Acceptance Test back into the development process. The rest of the changes flow naturally from there.
If the most valuable order in the order book is “Add social features to the website”, then the person responsible for Acceptance Test has to find out what it would mean to successfully test that item. Feature definition would occur through what Brian Marick calls Example Driven Development.
Once identified, the feature requests would be passed back up the chain of people who are necessary to deliver the feature: first to the integration tester, who hands off the individual sub-features and tasks to specific developers.
Changing the Mindset to Pull
Rather than allowing developers to pull items into the current iteration when they are blocked, Kanban for Software suggests that developers should ask their downstream consumer for work. If at the end of the chain the Acceptance Test person is too busy to look at the order book for new work, then it is OK for the developers to be idle while waiting for the inevitable bugfix requests. After all, the thing that is in Acceptance Test right now is the most valuable thing that can be produced, so why make it wait while developers work on less important tasks?
The idea of Stopping the line was one of the hardest ideas to get across in manufacturing assembly lines. And the idea that it is OK for a developer to realize that they do not have any customer requested work to do right now is going to be just as hard to accept when we start applying the idea of Kanban for Software.
Posted by
Pete McBreen
28 Jan 2011 at 08:51
Matt Simmons wrote that Timing is Key …
If you wait until it’s obvious, it’s too late… has become my defacto motto when it comes to a lot of things. I think the first time it occurred to me was when I started researching IPv6. The depletion of IPv4 is no surprise, and hasn’t been for quite a while, but it seems like most people are holding off even researching it until it becomes obvious that they need it. Again, by that time, if you’re in any kind of competitive company vying for market position, it’ll be too late. It won’t be obvious that it’s necessary until you see people financially punished for not taking those steps.
Lots of applicability to this idea, I need to ponder on it for a while. There are lots of changes coming, but which ones are going to matter and which ones do we need to take action on? Typically what happens is that larger corporations take longer to ponder the new ideas, and then get hit hard by the market.
Fuel efficient cars are a good example of this. The profit margin for trucks and SUVs was so high that it did not make sense to switch to smaller engined cars, even though Peak Oil was going to push up the price of gasoline and Climate Change was eventually going to push the cost of emitting CO2 higher. Back in 2008 when the price of oil spiked for a few months, truck dealers where I live practically could not give the trucks away. Even with discounts that ate up most of the previous profit margin, the trucks were not moving off the dealers’ lots. Yes, the price of oil dropped again and most people tried to convince themselves that it was just speculators, but the car companies seem to be slowly waking up to the issue that fuel economy matters. The only problem is that the companies that held on to the idea of profitable trucks and SUVs the longest are being punished by the market now, since the lead time for getting a new design into the hands of the dealers is much longer than the time the companies have before the CDN$1.00/liter price of gasoline seems like the good old days.
Lots of applicability of this to software development as well, but that is for a later post.
Posted by
Pete McBreen
27 Jan 2011 at 10:23
Steve McConnell responded to (In)Validating the 10X Productivity Difference Claim with 10x Productivity Myths: Where’s the 10x Difference in Compensation?
My overall conclusion is that paying for productivity on any more than a very-rough-approximation basis is a panacea that cannot practically be achieved.
As I’ve commented previously, the discrepancy between capability differences and compensation differences does create opportunities for companies that are willing to hire from the top of the talent pool to receive disproportionately greater levels of output in exchange for only modestly higher compensation.
This is not the answer I expected to find when I began asking the question almost 25 years ago, but I can see the reasons for it. Gerald Weinberg describes a pattern he describes as “Things are the way they are because they got that way.” I think this is one of those cases.
To a large degree I agree with McConnell on this: it is hard to measure the productivity of software developers, so corporations tend to just have a narrow range of salaries for developers. The body of his article has good arguments for this, so there is no point repeating them here.
There is however another dimension to this conversation that has not yet been addressed.
Salary is not the only option
McConnell himself exemplifies the alternatives; after all, he is more or less synonymous with Construx and several well-read books like Code Complete. There is also the startup route, and as McConnell mentions, some developers end up contracting.
So overall the question of salary difference may be moot, except that maybe it means that within a single organization the 10X difference in productivity does not exist. The lower band will probably not pass the hiring filters, and the higher end of the band may self-select out, either by not applying in the first place, or by choosing to leave for greener pastures.
Compensation is not the issue
Personally I’m more interested in Understanding Productivity and what drives it. My bias for explaining productivity is in the area of Software Craftsmanship, but there are many aspects of productivity that have not been explored …
Posted by
Pete McBreen
20 Jan 2011 at 09:14
I’m not a fan of the manifesto, but have been watching the recent threads stirred by Dan North saying Programming is not a craft.
TL;DR version: Software Craftsmanship risks putting the software at the centre rather than the benefit the software is supposed to deliver, mostly because we are romantics with big egos. Programming is about automating work like crunching data, processing and presenting information, or controlling and automating machines.
Liz Keogh highlighted the key aspect of Software Craftsmanship that I consider crucial, that although you can aspire to being a craftsman
… Software Craftsman is a status you should be awarded by someone else.
The reason that the old craft trades focused so much on creating a masterpiece was so that a person could be recognized by their peers as having become a master of their craft. The proof was in what was created, not just in someone saying that they are a craftsman.
Uncle Bob seems to be trying to conflate Agile and Software Craftsmanship, but I still see the two things as distinct. He has also drawn parallels between his Clean Code idea and Software Craftsmanship:
Why is there a software craftsmanship movement? What motivated it? What drives it now? One thing; and one thing only.
We are tired of writing crap.
That’s it. The fat lady sang. Good nite Gracy. Over and out.
Again, for me this is too simplistic a view. The idea of Clean Code as an approach is a good start, but Software Craftsmanship goes far beyond the idea of just Clean Code.
Software Craftsmanship requires a complete reappraisal of what it means to develop software. As opposed to stopping writing crap as Clean Code suggests, Software Craftsmanship asks us to start creating great applications for our users and to stand behind that code and support it so that users who come to depend on it can trust that it will be available for them to use.
Software Craftsmanship includes the idea of software longevity: code that can be maintained for long periods, so that the investment our users put into learning the application and the hours they spend getting their data into it are not lost when a capricious decision is made to abandon the software.
Posted by
Pete McBreen
18 Jan 2011 at 21:42
In a shared hosting environment, the system does not always have all of the gems you need, but when a gem is installed locally under your own account, mongrel or passenger cannot see where those gems are and you get this great error message:
Missing these required gems
bluecloth
You get this message from mongrel even after running rake gems:install and having it report that the gem is successfully installed.
The Fix for Missing these required gems
Set the GEM_PATH to point to your local gems as well as the system gems, but you cannot just set it in .bashrc, since apache runs under a different user and never reads your shell startup scripts. Hence when mongrel runs, it does not see the locally installed gems.
The Rails preinitializer.rb that lives in the config directory can set the GEM_PATH, and then Mongrel can see the locally installed gems.
The file config/preinitializer.rb needs to contain the following:
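# prepend the per-user gem directory to the system gem path (Ruby 1.8 era layout)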
ENV['GEM_PATH'] = '/home/pete/ruby/gems:/usr/lib/ruby/gems/1.8'
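# make RubyGems re-read GEM_PATH so the locally installed gems are found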
Gem.clear_paths
Obviously you will need to replace the path to my local rubygems, /home/pete/ruby/gems, with your own local gem path.
A hat tip to Damien White for pointing me to this fix. Here’s hoping that this extra link moves his site up in the google listing for the query mongrel “Missing these required gems”
Posted by
Pete McBreen
15 Jan 2011 at 12:55
After losing count of the number of sites that use the browser to shrink massive images into the size of the thumbnails that the page needs (and waiting forever for these large images to load), it is time to say enough.
Shrinking images is easy, especially from the command line using ImageMagick. It is an open source tool, so there is no excuse not to use it; there are even precompiled binaries available for most platforms.
The most common use case people have for images is small thumbnails that are used as an image link to another page that has the full size image on it, or the page only has space for a small image and the original is too large to fit. Doing this with ImageMagick is easy, even if the original aspect ratio of the image was not right.
convert -thumbnail 200x100^ -gravity center -extent 200x100 original.jpg thumbnail.jpg
convert is the name of the ImageMagick tool. The -thumbnail 200x100^ option creates an image that is at least 200 pixels wide and 100 pixels tall while preserving the aspect ratio of the original picture; this means that the intermediate image can be larger than 200x100, but it will not be distorted. The second part of the command, -gravity center -extent 200x100, specifies that the resulting image should only be 200x100 and that it should be cropped from the center of the intermediate image. The gravity option can also be any compass direction, with NorthWest being the top left of the image.
Processing multiple images is trivially easy: just specify a set of images using a wildcard like *.jpg and the resulting images will be written out to a numbered set of files using thumb-%d.jpg, giving filenames like thumb-0.jpg, thumb-1.jpg and so on.
convert -thumbnail 200x100^ -gravity center -extent 200x100 *.jpg thumb-%d.jpg
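If you would rather keep the original filenames than get numbered output, a small wrapper script does the trick. A minimal sketch in Ruby (my own, assuming only that convert is on the PATH):

Dir.glob('*.jpg').each do |original|
  # shell out to ImageMagick once per image, prefixing the output name
  system('convert', '-thumbnail', '200x100^',
         '-gravity', 'center', '-extent', '200x100',
         original, "thumb-#{original}")
end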
So no more excuses for distorted or oversized images in web pages.
Posted by
Pete McBreen
13 Jan 2011 at 10:52
Setup instructions for using Vagrant, Oracle’s VirtualBox and Rails 3.
Getting Vagrant to install is easy, since it is just a gem, and it only requires the download of Oracle’s VirtualBox, which is a simple OSX package. The bare bones instructions on the Vagrant site homepage actually work, but forget to tell you that you should do this from a new directory that you want to have shared with the VM.
gem install vagrant
vagrant box add lucid32 http://files.vagrantup.com/lucid32.box
vagrant init
After the vagrant init there is a file called Vagrantfile that is just Ruby code for configuring the VM. Vagrant itself is set up to use Chef or Puppet to configure the VM with all of the software you need, but other than the basic apache install, I used apt-get to do the config.
The following changes to the bottom of the Vagrantfile use Chef Solo to install Apache2 into the VM. They set up both port forwarding, so that localhost:8080 points to apache inside the VM, and a local network address (192.168.10.200, only visible from within the laptop) so that I can also make a direct http connection to apache.
config.vm.box = "lucid32"
config.vm.provisioner = :chef_solo
# Grab the cookbooks from the Vagrant files
config.chef.recipe_url = "http://files.vagrantup.com/getting_started/cookbooks.tar.gz"
# Tell chef what recipe to run. In this case, the `vagrant_main` recipe
# does all the magic.
config.chef.add_recipe("vagrant_main")
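# forward host port 8080 to port 80 inside the VM, and give the VM a
# host-only address so apache can also be reached directly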
config.vm.forward_port("http", 80, 8080)
config.vm.network("192.168.10.200")
With that in place the next thing to do is to start it up and then connect to the VM.
vagrant up
vagrant ssh
The last line creates an ssh session to the newly booted Ubuntu VM running inside VirtualBox. No passwords or anything like firewalls; the VM is running wide open within OSX (but headless, so ssh is necessary to connect and do anything useful). The /vagrant directory in the VM is shared with the directory that you were in when the init was done, so it is easy to move files into and out of the VM.
Once on the VM, it was easy to use apt-get to install everything else that is needed for Rails 3.
sudo apt-get install mysql-server mysql-client libmysqlclient-dev
sudo gem install rails
sudo gem install mysql2
rails new /vagrant/R3/weblog -d=mysql -J
The -J in the rails command omits prototype so that the jQuery support can be added instead. Note also that the files are created in the /vagrant directory so they are available for editing directly on the laptop as well.
Then all that is needed to validate that it is working is to start rails up in development mode:
cd /vagrant/R3/weblog
rake db:create:all
script/rails server
This shows the usual startup messages
=> Booting WEBrick
=> Rails 3.0.3 application starting in development on http://0.0.0.0:3000
=> Call with -d to detach
=> Ctrl-C to shutdown server
[2011-01-13 11:27:27] INFO WEBrick 1.3.1
[2011-01-13 11:27:27] INFO ruby 1.8.7 (2010-01-10) [i486-linux]
[2011-01-13 11:27:27] INFO WEBrick::HTTPServer#start: pid=7784 port=3000
Then using the laptop browser visit http://192.168.10.200:3000/ and everything should be working correctly. vagrant suspend will pause the VM and vagrant resume will restart it. Using vagrant halt will cleanly shut the machine down and vagrant up will restart it, while vagrant destroy will shut down and delete the VM, so the next vagrant up restarts with a blank slate and you can test your provisioning process.
Edited to add: The /vagrant directory is set as the root directory for Apache by the chef_solo recipe, so any files in the /vagrant directory on the running VM are visible at http://192.168.10.200/.
Posted by
Pete McBreen
12 Jan 2011 at 07:47
What investigations would be useful to understand the claims about productivity of software developers? The existing studies are now old and come from an era when the technology was completely different from that available now, an era when one of the significant studies was into the difference between online and offline programming and debugging. McConnell’s 10X Software Development article refers to a study titled “Exploratory Experimental Studies Comparing Online and Offline Programming Performance” (emphasis mine):
The original study that found huge variations in individual programming productivity was conducted in the late 1960s by Sackman, Erikson, and Grant (1968). They studied professional programmers with an average of 7 years’ experience and found that the ratio of initial coding time between the best and worst programmers was about 20 to 1; the ratio of debugging times over 25 to 1; of program size 5 to 1; and of program execution speed about 10 to 1. They found no relationship between a programmer’s amount of experience and code quality or productivity.
An interesting point to start a modern investigation would be the last observation, that they found no relationship between a programmer’s amount of experience and code quality or productivity. As a developer with 27+ years of experience in the field I have a vested interest in that observation being incorrect, but it would be interesting to see if the observation can be repeated. One reason for trying to repeat the experiment is that back in 1968 people with more than 10 years’ experience would have been exceedingly rare, but now, 40+ years later, it should be easy to find people who have 10, 20, 30 and more years of experience to see if there is any trend over the longer term.
Interestingly employers seem to have attached themselves to the idea that productivity is not that related to experience when they ask for team leads with 3 years of experience and consider senior developers to have 5 years of experience.
Other factors to investigate that seem to have some anecdotal evidence to support the idea that they may affect productivity:
- Breadth of experience — number of different programming languages that a developer has worked in.
- Cross paradigm experience — does it make a difference how many different paradigms the developer has worked in?
- Specific Paradigms — is there a paradigm that makes a difference - are the claims that functional programming improves general programming ability supportable from the data?
- Specific experience — does it make a difference if a developer has spent a lot of time focused on one particular language? This might seem obvious, but we have been surprised by obvious ideas that turn out not to be true.
- Similar experience — does it make a difference if the developer has experience in similar but not quite the same languages? Moving between C++, Java and C# could make a developer more aware of the subtle syntax differences between these languages and hence less likely to make mistakes that will slow down development.
- Toolset experience — does the amount of time working in the toolset matter, independent of the experience with the particular language? There are several multi-language development environments that have been around for enough time for this to be a feasible investigation.
- Domain experience — does experience in the problem domain make a difference? Employers seem to think so based on job postings, but does the data back the idea up and how significant is the difference?
There are probably more factors that could be investigated, but these will make for a good starting point.
Posted by
Pete McBreen
11 Jan 2011 at 09:04
Recently Laurent Bossavit wrote about Fact And Folklore In Software Engineering (the link is to the English translation). He pointed out that there are few hard numbers available about the productivity of software developers. Yes, there are plenty of anecdotes, but the studies that exist were done a long time ago and are methodologically weak compared to what we expect of current studies on human performance.
Bossavit then goes on to tackle the claims of 10X productivity
We can now circle back to this widely circulated “fact” of the software profession, according to which “programmer productivity varies by a factor of 10 (or 5, or 20) between the best and worst individuals”. This is a remarkable statement, not least because of its implications: for instance, programmer compensation does not vary accordingly.
Steve McConnell then gets dragged in for his review of the 10X literature. Bossavit claims that the research that has been done is not sufficient to validate the claim for 10X differences.
But “work” is the right term here: tracking down articles and sometimes entire books, some of them out of print, just to scan them for conclusions which turn out not to appear anywhere. It is a sad fact that critical examination of ill-supported assertions takes a lot more time than making the assertions in the first place; this asymmetry accounts for a lot of what is wrong with public perceptions of science in general, and possibly for the current state of our profession.
Although Bossavit’s article can read a bit like a personal attack, the problem of unsupported claims is something I have seen a lot in the “debate” over Global Warming. Indeed there is now a term for the generation of unsupported claims, the Gish Gallop, which was first used in attacks on Evolution and is now used against Climate Change.
As Bossavit rightly points out, it is much easier to make a claim than it is to refute the claim. Especially when the claim feels right and has had a long history of being accepted as common knowledge.
Steve McConnell replied to the article by Bossavit with another blog entry - Origins of 10X — How Valid is the Underlying Research? in which he revisits the same papers and summarizes with the conclusion that there is Strong Research Support for the 10x Conclusion.
McConnell acknowledges that there could be some methodological weaknesses with the original studies, but states that the body of research that supports the 10x claim is as solid as any research that’s been done in software engineering. Personally I think that falls into the category of damning with faint praise.
But is the difference as large as we think?
Bossavit did raise one point in his article that McConnell did not address - programmer compensation does not vary accordingly.
This is a telling point: if the difference in productivity can be 10X, why is it that salaries rarely fall outside a 2X range for experienced developers? Ignoring the lower starting pay issues, once a person has 3-5 years in the industry, salaries in North America are of the order of $50,000/year. Apart from a few outlier major cities with crazy cost of living expenses, it is hard to find anyone actively involved in software development (not a manager) who is earning more than $100,000/year.
It could be that the research is old, which was a criticism made of my use of the early studies in my Software Craftsmanship book; after all, it is suspect when a book written in 2000 is referring to studies done back in 1972 or even 1988.
Unfortunately there have been no real studies of programmer productivity in the recent era. Yes, there have been lots of claims made for the Agile approaches, but there are no methodologically sound studies that are easily found. True, there may have been studies that I cannot find, but I would guess that if any such study had been done then the authors would be making money off it by now and it would be well known.
Overall it would seem that the software engineering community does not have any solid evidence to back up the 10X claim using current development tools and techniques. The anecdotal evidence we have would suggest that maybe there is a 3X difference between currently practicing software developers, and that there may be some outliers who are much better than that, but those individuals are few and far between.
But that is just another unsupported claim. Yes, there is an obvious difference in capability between experienced developers, but there is no easy way to measure it, and what studies were published on the topic were from research done a long time ago, practically in the prehistory of software development.
All of the above is why I promote the idea of Software Craftsmanship over Software Engineering.
Posted by
Pete McBreen
10 Jan 2011 at 10:14
The web is inherently a social medium because it is easy to share URLs. The common name for this is that sites become Slashdotted, named after the popular site Slashdot, which was the first of the sites where posted links could generate massive traffic.
Website traffic is social in that while it has the usual day/night differences that follow a relatively predictable curve, if an article becomes popular, traffic can spike very rapidly to previously unseen levels. A site that typically gets 1 million page views/day may find that the usual daytime peak of 100,000 page views/hour (less than 30/second) has suddenly spiked to over 500/second (which if sustained would be 1.8 million page views in the hour). All it needs is an influential site to link to the smaller site, or for lots of different social sites to jump on the link.
In contrast, iPad and similar applications do not exhibit this type of traffic. Partly this is because the apps need to be installed, but also because the apps do not lend themselves to the social sharing of content. Yes, most apps have a way of sending out a URL, but that just feeds the social web, it does not add to the traffic on the servers feeding the application.
The nice thing about this is that it makes it easy to size the servers that the application uses. It also makes me think that the application developers are missing an opportunity …
Posted by
Pete McBreen
07 Jan 2011 at 16:32
Recently I had to replace a Volkswagen TDI Golf (after 300,000km it was well used), but was appalled at the lack of improvement in fuel efficiency over the past 10+ years.
Overall I averaged 5.1 l/100km in the TDI, normally managing 1000km between 51 liter fill ups. In Canada the Ford Fiesta is advertised as having Best in class fuel efficiency. Well it might be, but only because nobody seems to be importing the really fuel efficient cars. Based on the Canadian figures, the Fiesta will probably end up somewhere around 6.0 to 6.5l/100km. On the European figures, it is listed as 5.9l/100km for the 1.6L 120 HP version - the only engine spec that is available in Canada.
Read this and weep
The 1.6 Duratorq TDCi ECO version of the same vehicle that is NOT available in Canada gets 3.7l/100km and still pumps out 90HP; there is another version listed at 95HP that gets similar fuel efficiency. For people who do not like diesel, there is a 1.25L version that still does 5.5l/100km, and another 1.25L petrol engine with 85HP that does 5.6l/100km.
Canadian figures for the Fiesta are 7.1 city, 5.3 highway. There is supposedly going to be an ECO version out later, but for now an average that we might be able to expect is 6.2 l/100km.
Current vehicle
After much looking around I ended up with a Honda Fit (Jazz in Europe). It claims 7.2 city, 5.7 highway for a combined 6.4, but in practice I’m averaging 6.6l/100km, more than 2l/100km worse than I would be if I could have got one of the fuel efficient cars that are available in Europe.
A new TDI Golf was not on the cards since it is only available in the high “comfortline” spec, for CDN$28,000, and not very fuel efficient as the version available in Canada is 140HP, so 6.7l/100km city, 4.6l/100km highway for a combined 5.65l/100km. So in 10 years the car has more power and worse fuel economy than the previous model.
Time to keep on watching the CO2 level.
Posted by
Pete McBreen
05 Jan 2011 at 14:49
The jQuery Javascript library has a neat feature that makes the maintenance of static sites a lot easier. It is possible to load a different page into a div (or any other defined area on a page). The syntax for this is relatively simple:
<div id="footer">
<p>Page requires javascript to load the footer links into this area.</p>
</div>
<script src="scripts/jquery-1.4.4.min.js" type="text/javascript" ></script>
<script type="text/javascript">
$('#footer').load('index.html #footer ul.menu');
</script>
This solves the problem of keeping the footer identical on all pages without having to make sure that you edit all 20+ pages in a static site. Sure, it would be a whole lot easier to just use a dynamic site and include whatever code was needed in the page, but some sites are still made out of static (X)HTML so this is a neat fix.
The $('#footer').load('index.html #footer ul.menu'); line is the key one: it loads the index.html page, extracts the content matching the CSS selector #footer ul.menu, and replaces the contents of the existing footer div on the current page with that content from the index page.
Yes, the obvious complaint is that it slows down the page load time, but for most static sites this is less of an issue than the maintenance hassle of ensuring that every page is updated whenever a change occurs. It also has the side effect of cutting down the total size of the pages for sites that have lots of boilerplate code in the footer or sidebars.
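One variation worth noting (my addition, not from the original post): if the script has to live in the head rather than at the end of the body, wrap the call in jQuery’s ready handler so that the footer div exists before the load runs.

<script type="text/javascript">
$(function() {  // shorthand for $(document).ready(...)
  $('#footer').load('index.html #footer ul.menu');
});
</script>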
For completeness I should also show the footer from the index page
<div id="footer">
<ul class="menu">
<li class="menulinks">
<div><a title="Home" href="index.html">Home</a></div>
</li>
... lots of other links missed off here
</ul>
</div>
Posted by
Pete McBreen
05 Jan 2011 at 09:01
Interesting article in the Sloan Review on outsourcing too much. Admittedly it is about the design process in the car world, and it is short on details, but the overall implications are clear.
It seems that the business world is now waking up to the fact that it is overall systems performance that matters, not just local optimization of a single point function or module. The problem seems to be that as you Separate the Design from the Implementation, the local knowledge you lose costs much more than the small gain you make in the financial efficiency of the outsourcing.
Posted by
Pete McBreen
29 Dec 2010 at 13:41
Having just been reminded that there is a Manifesto for Software Craftsmanship I have to point out that although I wrote the Software Craftsmanship book I have nothing to do with the manifesto.
Part of my disagreement with it is that it is a not very well disguised take off of the Agile Manifesto. I can leave aside the fact that they did not even keep the sequence the same, but to suggest that Software Craftsmanship is something beyond Agile is taking the idea to places where I do not think it belongs. Craftsmanship is about our individual relationship to the work that we do, it is not tied to a particular way of doing the work.
For me, Software Craftsmanship is about putting the individual back into the activity of delivering software. I have no interest at all in a community of professionals; the passionate amateur is much more likely to create interesting and valuable software. Professionals are too serious; amateurs get the idea that software development is meant to be fun. One now very famous amateur has since written about something being Just For Fun.
In part my book was a rant against Software Engineering, mainly because several institutions were trying to take ideas from mechanical engineering and manufacturing practices and apply them to software development. But it was also a rant against the idea of professionalism. Rather than try to emulate the buttoned down professionalism that kills productivity and creativity, I wanted software development to become more skill and experience based. Yes, there are some practices that help in certain circumstances, but not in all. The professionals who spout about Best Practices and Certification do us all a disservice since they lock us into things that worked one time in one circumstance.
In the end, Software Craftsmanship is about producing great software. In the old traditions of craftsmanship, to be accepted a journeyman had to produce a masterpiece. Something that their fellow craftsmen would acknowledge as being worthy of the craft. For me, this is what Software Craftsmanship means, the ability to create Great Software and have fun while doing so.
Posted by
Pete McBreen
10 Dec 2010 at 21:55
Had a need to look at a Python textbook recently and got confused several times by examples that were needlessly obfuscated (slightly paraphrased code):
>>> def actions():
...     acts = []
...     for i in range(5):
...         acts.append(lambda x, i=i: i ** x)
...     return acts
...
>>> acts = actions()
>>> acts[3](2)
9
- For a newbie, it would not be obvious that acts inside the function is NOT the same variable as the acts outside of the function.
- The i=i is designed to be confusing: the first i is a new name in the scope of the lambda, the second comes from the for loop. The first should have had a different name to make it obvious what was happening.
- The name of the function actions just does not communicate anything (see the sketch below).
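A sketch of how the example might have read with clearer names (my renaming, not the textbook’s):

>>> def make_power_functions():
...     funcs = []
...     for base in range(5):
...         # the default argument captures the loop variable under its own name
...         funcs.append(lambda exponent, captured_base=base: captured_base ** exponent)
...     return funcs
...
>>> power_functions = make_power_functions()
>>> power_functions[3](2)
9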
I’m going to have to think on this more, but maybe my next project is going to involve creating some material for newbies.
Posted by
Pete McBreen
16 Nov 2010 at 10:30
Lots of interesting articles about the life of a chemist. Particularly interesting is the series of posts about Things I Won’t Work With:
Tetrazole derivatives have featured several times here in “Things I Won’t Work With”, which might give you the impression that they’re invariably explosive. Not so - most of them are perfectly reasonable things. A tetrazole-for-carboxyl switch is one of the standard med-chem tricks, standard enough to have appeared in several marketed drugs. And that should be recommendation enough, since the FDA takes a dim view of exploding pharmaceuticals (nitroglycerine notwithstanding; that one was grandfathered in). No, tetrazoles are good citizens. Most of the time.
…
Well, the authors prepared a whole series of salts of the parent compound, using the bangiest counterions they could think of. And it makes for quite a party tray: the resulting compounds range from the merely explosive (the guanidinium salts) to the very explosive indeed (the ammonium and hydroxyammonium ones). They checked out the thermal stabilities with a differential scanning calorimeter (DSC), and the latter two blew up so violently that they ruptured the pans on the apparatus while testing 1.5 milligrams of sample. No, I’m going to have to reluctantly give this class of compounds a miss, as appealing as they do sound.
Posted by
Pete McBreen
14 Nov 2010 at 19:22
Playing Devil’s advocate to win