Improving Wetware

Because technology is never the issue

Introductory Programming Texts Need Work

Posted by Pete McBreen Sat, 11 Dec 2010 05:55:00 GMT

Had a need to look at a Python textbook recently and got confused several times by examples that were needlessly obfuscated (slightly paraphrased code):

>>> def actions():
...     acts = []
...     for i in range(5):
...         acts.append(lambda x, i=i: i **x)
...     return acts
... 
>>> acts = actions() 
>>> acts[3](2)
9
  1. For a newbie, it would not be obvious that acts inside the function is NOT the same variable as the acts outside the function.
  2. The i=i seems designed to confuse: the first i is a new name in the scope of the lambda, the second is the loop variable from the for loop. The first should have had a different name to make it obvious what was happening.
  3. The name of the function actions just does not communicate anything.
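A version with descriptive names makes all three issues go away. This is my own rewrite, not the textbook's code; the names make_powers, base and exponent are mine:

```python
def make_powers():
    # Build a list of functions; the n-th function raises n to a given power.
    powers = []
    for base in range(5):
        # The base=base default binds the current loop value. Without it,
        # every lambda would close over the same loop variable, and all of
        # them would see base == 4 once the loop finishes.
        powers.append(lambda exponent, base=base: base ** exponent)
    return powers

powers = make_powers()
print(powers[3](2))  # 3 ** 2 = 9
```

The powers list inside make_powers and the powers variable outside it are still distinct bindings, but at least the names now say what the functions do.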

I’m going to have to think on this more, but maybe my next project is going to involve creating some material for newbies.

Affordances in Unit Testing

Posted by Pete McBreen Fri, 08 Oct 2010 05:22:00 GMT

PHPUnit has the concept of Incomplete and Skipped Tests, something that is missing in the Ruby TestUnit framework. Yes, technically TestUnit has flunk(), but that is not the same thing: TestUnit only reports failures and errors, whereas PHPUnit reports failures, errors, skipped tests and incomplete tests.

In Ruby, the idea of skipped and incomplete tests is not really supported, but since much of the Ruby world supports Test Driven Development, it is not really necessary. From what I have seen of the PHP world, Test Driven Development is not normally practiced, so it is useful to have the ability to mark tests as skipped or incomplete, and have PHPUnit report the test suite as passing, with a number of skipped or incomplete tests. It gives a visual reminder that more testing is needed, but it does not cause the test suite to report an error, so the next stage in the deployment process can continue without manual intervention (assuming that there are no failures or errors from the rest of the test suite.)
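For comparison, Python's unittest offers a similar affordance: skipped tests are counted separately and do not make the suite fail. A minimal sketch (the test names are made up for illustration):

```python
import unittest

class ExampleTests(unittest.TestCase):
    def test_implemented(self):
        self.assertEqual(1 + 1, 2)

    @unittest.skip("not written yet")
    def test_pending(self):
        self.fail("never reached")

# Run the suite programmatically and inspect the result object.
result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(ExampleTests).run(result)
print(result.wasSuccessful(), result.testsRun, len(result.skipped))
```

The suite reports success while still recording that one test was skipped, which is exactly the visual reminder described above.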

Inmates are running the Asylum?

Posted by Pete McBreen Tue, 28 Sep 2010 04:09:00 GMT

It has been a while since I read The Inmates Are Running The Asylum, but I was still intrigued to see that Zed has an article on Products For People Who Make Products For People.

While it is true that Alan Cooper blames the programmer for the appalling state of many (most?) applications, there is a disconnect between how I read the Inmates book and the way Zed reports it. We agree that programmers do not really have much control over the way that applications are designed, since the “Product Manager” types in organizations control that these days, but in my view programmers still have a lot of influence.

The problem as I see it is that most programmers build applications with the toolkit that is ready to hand, as opposed to taking the time to create a better toolkit for the job. So when it comes to designing the UI, even if an interaction designer has come up with a great idea for a widget, and the product manager wants to use that widget, the programmers exert their influence by claiming that it will be hard to do with the existing toolset. So what gets built in most cases is whatever can easily be achieved with the toolset.

Zed lives at the opposite extreme, having written both Mongrel and Mongrel2, so his experience is totally different - if the tools don’t do what they need to do, Zed writes a new tool. The rise of open source proves that Zed is not alone, and there has been a massive improvement in the usability of software being written by the better programmers, so things are changing.

The long-bearded programmers are coming into their own, and designers like Alan are going to have to adjust their expectations, because experienced programmers can do a good job with design, so the interaction designers are going to have to earn their money in the future.

Not All Error Messages Are Informative

Posted by Pete McBreen Sat, 14 Aug 2010 01:17:00 GMT

I ran across a really annoying error message from Adobe Acrobat in a corporate setting this week. It was refusing to print anything and the explanation message it gave was

there were no pages selected to print

Nothing much out there on the net was helpful in fixing this, but eventually I discovered that the print queue I was using had been deleted from the print server. So technically there were no pages selected to print because Acrobat could not find the printer so did not know what it was supposed to be printing on.

It would have been much easier and faster to resolve this issue if the developers had considered this as a possible failure mode and, rather than putting out the not very descriptive “no pages selected to print”, had reported the fact that Acrobat could not contact the printer because it no longer existed on the target print server.

So if I ever see this again I’m going to confirm that the printer queue still exists before trying anything else. The fix in the end was trivial: add the printer again under its new name and delete the old printer.

Separating Design From Implementation is a Bad Idea

Posted by Pete McBreen Fri, 02 Jul 2010 20:05:00 GMT

There has been a lot written about the hazards of outsourcing, I even covered it in Software Craftsmanship, but it is nice to see that others are thinking about the long term implications of outsourcing manufacturing. Andy Grove has noticed that scaling is hard, and outsourcing scaling means a loss of expertise and jobs.

A new industry needs an effective ecosystem in which technology knowhow accumulates, experience builds on experience, and close relationships develop between supplier and customer. The U.S. lost its lead in batteries 30 years ago when it stopped making consumer electronics devices. Whoever made batteries then gained the exposure and relationships needed to learn to supply batteries for the more demanding laptop PC market, and after that, for the even more demanding automobile market. U.S. companies did not participate in the first phase and consequently were not in the running for all that followed.

In addition, from the US viewpoint, Harold Meyerson in the Washington Post points out that the outsourcing was not necessary, since Germany has managed to prosper in high tech manufacturing at the same time that the US shuttered factories:

So even as Germany and China have been busily building, and selling us, high-speed trains, photovoltaic cells and lithium-ion batteries, we’ve spent the past decade, at the direction of our CEOs and bankers, shuttering 50,000 factories and springing credit-default swaps on an unsuspecting world.

The Rugged Software Manifesto is not a Parody

Posted by Pete McBreen Tue, 29 Jun 2010 01:14:00 GMT

It turns out that I was mistaken: the Onion did not write the Rugged Software Manifesto, or at least if they did, InfoQ got taken in as well.

But it still sounds like a parody even if they are serious…

Rugged takes it a step further. The idea is that before the code can be made secure, the developers themselves must be toughened up.

The InfoQ article is not a complete waste of time though, there has been some conversation sparked by the parody, but Andrew Fried seems to have taken the idea in a new direction with his condensed version with just three points:

  • The software should do what it’s advertised to do.
  • The software shouldn’t create a portal into my system via every Chinese and Russian malware package that hits the Internet virtually every minute of every day.
  • The software should protect the users from themselves.

The first point is obvious and does not really require stating except to those developers who are not aware of authors like Gerald Weinberg.

His second point is again obvious: if you are building any software, you should know what the libraries you are including do. Well, duh!

His third point is just plain wrong, and shows the usual arrogance of developers. Protecting users from themselves is not an attitude I would like to see on my teams. Yes, protect users from stupid programmer mistakes, but the kind of arrogance shown in “protect users from themselves” leads to the kind of problems that fly-by-wire aircraft have had: notably the Airbus that did a low level pass and continued in level flight into the trees, and more recently the case where shutting off a warning alarm released the brakes, causing the plane to slam into a wall.

Overall, it still sounds like a parody, but it is getting to sound like a very sick parody, more like graveyard humor.

Software Development, Planning and Timescales

Posted by Pete McBreen Fri, 28 May 2010 01:48:00 GMT

In Fast or Secure Software Development Jim Bird points out that many software organizations are going down the road of releasing the software very frequently.

Some people are going so far as to eliminate review gates and release overhead as waste: “iterations are muda”. The idea of Continuous Deployment takes this to the extreme: developers push software changes and fixes out immediately to production, with all the good and bad that you would expect from doing this.

There is a growing gap between these ‘Agile Approaches’ and mission critical software that has to be secure. Jim highlights many aspects of it in his article, but there is another wider aspect to consider.

Humans are bad at planning for the long term

The really bad news with this is that in this context the long term can be days or weeks, but as a society we need to be planning in terms of lifetimes.

For ‘Agile Approaches’ it makes sense to automate the release process so that it is easy to do with minimal impact. There are lots of technologies that help us achieve painless releases, and most of them are useful and effective. But just because we have a tool does not mean we should rely on it. As Jim points out, the frequent release mantra is based on the idea that it is easy to see if a release is broken, but it’s not always that simple. Security problems don’t show up like that, they show up later as successful exploits and attacks and bad press.

Some problems require careful planning, appropriate investment and preparation. One domain I am familiar with — credit card processing — has been fraught with problems because developers have not dealt with all of the corner cases that make software development so interesting and painful. Reportedly there have been exploits from server logs that included the http post parameters, which just so happened to be credit card numbers. Of course no organization will repeat any of the known mistakes — but without the long term focus on planning, investment and preparation that mission critical software requires, some other problem is bound to occur.
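One defensive measure against the logging mistake described above is to scrub anything that looks like a card number before request parameters hit a log. A rough sketch, not taken from any particular codebase; the regex and the helper name are my own:

```python
import re

# Very rough pattern for a bare 13-16 digit card number.
CARD_NUMBER = re.compile(r"\b\d{13,16}\b")

def scrub_params(params):
    # Return a copy of the POST parameters that is safe to write to a log.
    return {key: CARD_NUMBER.sub("[REDACTED]", value)
            for key, value in params.items()}

safe = scrub_params({"amount": "19.99", "card_number": "4111111111111111"})
print(safe)
```

A real implementation would need to handle separators, nested structures and encodings, but even a crude filter like this keeps the obvious mistake out of the server logs.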

Failing to address the long term

Several years ago now, the Club of Rome study on the Limits to Growth was met with disbelief in most quarters, but less than half a lifetime later we are hitting up against the limits that were identified back then. Peak Oil, Climate Change and the current financial meltdown are all related to these limits. Admittedly we are not good at dealing with exponentials, but reality has a way of making sure we cannot forget them. As the demand for oil reached existing supply levels, the lack of successful investment in other supplies or alternatives meant that many economies crunched when we got near peak oil supply volumes.

To think, a relatively simplistic computer model from nearly 40 years ago was able to produce a crude simulation of where we are now. Yes, many details were wrong, and the actual dates were out, but what the model predicted was inevitable without policy changes, and we failed to make the necessary changes. It was always easier to just keep on doing what we had been doing, until suddenly oil is over US$100/barrel and we act all surprised and horrified about the price of everything.

Software is Infrastructure

Software lasts a long time unless you make really bad mistakes, so we need to start treating it as infrastructure. But not like we are treating our current physical infrastructure. It is nice to be able to release software easily with minimal impact and costs, but we need to make sure that in doing so we are not exposing ourselves to longer term pain. Yes, make it easy to release, but only release when you know through intense and rigorous evaluation that what you release is better than what you have out there already.

Core systems do not get replaced quickly. Our 32 bit IP addresses are starting to hurt, but rolling out IPv6 is a slow process. Just like our roads and bridges, we have patched the existing system up lots of times, but it needs to be replaced, and soon. Unfortunately, just like our roads and bridges, the necessary investment is daunting and we still want to put it off as long as possible.

As we enter the post cheap energy era, we need to reevaluate our planning horizon. We need to rediscover how to plan for the long term. Somehow or other, forty years after NASA put a man on the moon, NASA is now reduced to hitching rides with the Russian Space Program …

Fossil ignores files by default

Posted by Pete McBreen Sun, 16 May 2010 03:31:00 GMT

One difference between fossil and the other version control tools I use (git and svn) is that by default it ignores files that are not in the repository. So rather than having to set up a .gitignore to have it ignore stuff, fossil does that automatically:

fossil status

To see the other files, you need to look for the extra files:

fossil extra

And the really nice thing about extra is that it does not matter where you are in the project directory structure when you issue the command: it searches the entire local-root looking for unexpected files. This also means that fossil has a way of cleaning out any unwanted temporary files:

fossil clean

Clean prompts you to confirm every delete, so it is safe to run, and it is a nice feature when the tools you are using generate a lot of temporary or intermediate files as part of their processing.

Moving to Fossil

Posted by Pete McBreen Sat, 15 May 2010 17:23:00 GMT

After a few years of using Git as my version control system, it is now time to move on to a distributed version control system that is more suited to projects with a smaller number of contributors: Fossil.

The main advantage I see that fossil has over git is the ease of setup for remote servers. A 2 line cgi script is sufficient for fossil to operate as a server over http. Git has something similar, but after several years of using git, setting it up on a server is still not an easy task, which is why many people choose to use a service like Github or Gitorious. But the existence of these services points to a problem: why choose a distributed version control system and then adopt a centralized hosting service to use it? Surely the point of using a distributed system is to make it distributed, yet we have created a system where the key repositories are all stored in a central place with a single point of failure.
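That 2 line cgi script really is just an interpreter line plus the repository location. A sketch, assuming fossil is installed at /usr/bin/fossil and using a made-up repository path:

```
#!/usr/bin/fossil
repository: /home/www/repos/project.fossil
```

Drop that into the web server's cgi-bin, make it executable, and fossil serves the repository over http.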

Yes, I know that with a distributed version control system the clones have all the information that the original has, but the concept of centralizing the infrastructure seems strange to me. I much prefer the ease with which I can create a new fossil repository and then get it served out from an available web server relatively easily. Admittedly fossil does not integrate with any other identity management system, so the creator of the fossil repository has to manage the users (other than the anonymous user), but for small teams this is not an issue.

The other key feature for me is the autosynchronize on commit, so whenever a local commit is made, the changes are pushed to the original repository. Automatic, offsite backup without having to remember to do anything beyond the normal checkin of completed work.

Zed Shaw and Learning Python The Hard Way

Posted by Pete McBreen Sun, 09 May 2010 22:57:00 GMT

Zed has started an interesting experiment on Learning Python the Hard Way.

It looks to be an effective approach to learning Python, but I’m more intrigued by the tools used. From a set of text files in a fossil repository, a makefile builds the complete book as a PDF file. Seems like it could be something I could use for other projects, the only awkward bit being the size of the TeX installation needed to support the pdflatex generation process.