01 Jul 2014 at 19:52
Software development is a hard problem.
Books like The Mythical Man Month, Set Phasers on Stun and The Inmates Are Running the Asylum have all pointed out in their own way that creating software is hard. Fred Brooks focused on the problems of large, complex projects and the difficulties facing project managers; the other two remind us that even small projects can fail, because we are still not able to create software that is both easy to use and powerful enough to do the tasks we want done.
Until we understand why software development is such a hard problem, we are not going to make much more than incremental improvement. Across millions of projects, blind luck will always produce a few that look like reproducible improvement, but regression to the mean will eventually correct that.
29 Jun 2014 at 20:17
I didn’t see this when it was first written, but it matches with my recent experiences.
… most programmers simply don’t know where the quality bar is. They don’t know what disciplines they should adopt. They don’t know the difference between good and bad code. And, most importantly, they have not learned that writing good clean code in a disciplined manner is the fastest and best way to get the job done well. – Robert Martin
04 Jun 2014 at 19:47
Many software developers do not seem to understand the basics of our craft. Recently I’ve seen
- SQL queries that were massively more complex than they needed to be - when simplified, without any database changes, they ran more than 10 times faster
- Client server applications that issue nearly 1000 SQL queries while refreshing what is supposed to be an interactive screen - the end result being that the poor user has to wait 5 to 10 seconds for the screen to refresh after conceptually simple actions
- Supposedly secure web applications that sent Active Directory usernames and passwords in cleartext across HTTP connections
- Code that created connections to external resources but forgot to free them - this made for a very effective rate limiting mechanism, since the external resource only freed unused handles about an hour after they were last used
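That last failure mode has a well-known cure: tie the resource's lifetime to a scope, so it is released on every exit path, including errors. A minimal Python sketch, using a made-up ExternalConnection class as a stand-in for whatever the real external resource was:

```python
import contextlib

class ExternalConnection:
    """Stand-in for a handle to an external resource (hypothetical API)."""
    open_count = 0  # how many handles the 'server' thinks are in use

    def __init__(self):
        ExternalConnection.open_count += 1
        self.closed = False

    def query(self):
        if self.closed:
            raise RuntimeError("connection already closed")
        return "result"

    def close(self):
        if not self.closed:
            self.closed = True
            ExternalConnection.open_count -= 1

@contextlib.contextmanager
def connection():
    conn = ExternalConnection()
    try:
        yield conn
    finally:
        conn.close()  # runs even if the body raises

def fetch():
    # The handle cannot leak: close() runs on every exit path.
    with connection() as conn:
        return conn.query()
```

Most languages have an equivalent idiom - try/finally in Java, using in C#, RAII in C++ - so there is little excuse for leaking handles until the server times them out an hour later.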
There have been lots more examples, but most of them would be unbelievable unless you were a direct witness to the utter ignorance of the basics of software development that brought them to my attention.
Maybe it is time that we started to focus on the basics of the craft of coding before we get too far into creating overly complex systems that nobody can understand or fix.
21 Nov 2013 at 20:10
But I’m not sure we will learn the right lessons.
In the early 1980’s I worked on applications that had to process 300 transactions per minute. At the time that was considered a heavy load for a DEC VAX to deal with. By the late 1980’s the same applications were dealing with 1,000 transactions per minute because the hardware had got a lot faster. In the mid-1990’s I worked on a small scale credit card processing application, running on a later incarnation of the DEC VAX, that was able to handle nearly 20 transactions per second. In the late 1990’s I worked on a stock exchange system that had to deal with what we thought at the time was a stupidly large number of messages per second … little did I know. Fast forward to the early 2000’s and I worked on consumer facing web applications that had to withstand 10,000 requests per second. By 2010 I had the fun of being on a real web scale project, experiencing the joys of being linked to by Digg and CNN and hoping that the ensuing millions of requests per hour would not bring the system down.
All this is to say that dealing with internet scale applications is a solved problem, but it seems that whoever was involved in the Healthcare.gov fiasco did not realize that. Dave Winer pointed out that the Government develops software differently, but there is no excuse for building a site that cannot handle the traffic.
I disagree with Bob Goodwin - there is no software engineering crisis - OK I have to say that because I wrote the Software Craftsmanship book. But that is not the real reason I have to disagree - I have to disagree because whoever built the site went about it the wrong way. Dave Winer parodied the approach that big consulting companies take
They’d fully specify the software, the user interface, its internal workings, file formats, even write the user documentation, before a single line of code was written. Then they’d hand the parts off to development teams who would independently of each other create the components. Another team would do the integration.
The sad fact is that the big corporations that are awarded these big government contracts do not have a clue how to build web scale applications that work. They over promise and massively underdeliver. All too often large companies are awarded contracts to build large systems and fail to deliver anything of value except to their own shareholders.
01 Nov 2013 at 17:05
Just had to help someone with Apple’s Pages, not sure what version, but they were not able to save a separate version of the file to keep two slightly different versions of a document. Turns out that Pages no longer has a Save As… file menu option; instead it has Duplicate and Rename menu options.
Which designer in their right mind thinks that it is a good idea to change a thirty year old idiom that their entire userbase is familiar with?
Best bit of this insanity is that the shortcut key for Duplicate is the same one that used to invoke the Save As… dialog box, but of course the behavior is different.
20 Sep 2013 at 19:24
Hot off the press, there is now a Simplified Chinese Edition of Software Craftsmanship - amazon.cn link. ISBN is 978-7-115-28068-8 for anyone who is interested.
The link is also proof that international alphabets are now supported in URLs - http://www.amazon.cn/软件工艺-Pete-McBreen/dp/B00AAQXL28
27 Feb 2013 at 21:21
Dave Winer calls it like it is - the wrong people are behind code.org, and their pitch is not even wrong.
But I don’t like the way people at code.org are pitching it. And I don’t like who is doing the pitching, and who isn’t. Out of the 83 people they quote, I doubt if many of them have written code recently, and most of them have never done it, and have no idea what they’re talking about.
These people don’t themselves know how to do what they want you to do. So what they say makes no sense. It won’t make you rich, but it will make them rich. And if you do it, they won’t listen to you. And even worse, if you do what they want you to do, you’ll be tossed out on the street without any way to earn a living when you turn 35 or 40. Even though you’re still a perfectly good programmer.
20 Jan 2013 at 11:26
From Difference Engine: Edison’s revenge
It is true, and was the basis of Edison’s showmanship, that low-frequency alternating current can be more hazardous than an equivalent direct current. By oscillating at a similar (ie, close enough) frequency to the human heart, a sufficiently strong alternating current can cause that organ to beat arrhythmically and thereby induce ventricular fibrillation - a potentially deadly condition that needs to be corrected immediately.
This is the improved, edited version. How can a journalist claim that 50 to 60 Hz is close to the frequency of the human heart, which beats at 60 to 120 beats/minute (1 to 2 Hz)?
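For anyone who wants the arithmetic spelled out, a couple of lines of Python make the gap obvious:

```python
def bpm_to_hz(beats_per_minute):
    # frequency in Hz = cycles (beats) per second
    return beats_per_minute / 60.0

# A human heart at 60 to 120 beats/minute oscillates at 1 to 2 Hz.
heart_hz = (bpm_to_hz(60), bpm_to_hz(120))

# Mains power runs at 50 to 60 Hz - between 25 and 60 times faster
# than the heart. Hardly "a similar frequency".
assert 50 / heart_hz[1] == 25.0
assert 60 / heart_hz[0] == 60.0
```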
With corrections like this, I remember why I stopped reading the Economist.
07 Dec 2012 at 16:44
Gave a talk at Calgary Agile Methods User Group, slides from the talk are now available - Applying Craftsmanship - 20MB PDF
16 Nov 2012 at 20:29
The Atlantic has an article called When the nerds go marching in that tells a story about the comparative approaches of the Obama and Romney teams and how they built and tested their systems in the run-up to the 2012 US presidential election.
The Obama team had an interesting approach to the planning - Making it a game
Hatch was playing the role of dungeon master, calling out devilishly complex scenarios that were designed to test each and every piece of their system as they entered the exponential traffic-growth phase of the election. Mark Trammell, an engineer who Reed hired after he left Twitter, saw a couple of game days. He said they reminded him of his time in the Navy. “You ran firefighting drills over and over and over, to make sure that you not just know what you’re doing,” he said, “but you’re calm because you know you can handle your shit.”
13 Nov 2012 at 20:44
As usual, The Codist is slightly controversial and bluntly states that What Programmers Want is Less Stupid and More Programming.
So no matter what you do the best programmers will motivate themselves if you give them challenging code to write or problems to solve, and keep the stupid as far away as you can. Give them a work environment that makes this possible and consistent. Manage them with this understanding. Rewards are nice but the ultimate motivator is still opportunity.
In the end Andrew comes down to the Free Game theory of programmer motivation that was first popularized in Tracy Kidder’s book The Soul of a New Machine, but that does not detract from the overall thrust that you have to keep the stupid away from your developers.
12 Nov 2012 at 14:53
A political piece from last week’s election - We Need a Programmer for President.
It has an interesting take on the need for more emphasis on teaching programming in schools at an early age, rather than the more normal Computer Literacy which focuses on how to use the standard suite of applications.
21 Oct 2012 at 18:25
Just noticed that CNC machines are getting to be cheap as well. A sample guide to CNC machines looks at how they can be used in conjunction with moulding techniques to fabricate moulds for plastic parts as well as produce metal parts.
These CNC machines are not quite as cheap as the 3D printers, but they are in the ballpark - plus, if you create the moulds correctly, they can be used to scale up small scale manufacturing of plastic parts much better than you could with a 3D printer.
20 Oct 2012 at 20:20
Looks like we are starting to live in what could be called Interesting Times.
Although Moore’s Law still seems to be holding out a bit longer, individually the cores in CPUs are not that much faster than they used to be. We have been stuck near 3GHz for nearly 10 years now, and a common occurrence on servers and laptops is to see a process taking 100% of one core while overall the machine is running at only 25% or 12.5% load (depending on whether it is a 4 or 8 core machine). In order to get processes to run faster we are going to have to learn how to program with multicore CPUs in mind.
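A minimal sketch of the shift this requires, in Python: the same CPU-bound work run serially on one core, then spread across all available cores with the standard multiprocessing module (the workload function here is made up purely for illustration):

```python
import multiprocessing

def busy_work(n):
    """A deliberately CPU-bound task: sum of squares below n."""
    return sum(i * i for i in range(n))

def run_serial(tasks):
    # One core does everything; on a 4-core box the OS reports
    # roughly 25% total CPU utilisation while this runs.
    return [busy_work(n) for n in tasks]

def run_parallel(tasks):
    # The same tasks spread across every available core.
    with multiprocessing.Pool() as pool:
        return pool.map(busy_work, tasks)

if __name__ == "__main__":
    tasks = [200_000] * 8
    assert run_serial(tasks) == run_parallel(tasks)
```

The hard part, of course, is that most real workloads do not split into independent chunks this neatly - which is exactly why we have to learn to design for multicore rather than wait for faster clocks.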
Peak Oil seems to have occurred in the 2004-2007 timeframe, so the days of cheap fuel are behind us. In Canada fuel is still cheap, but $1/L is not something we have seen for a while. How society handles the transition to $2/L is going to be interesting. The effect of higher prices will have a double impact with the expected wild fluctuations in price that many analysts in the Peak Oil field are predicting. It is amusing however to watch local dealers having to do massive truck sales at the end of each year to get rid of their excess inventory of gas guzzling vehicles.
As we track towards 400ppm CO2, the thought that maybe Global Warming would be nice in a country with cold winters is turning out to be mistaken. A better term would have been Anthropogenic Climate Change; the resulting extreme weather, with a tendency to more arid conditions on the western edge of the prairies, is beginning to make things interesting.
The convergence of computers, open source and manufacturing will be having ramifications soon. The Maker Faire phenomenon of 3D printers and low cost CNC machines has been very instructive and may soon become disruptive when the costs of these technologies fall further. Already a 3D printer can be obtained for $1,000 with a resolution that rivals that of commercial machines costing 30X more. A good bet would be that this is likely to have a bigger impact than the arrival of low cost microcomputers that led to the PC era and subsequently our current Internet era.
11 Oct 2012 at 21:17
Looking to chemistry this time, here are six proposed Rules of Reproducibility.
- Were studies blinded?
- Were all results shown?
- Were experiments repeated?
- Were positive and negative controls shown?
- Were reagents validated?
- Were the statistical tests appropriate?
Many science papers are unfortunately weak when it comes to these rules, and in many fields #2 is a real problem - only the positive results are shown, the rest are hidden away and never seen.
05 Oct 2012 at 20:29
Dreamhost upgraded their servers to Rails 3.0.3 but this blog runs on a much older version.
I really need to upgrade this blog software when I get the chance.
16 Mar 2012 at 16:53
Found another interesting parallel between software development and running. The field of running and exercise is full of claims about special ideas that will drastically improve the performance of athletes. The Science of Sport site has a blog post on How to spot bad science and fads - determining whether an idea is worthwhile.
At a recent track meet I was having a conversation with a friend in college, who made the astute observation that if the coaches inserted random scientific terms to explain things, even if they were totally wrong, the runners seemed to buy into it more enthusiastically. That’s a very common reaction, we all do it. We associate science and complexity with being smart or correct. As I’ve said before…people trying to fool you go from simple to complex…good coaches translate complex things into simple understandable ideas.
In another post the same site talks about the value of research, theory and practice
… I often rely on what one of my Professors, Jason Winchester, called the three stool leg test. You have research, theory, and practice. If you have all three, it’s almost certainly a good idea to implement it. If you have 2 of 3, it’s fairly likely that it works and it depends on the strength of the 2. If you’ve only got 1 of 3 going for it, it probably doesn’t work. The beauty of using the 3 stool leg test is it blends science and practice, and complements it with theory which in itself is a blend of science and practice.
17 Feb 2012 at 16:34
Jim Bird has taken a look at how much is technical debt costing you. Nice to see that he ignores the dollar estimates per line of code that some authors use and just uses a simple $$$ through to $ notation.
$$$ Making a fundamental mistake in architecture or the platform technology – you don’t find out until too late, until you have real customers using the system, that a key piece of technology like the database or messaging fabric doesn’t scale or isn’t reliable, or …
$ Missing or poor error handling and exception handling. It will constantly bite you …
06 Feb 2012 at 14:44
Recently Jim Bird had to point out that Source Code is an Asset, Not a Liability. Unfortunately it means that there are people in the software development community that are not aware of the literature - specifically Howard Baetjer Jr.’s Software as Capital.
15 Jan 2012 at 18:46
Some interesting lessons for Software Development can be obtained from outside our field. I was reminded of this while reading a running blog that looked at what lessons could be gained from outside of the field of running coaching…
Rules of Everything
- When something is new or gains popularity, it is overemphasized until it eventually falls into its rightful place. How long that process takes varies greatly.
- Research is only as good as the measurement being used is.
- We overemphasize the importance of what we can measure and what we already know, ignoring that which we cannot measure and know little about.
- We think in absolutes and either/ors instead of the spectrum that is really present.
Point 1 helps explain a lot of the original hype/hope surrounding the agile approaches to software development.
Lessons from outside the running world
We go through a cycle of forgetting and remembering what’s been done before us. You see this in the reintroduction or re-emphasis of certain training methods in the coaching world. That’s why it is incredibly important to know your history. And if you can, know your history from a primary source where you attempt to look at it through their eyes during that time period. For example, going back and reading Lydiard’s original work gives a greater appreciation of what he was trying to do than reading someone’s summary now, 50 years later. We lose a little bit of the original message.
Sometimes there is useful information available from looking back at what worked in the past. Although many in the software field seem to try to forget the past, the pioneers in the field learned a lot, some of which is still applicable to our present circumstances.