The Cause of Poor Code

I said in my last post that I felt that the various codebases I’ve worked on were junk. I do not, however, think that the various programmers I’ve worked with were poor programmers. Every programmer surely makes mistakes once in a while, but the truth of the matter isn’t so black and white. While there are undeniably many reasons a precious codebase devolves into a disorganized mess, I’ll address the two that I think are the most predominant.

Handling Dynamic Requirements

When a new project is started, some understanding of what it will do must be delivered to the developers who will be working on it. This is a deliverable from the customers to the developers. It is an unfortunate truth, however, that users don’t know what they want. We can’t blame them for this: it’s often hard to describe, technical know-how is lacking, and even when things turn out exactly as planned, they often don’t hold up to the thing you envisioned in your head. The same is usually true for developers. But even if we don’t assign blame, the problem still causes issues (you have to understand a problem to solve it!).

But it doesn’t stop there. Even if the requirements were perfectly understood, once a user (or his/her replacement) has had a chance to use the tool you built, or after a few weeks, months, or years, a new requirement will invariably pop up. Updated requirements will cause difficulties in our codebase if handled incorrectly. I am not implying that changing requirements are a problem; it’s what developers do with the changing requirements that can be a problem. It is perfectly natural for requirements to change, and it should be part of any Agile project. The issue arises when a coder looks at the new requirement and, without doing a proper refactor, tries to make it work the same way another feature works when it was never intended to work that way.

The Solution

This is hardly the worst offender, but it is an issue. As alluded to above, the solution for this problem is relatively simple (at least on paper): relentless refactoring.

One of the reasons Relentless Refactoring is prized so much in Extreme Programming is that requirements are continually evolving and changing. Dynamic requirements demand dynamic code: a fresh look at the answer to yesterday’s problem and a conscious effort to morph it into the answer for today’s problem. This does not mean doing the minimum necessary to get it to work, and it certainly doesn’t mean copying and pasting some code and then making a few tweaks here and there. It means that we will iteratively take the code we have and change it into the code we need. Sometimes the answer is easy; sometimes a refactor involves substantially new code.
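To make that concrete, here is a toy sketch (hypothetical code, not from any real project). Suppose a shipping calculation was written when only domestic orders existed, and a new requirement adds international shipping. The copy-and-tweak reflex produces a second, nearly identical function; refactoring instead reshapes the original so one function serves both needs.

The original, written for yesterday’s problem:

function shippingCost(order) {
  // flat domestic rate, the only case that existed at the time
  return order.weight * 2.5;
}

After a small refactor for today’s problem (rather than pasting a tweaked copy):

function shippingCost(order, ratesByDestination) {
  // look up the rate for the order's destination, falling back to the domestic rate
  var rate = ratesByDestination[order.destination] || ratesByDestination.domestic;
  return order.weight * rate;
}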


The next issue is often the cause of a lack of proper refactoring: deadlines. Below I’ve quoted the best description I’ve ever seen of the problem:

A Big Ball of Mud is a haphazardly structured, sprawling, sloppy, duct-tape-and-baling-wire, spaghetti-code jungle. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated. The overall structure of the system may never have been well defined. If it was, it may have eroded beyond recognition. Programmers with a shred of architectural sensibility shun these quagmires. Only those who are unconcerned about architecture, and, perhaps, are comfortable with the inertia of the day-to-day chore of patching the holes in these failing dikes, are content to work on such systems.

—Brian Foote and Joseph Yoder, Big Ball of Mud. Fourth Conference on Pattern Languages of Programs (PLoP ’97/EuroPLoP ’97), Monticello, Illinois, September 1997 (emphasis added; this quote was copied from the Wikipedia entry on Big Ball of Mud)

Software engineers are often between a rock and a firearm when a new feature is requested. Every once in a while a manager will say that X (very large) project needs to be done by Y (way too soon) date. I’ll let you in on a little secret: this is counterproductive. Few developers work better under pressure. Developers (and I’d argue everyone) work best with enough work to keep us busy, but not so much that we don’t get to see our families or have a weekend.

What is often most frustrating in these situations (it’s happened this way to me before) is that the manager knew about the request from the CEO for months but didn’t think to discuss it with the developers. Because of the delay, the requirements weren’t developed properly, so everyone scrambles to understand a huge problem in a very short time and delivers their poorest work because they aren’t getting enough sleep.

Okay, rant over.

The Solution

A sustainable pace in software development requires more than just time. Planning is just as crucial, but not just any kind of planning: smart planning with the developers who will be working on the feature. What constitutes smart planning? A careful consideration of the features you want, with developer feedback taken into account.

I’m going to borrow an analogy from a former boss. Imagine, if you will, a triangle. Each point represents an aspect of software development in practice: one represents Time, another Features, and the last Quality. As with most things in life, you can’t have everything; you must choose which two points are most valuable to you. By imposing a deadline by which all of these features must be finished, you are choosing Time and Features and neglecting arguably the most important element on the diagram.

A more sensible approach is to choose one of the other two pairings. If you choose Time and Quality, you’ll have to prioritize which features are most important (which probably means reducing scope) and choose only what the developers are comfortable committing to complete within the allotted time. This is the option I would opt for most of the time.

You may also choose Quality and Features, which means it will get done when it gets done, but you’ll know that what you have is the right product with good quality. The drawback I see is that by ignoring the time element you often (though not always) lose the valuable feedback you get through iterative design.

Am I quixotic?

Probably. I’m that way a lot. But before you dismiss my musings, I hope you will consider the value each developer brings. In my next post in this series I’ll address this. As always, post your thoughts below.

A Better Way

Since the start of my programming career it has seemed to me that there must be a better way.

In college I really enjoyed learning how to “think like a computer” (as one teacher put it). I got very good at dissecting each problem and tuning the gears and syntax of a language to make programs work well. I felt destined to be a great coder, and I was excited for the future that surely awaited.

In the more advanced courses, shortly before graduation, I felt a resistance to some of the more complex patterns and practices, but I didn’t question them because I knew that I was just some silly undergraduate with no real experience—and besides, all I needed was time to understand the new practices that seemed a bit odd and I would be able to master those too. Despite my hesitance, I did very well in my classes. I really enjoyed my professors and it seemed that they were impressed by me as well.

And then I graduated.

I had a rude awakening in my first job when I realized how little I really knew and when I discovered how very wrong the code I inherited and was expected to curate was.

In each job I’ve begun I’ve made the same realizations—I don’t know enough and I’ll be working on some ugly code.

I’ve attempted to address the first issue, and I continue to educate myself (as well as can be expected for a full-time worker who also has a two-year-old).

As for the second point, however, I’m almost beginning to wonder if there exists a software project that I wouldn’t classify as abysmal in its architecture, organization, and/or general messed-up-ness. Perhaps the odds have been stacked against me and I’ve just been unlucky with the jobs that I’ve had, but the way other programmers talk, there are even worse fates than the piles of spaghetti code I’ve worked on.

This isn’t to say that all the code I’ve worked on has been completely bad; there were certainly strong points in each code base that I’ve worked on. It’s also certainly true that I’m not nearly as experienced as other software engineers, and perhaps I wouldn’t recognize good code if I saw it. But, well, I’m pretty arrogant and I don’t think that’s the case.

I’m starting to wonder if what Tolstoy says of happy and unhappy families can be said of software as well:

All good code is alike; all bad code is bad in its own way.

But one thing bothers me more than anything else: What is good code?

Is there one true way to code that is superior to all other methods? Is there a language that is better than all the rest? If there is a better way to organize, write, create, and author code, does anyone know what it is?

To be sure, there are a lot of opinions out there. I’m not interested in opinions! I want facts: something that can be proven and is measurable. There is a litmus test for this better way:

  • Are fewer bugs recorded?
  • Is the code easy to follow and understand?
  • Does it avoid unnecessary complexity and overhead (rather than the pomp and circumstance you see with a good number of frameworks)?
  • How easy is it to respond to errors?

I don’t purport to have all the answers. I may be close to understanding some of them, but there is also the danger of finding more questions along the way…

(This is the first post in a series: A Better Way.)

An Apologist’s Defense of Trunk-based Development

There are two prevailing thoughts about source code management in contemporary software development with multi-member teams: trunk-based development and the feature branch model (or pull-request model). Looking at GitHub alone will surely lead you to believe that the only way to develop with a distributed source control system is the pull-request model, but then there are technology pundits out there who opine that the only way to use git (or one of its relatives) is the trunk-based development model.

This disparity (the masses using one model, but the role models prescribing another) has understandably led to confusion in the tech industry. After reading Martin Fowler’s excellent description of Trunk Based Development (which he refers to by its more classical, but now overloaded, term: Continuous Integration), I found a few bloggers who didn’t like his suggestions. I’m going to address the article found here by James McKay (whom I will refer to as JM), and I will attempt to assuage some of his concerns and answer some of his questions in this post.

Continuous Integration is At Odds With Feature Branching

The first point that JM brings up is, “[Martin Fowler and Mike Mason] are saying that Feature Branching is incompatible with Continuous Integration.” I believe that the source of the confusion here is simply a case of semantics and history.

Continuous Integration (as I alluded to previously) has multiple meanings today. The history of the term is described very well on Wikipedia, but it would do well to quote the opening line: “Continuous integration (CI) is the practice, in software engineering, of merging all developer working copies with a shared mainline several times a day.” As you can see, its original meaning had nothing to do with build servers, but various vendors co-opted the term (quite successfully) and the original meaning is lost on some developers. Continuous Integration originally meant just that—integrating continuously: merging all the new code you wrote, as frequently as you can, with the latest code on a single branch. Everyone has the same version of the latest code. With Feature Branching you don’t share this promise. Everyone may have the latest code from a shared branch, but they also have their own dirty little secrets which none of the other branches share.

Git and Mercurial often show a representation of a commit history something like this for feature branches:

The blue branch in the middle is the shared, common, dev, develop, whatever-you-want-to-call-it branch. The plan is to release whatever is on the blue branch. The red and yellow branches are feature branches that have branched off of the blue branch. Unfortunately this image is misleading. The red and yellow branches may be merging frequently with the blue branch, but they aren’t as close to each other as they may seem. The red and yellow branches are actually drifting further and further apart. Something like the next image:

The blue branch, which is still getting commits (presumably from other branches), is still the same distance from both red and yellow, but red and yellow are much further apart. The more code that is added to red but not to yellow, or to yellow but not to red, the more different they become. The longer the code from red and yellow (as well as any other feature branches) goes without being merged together, the further apart the two branches will drift, even if they are continually merging with blue.

This leads us to a natural question about what happens when one of these branches merges with the shared blue branch. I’m glad you asked, I’ve got an image for that:


In the third image you see that the yellow branch is nowhere near the blue branch anymore. In fact it is further from blue than when it started, because the blue branch now also contains the red branch, so yellow is as far away from its parent as it was from the red branch.

This will remain the case until a merge occurs, which will likely cause a huge merge conflict. Merge conflicts happen even with trunk-based development, but there they occur at more regular and more manageable intervals (more on that later).

Obviously this is just an image, and I haven’t calculated the actual difference between the red and yellow branches (theoretically this could be done with an algorithm that tracked differences), but the point is clear: when using feature branches you are often in danger of working on a branch that is very different from the other feature branches, and when one of them merges before yours does, you will have to deal with the problem of figuring out how to merge two very different code bases together.

The problem highlighted is known as delayed integration which, just like it sounds, is exactly the opposite of Continuous Integration and means that the developers involved have waited to integrate their code with others’ code. That’s why feature branches are at odds with Continuous Integration, because they are nothing alike.

Merging Isn’t So Bad

JM declares that they didn’t feel that merging was so bad (maybe not in so many words). They’re right. But it is dangerous, and in more ways than one. In fact there are three ways that merging is dangerous. The first is obvious: the developer performing the merge may make a mistake. Maybe they misunderstood the other developer’s code, or they could have forgotten exactly what that piece of code was supposed to do. It’s true that this is a problem that can be retroactively rectified, but it’s still inconvenient.

The second is a little less obvious and doesn’t have anything to do with the actual merge process, but with the fact that merging must wait. The problem arises when you have to wait to share code! Just the other day I overheard two developers talking. One of them needed some code that the other had written, but they were working in two different branches. They spent probably ten minutes thinking of ways to get Git to share parts of one branch with the other without merging the whole branch.

The last way that merging is dangerous is that it gets more difficult with time. I like to think of it as gum on a sidewalk. If someone spits gum on the sidewalk (it wasn’t you, of course, because it is a nasty habit) it’s really quite easy to get it off the ground and into a trash bin. But if you wait a week, chances are that it has been stepped on and trampled, and it will take a long time to get it off (unless you have a high-powered pressure washer handy).

Merging little changes (like you do with trunk-based development) is usually painless, but the longer you let changes go without merging them, the greater the chance that your code will be difficult to merge. This is one of the best features of Trunk Based Development: small merges, frequently.

Feature Toggles

There seems to be a lot of fear about feature toggles. Whether it’s done using branch by abstraction, permissions, or some other method, a feature toggle is basically a way to avoid calling new code until the time comes to turn it on. JM feels that feature toggling is actually more dangerous than keeping code completely isolated until it is ready to be used, and I respect this fear. It’s true that there is a small amount of risk involved in toggling a feature that isn’t ready yet. But I want to point out a misunderstanding that he has and a benefit that feature toggles have that you wouldn’t get when using feature branches.
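As a minimal sketch of the idea (hypothetical code, not tied to any particular toggle library; a real system might read the flag from configuration, a database, or user permissions), the unfinished code path simply isn’t reached until the flag is flipped:

// Hypothetical configuration-driven feature toggle.
var featureToggles = {
  newCheckoutFlow: false // flip to true when the feature is ready
};

// Placeholder implementations for the sake of the example.
function legacyCheckout(cart) { /* existing, proven code */ }
function newCheckout(cart) { /* new, still-in-progress code */ }

function checkout(cart) {
  // Users never reach the new path while the toggle is off,
  // and turning it back off later is a one-line change.
  return featureToggles.newCheckoutFlow ? newCheckout(cart) : legacyCheckout(cart);
}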

First, James McKay says that feature toggling means releasing code that is untested. This is untrue. We must keep in mind that trunk-based development isn’t just a pattern for the repository—it is a pattern for the way we code as well. In trunk-based development one never pushes code that hasn’t passed every unit test or that doesn’t have unit tests written for it. If you are careful, end users should never be running the code that isn’t ready, but if by some small chance they do, it has been tested. (I don’t have time to get into the classicist versus mockist approaches to unit testing, but there are differences of opinion about that in the software development world too. I believe that if unit tests are written using the classicist approach, which tests more consistently with how a user may actually use or misuse your system, then those tests will be more than adequate at preventing bugs in feature-toggled software without manual tests of any kind.)

The other benefit to using feature toggles comes into play when everything is working as designed, but you want to turn a feature off for a business reason. Maybe you are using a social network’s OAuth 2 authentication for logging in to your site, but then a competitor acquires that network and you want to turn off everything in your site associated with it. If you’ve been using feature toggles, this becomes a simple matter of flipping the toggle back off—if not, you may need to go in and remove all traces by hand (introducing the chance for bugs and errors that wouldn’t have happened if you had used a configuration or abstraction to keep the feature from being released). Of course this assumes the mechanism you are using for your feature toggles is still in place and hasn’t been removed, but chances are good that it’s easier to flip a feature toggle than to remove and replace code by hand.

Whatever Merge Goes

I’ve compared and contrasted two version control models, Trunk Based Development and Feature Branches. There is, however, a third option that gets some usage. Unfortunately this model sometimes gets confused with Trunk Based Development, but the two are very different. There isn’t really an official name for it, but I like to call it Whatever Merge Goes meaning a haphazard, non-regulated method for software version control.

It’s confused with Trunk Based Development because there is usually only one shared branch. The difference is that the trunk branch is treated carelessly. Developers aren’t required to run unit tests before committing and pushing code, or even to write them. Stories are not polished before developers are expected to work on them (and the developers almost never helped define them), so there is a real risk that the features committed to for a coding cycle will only be half done by the deadline and won’t be able to be removed, which results in a traffic jam of last-minute changes and half-tested code.

Please don’t confuse a team working on a single shared branch with Trunk Based Development. Trunk Based Development requires discipline and diligence. Adequate tests, the discipline to run them and verify the build won’t break, and frequent pushes (not just when your code is finished but whenever it’s in a stable state) and pulls are all a vital part of Trunk Based Development. Between having no process and having Feature Branches, I’d choose Feature Branches too, even though that will only go so far to improve the situation.

When to Choose Feature Branches

In all professionally developed projects I would use Trunk Based Development. With personal projects (where you or a small group of friends are working on an application) I’d use Trunk Based Development. The only time I would consider feature branches is when working on an open source project where I didn’t know if I could trust the other developers contributing.

For an excellent resource regarding Trunk Based Development, refer to Paul Hammant’s blog; he has several articles about what Trunk Based Development is, why it’s better, and how companies like Facebook and Google use it.

Angular Dependency Injection

I just spent two awesome days at ng-conf 2014, where we were presented with a challenge to improve upon the already awesome Angular dependency injection. At first I didn’t really have any ideas. “The only way to improve the DI framework would require interfaces,” I thought. And I left it at that.

My subconscious mind, however, kept working on it and I woke up at five this morning with a few ideas. I admit that this solution needs some improvement, but I thought I’d at least write it down and see what comes of it.

One of the few weaknesses that Angular has in its dependency injection framework stems from the fact that JavaScript lacks interfaces. This doesn’t mean, however, that Angular couldn’t build an interface system. I’m not sure if these ideas could be used without a framework like Angular, so I write this document with the understanding that it would naturally fit into Angular.

Before I begin, I should explain a little about how I personally use Angular, and more importantly, how the various pieces used in Angular come together to help the code stay organized. When I finally convinced my employer to use Angular, the first thing we found is that it was very easy for everyone to have their own way of structuring their code. It is for this reason that I came up with the coding standard that we currently use.

There is, obviously, nothing in Angular that enforces this standard and I’ve seen a lot of good code that does it differently, but this is what makes most sense to me.

As you know, under the hood services and factories are the same thing in Angular. While I see no reason for this to change, I try to keep them and their purposes logically separated.

A factory is used only when dealing with things that need to be created or saved. In general this means that factories use $resource (or sometimes $http) to interact with our API. Controllers never interact with a factory directly; factories are always used by a service. Factories always have the suffix “Factory”.

A service is used to keep the logic for a certain thing together. This helps with code reuse and keeps the size of a controller to a minimum. Services are used directly from controllers and other services. Often services use factories to get or save data. Services are always prefixed with a dollar sign and a two-letter abbreviation for the company or project they belong to. (Example: if I’m building a task app called Task App, a users service may be named $taUsers.)
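To make the convention concrete, here is a minimal sketch (hypothetical names, assuming an AngularJS 1.x module called app with ngResource loaded and an API endpoint at /api/users) of a factory that wraps $resource and a service that controllers actually talk to:

// Factory: only knows how to create, fetch, and save resources.
app.factory('usersFactory', ['$resource', function ($resource) {
  return $resource('/api/users/:id', { id: '@id' });
}]);

// Service: holds the user-related logic and is what controllers use.
app.service('$taUsers', ['usersFactory', function (usersFactory) {
  this.getUser = function (id) {
    return usersFactory.get({ id: id }).$promise;
  };
  this.saveUser = function (user) {
    return usersFactory.save(user).$promise;
  };
}]);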

With that out of the way, I’ll explain what I think could be a good idea for Angular. I hope that there are others who can improve upon this because it has some spots that are a bit clunky.

I consider myself pretty new in the software engineering world, but when I think of dependency injection I think of interfaces. Most of my knowledge about DI is colored by a C# dependency injection framework: Windsor. The general capability that Angular lacks when it comes to dependency injection is having more than one implementation that satisfies a dependency, with the particular implementation chosen at runtime based on the DI framework’s configuration. The only problem is that to do this correctly there really need to be interfaces, or something that acts like an interface, in JavaScript. I’m not sure what the best practice would be for declaring the members of an interface, but this is one idea. (Using ECMAScript 5.)

For a “Pet Application”, this is how an interface would be created.

app.interface('iAnimalService', {
  species: angular.STRING,
  commonName: angular.STRING,
  numberOfLegs: angular.NUMBER,
  speak: function(duration){}
});
Implementing the interface.

app.service('$paDog', function(){
  angular.implements('iAnimalService', this); //will throw an error if the requirements are not met
  this.species = "Canis lupus";
  this.commonName = "dog";
  this.numberOfLegs = 4;
  this.speak = function(duration){
    //bark for the given duration
  };
});
You can see that the $paDog service implements the iAnimalService interface and that any deviation from the contract will cause an error to be thrown.
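To round out the idea, here is one way the rest of the proposal might look (entirely hypothetical, in the same spirit as the app.interface and angular.implements calls above): the application tells the injector which implementation backs the interface, and consumers depend on the interface name rather than on a concrete service.

// Hypothetical: bind the interface to the implementation the injector should use.
app.bindInterface('iAnimalService', '$paDog');

// A controller depends on the interface, not on $paDog directly,
// so the implementation can be swapped in configuration or in tests.
app.controller('PetCtrl', function(iAnimalService){
  iAnimalService.speak(3);
});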

I hope that this was helpful. I would love to hear feedback on this and how it could be improved or thoughts that others have.

How to Add a Widget to your WordPress Blog

My awesome wife asked me to do her a favor, “Can you write a tutorial on how to add an HTML widget to a WordPress blog?”

While I’m sure there are already probably fifty explanations of how to do it, well, I’ll write one anyway:

Step One: Smile. Think to yourself, “This is easy. I can do this.”

Step Two: Open your WordPress Admin page.

Step Three: Navigate to Appearance > Widgets

Step D: Select the widget type you want. If you’re doing an HTML widget, select “Text: Arbitrary text or HTML”

Step E: Drag the button to the sidebar (you may have more than one) that you want to have the widget on.

Step F: Stop worrying about the fact that I changed my enumeration system half way through.

Step G: Expand the widget editor by clicking on it and editing the title (optional) and the body of your widget (see image).

Step H: Click Save.

Step I: Check your changes by navigating to your blog.

Step J: If there are changes you want to make, go back to Step One, skip Step D, and continue until you have the widget the way you want it.

Step K: That is it.