A log of articles I found for later reading. Not necessarily my point of view, though.

Thursday, December 25, 2008

When Agile Projects Go Bad

November 18, 2008 — CIO — Agile project management has taken the software development community by storm, with terms like sprint and Scrum becoming part of everyday team conversations. But as Agile techniques are incorporated into company practices, there exists the very real danger that Agile will be adopted in name, but not in spirit. With this in mind, we turned to the original authors of the 2001 Agile Manifesto for advice on how Agile can be subverted.

Paint-by-Numbers Innovation

A typical developer pain point is when Agile techniques are applied dogmatically, without thinking through whether a specific practice makes sense for a given task. Such adopters confuse checklists with real Agile adoption.

According to Alistair Cockburn, one of the original 17 signatories to the manifesto, some of this weakness can be traced back to how people acquire new skills. People tend to have three stages of skill acquisition, he says. "In the first stage, people need to follow a recipe. In the second stage, you say 'That's all very nice, but I need more,' so you go off and collect recipes and ideas and techniques. And the final stage, if you ever get there, is a level of fluency and intuitiveness where you can't say what you're doing, but you kind of borrow and blend."

As a result, says Cockburn, level-one thinking leads to the mentality of "I have my checklist, I have my certificate." He says, "In general, we tend to deplore it, because Agile is really about level two and level three." The experienced developers and team leads should be paying attention to how things are going. However, adds Cockburn, "Anyone who comes into the business will naturally, perhaps necessarily, ask for a checklist. So now you get the Scrum-master certification and the Agile checklist and the Scrum checklist and the XP [extreme programming] checklist; everybody wants a checklist. We need to get people out of that box and into an arena where they're thinking about principles and feelings, and not [about] a checklist."

Kent Beck, another Agile manifesto signatory, says it can be hard for organizations to take on the new challenges of Agile development. "Lately we've been talking about Agile in terms of words like 'self-awareness' and 'self-discipline.' And somehow it was an unnamed presupposition or an unnamed characteristic of a group of people who were successful using lightweight processes that they were pretty self-aware people, and that they would have a lot of self-discipline." But now, the Agile community is asking organizations to take on those characteristics. "They think they can just go by rote and issue some edicts," he says, and people will magically take on those attributes.

When companies apply Agile practices without self-examination, Beck says, peril lies ahead. When companies try to "do" Agile mechanically, he says, "We ask, 'Well, aren't you talking about it? About what's working and not working in the quality of your communications and your community?'" Because the initial community was self-motivated, those issues didn't need to be addressed early on. "These are the things we didn't think to say back in 2001, and now that we're seeing people applying it very mechanically, we're seeing what's missing," Beck says.

If You Don't Know Where You're Going, Agile Won't Get You There

Cockburn also sees teams use Agile as an excuse not to engage their customers at a detailed level. Developers don't have a plan, don't get requirements—and may not even be very good designers to start with.

Cockburn says, "They start going around in circles, and the customers or users try to tell them what they want, and the developers say 'No, no, no! That's too much information. Just give me the first sentence out of what you said and I'll go build it.' And then the users say 'But no, it's more complicated than that, let me tell you more.' The developers say 'That's all I need, we're doing increments, let me just build that.'" The result, of course, is that "They go around in circles, and they get lost, and they're over-budget," he says.

It's really a pretty predictable mess, says Cockburn. The people driving the process keep being told, "We don't need this much thinking." "I actually saw someone write on a [discussion] list, 'If you have to plan, you're not Agile enough.' And unfortunately, that really does bad things to projects," he adds.

Alternatively, the team may have a good rapport with their customers, but be unable to control the project scope. Another problem, says Cockburn, is when people don't get the size and shape of the system they're supposed to build; they only work from user stories. "The user stories multiply like rabbits. If you were to do a burn-up chart, where the line goes toward the ceiling, instead of down toward zero, you'd find that the ceiling is going up faster than the progress being made." That's depressing for everybody involved, Cockburn says.

"I'm not sure where this has come out of the manifesto," he wonders. "But what happens is that [developers] say 'We don't need to know how big the problem is, because we'll give you great stuff as we go along.' So they never figure out the shape of what it is they're building, which means they don't have a clue how much it's going to cost." The 'Agile' team might say, "This looks like 170 story points," but they don't have any basis for that estimate. It turns out to be 700 story points, "But they only discover that as they go along, much to the depression of everyone involved," Cockburn adds.

When All You Have Is A Hammer, You Want to Bang on Too Many Nails

It's also possible for developers to insist on using every tool in the Agile/Scrum tool-bag, even when it doesn't make sense. "I struggle at times to determine how to apply the principles to a particular task that I'm doing," Beck confesses. "It often comes down to a practice that makes sense in general, but that doesn't make sense in a particular situation." Even though he's been "doing Agile" for years, he can sometimes have a hard time figuring out how to apply it.

"I don't think that's a problem with Agile development per se," he says. "If I was a dyed in the wool waterfallist, I think the same thing would apply. You need to be able to get to a context where you can see that it doesn't apply to this situation."

But even when the techniques don't apply, the underlying principles of Agile—such as transparency and accountability—still do, according to Beck. "That makes sense whether I'm doing a technology spike or production programming or a research paper, or doing some kind of highly risky blue sky research. But how they apply is different, in different contexts." This can be especially true in early project stages.

It's good for teams to call their shots, and Beck says it's important to make commitments. That isn't easy when you don't know a lot about the user's needs, and it can be hard to know what to say with any certainty. Still, Beck says, you can explain to management and users, "At the end of the week, we're going to know something we didn't know before," even if you can't say that at the end of the week you'll have a specific feature. "Because you might not know if that feature is even possible, but you can still call your shots as far as what it is you expect to be able to learn," he says. "The principle of transparency holds even at an early stage. You might not expect to see a bunch of tests or some production-ready code. But it takes a kind of flexibility to apply that without clinging rigidly to the practice."

That's why the manifesto has a line about individuals and interactions over processes and tools, Beck says. It's not that processes and tools are not valuable. But in fluid situations, Beck's experience has shown, you're better off relying on people who can recognize the situation and respond to it creatively. That's far better than saying "Here's the point in the process where I'm supposed to write a test, so I'll write a test."

It's also possible for an individual to lose sight of the Agile principles and become focused solely on process. Several years ago, at a conference dinner, Beck met a team that had been working under a very dogmatic coach who insisted that everything had to be done in exactly "this" way. "The management had said that they had to do [Agile] or they would be fired, and they were furious, understandably so," says Beck. It's not that Agile was inappropriate, he points out. They could write tests, they could deliver something at the end of every week. All that stuff was physically possible. But, Beck says, "If you're going to ask someone to do something difficult or challenging, you need to build a relationship to begin with. You need to spend time maintaining that relationship through the difficult situation, and the coach hadn't done that."

It can also be very difficult to take when a well-functioning team suddenly finds themselves stuck with what they see as an unneeded burden. "It's certainly frustrating," says Beck, "if you have a process that's working well for you, and somebody comes along and says 'You have to change it, because everyone else is changing.' At that point, it's getting in the way of your progress."

Broken Engagements

Agile development also depends upon the engagement of the sponsors or customers in the process. That's a difficult transition for some, according to Cockburn. "It looked like for a while that we were pushing all the power down to the programmers, but in fact we were evening out the power and responsibilities. Everyone gets to feel awkward about that." In some organizations, he says, programmers ignored their bosses and built whatever they wanted.

At the other extreme, programmers expected the bosses or managers or sponsors to tell them accurately what the priorities were, which was not something the managers were used to. "So you get a breakdown on both sides," says Cockburn. "The sponsors aren't used to being asked to show up and care about the project, even [about] the requirements. ... They say, 'No, that's what we hired you to do.' The programmers respond, 'We don't know; we need you to help us figure it out.' And the sponsors say, 'We don't have time. Work it out yourselves.'"

Agile's Not Always Appropriate

Then there are places where Agile just isn't the right fit at all, says Cockburn, such as organizations with high turnover (e.g., franchises like McDonald's). There, the processes are really critical, and staff doesn't have the time to learn by apprenticeship because people are coming in and being trained all the time. Other venues where Agile isn't workable, he says, include situations where you need comprehensive documentation (life-critical systems or certain types of legal situations), when that documentation is more important than the working software.

Finally, Cockburn reminds us that Agile is intended for processes that will go through multiple iterations. "Agile is built on incremental development," he points out. It's not for one-shot projects, like a Bar Mitzvah or a wedding. ("You know you don't get more than one iteration on a wedding," he says.) To apply Agile principles in those scenarios, you have to plan very, very carefully. "You might still borrow a burn-down chart, you might borrow some techniques; but you're not going to get the advantage of multiple iterations [or] reflective improvement in that."

Agile as Religion: When Even Founders are Heathen

Finally: once individuals become familiar with Agile, either through training or practice, they can become inflexible and intolerant of people new to the process. Cockburn has seen this in action. "I'm one of the authors of the manifesto, so if I say something 'weird,' they can't tell me I don't understand Agile. But if someone else (and it doesn't matter how many years of experience they have) says something funny, they get told they don't understand Agile." That makes Cockburn gnash his teeth. "They can't yank my club card, but they can pretend you don't have one," he says.

Sometimes Agile principles are grossly misinterpreted, according to Cockburn. "I get called in by a CIO, CTO, any CXO, and they're suffering because their programmers are telling them 'You don't know anything about Agile. Agile means we don't have to give you a plan, Agile means we don't need an architecture'—a whole bunch of rubbish that isn't in the Agile manifesto." So Cockburn, or people like him, have to come in and tell the CIO that it's okay to ask for a plan and an architecture. "But it takes me to come and do it," Cockburn says. "If anyone else says it, they get told that they're just an old fuddy-duddy, [and that] they don't know anything about Agile."

via http://www.cio.com/article/464169/When_Agile_Projects_Go_Bad

The 12 Best Questions for Team Members

In their book First, Break All the Rules: What the World's Greatest Managers Do Differently, Marcus Buckingham and Curt Coffman present the twelve best questions you can ask your team members to determine how they feel about their jobs.

These questions are the result of one of the largest research efforts ever performed on the topic of people management.

In order to get a feel for your team’s motivation, you would do well to use these questions as the basis for an in-depth conversation. You might want to do this in a formal way, with an objective interviewer and a linear scale of ratings, so that you can carry out some statistical analysis after the interviews. Or you can just do this informally, by pasting the list onto the coffee machine. And whenever you “coincidentally” meet someone while getting a cup of coffee, you make sure to point at one of the questions on the list and have a little chat about it.

These are the 12 topics, and how you could tackle them:

1. Do you know what is expected of you at work?
Do you know the right and wrong ways of doing your work? Or do you think nobody would notice the difference if you switched to coding on your office walls with a crayon marker?

2. Do you have the materials and equipment you need to do your work right?
Are tools and processes supplied and customized for you? Or do you feel that buying and carrying your own stone tablets for writing source code would be an improvement over your current situation?

3. Do you get the opportunity to do what you do best every day?
Are your talents being used to their fullest potential? Or would your mother and her Chihuahua have just about the same chance at success if they attempted to take over the work that you do?

4. Did someone recently give you recognition or praise for doing good work?
Is anyone noticing you’re making a difference, in a positive way? Or are they more eager to point at that minor error that almost completely blew up the company, but actually didn’t?

5. Do your colleagues seem to care about you as a person?
Are they interested in your hobbies, your spouse, your friends and family? Or are you not comfortable enough to reveal that you have this fascination for bonsai trees, and are planning to marry one?

6. Are you encouraged to work on your (self-)development?
Has someone discussed with you how you can further improve as a person? Or do you think they wouldn’t even care if you changed into Captain Code, saving the world from bugs and bad formatting?

7. Do people make your opinion count?
Are your colleagues listening to you? Or might you just as well talk to the receptionist’s hairdresser, for all the good it would bring you?

8. Do you feel that your job is important?
Do you understand how your work is a significant part of the value your organization tries to create? Or do you feel that your holidays have about the same amount of impact on the success of your organization?

9. Are your colleagues committed to doing quality work?
Do you feel that doing the best work possible is important to your co-workers? Or do they care more about their working times, their dogs, their collections of beer bottles, and the value of their stock?

10. Do you (or would you like to) consider some colleagues as friends?
Do you enjoy meals, movies, games, or even holidays with some of your colleagues? Or do you intentionally live on the other side of the country, staying as far away from them as possible?

11. Does someone care about the progress of your work?
Is someone interested in what you do at work, and how your work is coming along? Or do they prefer to talk about the weather, or their own heroic stories of saving the company?

12. Are you given the opportunity (time/resources) to learn and grow?
Do you have the means and ability to improve the way you do your work? Or are you expected to learn and grow on your own time, while walking the dog, or taking a shower?

These were the 12 best topics for in-depth discussions with your team about their work. Feel free to change the wording of the questions, and experiment with the optimal setup for your interviews (which may vary per person).

Just make sure you talk about this stuff…

via http://agilesoftwaredevelopment.com/blog/jurgenappelo/12-best-questions-team-members

The Software Product Myth

Most developers start as salaried employees, slogging through code and loving it because they never imagined a job could be challenging, educational, and downright fun. Where else can you learn new things every day, play around with computers, and get paid for it? Aside from working at Best Buy.

A certain percentage of developers become unhappy with salaried development over time (typically it’s shortly after they’re asked to manage people, or maintain legacy code), and they dream of breaking out of the cube walls and running their own show. Some choose consulting, but many more inevitably decide to build a software product.

“After all,” they think, “you code it up and sell it a thousand times - it’s like printing your own money! I build apps all the time; how hard could it be to launch a product?”

Against All Odds
And most often, the developer who chooses to become a consultant (whether as a freelancer or working for a company) does okay. She doesn’t have a ton of risk and she gets paid for the hours she works, so as long as she has consulting gigs she can live high on the hog.

But developers who make the leap to develop a product are another story. Building a product involves a large up-front time investment, and as a result is far riskier than becoming a consultant because you have to wait months to find out if your effort will generate revenue. In addition, growing a product to the point of providing substantial income is a long, arduous road.

But let’s say, for the sake of argument, that you spend 6 months of your spare time and you now own a web-based car key locator that sells 100 copies per month at $25 a pop. At long last, after months of working nights and weekends, spending every waking moment poring over your code, marketing, selling, and burning the midnight oil, you’re living the dream of a MicroISV.

Except for one thing.

The Inmates are Running the Asylum
In our completely un-contrived scenario you’re now making $2500/month from your product, but since you make $60k as a salaried developer you’re not going to move back in with your parents so you can quit your day job. So you work 8-10 hours during the day writing code for someone else, and come home each night to a slow but steady stream of support emails. And the worst part is that if you’ve built your software right the majority of the issues will not be problems with your product, but degraded OS installations, crazy configurations, a customer who doesn’t know how to double-click, etc…

The next step is to figure out, between the 5-10 hours per week you’re spending on support and the 40-50 hours per week you spend at work, how you’re going to find time to add new features. And the kicker is that the support burden actually worsens with time because your customer base grows: after 1 month you have 100 customers with potential problems; after a year, 1,200.

And yes, the person you decided to sell to even though they complained about the high price ($25) and then couldn’t get it installed on their Win95 machine so you spent 3 hours on the phone with them and finally got it working only through an act of ritual sacrifice is still hanging around, emailing you weekly wondering when the next release is coming out with his feature requests included (requests that not a single one of your other 1,199 customers has conceived of).

But you persevere, and manage to slog your way through the incoming support requests and get started on new features.

What you find is that ongoing development, as with any legacy system, is much slower than green field development. You’re now tied to legacy code and design decisions, and you soon realize this isn’t what you signed up for when you had that brilliant flash of insight that people need web-based help locating their keys.

It’s about this time that support emails start going unanswered, releases stop, and the product withers on the vine. It may wind up for sale on SitePoint, or it may be relegated to the bone yard of failed software products.

The Upside
The flip side to all of this is what you’ve already heard on the blogs of successful product developers.

Once a product hits critical mass, you’ve conquered the hardest part of the equation. After that the exponential leverage of software products kicks in and you can live large on your empire of web-based unlocking-device locator applications. It’s a recurring revenue stream that can grow far beyond what you would make as a consultant, all the while creating balance-sheet value, meaning one day you can sell it for stacks of proverbial cash and retire.

This is unlike your consultant buddy, whose consulting firm is worth about 42 cents (he had an unused stamp on his desk) once he decides to retire.

But there is a dip before you get to this place of exponential leverage and proverbial cash. A big dip. And if you can get through it once, it’s more likely that you’ll be able to get through it with your next product. And the one after that.

Once you make it to the other side, you’ve learned what it takes to launch and maintain a product, and next time you will have a monumentally better chance of success because you are now a more savvy software entrepreneur.

Congratulations! Go buy yourself a nice bottle of wine and sit back, relax…and enjoy answering your support emails.

via http://www.softwarebyrob.com/2008/11/18/the-software-product-myth/

10 Fail Proof Tips for Delivering a Powerful Speech

We've all heard the statistic that says people fear public speaking more than just about anything else. The good news is that if you can focus on these 10 tips, you'll be on your way to breaking past the fear and on to delivering a powerful speech that engages your audience.

1. Condense Your Main Message. Ideally you should get it down to a 30 second blurb. How do you do that? Start with the goal of your speech. Is it to convey knowledge? If so, what is the main thrust? Perhaps your goal is to inspire your audience to take action. If so, what action do you want them to take? Maybe your goal is to make your audience feel something. This is your overarching message. Write your first draft without attention to length. Once you've done that, condense it. When you deliver your speech, touch on it at least 3 times or whenever appropriate, but be sure to include it in the beginning, middle, and especially at the end.

2. Have Three Main Points. Even if you need to cover many different ideas, try to categorize them into 3 main points that all tie back to your Main Message. Let your audience know you'll be talking about those 3 main points in the beginning. This will help them to follow along, especially if you are not using visual aids. And keep in mind that this holds true regardless of the length of your speech.

3. Include only the Most Powerful Data and Facts. Like preparing your main message, collect all the data you think you might want to include in your speech. Then go through it all and include only the data that helps you dramatically drive home your main message and your 3 main points. Less is more. If the data doesn't pack a punch don't include it.

4. Visual Aids. Keep Powerpoint presentations as concise as possible. Use as few words as you can and whenever possible use pictures and graphics instead.

5. Speech Outline Cue Cards. If you must use a prompt, use flash cards that only contain an outline of your speech with the main topics and facts. Reading from a script will sound like just that and will most likely not engage your audience.

6. Practice. Show of hands: how many of you prefer to "wing it" when making a speech? Ok, you're not alone. Now another show of hands: how many of you get up to the podium and think "Oh crap, I should have practiced"? Yeah, ok, so you know where I'm headed with this. Practice. The goal is not to be able to deliver the exact words verbatim. The goal is to memorize your outline, to sound natural, and to feel relaxed while delivering your speech. Practice your speech at least 5 times, recording yourself at least once. This will help you to edit your delivery.

7. Release the Nerves. Before giving your speech release some of your nervous energy and pump up your confidence with some physical body movements that show strength. Do some jumping jacks. Raise your arms high. Jog around the hallway listening to some music. Stop about 5 minutes before your speech to do one last brief review of your notes.

8. The Pep Talk. Before you go "on stage" give yourself a pep talk. You could say something like: "I am going to deliver a powerful speech today. People will understand my powerful message and will be inspired to take action. And I am going to have fun doing this! I can't wait!"

9. Smile and Have Fun. Make it a point to enjoy giving the speech. Have fun. What's the worst that could happen? You could fall down, sneeze, get dry mouth, have shallow breathing, and so on. Who cares? If it happens just keep smiling and if appropriate refer to it to produce an easy laugh for your audience. But then move on. Just like an ice skater in a competition, keep smiling no matter what and if you fall down, just get back up and keep going.

10. When You Can't Smile. Ok, the exception to smiling and having fun is if you need to deliver a very serious or solemn message. If that's the case, then cultivate the proper tone by taking a few moments before the speech to close your eyes and visualize how you want to sound and look. Visualization is a powerful form of practice.

via http://www.dumblittleman.com/2008/01/delivering-powerful-speech.html

Agile Development Projects and Usability

 

Summary:
Agile methods aim to overcome usability barriers in traditional development, but pose new threats to user experience quality. By modifying Agile approaches, however, many companies have realized the benefits without the pain.

Depending on how they're handled, Rapid Application Development (RAD) processes such as Agile and Scrum can enhance or threaten user experience quality.

The Promise of Agile Methods

Agile methods address three issues that have long vexed usability professionals:

  • For 50 years, almost all experience has shown that traditional waterfall development methods result in a poor user experience. The reason is simple: requirement specifications are always wrong.
    • At best — when derived with care — the requirements might reflect what users want. More commonly, however, they reflect the desires of user "representatives" who are too far removed from the coalface to know the details of the real work. In any case, what users want and what users need are two different things, which is why it's long been a primary usability guideline to watch what users do, rather than listen to what they say.
    • If years go by between writing the requirements and delivering the product, the users' needs will have likely changed, putting even more distance between the requirements and the needs.
    • Over the past 25 years, work in usability has shown that one of the best ways to evaluate a design's quality is by watching users interact with it (through either a functional or mocked-up screen). Again, if years go by before the developers do this, most of their development effort will have been spent producing the wrong design.
  • Documents further down the waterfall also cause problems. The only thing worse than having developers deviate from the design specs is having developers implement the design specs to the letter. Many issues arise during an interaction design's detailed implementation; developers can't resolve these issues in ways that enhance usability when the design work was completed long ago and the user experience professionals have long since left the building.
  • Since 1989, the "discount usability engineering" movement has demonstrated that fast and cheap usability methods are the best way to increase user experience quality because developers can use them frequently throughout the development process. However, this doesn't happen if there's only one milestone in the process that calls for usability input.

Agile methods hold promise for addressing the many ways in which traditional development methodologies erect systematic barriers to good usability practice.

The Threat of Agile Methods

Agile's biggest threat to system quality stems from the fact that it's a method proposed by programmers and mainly addresses the implementation side of system development. As a result, it often overlooks interaction design and usability, which are left to happen as a side effect of the coding. This, of course, contradicts all experience of the last 30 years, in which user experience's importance in system development has steadily increased as we moved from mainframes to PCs to the Web. As the user base and the use cases have expanded, the need for top-notch usability has grown.

To construct a quality user experience, development teams need interaction design and usability methods. For smaller teams, this doesn't necessarily require dedicated designers and usability professionals. It's perfectly feasible for developers to do interaction design and usability. But a team must recognize these two activities as explicit development methodology components, whether the people doing them have design or usability as their main job or simply as one of several roles they perform.

For a project to take interaction design and usability seriously, it must assign them "story points" (i.e., resources) on an equal footing with the coding.

Another issue is that, with Agile, a product's development is broken down into smaller parts that are completed one at a time. Such an approach risks undermining the concept of an integrated total user experience, where the different features work consistently and help users build a coherent conceptual model of the system. At worst, the user interface can end up resembling a patchwork.

To address this, teams can design storyboards and prototypes that embed the user interface architecture and use these tools as reference points for designing individual features. To avoid spending too much time up front, teams can design low-fidelity prototypes — such as paper prototypes — that don't require coding. Just like we've always advocated.

Agile teams typically build features during fairly brief "sprints" that usually last around 3 weeks. With such tight deadlines, developers might bypass usability because they assume there's no time to do testing or other user research.

The solutions here are threefold:

  • Perform usability activities, such as user testing, in a few days. One fruitful approach is to plan for testing before you know exactly what will be available for testing. Weekly tests are completely feasible and give you a surefire way to integrate several user feedback rounds within even the shortest sprint.
    • We have a 3-day course on how to perform a complete round of user testing by actually testing the team on its own project. You can do this type of quick testing in a day. And, it takes less than a day to both prepare the test and analyze its findings.
  • Most successful teams have adopted a parallel track approach, where the user experience work is continuously done one step ahead of the implementation work. So, by the time such teams start to develop a feature, the initial user experience work on it has just completed.
  • Finally, we need foundational user research that goes beyond feature development. Ideally, organizations should conduct this work before a development project even starts. Also, bigger companies should house basic knowledge about user work flows, personas, and usability guidelines outside individual projects so it can be reused for years across many projects.

Making Agile and Usability Work

There are good reasons to believe that usability and Agile development methods can work together and improve user experience quality:

  • Agile offers many opportunities for overcoming problems with traditional development methods that have long impeded usability.
  • Approaching Agile narrowly, as a programming methodology rather than a system development methodology, threatens to destroy the last decade's progress in integrating usability and development. But, as outlined above, there are ways around each of these threats. So long as teams recognize the threats as explicit issues, they need not harm product quality.
  • Finally, we know from our research that many companies have made things work swimmingly — once they adapted the Agile methodology to suit quality-focused system development.

This assessment of the opportunities, threats, and empirical facts about what works is based on two rounds of research:

  1. A survey of 105 design and development professionals.
  2. In-depth case studies on Agile projects at 12 companies.

Although the data shows much promise, we also found several cases of poor outcomes, emphasizing the need for companies to take explicit steps to integrate Agile and usability.

For user experience practitioners who support Agile teams, the main change is in mindset. Having good, general user experience knowledge will help you understand how to change traditional design and evaluation methods to meet your Agile team's different focus. Ultimately, however, you must both believe in yourself and embrace Agile development concepts if you want to succeed.

If you're prepared to change your practices and take on the responsibility, there are great opportunities to improve your effectiveness and your impact on the teams you support.

Learn More

A 95-page report, Best Practices for User Experience on Agile Development Projects, is available for download.

 

via http://www.useit.com/alertbox/agile-methods.html

Things Caches Do

There are different kinds of HTTP caches that are useful for different kinds of things. I want to talk about gateway caches — or, “reverse proxy caches” — and consider their effects on modern, dynamic web application design.

Draw an imaginary vertical line, situated between Alice and Cache, from the very top of the diagram to the very bottom. That line is your public, internet facing interface. In other words, everything from Cache back is “your site” as far as Alice is concerned.

Alice is actually Alice’s web browser, or perhaps some other kind of HTTP user-agent. There’s also Bob and Carol. Gateway caches are primarily interesting when you consider their effects across multiple clients.

Cache is an HTTP gateway cache, like Varnish, Squid in reverse proxy mode, Django’s cache framework, or my personal favorite: rack-cache. In theory, this could also be a CDN, like Akamai.

And that brings us to Backend, a dynamic web application built with only the most modern and sophisticated web framework. Interpreted language, convenient routing, an ORM, slick template language, and various other crap — all adding up to amazing developer productivity. In other words, it’s horribly slow and bloated… and awesome! There’s probably many of these processes, possibly running on multiple machines.

(One would typically have a separate web server -- like Nginx, Apache or lighttpd -- and maybe a load balancer sitting in here as well but that's largely irrelevant to this discussion and has been omitted from the diagrams.)

Expiration

Most people understand the expiration model well enough. You specify how long a response should be considered “fresh” by including either or both of the Cache-Control: max-age=N or Expires headers. Caches that understand expiration will not make the same request until the cached version reaches its expiration time and becomes “stale”.

A gateway cache dramatically increases the benefits of providing expiration information in dynamically generated responses. To illustrate, let’s suppose Alice requests a welcome page:

Since the cache has no previous knowledge of the welcome page, it forwards the request to the backend. The backend generates the response, including a Cache-Control header that indicates the response should be considered fresh for ten minutes. The cache then shoots the response back to Alice while storing a copy for itself.

Thirty seconds later, Bob comes along and requests the same welcome page:

The cache recognizes the request, pulls up the stored response, sees that it’s still fresh, and sends the cached response back to Bob, ignoring the backend entirely.

Note that we've experienced no significant bandwidth savings here — the entire response was delivered to both Alice and Bob. We see savings in CPU usage, database round trips, and the various other resources required to generate the response at the backend.
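To make the expiration model concrete, here is a minimal sketch of a backend handler that declares the welcome page fresh for ten minutes. It uses only Python's standard library; the handler name, port, and the ten-minute lifetime are illustrative assumptions of mine, not details from the article.

```python
# Illustrative sketch: a backend response that a gateway cache may reuse
# for ten minutes without contacting the backend again.
import time
from email.utils import formatdate
from http.server import BaseHTTPRequestHandler, HTTPServer

class WelcomeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<h1>Welcome</h1>"
        self.send_response(200)
        # Relative freshness lifetime: ten minutes.
        self.send_header("Cache-Control", "public, max-age=600")
        # Expires carries the same information as an absolute date.
        self.send_header("Expires", formatdate(time.time() + 600, usegmt=True))
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), WelcomeHandler).serve_forever()
```

With headers like these in place, a gateway cache such as rack-cache or Varnish can answer Bob's request from its stored copy for the next ten minutes without bothering the backend.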

Validation

Expiration is ideal when you can get away with it. Unfortunately, there are many situations where it doesn’t make sense, and this is especially true for heavily dynamic web apps where changes in resource state can occur frequently and unpredictably. The validation model is designed to support these cases.

Again, we'll suppose Alice makes the initial request for the welcome page:

The Last-Modified and ETag header values are called “cache validators” because they can be used by the cache on subsequent requests to validate the freshness of the stored response without requiring the backend to generate or transmit the response body. You don’t need both validators — either one will do, though both have pros and cons, the details of which are outside the scope of this document.

So Bob comes along at some point after Alice and requests the welcome page:

The cache sees that it has a copy of the welcome page but can’t be sure of its freshness so it needs to pass the request to the backend. But, before doing so, the cache adds the If-Modified-Since and If-None-Match headers to the request, setting them to the original response’s Last-Modified and ETag values, respectively. These headers make the request conditional. Once the backend receives the request, it generates the current cache validators, checks them against the values provided in the request, and immediately shoots back a 304 Not Modified response without generating the response body. The cache, having validated the freshness of its copy, is now free to respond to Bob.

This requires a round-trip with the backend, but if the backend generates cache validators up front and in an efficient manner, it can avoid generating the response body. This can be extremely significant. A backend that takes advantage of validation need not generate the same response twice.
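As a sketch of the backend's side of that exchange (illustrative, not the article's code), the handler below derives an ETag from the page content and answers a conditional request with 304 Not Modified instead of regenerating and resending the body.

```python
# Illustrative sketch: conditional GET handling with an ETag validator.
import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<h1>Welcome</h1>"

class WelcomeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A deterministic validator derived from the current content.
        etag = '"%s"' % hashlib.sha1(PAGE).hexdigest()
        if self.headers.get("If-None-Match") == etag:
            # The cache's stored copy is still good: send headers only.
            self.send_response(304)
            self.send_header("ETag", etag)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("ETag", etag)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    HTTPServer(("", 8000), WelcomeHandler).serve_forever()
```

In a real application you would compute the validator from something cheaper than the finished body (a timestamp or a version column, say); hashing a constant here just keeps the sketch short.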

Combining Expiration and Validation

The expiration and validation models form the basic foundation of HTTP caching. A response may include expiration information, validation information, both, or neither. So far we've seen what each looks like independently. It’s also worth looking at how things work when they're combined.

Suppose, again, that Alice makes the initial request:

The backend specifies that the response should be considered fresh for sixty seconds and also includes the Last-Modified cache validator.

Bob comes along thirty seconds later. Since the response is still fresh, validation is not required; he’s served directly from cache:

But then Carol makes the same request, thirty seconds after Bob:

The cache relies on expiration if at all possible before falling back on validation. Note also that the 304 Not Modified response includes updated expiration information, so the cache knows that it has another sixty seconds before it needs to perform another validation request.
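A hypothetical handler combining both models might look like the sketch below: every response carries a sixty-second lifetime and a Last-Modified validator, and a successful revalidation returns 304 with the freshness headers renewed, which is exactly why the cache can then go another sixty seconds without asking.

```python
# Illustrative sketch: expiration and validation in one handler.
from email.utils import formatdate
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<h1>Welcome</h1>"
LAST_MODIFIED = formatdate(usegmt=True)  # frozen at startup for this sketch

class WelcomeHandler(BaseHTTPRequestHandler):
    def _freshness_headers(self):
        # Sent on 200 and 304 alike, so each revalidation restarts
        # the sixty-second freshness clock.
        self.send_header("Cache-Control", "public, max-age=60")
        self.send_header("Last-Modified", LAST_MODIFIED)

    def do_GET(self):
        if self.headers.get("If-Modified-Since") == LAST_MODIFIED:
            self.send_response(304)  # revalidation: headers only, no body
            self._freshness_headers()
            self.end_headers()
            return
        self.send_response(200)      # first request: full body
        self._freshness_headers()
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    HTTPServer(("", 8000), WelcomeHandler).serve_forever()
```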

More

The basic mechanisms shown here form the conceptual foundation of caching in HTTP, not to mention the Cache architectural constraint as defined by REST. There’s more to it, of course: a cache’s behavior can be further constrained with additional Cache-Control directives, and the Vary header narrows a response’s cache suitability based on headers of subsequent requests. For a more thorough look at HTTP caching, I suggest Mark Nottingham’s excellent Caching Tutorial for Web Authors and Webmasters. Paul James’s HTTP Caching is also quite good and a bit shorter. And, of course, the relevant sections of RFC 2616 are highly recommended.

(Oh, and the diagrams were made using websequencediagrams.com, a very simple, text-based sequence diagram generating web service thingy.)


via http://tomayko.com/writings/things-caches-do

What we learn from the dying

A doctor shares what his patients’ last moments have taught him

 

In medicine even the skillful ones, surgeons and physicians, themselves from Death all turn and flee — Fear of Death unhinges me.
William Dunbar (1465–1530), translation by T.E. Holt, M.D.

"Dude! You totally Melvined Death!"
Bill & Ted's Bogus Journey (1991)

My first day of medical school was a series of inspirational talks. The tone, set by the anesthesiologist who led off, was lighthearted. His subject was "Everything you will ever need to know about medicine." This turned out to be just three things, which he had us all recite: Air goes in and out. Blood goes round and round. Oxygen is good. Just keep these in mind, he said, and you'll be okay.

By the end of the day, we were as blank as the huge whiteboards at the front of the room. Within the next 24 hours, these would start filling up with diagrams of cell-transport mechanisms, cartoons of developing embryos, maps of the brachial plexus. But on that first day, the lectures were so inconsequential that only one speaker bothered to write anything down. This was a pathologist who also wanted to reduce medicine to its essentials. He scrawled a single word on the board: DEATH.

Just avoid this one thing, he said, and we'd be okay.

The word stayed up there on the whiteboard the rest of the day. I waited for someone to notice and wipe it away, but no one did. It was gone the next morning, replaced by the Krebs cycle, that happy intracellular Rube Goldberg mechanism that keeps us all alive, whether you can diagram it from memory or not, thank God.

Whoever scribbled the Krebs cycle in place of that single stark word gave us our real orientation to medicine. Despite death's modest appearance that first day, what we were really learning wasn't "Don't Fear the Reaper" so much as "Don't See the Reaper."

We don't like to find that word staring down at us from the wall. If we do, we'll hang it on somebody else, shrouding it behind a screen of medical abbreviations, and then we'll be gone. The word's still there — it follows us, of course, as the moon follows a moving car — but as long as we don't have to keep looking at it, we're okay.

The problem is, death keeps looking at us. When I'm forced to think about this, what I see most clearly are the faces of patients at the moment they recognized the incredible fact that they were going to die soon. This is what I can't forget: the look they had as they read the writing on the wall like Belshazzar did at his feast in the Bible story, faced at the height of his power with the message that he was about to die. Just what people see as they read that message is, I suspect, the most important fact about death. I know that fact escapes my grasp, but I keep reaching for it, all the same.

He was 18 years old with cystic fibrosis. By unspoken agreement, we had left him until last on morning rounds, because overnight the lab had analyzed his blood and cultured Burkholderia cepacia — an organism that flourishes in the pus that overwhelms the lungs in end-stage cystic fibrosis. It's notoriously resistant to antibiotics. (It's been found growing on penicillin.) Once B. cepacia escapes the lungs and enters the bloodstream, death is inevitable: sepsis, circulatory collapse, multiorgan system failure, the end.

After a muttered conversation in the hallway, we edged into the room. I was nervous: I was going to have to tell this kid he was dying. He was awake, sitting up in bed. The room was dark. It had that lived-in look CFers cultivate — posters, clothes strewn everywhere, a game console flickering on idle. A wasted-looking father slumped in the corner chair. The patient watched us file in. When I saw the expression on his face, my anxiety about what I was going to say seemed suddenly unimportant.

He knew. He already knew. He barely listened as I reported what we had learned from the lab. Then there was silence. He looked back at me as if I weren't there and said, "I'm going to die, aren't I?"

It wasn't really a question, the way he said it. My answer was as irrelevant as everything else that we had left to offer him. The attending stepped in and started talking, but I could tell the patient wasn't listening.

A year or so later, I was the resident on the oncology service, responsible for two dozen or more patients, all of whom were doing badly. Doing badly with cancer means terrible things: organs malfunctioning as tumors squeeze them off, pain that soaks up morphine like water, treatments with a list of possible side effects that includes death.

Into this substation of hell one day walked a strong man in his early 40s, looking about as healthy as a man can look, though perhaps a little pale. Earlier that day, a blood test had revealed a swarm of misshapen, blue-stained cells that should have been functioning parts of his immune system but instead were leukemia. He was in what they call "blast crisis"; our job was to help him survive the night so he could start chemotherapy in the morning.

Over the course of that night, his blood levels of oxygen started to drop, his left eyelid developed a droop, and I had to explain to him that if I didn't insert this honking big catheter into his femoral vein, he wasn't going to live to see the morning.

I could see him change. He had walked in as a functioning adult. He had asked intelligent questions before signing the consent form. He had been calm, helpful, determined. He had a pleasant smile. That was until about 4 p.m. As things started to unravel, he became at first bewildered, then querulous, and then, as the leukocytes started clogging the capillaries of his brain, confused. He tried not to groan as I probed for that vein in his groin, but despite the lidocaine, when I sliced into his skin to widen the opening for the catheter, he screamed. After that he settled into a silence that deepened throughout the night.


A Double Handful of Programming Quotes

I’m busy tidying up a few loose ends with work at the moment, before family arrive for Xmas - and I just haven’t had any time for in-depth articles. So instead of my own words, here are a few of my favourites from other people:

“Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.”

- Brian Kernighan

“There are only two kinds of languages: the ones people complain about and the ones nobody uses.”

- Bjarne Stroustrup

“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”

- Martin Fowler

“There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.”

- C.A.R. Hoare

“Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves.”

- Alan Kay

“Measuring programming progress by lines of code is like measuring aircraft building progress by weight.”

- Bill Gates

“If you want to set off and go develop some grand new thing, you don’t need millions of dollars of capitalization. You need enough pizza and Diet Coke to stick in your refrigerator, a cheap PC to work on and the dedication to go through with it.”

- John Carmack

“Programs must be written for people to read, and only incidentally for machines to execute.”

- Abelson / Sussman

“Question: How does a large software project get to be one year late? Answer: One day at a time!”

- Fred Brooks

“Nobody should start to undertake a large project. You start with a small trivial project, and you should never expect it to get large. If you do, you’ll just overdesign and generally think it is more important than it likely is at that stage. Or worse, you might be scared away by the sheer size of the work you envision. So start small, and think about the details. Don’t think about some big picture and fancy design. If it doesn’t solve some fairly immediate need, it’s almost certainly over-designed. And don’t expect people to jump in and help you. That’s not how these things work. You need to get something half-way useful first, and then others will say “hey, that almost works for me”, and they’ll get involved in the project.”

- Linus Torvalds

via http://www.hackification.com/2008/12/23/a-double-handful-of-programming-quotes/

The Programmer's Bill of Rights

 

It's unbelievable to me that a company would pay a developer $60-$100k in salary, yet cripple him or her with terrible working conditions and crusty hand-me-down hardware. This makes no business sense whatsoever. And yet I see it all the time. It's shocking how many companies still don't provide software developers with the essential things they need to succeed.

I propose we adopt a Programmer's Bill of Rights, protecting the rights of programmers by preventing companies from denying them the fundamentals they need to be successful.

The Bill of Rights

  1. Every programmer shall have two monitors

    With the crashing prices of LCDs and the ubiquity of dual-output video cards, you'd be crazy to limit your developers to a single screen. The productivity benefits of doubling your desktop are well documented by now. If you want to maximize developer productivity, make sure each developer has two monitors.

  2. Every programmer shall have a fast PC

    Developers are required to run a lot of software to get their jobs done: development environments, database engines, web servers, virtual machines, and so forth. Running all this software requires a fast PC with lots of memory. The faster a developer's PC is, the faster they can cycle through debug and compile cycles. You'd be foolish to pay the extortionist prices for the extreme top of the current performance heap-- but always make sure you're buying near the top end. Outfit your developers with fast PCs that have lots of memory. Time spent staring at a progress bar is wasted time.

  3. Every programmer shall have their choice of mouse and keyboard

    In college, I ran a painting business. Every painter I hired had to buy their own brushes. This was one of the first things I learned. Throwing a standard brush at new painters didn't work. The "company" brushes were quickly neglected and degenerated into a state of disrepair. But painters who bought their own brushes took care of them. Painters who bought their own brushes learned to appreciate the difference between the professional $20 brush they owned and cheap disposable dollar store brushes. Having their own brush engendered a sense of enduring responsibility and craftsmanship. Programmers should have the same relationship with their mouse and keyboard-- they are the essential, workaday tools we use to practice our craft and should be treated as such.

  4. Every programmer shall have a comfortable chair

    Let's face it. We make our livings largely by sitting on our butts for 8 hours a day. Why not spend that 8 hours in a comfortable, well-designed chair? Give developers chairs that make sitting for 8 hours not just tolerable, but enjoyable. Sure, you hire developers primarily for their giant brains, but don't forget your developers' other assets.

  5. Every programmer shall have a fast internet connection

    Good programmers never write what they can steal. And the internet is the best conduit for stolen material ever invented. I'm all for books, but it's hard to imagine getting any work done without fast, responsive internet searches at my fingertips.

  6. Every programmer shall have quiet working conditions

    Programming requires focused mental concentration. Programmers cannot work effectively in an interrupt-driven environment. Make sure your working environment protects your programmers' flow state, otherwise they'll waste most of their time bouncing back and forth between distractions.

The few basic rights we're asking for are easy. They aren't extravagant demands. They're fundamental to the quality of work life for a software developer. If the company you work for isn't getting it right, making it right is neither expensive nor difficult. Demand your rights as a programmer! And remember: you can either change your company, or you can change your company.

via http://www.codinghorror.com/blog/archives/000666.html

Unit Testing, TDD and the Shuttle Disaster

I was reading the Feynman report about the Shuttle disaster, “Appendix F - Personal observations on the reliability of the Shuttle”, and I was freaked out by the similarities between military engine development and bottom-up, test-driven development. There is a small passage in the report about how military engines are built:

The usual way that such engines are designed (for military or civilian aircraft) may be called the component system, or bottom-up design. First it is necessary to thoroughly understand the properties and limitations of the materials to be used (for turbine blades, for example), and tests are begun in experimental rigs to determine those. With this knowledge larger component parts (such as bearings) are designed and tested individually. As deficiencies and design errors are noted they are corrected and verified with further testing. Since one tests only parts at a time these tests and modifications are not overly expensive. Finally one works up to the final design of the entire engine, to the necessary specifications. There is a good chance, by this time that the engine will generally succeed, or that any failures are easily isolated and analyzed because the failure modes, limitations of materials, etc., are so well understood. There is a very good chance that the modifications to the engine to get around the final difficulties are not very hard to make, for most of the serious problems have already been discovered and dealt with in the earlier, less expensive, stages of the process.

This sounds a lot like Unit Testing to me: writing small parts of an application, testing each part, then integrating it. And even if this is not TDD (not possible with hardware?), it sounds similar, quite unlike writing all the code first and the tests last.

Compare this approach with the way NASA designed the Shuttle Main Engine:

The Space Shuttle Main Engine was handled in a different manner, top down, we might say. The engine was designed and put together all at once with relatively little detailed preliminary study of the material and components. Then when troubles are found in the bearings, turbine blades, coolant pipes, etc., it is more expensive and difficult to discover the causes and make changes. For example, cracks have been found in the turbine blades of the high pressure oxygen turbopump. Are they caused by flaws in the material, the effect of the oxygen atmosphere on the properties of the material, the thermal stresses of startup or shutdown, the vibration and stresses of steady running, or mainly at some resonance at certain speeds, etc.? How long can we run from crack initiation to crack failure, and how does this depend on power level? Using the completed engine as a test bed to resolve such questions is extremely expensive. One does not wish to lose an entire engine in order to find out where and how failure occurs. Yet, an accurate knowledge of this information is essential to acquire a confidence in the engine reliability in use. Without detailed understanding, confidence can not be attained.

A further disadvantage of the top-down method is that, if an understanding of a fault is obtained, a simple fix, such as a new shape for the turbine housing, may be impossible to implement without a redesign of the entire engine.”

This sounds a lot like traditional, up-front software development, with the same problems. When errors occur, "are they caused by flaws in the material [...]", or where do they come from? In a complex system it is hard to decide which component is the root cause of an error. Strikingly, Feynman sees another corresponding disadvantage of top-down versus bottom-up: problems that arise may be too big to fix in a conventional way, so the engine architecture has to be redesigned. This happens with software too. If you do too much up-front architecture, you may end up with an architecture that doesn't fit your problem, which usually means a long and difficult rewrite, something you should only do as a last resort. Going bottom-up, ideally with Test Driven Development (TDD), you are unlikely to end up with the wrong architecture, provided you make merciless small refactorings and path adjustments along the way. And an architecture that was driven by unit testing is usually flexible enough to react to whatever changes come later (scalability, performance and so on).

Comparing the success of bottom-up engine development with the Shuttle Main Engine's problems shows convincingly that developing in small steps, component by component, with merciless testing produces components that are easy to debug and have a low error rate. You should test more.

Thanks for listening. As ever, please do share your thoughts and additional tips in the comments below, or on your own blog (I have trackbacks enabled).

via http://stephan.reposita.org/archives/2008/11/17/unit-testing-tdd-and-the-shuttle-disaster/

Wednesday, December 24, 2008

Frequently Forgotten Fundamental Facts about Software Engineering

This month's column is simply a collection of what I consider to be facts—truths, if you will—about software engineering. I'm presenting this software engineering laundry list because far too many people who call themselves software engineers, or computer scientists, or programmers, or whatever nom du jour you prefer, either aren't familiar with these facts or have forgotten them.

I don't expect you to agree with all these facts; some of them might even upset you. Great! Then we can begin a dialog about which facts really are facts and which are merely figments of my vivid loyal opposition imagination! Enough preliminaries. Here are the most frequently forgotten fundamental facts about software engineering. Some are of vital importance—we forget them at considerable risk.

Complexity

C1. For every 10-percent increase in problem complexity, there is a 100-percent increase in the software solution's complexity. That's not a condition to try to change (even though reducing complexity is always desirable); that's just the way it is. (For one explanation of why this is so, see RD2 in the section "Requirements and design.")

People

P1. The most important factor in attacking complexity is not the tools and techniques that programmers use but rather the quality of the programmers themselves.

P2. Good programmers are up to 30 times better than mediocre programmers, according to "individual differences" research. Given that their pay is never commensurate, they are the biggest bargains in the software field.

Tools and techniques

T1. Most software tool and technique improvements account for about a 5- to 30-percent increase in productivity and quality. But at one time or another, most of these improvements have been claimed by someone to have "order of magnitude" (factor of 10) benefits. Hype is the plague on the house of software.

T2. Learning a new tool or technique actually lowers programmer productivity and product quality initially. You achieve the eventual benefit only after overcoming this learning curve.

T3. Therefore, adopting new tools and techniques is worthwhile, but only if you (a) realistically view their value and (b) use patience in measuring their benefits.

Quality

Q1. Quality is a collection of attributes. Various people define those attributes differently, but a commonly accepted collection is portability, reliability, efficiency, human engineering, testability, understandability, and modifiability.

Q2. Quality is not the same as satisfying users, meeting requirements, or meeting cost and schedule targets. However, all these things have an interesting relationship: User satisfaction = quality product + meets requirements + delivered when needed + appropriate cost.

Q3. Because quality is not simply reliability, it is about much more than software defects.

Q4. Trying to improve one quality attribute often degrades another. For example, attempts to improve efficiency often degrade modifiability.

Reliability

RE1. Error detection and removal accounts for roughly 40 percent of development costs. Thus it is the most important phase of the development life cycle.

RE2. There are certain kinds of software errors that most programmers make frequently. These include off-by-one indexing, definition or reference inconsistency, and omitting deep design details. That is why, for example, N-version programming, which attempts to create multiple diverse solutions through multiple programmers, can never completely achieve its promise.
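
To make RE2 concrete, here is a small invented sketch of the off-by-one class of error; the function names and data are illustrative only.

```python
def sum_readings_buggy(readings):
    # Off-by-one: range(len(readings) - 1) stops before the last index,
    # so the final reading is silently dropped.
    total = 0
    for i in range(len(readings) - 1):
        total += readings[i]
    return total

def sum_readings(readings):
    # Iterating over the values removes the index arithmetic that caused
    # the defect in the first place.
    return sum(readings)

assert sum_readings_buggy([1, 2, 3]) == 3   # wrong: the last value is lost
assert sum_readings([1, 2, 3]) == 6         # expected result
```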

RE3. Software that a typical programmer believes to be thoroughly tested has often had only about 55 to 60 percent of its logic paths executed. Automated support, such as coverage analyzers, can raise that to roughly 85 to 90 percent. Testing at the 100-percent level is nearly impossible.
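
The "automated support" RE3 mentions can be as simple as a coverage analyzer. Below is a hedged sketch that drives coverage.py, a real Python coverage tool, through its documented start/stop/report API; the classify and quick_check functions are invented for the example, and the exact report layout depends on the tool version and how it is invoked.

```python
import coverage

def classify(n):
    # Hypothetical code "under test".
    if n < 0:
        return "negative"
    return "non-negative"

def quick_check():
    # Only the non-negative path is exercised here.
    assert classify(5) == "non-negative"

cov = coverage.Coverage(branch=True)   # also track branch (decision) coverage
cov.start()
quick_check()
cov.stop()
cov.save()
cov.report(show_missing=True)          # lists lines and branches never run
```

The missing-lines output should include the negative branch of classify, exactly the kind of unexecuted path that a programmer's informal sense of "thoroughly tested" overlooks.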

RE4. Even if 100-percent test coverage (see RE3) were possible, that criterion would be insufficient for testing. Roughly 35 percent of software defects emerge from missing logic paths, and another 40 percent are from the execution of a unique combination of logic paths. They will not be caught by 100-percent coverage (100-percent coverage can, therefore, potentially detect only about 25 percent of the errors!).
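
RE4 is easy to demonstrate with another invented example: every line of the function below is executed by its test, so a coverage tool would report 100 percent statement coverage, yet the real defect is a logic path that was never written at all.

```python
def average(values):
    # Defect: the empty-list case was never considered, so there is no
    # guard clause for coverage to flag. The missing path has no lines.
    return sum(values) / len(values)

def test_average():
    assert average([2, 4, 6]) == 4   # executes every line of average()

test_average()    # passes, with full statement coverage of average()
# average([])     # but this call would raise ZeroDivisionError in production
```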

RE5. There is no single best approach to software error removal. A combination of several approaches, such as inspections and several kinds of testing and fault tolerance, is necessary.

RE6. (corollary to RE5) Software will always contain residual defects, after even the most rigorous error removal. The goal is to minimize the number and especially the severity of those defects.

Efficiency

EF1. Efficiency is more often a matter of good design than of good coding. So, if a project requires efficiency, efficiency must be considered early in the life cycle.

EF2. High-order language (HOL) code, with appropriate compiler optimizations, can be made about 90 percent as efficient as the comparable assembler code. But that statement is highly task dependent; some tasks are much harder than others to code efficiently in HOL.

EF3. There are trade-offs between size and time optimization. Often, improving one degrades the other.
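
EF3 shows up even inside a single function. The sketch below, invented for illustration, buys time with space: memoization makes the recursive computation fast, but every cached result stays in memory for the life of the cache, and the same tension appears in classic code-size techniques such as loop unrolling or inlining.

```python
from functools import lru_cache

def slow_fib(n):
    # Small and stateless, but exponential running time.
    return n if n < 2 else slow_fib(n - 1) + slow_fib(n - 2)

@lru_cache(maxsize=None)
def fast_fib(n):
    # Time optimization via memoization: roughly linear time, at the cost
    # of keeping every previously computed value in memory.
    return n if n < 2 else fast_fib(n - 1) + fast_fib(n - 2)

print(fast_fib(200))   # immediate; slow_fib(200) would effectively never finish
```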

Maintenance

M1. Quality and maintenance have an interesting relationship (see Q3 and Q4).

M2. Maintenance typically consumes about 40 to 80 percent (60 percent average) of software costs. Therefore, it is probably the most important life cycle phase.

M3. Enhancement is responsible for roughly 60 percent of software maintenance costs. Error correction is roughly 17 percent. So, software maintenance is largely about adding new capability to old software, not about fixing it.

M4. The previous two facts constitute what you could call the "60/60" rule of software.

M5. Most software development tasks and software maintenance tasks are the same—except for the additional maintenance task of "understanding the existing product." This task is the dominant maintenance activity, consuming roughly 30 percent of maintenance time. So, you could claim that maintenance is more difficult than development.

Requirements and design

RD1. One of the two most common causes of runaway projects is unstable requirements. (For the other, see ES1.)

RD2. When a project moves from requirements to design, the solution process's complexity causes an explosion of "derived requirements." The list of requirements for the design phase is often 50 times longer than the list of original requirements.

RD3. This requirements explosion is partly why it is difficult to implement requirements traceability (tracing the original requirements through the artifacts of the succeeding lifecycle phases), even though everyone agrees this is desirable.

RD4. A software problem seldom has one best design solution. (Bill Curtis has said that in a room full of expert software designers, if any two agree, that's a majority!) That's why, for example, trying to provide reusable design solutions has so long resisted significant progress.

Reviews and inspections

RI1. Rigorous reviews commonly remove up to 90 percent of errors from a software product before the first test case is run. (Many research findings support this; of course, it's extremely difficult to know when you've found 100 percent of a software product's errors!)

RI2. Rigorous reviews are more effective, and more cost effective, than any other error-removal strategy, including testing. But they cannot and should not replace testing (see RE5).

RI3. Rigorous reviews are extremely challenging to do well, and most organizations do not do them, at least not for 100 percent of their software artifacts.

RI4. Post-delivery reviews are generally acknowledged to be important, both for determining customer satisfaction and for process improvement, but most organizations do not perform them. By the time such reviews should be held (three to 12 months after delivery), potential review participants have generally scattered to other projects.

Reuse

REU1. Reuse-in-the-small (libraries of subroutines) began nearly 50 years ago and is a well-solved problem.

REU2. Reuse-in-the-large (components) remains largely unsolved, even though everyone agrees it is important and desirable.

REU3. Disagreement exists about why reuse-in-the-large is unsolved, although most agree that it is a management, not technology, problem (will, not skill). (Others say that finding sufficiently common subproblems across programming tasks is difficult. This would make reuse-in-the-large a problem inherent in the nature of software and the problems it solves, and thus relatively unsolvable).

REU4. Reuse-in-the-large works best in families of related systems, and thus is domain dependent. This narrows its potential applicability.

REU5. Pattern reuse is one solution to the problems inherent in code reuse.

Estimation

ES1. One of the two most common causes of runaway projects is optimistic estimation. (For the other, see RD1.)

ES2. Most software estimates are performed at the beginning of the life cycle. This makes sense until we realize that this occurs before the requirements phase and thus before the problem is understood. Estimation therefore usually occurs at the wrong time.

ES3. Most software estimates are made, according to several researchers, by either upper management or marketing, not by the people who will build the software or by their managers. Therefore, the wrong people are doing estimation.

ES4. Software estimates are rarely adjusted as the project proceeds. So, those estimates done at the wrong time by the wrong people are usually not corrected.

ES5. Because estimates are so faulty, there is little reason to be concerned when software projects do not meet cost or schedule targets. But everyone is concerned anyway!

ES6. In one study of a project that failed to meet its estimates, the management saw the project as a failure, but the technical participants saw it as the most successful project they had ever worked on! This illustrates the disconnect regarding the role of estimation, and project success, between management and technologists. Given the previous facts, that is hardly surprising.

ES7. Pressure to achieve estimation targets is common and tends to cause programmers to skip good software process. This constitutes an absurd result done for an absurd reason.

Research

RES1. Many software researchers advocate rather than investigate. As a result, (a) some advocated concepts are worth less than their advocates believe and (b) there is a shortage of evaluative research to help determine the actual value of new tools and techniques.

There, that's my two cents' worth of software engineering fundamental facts. What are yours? I expect, if we can get a dialog going here, that there are a lot of similar facts that I have forgotten—or am not aware of. I'm especially eager to hear what additional facts you can contribute.

And, of course, I realize that some will disagree (perhaps even violently!) with some of the facts I've presented. I want to hear about that as well.

Robert L. Glass is the editor of Elsevier's Journal of Systems and Software and the publisher and editor of The Software Practitioner newsletter. Contact him at rglass@indiana.edu; he'd be pleased to hear from you.

via http://www2.computer.org/portal/web/buildyourcareer/fa035
