26th September 2018, 10:41 AM   #117
Dr. Keith
Not a doctor.
Join Date: Jun 2009
Location: Texas
Posts: 22,574
Originally Posted by theprestige
The basic idea is that there are two broad paradigms for software development. The older paradigm, prevalent for many years, went something like this:
1. Define requirements.
2. Spend a year developing software to meet those requirements.
3. Release the software all at once.
4. Discover all the bugs and all the missed requirements and all the implemented requirements that nobody actually wanted.
5. Define new requirements.
6. Spend a year...

This paradigm was called "waterfall" software development, for some reason. Probably because you can visualise the process as a cascade of activity, Requirements > Development > Discovery.

The waterfall paradigm made a lot of sense in the days when software came on physical media. You'd buy a disk, insert it in the computer, install the software, and use it warts and all until the new version came out next year. Then you'd upgrade, and hope that the new version had more useful features and fewer bugs than the previous version. Mostly this worked.

But as we moved into an age when more and more software was running as a service over the Internet (Turbotax Online, for example), software companies realized they didn't necessarily have to wait six months or a year for a new version. They could fix bugs and add features as they were discovered. You could update your service every six weeks, or every six days. Or every six hours. And being able to produce beneficial updates quickly gave you an advantage over your competition.

But to do that well, a new paradigm was needed. The new paradigm needed to have a system for breaking down software development into small tasks that could be completed quickly, tested quickly, and released quickly. And, in order to be valuable, the new system had to link these tasks to specific tangible benefits to your users.

The implications of such a system are literally paradigm-shifting. Instead of your software developer laboring for a year on a massive code base, making hundreds of changes without really knowing if they're useful or wanted or even simply not harmful; your software developer can labor for a week on something he knows is desirable, and at the end of the week he can test it and know that it's working as intended. Shortly thereafter, that improvement that customers actually want - the bug fix, the new feature - can be released, and customers can be made happier thereby. This is, in a word, *awesome*.

And this awesomeness hinges on knowing what customers want. You know there's a bug that affects database performance, but have your customers even noticed it? Or are they all clamoring for a delete button that warns them before they delete stuff? Gathering that customer feedback, and using it to decide which development goals to prioritize, is critical to the success of the system.

This new paradigm is called "agile" software development, probably because it's the same basic cycle of activity as the waterfall, but done at a much faster pace. Customer feedback about what they're trying to do and what they expect from your software is called "user stories". The phrase "recording user stories" is the agile paradigm's term for defining requirements.

A mature agile development team will usually have a Product Owner assigned or embedded with the developers. Their job is to gather the user stories, prioritize them, and bring them to the developers. The developers, armed with the knowledge of what the users want, are then responsible for breaking the requirements down into incremental development tasks that will produce real improvements as each one is completed.
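To make the Product Owner's backlog concrete, here's a toy sketch in Python of user stories in the common "As a..., I want..., so that..." shape, sorted by priority so the team pulls the most valuable work first. The stories and field names are illustrative, not from any real tool:

```python
# Illustrative backlog of user stories; field names are made up.
# Lower priority number = more urgent, as in many tracking tools.
stories = [
    {"as_a": "analyst", "i_want": "faster database queries",
     "so_that": "reports finish sooner", "priority": 3},
    {"as_a": "shop owner", "i_want": "a delete warning",
     "so_that": "I don't lose data by accident", "priority": 1},
]

# The Product Owner's prioritization step, reduced to a sort:
backlog = sorted(stories, key=lambda s: s["priority"])

# The developers pick up the top story and break it into tasks.
next_story = backlog[0]
print(next_story["i_want"])  # "a delete warning"
```

Real teams use issue trackers rather than dictionaries, of course, but the core loop is the same: capture the story, rank it, work the top of the list.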

One side effect of the agile paradigm is that it requires an acceleration of the entire software development lifecycle *and* of all the tooling required to get a piece of code from the developer's head onto the customer-facing website. When I started out in systems administration, I could leave a software QA server down for a week or two while I worked on more important tasks. What's a week or two of QA downtime, on a year-long development cycle?

But when that same cycle is supposed to run multiple times a day, and the developer has committed to having some good thing ready for customers by the end of the week, even an hour of QA downtime really hurts. So the entire pipeline has gotten more robust, more efficient, and faster.

And more automated. When you're cycling through the entire process multiple times a day, you can't just hand a guy an installer and some instructions and tell him to upgrade the server so they can test the new version. Instead, you fill your pipeline with robots that do all that automatically, on the fly, all day every day. Instead of telling your quality engineer to manually step through all of the testing processes, you tell him to write automated testing scripts using a standardized suite of tools, so that his expertise can also be applied continuously by robots, without having to wait for human intervention.
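Here's what one of those automated testing scripts might look like, as a minimal Python sketch. The feature under test is the hypothetical "delete button that warns first" from earlier in the post; the function and test names are mine, not from any real codebase:

```python
def delete_with_confirmation(items, index, confirmed):
    """Hypothetical 'delete with warning' feature: nothing is
    removed until the user has confirmed the warning dialog."""
    if not confirmed:
        return items  # warning shown, list unchanged
    return items[:index] + items[index + 1:]

def test_unconfirmed_delete_is_a_no_op():
    assert delete_with_confirmation(["a", "b"], 0, False) == ["a", "b"]

def test_confirmed_delete_removes_item():
    assert delete_with_confirmation(["a", "b"], 0, True) == ["b"]

# A CI robot would discover and run these automatically on every
# commit (e.g. via a runner like pytest); here we call them directly.
test_unconfirmed_delete_is_a_no_op()
test_confirmed_delete_removes_item()
print("all checks passed")
```

Once the quality engineer's expertise is encoded in scripts like these, the robots can apply it on every single commit without waiting for a human.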

And that's basically my job: Administering an automated software delivery system, so that my developers can drop their code into a code repository, sit back, and let the robots do their job.
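The "drop code in the repo and let the robots do the rest" flow can be sketched as a chain of stages where each must pass before the next runs. This is a toy model in Python; real systems use dedicated tools (Jenkins, GitLab CI, and the like), and the stage names here are just illustrative:

```python
# Toy model of an automated delivery pipeline:
# commit -> build -> test -> deploy, stopping at the first failure.
def run_pipeline(commit, stages):
    results = []
    for name, stage in stages:
        ok = stage(commit)
        results.append((name, ok))
        if not ok:
            break  # a failed stage halts the pipeline; nothing ships
    return results

stages = [
    ("build",  lambda c: bool(c.get("compiles", True))),
    ("test",   lambda c: bool(c.get("tests_pass", True))),
    ("deploy", lambda c: True),
]

# A commit with a failing test never reaches the deploy stage.
print(run_pipeline({"compiles": True, "tests_pass": False}, stages))
```

The key property is that a broken commit stops itself: the deploy stage simply never runs, so customers never see the breakage.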

tl;dr - it means you start by finding out what your customers actually want, so that when you give them stuff it's stuff you know they'll be happy to get.
Thank you.

I'm more familiar with "agile" as used in physical manufacturing, not software. This was very enlightening.
Suffering is not a punishment not a fruit of sin, it is a gift of God.
He allows us to share in His suffering and to make up for the sins of the world. -Mother Teresa

If I had a pet panda I would name it Snowflake.