I’m having a great discussion with Larry Maccherone over on his blog.
I’ve included my reply here, which will make more sense if you’ve read
his post first. Excerpts from his posting are in the block quotes below.
Thanks for chatting with me about this. It’s helping me to understand
and articulate my ideas, and it’s stretching my thinking.
It struck me when you said this:
Agilists will argue that feedback from actual running code is the best way to battle complexity and scale. Tighten the feedback loop with actual code. You can make more rapid progress by building something, even the wrong thing, and re-building it, than you can by wasting time modeling it and thinking about it because the assumptions you make when thinking about it always miss something important. You’ll call that something an insignificant detail. They’ll say, it’s the details that end up getting you.
I’m a strong supporter of the spiral model and iterative development.
Many folks who propose doing architecture do indeed suggest you do Big
Design Up Front. I disagree (in most cases). When an agile developer
sits at the keyboard, he’s thinking of what he’s building before he
types. He’s not a monkey getting lucky typing Shakespeare. “Am I
violating a design principle here? Is it OK for the front end to talk
directly to the DB?” I know agile developers think about these
things, and these are things I consider part of architecture.
One of the points I’m trying to hammer home in the book is that
your architecture effort should match the risk. Small risks
can be resolved at the keyboard in a coding session. Bigger risks may
need prototyping or modeling. Really serious risks might require
formalisms and analysis (e.g., avoiding priority inversion, as happened
on the Mars Pathfinder).
I may have hinted at something that makes you think I’d disagree with
“Tighten the feedback loop with actual code”, and I want to find where
that is and not only eliminate it but actually reinforce that I agree
with writing actual code. Here are some excerpts from Chapter 2: “We
could keep adding models that explained more about our system, but
spending effort building and maintaining models must be traded off
against building the system” … “We must prototype our design to
ensure it is viable before we commit more time and effort.”
… “Prototyping can accelerate learning. Problems that seem easy at
10,000 feet often have unforeseen difficulties. Use research and
prototyping to discover assumptions baked into COTS components, and
express those assumptions in your models so you do not have to
re-discover them later.” and then in the summary “A point that should
be clear is that architecture modeling by itself is
insufficient. Before modeling, we usually need to learn about our
domain, which takes many forms. Prototyping is essential before
proceeding too far with modeling, because continuing to work on a
model with unvalidated assumptions is itself risky.”
Re-reading what I wrote (and I’m not sure if you’ve looked at Ch2
anyway), I’m concerned that people could see “prototyping” and think
“throw-away”, but what I’m trying to convey is that, yes, there will be
details and wrinkles that your models won’t see and you must write
code to get feedback. Those are most certainly not insignificant
details. I do not want to tether this idea (write code!) to Agile
specifically, but maybe it would be clearer to discuss that this need
to reduce risk by writing code is one of the Agile core principles.
I think you will lose the argument if someone comes up with a story about how Google’s (substitute, amazon web services, apache, or some other well known “big” system) massively scalable (although arguably not complex) system was created iteratively without any serious architecture work. Then, your approach to architecture is no longer design; it’s archeology, a concise way to document a system after a real engineer built it. I suspect that Eclipse was built with a more structured approach. Do you know if they followed an architecture approach?
I’m pretty sure that engineers/developers on large systems have some
way of thinking about the big chunks, and they analyze that thinking
to ensure the design gets them where they want to go, then they write
some code to be really sure. Of course, once they write code they
learn it’s more complex than they thought or that their design had
bugs in it, etc., and they iterate. The point I’m making is that they
neither 1) randomly write code, nor 2) fail to have the big picture in
mind before they write code. Of course you’re not suggesting that
either, but I want to make sure we agree that there’s thinking about
the design before coding. So let’s agree to call that big picture a
model, even if they do not write it down, and call that thinking design.
Perhaps I need to make the point more clearly that I’m not advocating
the following: Developers should think hard about problems, fire up a
CASE tool and design a solution, then code it up. Iterate as needed.
Closer to what I advocate is: When your system is big and complex you
won’t be able to build it unless you can fit the problem and solution
in your head. This means constructing models, which by definition
elide details. These models might be in your head, on a whiteboard,
or in a CASE tool. You will use these models to convince yourself
that the code you are about to write (i.e., the design) will probably
work (i.e., achieve functions and qualities). Serious risks might force
you to write out detailed models that you analyze carefully, but small
risks might entail whiteboard sketches and quick prototyping. NASA is
probably going to build more models than MyStartup.com, but perhaps
friendster.com would not have failed because of performance problems
if they had done more design work.
(BTW, I have no inside knowledge of the Friendster case, but my
reading of the post-mortems says that they wrote a system that didn’t
scale with users, were convinced to rewrite it in a different
programming language, which also did not scale, then changed
programming languages again but also tweaked mySQL’s implementation to
make their queries more efficient, which finally gave them acceptable
performance but too late. The paper I read had the flavor of “my
favorite programming language rocks” but it seemed clear that the DB
tweaks were the solution, and the PL was not relevant to the scaling
problem.)
Another way to lose the argument is by frameworks. I know architects whose job is mostly done once they decide what framework to use. Can you make the case that your approach could be used for them to make this decision? Be careful, most folks make the choice based upon things like how little boilerplate it makes you write, and the syntax of the templating language (Exhibit A). I see tons of discussion and thought go into the difference between SQLAlchemy and Hibernate’s approach to ORM versus RoR’s or Django’s. Can you address these things with your approach?
Frameworks are a great example of where almost every client program is
a big ball of mud, with only a few exceptions — like Eclipse since it
encourages building sub-frameworks using OSGi plugins/components.
I’ve just deleted a huge rathole I was starting about what appears to
be desirable vs. what the project actually needs. We can go
into that later if you like. I bet we can agree that there are many
cases where developers might prefer to use X because it’s fun,
elegant, new, etc., but not the best choice for the project. Of course
other times you’d be foolish to use dull or inefficient tools. I
think it’s worth questioning if my liking to use language/tool X
translates into project success.
Currently frameworks and architecture models are oil and vinegar. If I did
another dissertation it might be on this topic. It’s on my list of
things that are still hard about architecture.
I’m not sure what the right answer is for you. I’ve never been much of a believer myself. I hate to say that without reading the other draft chapters. I’ll probably do that some time but I don’t have the time right now.
It’s not clear what you are expressing concern about: is it the idea
of using architecture abstractions, or spending time drawing out
models? One of the novel (I think) parts of the book is that you
should do just enough modeling to discharge your risks, then start
building. Other books tell you how to draw out all kinds of models
about everything. This one says no, just model when you are worried,
and stop modeling when you stop worrying. It could be that for most
projects that Agile is applied to (small to medium IT projects are its
bread and butter, or at least they used to be) that there are
relatively few risks worth spending much time on. They copy the
standard styles (usually N-tier or J2EE) and standard solutions (e.g.,
use the DB to handle concurrency), so most of the time it works out.
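To make “use the DB to handle concurrency” concrete: one such standard solution is optimistic locking with a version column, letting the database’s atomic UPDATE arbitrate between concurrent writers. A minimal sketch (the table and column names are my invention for illustration, not from the book):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY,"
             " balance INTEGER, version INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100, 0)")

def withdraw(conn, account_id, amount):
    """Read the row, then update it only if nobody changed it in between.
    Returns True on success, False if a concurrent writer won the race
    (the caller should re-read and retry)."""
    balance, version = conn.execute(
        "SELECT balance, version FROM account WHERE id = ?",
        (account_id,)).fetchone()
    cur = conn.execute(
        "UPDATE account SET balance = ?, version = ? "
        "WHERE id = ? AND version = ?",    # the version check is the whole trick
        (balance - amount, version + 1, account_id, version))
    conn.commit()
    return cur.rowcount == 1               # 0 rows matched -> stale read, retry
```

The point of the idiom is that the database, not application-level locking, decides who wins: a writer working from a stale read matches zero rows on its UPDATE and simply retries. That is the kind of off-the-shelf solution a typical N-tier project can adopt without much architectural analysis.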
I don’t want to throw your own words back at you but do you remember writing this? …
I’m almost ready to say and mean “software architecture should be
taught immediately after data structures”. Of course we first have to
figure out how to teach it. I can’t remember if this idea made it
into the paper you’re quoting, but I thought of the math analogy —
you used to have to mentor with a mathematician and even then by your
20s you were only doing what we now consider elementary-school math.
We got better at teaching math so now we do long division in 3rd grade
(though your average bar patron will struggle with it). You might not
realize that the quote above was challenging an earlier paper by
someone I highly respect, David Garlan, who had a few years earlier
written a paper concluding that it was relatively easy to teach
architecture. I will point out that he may be able to teach it better
than I can, but I did have trouble.
Can you imagine how much more interesting an OS class could be after
taking an architecture class?
Yes, getting general developers to read and embrace the book will be
difficult. Perhaps the best entrance is as you suggest, with
“architects”. (You probably haven’t read far enough to see my
criticism of the term and explanation of why I don’t use the term in
the book). I really think this kind of dialog with you is helping me
to express myself more clearly, and to address concerns that Agile and
other reasonable developers will have, so maybe there’s hope yet for
the general developer audience.
Sorry to be so glum. I kept going because I was hoping to come up with a good angle for you. Instead, I just ended up with a long depressing discussion.
One thing I’m concerned about is that you (or others) will read my
answers and feel like it’s judo — I’m avoiding disagreement by saying
“oh, I can do that too, architecture can be agile”. I saw this kind of
judo in another architecture book I recently read, which is actually a
thorough description of how to apply up-front design, yet it also
describes how to recast itself as Agile, Scrum, etc. But that
reformatting was unconvincing, because it said things to the effect of
“Use iteration 0 for six months to develop a product line
architecture”, which would be contorting things too much.
I would be delighted if Agile developers read my book and never bought
a CASE tool, and didn’t change the code they write, just so long as
they agreed that they need words to describe big chunks of code and
communication channels, need to distinguish the module, runtime, and
allocation perspectives, and need to deliberately choose styles that
promote the quality attributes they want.
And I’d be happy to be wrong about my ideas, so long as I can
understand that and fix them. I hope I’ve been clear to you that if
you have misinterpreted my work so far, it’s my fault for not
being clearer, or for hinting in the wrong direction. And boy am I
thankful for your time and ideas in this discussion!