CrossModel is a pattern language for digging into an existing system with a client. It was presented as a tutorial at SATURN 2013. The formatted PDF contains diagrams and references. What follows is the introduction so you can get an idea of what it’s about.
Software architects often must recover, understand, or describe the architecture of an existing system. As a community, architects usually agree that systems should be described using multiple views. However, when it is time to build those views, each architect does it his/her own way and the results are correspondingly idiosyncratic.
The document below describes CrossModel, a technique for architecture recovery, understanding, and description that is repeatable and yields a set of self-consistent views. It is described as a set of over 75 patterns plus guidance on how to apply them.
CrossModel is particularly good at revealing the functional aspects of a system and adequate at revealing quality attribute aspects. This is a good fit for architecture discovery projects because most systems are Big Balls of Mud rather than optimized for chosen Architecture Drivers.
If you have never tried to recover the architecture of a system, you might think that it is a simple matter of interviewing experts and writing down your findings in a suitable notation. In practice, however, you get a set of incomplete and flawed descriptions from experts. From these vague clues, the architect must solve the puzzle: synthesize a coherent understanding of the system, recognize inconsistencies and gaps between views, and drive the discovery and creation of a complete and self-consistent model.
CrossModel makes this puzzle easier to solve by providing a mental framework to organize discovered details, techniques to discover inconsistencies and gaps, and a pragmatic process for interviewing experts. The name CrossModel evokes a crossword puzzle: scraps of information (clues) provided by experts are scrutinized with the goal of assembling a consistent model (the solved puzzle).
You will often find yourself in need of a useful model of an existing system. Such a model helps you answer the questions you have, such as “How can I make it faster?”, “Can we integrate a third-party module?”, or “Why is it unreliable?” Even if you know these questions in advance, creating a useful model is not the work of a simple scribe who asks questions of the existing system’s stakeholders, developers, maintainers, etc. and writes down their answers. The obstacles are many, and include:
It’s relatively easy to produce views in isolation but fiendishly difficult to gain confidence that there are no lurking inconsistencies or gaps. An expert modeler is someone with the skill to detect such inconsistencies and gaps, but it is not a skill that is easy to teach. What can be taught is model syntax, such as UML or ADL.
But our modeling languages have a limited ability to express consistency between views at the same level of abstraction (refinement) and almost no ability to do it at different levels. Consequently, the goodness of the model depends heavily on the skill of the modeler in keeping the multiple views consistent.
If you are able to interview a Subject Matter Expert (SME) who truly understands all the dimensions of a system then you are very lucky for two reasons: that such a person exists and that you can get his/her time. Most often, no single person understands the whole system and if such a person exists they are too busy to spend days with you scribbling models. So you cope with multiple SMEs and hope for good coverage.
The best metaphor is probably the blind men describing an elephant: one SME may tell you the business market drivers for why the system needs its response times, another tells you which technology platforms were chosen and the future plans, and yet another tells you how the system fits into the workflow of the company.
These descriptions will neither agree nor will they be complete. So your primary challenge is to synthesize them, detecting inconsistencies and gaps. Not every SME is ready to accept their imperfections or your improved and more comprehensive model. So your secondary challenge is to gain consensus and rally them to support your model. Having the right answer is good; having consensus that it’s right is better.
Modeling by definition means eliding details. You want an understanding and description that is accurate but also of tractable size. That works best when the system has overarching themes, policies, and standards. For example, “all drivers use the plug-in API”. But since this is a real system, you may find that each rule has more exceptions than you can count. That is part of the reason nobody understands the system and you have been asked to build this model.
You cannot expect architecture or models to be a silver bullet that removes this messy complexity. Your best hope is to merely expose it. (You may also help others understand the long-term price of breaking all those rules, or never creating any). It’s not like taking a car to the carwash and discovering it’s shiny underneath the mud. You may start washing it and discover it’s rocks, dirt, and sticks all the way through.
Nor can you expect architecture drivers or quality attribute goals to appear when they have never been contemplated by the system’s designers. Unless a system was designed for X, it probably is not very good at X (where X is a quality of your choice).
In the Big Ball of Mud pattern, Foote and Yoder note that there is a perverse incentive for those who have mastered a system's complexity to perpetuate it. Those maintainers are viewed as irreplaceable employees and treated as heroes. Keep this in mind when you interview them. Michael Jackson refers to a similar anti-pattern when he describes companies who respect employees who make things seem complicated instead of those who cut through the clutter. The outcome is the same: systems that are hard to understand, even by their maintainers.
Recall when you learned algebra and you had problems like figuring out when two trains would meet. If you include too many details in the model, such as the color of the trains, it can still be a useful model. Omitting a necessary detail is clearly a deadly mistake. So your temptation is to include every detail that you might need. With trivial models this is manageable. But for real-sized models of software systems, over-including details is also a deadly mistake. Consider that you already have the full source code and that is a kind of model – just not a particularly helpful one for your purposes. So you have the challenge of digging in deep enough so that you don't miss essential details, but keeping the model simple enough to be tractable.

Organization of the patterns
In order to describe CrossModel, the catalog of patterns has been divided into five sections.
As seen in the notional layer diagram in figure 1, the Architecture Discovery Patterns depend on the other four groups of patterns, but not the reverse. You may find each of the pattern collections useful on its own, separately from CrossModel. The Expert Interviewing Patterns could, for example, be used anytime you are interviewing experts.
Some patterns books, like the Design Patterns book, give each pattern a standard structure. That makes sense for a whole book with only 23 patterns. Here there are many patterns, so each gets a shorter, more casual description intended merely to give the reader an intuition about when and how to apply it. The expectation is that most individual patterns are simple and need little elaboration, but the collection of them reveals a holistic way to do architecture discovery. Furthermore, the Architecture Discovery Patterns are roughly organized as a pattern language that gives guidance on how and when to use the other patterns.