Friday, May 30, 2014

Eben Moglen on Snowden and Surveillance

This is well worth reading. It's based on a series of talks he gave last fall.

TrueCrypt WTF

I have no idea what's going on with TrueCrypt. There's a good summary of the story at Ars Technica, and Slashdot, Hacker News, and Reddit all have long comment threads. See also Brian Krebs and Cory Doctorow.
Speculations include a massive hack of the TrueCrypt developers, some Lavabit-like forced shutdown, and an internal power struggle within TrueCrypt. I suppose we'll have to wait and see what develops.

Vulnerabilities Found in Law Enforcement Surveillance System

SEC Consult has published an advisory warning people not to use a government eavesdropping product called Recording eXpress, sold by the Israeli company Nice Systems. Basically, attackers can completely compromise the system. There are good stories on this by Brian Krebs and Dan Goodin.

Friday, May 23, 2014

Peter Watts on the Harms of Surveillance

Biologist Peter Watts makes some good points:
Mammals don’t respond well to surveillance. We consider it a threat. It makes us paranoid, and aggressive and vengeful.
[...]

"Natural selection favors the paranoid," Watts said. Those who run away. In the earliest days of man on the savannah, when we roamed among the predatory, wild animals, someone realized pretty quickly that lions stalked their pray from behind the tall, untamed grass. And so anyone hoping to keep on breathing developed a healthy fear of the lions in the grass and listened for the rustling in the brush in order to avoid becoming lunch for an animal more powerful than themselves. It was instinct. If the rustling, the perceived surveillance, turns out to just be the wind? Well, no harm done.
"For a very long time, people who don't see agency have a disproportionate tendency to get eaten," Watts noted.
And so, we've developed those protective instincts. "We see faces in the clouds; we hear ghosts and monsters in the stairs at night," Watts said. "The link between surveillance and fear is a lot deeper than the average privacy advocate is willing to admit."
[...]
"A lot of critics say blanket surveillance treats us like criminals, but it's deeper than that," he said. "It makes us feel like prey. We’re seeing stalking behavior in the illogical sense," he said.
This is interesting. People accept government surveillance out of fear: fear of the terrorists, fear of the criminals. If Watts is right, then there's a conflict of fears. Because the fear of terrorists and criminals -- kidnappers, child pornographers, drug dealers, whatever -- is more evocative than the nebulous fear of being stalked, it wins.

Disclosing vs Hoarding Vulnerabilities

There's a debate going on about whether the US government -- specifically, the NSA and United States Cyber Command -- should stockpile Internet vulnerabilities or disclose and fix them. It's a complicated problem, and one that starkly illustrates the difficulty of separating attack and defense in cyberspace.
A software vulnerability is a programming mistake that gives an adversary access to the system. Heartbleed is a recent example, but hundreds are discovered every year.
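To make this concrete, here is a toy sketch (in Python, with invented names and a fake "secret") of the kind of mistake involved. Heartbleed was, at its core, a missing bounds check: a client could ask a server to echo back more bytes than it had actually sent, and the server would obligingly read past the end of the request into adjacent memory.

    # A minimal sketch of a Heartbleed-style over-read. The handler
    # trusts a client-supplied length field instead of the actual
    # payload size. Buffer layout and names are illustrative only.
    SECRET_KEY = b"hunter2-private-key-material"

    memory = bytearray()  # simulated process memory

    def handle_heartbeat(payload: bytes, claimed_len: int) -> bytes:
        """Echo back claimed_len bytes, as the broken code did."""
        start = len(memory)
        memory.extend(payload)
        memory.extend(SECRET_KEY)  # secret data in an adjacent allocation
        # BUG: no check that claimed_len <= len(payload)
        return bytes(memory[start:start + claimed_len])

    print(handle_heartbeat(b"hello", 5))   # honest request: b'hello'
    print(handle_heartbeat(b"hello", 30))  # over-read leaks the secret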
Unpublished vulnerabilities are called "zero-day" vulnerabilities, and they're very valuable because no one is protected. Someone with one of those can attack systems world-wide with impunity.
When someone discovers one, he can either use it for defense or for offense. Defense means alerting the vendor and getting it patched. Lots of vulnerabilities are discovered by the vendors themselves and patched without any fanfare. Others are discovered by researchers and hackers. A patch doesn't make the vulnerability go away on systems that remain unpatched, but most users protect themselves by patching their systems regularly.
Offense means using the vulnerability to attack others. This is the quintessential zero-day, because the vendor doesn't even know the vulnerability exists until it starts being used by criminals or hackers. Eventually the affected software's vendor finds out -- the timing depends on how extensively the vulnerability is used -- and issues a patch to close the vulnerability.
If an offensive military cyber unit -- or a cyber-weapons arms manufacturer -- discovers the vulnerability, it keeps the vulnerability secret and uses it to deliver a cyber-weapon. If the weapon is used stealthily, the vulnerability might remain secret for a long time. If unused, it'll remain secret until someone else discovers it.
Discoverers can sell vulnerabilities. There's a rich market in zero-days for attack purposes -- both military/commercial and black markets. Some vendors offer bounties for vulnerabilities to incent defense, but the amounts are much lower.
The NSA can play either defense or offense. It can either alert the vendor and get a still-secret vulnerability fixed, or it can hold on to it and use it to eavesdrop on foreign computer systems. Both are important US policy goals, but the NSA has to choose which one to pursue. By fixing the vulnerability, it strengthens the security of the Internet against all attackers: other countries, criminals, hackers. By leaving the vulnerability open, it is better able to attack others on the Internet. But each use runs the risk of the target government learning of, and using for itself, the vulnerability -- or of the vulnerability becoming public and criminals starting to use it.
There is no way to simultaneously defend US networks while leaving foreign networks open to attack. Everyone uses the same software, so fixing us means fixing them, and leaving them vulnerable means leaving us vulnerable. As Harvard Law Professor Jack Goldsmith wrote, "every offensive weapon is a (potential) chink in our defense -- and vice versa."
To make matters even more difficult, there is an arms race going on in cyberspace. The Chinese, the Russians, and many other countries are finding vulnerabilities as well. If we leave a vulnerability unpatched, we run the risk of another country independently discovering it and using it in a cyber-weapon that we will be vulnerable to. But if we patch all the vulnerabilities we find, we won't have any cyber-weapons to use against other countries.
Many people have weighed in on this debate. The president's Review Group on Intelligence and Communications Technologies, convened post-Snowden, concluded (recommendation 30) that vulnerabilities should only be hoarded in rare instances and for short times. Cory Doctorow calls it a public health problem. Dan Geer recommends that the US government corner the vulnerabilities market and fix them all. Both the FBI and the intelligence agencies claim that this amounts to unilateral disarmament.
It seems like an impossible puzzle, but the answer hinges on how vulnerabilities are distributed in software.
If vulnerabilities are sparse, then it's obvious that every vulnerability we find and fix improves security. We render a vulnerability unusable, even if the Chinese government already knows about it. We make it impossible for criminals to find and use it. We improve the general security of our software, because we can find and fix most of the vulnerabilities.
If vulnerabilities are plentiful -- and this seems to be true -- the ones the US finds and the ones the Chinese find will largely be different. This means that patching the vulnerabilities we find won't make it appreciably harder for criminals to find the next one. We don't really improve general software security by disclosing and patching unknown vulnerabilities, because the percentage we find and fix is small compared to the total number that are out there.
But while vulnerabilities are plentiful, they're not uniformly distributed. There are easier-to-find ones, and harder-to-find ones. Tools that automatically find and fix entire classes of vulnerabilities, and coding practices that eliminate many easy-to-find ones, greatly improve software security. And when one person finds a vulnerability, it is likely that another person soon will, or recently has, found the same vulnerability. Heartbleed, for example, remained undiscovered for two years, and then two independent researchers discovered it within two days of each other. This is why it is important for the government to err on the side of disclosing and fixing.
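The sparse-versus-plentiful argument is easy to check in a toy simulation. A rough sketch, with made-up numbers: two parties independently find K vulnerabilities out of a pool of N, and the expected overlap works out to about K^2/N. Whether disclosure disarms the other side depends entirely on the size of the pool.

    # Toy model: how many vulnerabilities do two independent finders share?
    import random

    def expected_overlap(pool_size: int, finds: int, trials: int = 1000) -> float:
        total = 0
        for _ in range(trials):
            us = set(random.sample(range(pool_size), finds))
            them = set(random.sample(range(pool_size), finds))
            total += len(us & them)
        return total / trials

    print(expected_overlap(pool_size=50, finds=25))      # sparse: ~12.5 shared
    print(expected_overlap(pool_size=100000, finds=25))  # plentiful: ~0 shared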
The NSA, and by extension US Cyber Command, tries its best to play both ends of this game. Former NSA Director Michael Hayden talks about NOBUS, "nobody but us." The NSA has a classified process to determine what it should do about vulnerabilities, disclosing and closing most of the ones it finds, but holding back some -- we don't know how many -- vulnerabilities that "nobody but us" could find for attack purposes.
This approach seems to be the appropriate general framework, but the devil is in the details. Many of us in the security field don't know how to make NOBUS decisions, and the recent White House clarification posed more questions than it answered.
Who makes these decisions, and how? How often are they reviewed? Does this review process happen inside the Department of Defense, or is it broader? Surely there needs to be a technical review of each vulnerability, but there should also be policy reviews regarding the sorts of vulnerabilities we are hoarding. Do we hold these vulnerabilities until someone else finds them, or only for a short period of time? How many do we stockpile? The US/Israeli cyberweapon Stuxnet used four zero-day vulnerabilities. Burning four on a single military operation implies that we are not hoarding a small number, but more like 100 or more.
There's one more interesting wrinkle. Cyber-weapons are a combination of a payload -- the damage the weapon does -- and a delivery mechanism: the vulnerability used to get the payload into the enemy network. Imagine that China knows about a vulnerability and is using it in a still-unfired cyber-weapon, and that the NSA learns about it through espionage. Should the NSA disclose and patch the vulnerability, or should it use it itself for attack? If it discloses, then China could find a replacement vulnerability that the NSA won't know about. But if it doesn't, it's deliberately leaving the US vulnerable to cyber-attack. Maybe someday we can get to the point where we can patch vulnerabilities faster than the enemy can use them in an attack, but we're nowhere near that point today.
The implications of US policy can be felt on a variety of levels. The NSA's actions have resulted in a widespread mistrust of the security of US Internet products and services, greatly affecting American business. If we show that we're putting security ahead of surveillance, we can begin to restore that trust. And by making the decision process much more public than it is today, we can demonstrate both our trustworthiness and the value of open government.
An unpatched vulnerability puts everyone at risk, but not to the same degree. The US and other Western countries are highly vulnerable, because of our critical electronic infrastructure, intellectual property, and personal wealth. Countries like China and Russia are less vulnerable -- North Korea much less -- so they have considerably less incentive to see vulnerabilities fixed. Fixing vulnerabilities isn't disarmament; it's making our own countries much safer. We also regain the moral authority to negotiate any broad international reductions in cyber-weapons; and we can decide not to use them even if others do.
Regardless of our policy towards hoarding vulnerabilities, the most important thing we can do is patch vulnerabilities quickly once they are disclosed. And that's what companies are doing, even without any government involvement, because so many vulnerabilities are discovered by criminals.
We also need more research in automatically finding and fixing vulnerabilities, and in building secure and resilient software in the first place. Research over the last decade or so has resulted in software vendors being able to find and close entire classes of vulnerabilities. Although there are many cases of these security analysis tools not being used, all of our security is improved when they are. That alone is a good reason to continue disclosing vulnerability details, and something the NSA can do to vastly improve the security of the Internet worldwide. Here again, though, the NSA would have to make its automatic vulnerability-finding tools available for defense, not attack.
In today's cyberwar arms race, unpatched vulnerabilities and stockpiled cyber-weapons are inherently destabilizing, especially because they are only effective for a limited time. The world's militaries are investing more money in finding vulnerabilities than the commercial world is investing in fixing them. The vulnerabilities they discover affect the security of us all. No matter what cybercriminals do, no matter what other countries do, we in the US need to err on the side of security and fix almost all the vulnerabilities we find. But not all, yet.

Thursday, May 22, 2014

Government Policy on Cell Phone Interception Technology

New paper: "Your Secret Stingray's No Secret Anymore: The Vanishing Government Monopoly Over Cell Phone Surveillance and its Impact on National Security and Consumer Privacy," by Christopher Soghoian and Stephanie K. Pell:
Abstract: In the early 1990s, off-the-shelf radio scanners allowed any snoop or criminal to eavesdrop on the calls of nearby cell phone users. These radio scanners could intercept calls due to a significant security vulnerability inherent in then widely used analog cellular phone networks: calls were not encrypted as they traveled over the air. In response to this problem, Congress, rather than exploring options for improving the security of cellular networks, merely outlawed the sale of new radio scanners capable of intercepting cellular signals, which did nothing to prevent the potential use of millions of existing interception-capable radio scanners. Now, nearly two decades after Congress passed legislation intended to protect analog phones from interception by radio scanners, we are rapidly approaching a future with a widespread interception threat to cellular communications very reminiscent of the one scanners posed in the 1990s, but with a much larger range of public and private actors with access to a much more powerful cellular interception technology that exploits security vulnerabilities in our digital cellular networks.
This Article illustrates how cellular interception capabilities and technology have become, for better or worse, globalized and democratized, placing Americans' cellular communications at risk of interception from foreign governments, criminals, the tabloid press and virtually anyone else with sufficient motive to capture cellular content in transmission. Notwithstanding this risk, US government agencies continue to treat practically everything about this cellular interception technology as a closely guarded, necessarily secret "source and method," shrouding the technical capabilities and limitations of the equipment from public discussion, even keeping its very name from public disclosure. This "source and method" argument, although questionable in its efficacy, is invoked to protect law enforcement agencies' own use of this technology while allegedly preventing criminal suspects from learning how to evade surveillance.
This Article argues that current policy makers should not follow the worn path of attempting to outlaw technology while ignoring, and thus perpetuating, the significant vulnerabilities in cellular communications networks on which it depends. Moreover, lawmakers must resist the reflexive temptation to elevate the sustainability of a particular surveillance technology over the need to curtail the general threat that technology poses to the security of cellular networks. Instead, with regard to this destabilizing, unmediated technology and its increasing general availability at decreasing prices, Congress and appropriate regulators should address these network vulnerabilities directly and thoroughly as part of the larger cyber security policy debates and solutions now under consideration. This Article concludes by offering the beginnings of a way forward for legislators to address digital cellular network vulnerabilities with a new sense of urgency appropriate to the current communications security environment.

Preplay Attack on Chip and PIN

Interesting research paper on a bank card chip-and-PIN vulnerability.
Our new paper shows that it is possible to create clone chip cards which normal bank procedures will not be able to distinguish from the real card.
When a Chip and PIN transaction is performed, the terminal requests that the card produce an authentication code for the transaction. Part of this transaction is a number that is supposed to be random, so as to stop an authentication code being generated in advance. However, there are two ways in which the protection can be bypassed: the first requires that the Chip and PIN terminal has a poorly designed random number generator (which we have observed in the wild); the second requires that the Chip and PIN terminal or its communications back to the bank can be tampered with (which again, we have observed in the wild).
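Here is a minimal sketch of the first variant, assuming a toy protocol in which the card MACs the amount together with the terminal's "unpredictable number" (UN). The names and the HMAC construction are illustrative; the real EMV protocol is more involved. The point is that if the UN is predictable, an attacker with brief access to the card can harvest authentication codes in advance and replay them later from a clone.

    # Pre-play sketch against a terminal with a predictable "random" UN.
    import hmac, hashlib

    CARD_KEY = b"issuer-shared-secret"  # known only to the card and the bank

    def card_auth_code(amount: int, un: int) -> bytes:
        msg = f"{amount}:{un}".encode()
        return hmac.new(CARD_KEY, msg, hashlib.sha256).digest()[:8]

    def weak_terminal_un(counter: int) -> int:
        return counter  # "unpredictable" number is just a counter

    # With transient access to the real card, precompute codes for the
    # UNs the weak terminal will generate.
    harvested = {un: card_auth_code(5000, un) for un in range(100, 110)}

    # Later, a clone card answers the real terminal with a harvested code.
    un = weak_terminal_un(105)
    assert harvested[un] == card_auth_code(5000, un)  # the bank accepts
    print("pre-play succeeds when the UN is predictable")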

The NSA is Not Made of Magic

We are regularly asked what is the most surprising thing about the Snowden NSA documents. It's this: the NSA is not made of magic. Its tools are no different from what we have in our world; it's just better funded. X-KEYSCORE is Bro plus memory. FOXACID is Metasploit with a budget. QUANTUM is AirPwn with a seriously privileged position on the backbone. The NSA breaks crypto not with super-secret cryptanalysis, but by using standard hacking tricks such as exploiting weak implementations and default keys. Its TAO implants are straightforward enhancements of attack tools developed by researchers, academics, and hackers; here's a computer the size of a grain of rice, if you want to make your own such tools. The NSA's collection and analysis tools are basically what you'd expect if you thought about it for a while.
That, fundamentally, is surprising. If you gave a super-secret Internet exploitation organization $10 billion annually, you'd expect some magic. And my guess is that there is some, around the edges, that has not become public yet. But the fact that we haven't seen any yet is cause for optimism.

Tuesday, May 20, 2014

Advances in Solving the Discrete Log Problem

At Eurocrypt this year, researchers presented a paper that completely breaks the discrete log problem in any field with a small characteristic. It's nice work, and builds on a bunch of advances in this direction over the last several years. Despite headlines to the contrary, this does not have any cryptanalytic application -- unless they can generalize the result, which seems unlikely to me.
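For context, here is what the discrete log problem looks like computationally: given g, h, and p, find x with g^x = h (mod p). The sketch below is the generic baby-step giant-step method, which takes O(sqrt(p)) time and memory; that cost is why parameters are normally chosen far too large for it. The new result is an index-calculus-style algorithm that is dramatically faster, but only in small-characteristic fields, not the prime fields and elliptic curve groups used in most deployed cryptography.

    # Baby-step giant-step: solve g^x = h (mod p) in O(sqrt(p)) steps.
    from math import isqrt

    def discrete_log(g: int, h: int, p: int) -> int:
        m = isqrt(p) + 1
        baby = {pow(g, j, p): j for j in range(m)}  # baby steps: g^j
        factor = pow(g, -m, p)                      # giant step: g^(-m) mod p
        gamma = h
        for i in range(m):
            if gamma in baby:
                return i * m + baby[gamma]
            gamma = (gamma * factor) % p
        raise ValueError("no solution")

    p, g, x = 1019, 2, 347
    assert discrete_log(g, pow(g, x, p), p) == x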

Pervasive Monitoring as Network Attack

New IETF RFC: "RFC 7258: Pervasive Monitoring Is an Attack." It declares that pervasive monitoring is a technical attack that protocol designers must mitigate where possible.
Slashdot thread.

Abusing Power to Shut Down a Twitter Parody Account

This is a pretty horrible story of a small-town mayor abusing his authority -- warrants where there is no crime, police raids, incidental marijuana bust -- to identify and shut down a Twitter parody account. The ACLU is taking the case.

Saturday, May 17, 2014

Forged SSL Certificates Existing on the Internet

About 0.2% of all SSL certificates are forged. This is the first time I've ever seen a number based on real data. News article:
Of 3.45 million real-world connections made to Facebook servers using the transport layer security (TLS) or secure sockets layer protocols, 6,845, or about 0.2 percent of them, were established using forged certificates.
Actual paper.
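For scale, 6,845 out of 3.45 million connections is just under 0.2 percent. The researchers' method had a Flash applet, served by Facebook, report back the certificate each client actually observed, which was then compared against Facebook's real one. A rough sketch of the same comparison in Python; the pinned fingerprint below is a placeholder, not Facebook's actual value:

    # Compare a server's presented certificate against a pinned fingerprint.
    import hashlib, ssl

    PINNED_SHA256 = "0000placeholder0000"  # expected certificate hash (made up)

    def observed_fingerprint(host: str, port: int = 443) -> str:
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest()

    fp = observed_fingerprint("www.facebook.com")
    if fp != PINNED_SHA256:
        print("mismatch -- possible forged certificate (or key rotation):", fp)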

How to Stop an Insider from Stealing All Your Secrets

This article from Communications of the ACM outlines some of the security measures the NSA could, and should, have had in place to stop someone like Snowden. Mostly obvious stuff, although I'm not sure it would have been effective against such a skilled and tenacious leaker. What's missing is the one thing that would have worked: have fewer secrets.

Friday, May 16, 2014

Journalist Glenn Greenwald has released “No Place to Hide”, his new book on the NSA documents he received from Edward Snowden. The following are internal NSA slides and documents contained in the book.
The first slide lays out the basic strategy behind the massive effort to monitor and control global communications: the "sniff it all, know it all, collect it all, process it all, exploit it all" approach shared by the ‘Five Eyes’ partnership.
Another slide describes the NSA’s Strategic Partnerships with major telecommunications carriers, network service providers, hardware platforms, operating systems, security hardware and software vendors, and systems integrators available to NSA operatives. The partnerships include over 80 major global corporations supporting the NSA’s ‘Collect it All’ directive, among them Verizon, AT&T, Motorola, Intel, Microsoft, IBM, EDS, Oracle, Cisco, Hewlett-Packard, and Qualcomm.
Leveraging these key corporate partnerships gives the NSA access to high-capacity international fiber-optic cables, switches, and routers worldwide. The partnerships enable much of the NSA's worldwide signals intelligence collection, listed as cooperative large access points; together with covert and clandestine interception points, they make up some 20 fiber-optic cable access programs, often located where data enters or exits entire nations.
Another slide covers the 50,000 worldwide data interception implants. It shows five types of data collection, called "Classes of Accesses," which correspond to the organizational channels through which the NSA gathers its intelligence:
- 3RD PARTY LIAISON - Intelligence sharing with foreign agencies
- REGIONAL - SCS units, a joint venture between NSA and CIA
- CNE - Computer Network Exploitation - TAO division
- LARGE CABLE - NSA's Special Source Operations (SSO) division
- FORNSAT - Foreign satellite interception - Global Access Operations (GAO) division
Approved SIGINT (signals intelligence) partnerships listed in the new releases include the "Five Eyes" alliance of the US, Australia, Canada, New Zealand, and the UK. Also listed are a long list of Third Party partnerships (the Third Party Liaisons from the slide above) as well as coalitions with AFSC, NATO, SSEUR (SIGINT Seniors Europe), and SSPAC (SIGINT Seniors Pacific).
Partnerships come with a price tag.

The NSA (that is, American taxpayers) paid GCHQ £22.9m in 2009. In 2010 the NSA’s contribution increased to £39.9m, which included £4m to support GCHQ’s work for NATO forces in Afghanistan, and £17.2m for the agency’s Mastering the Internet project, which gathers and stores vast amounts of “raw” information ready for analysis. The NSA also paid £15.5m towards redevelopments at GCHQ’s sister site in Bude, north Cornwall, which intercepts communications from the transatlantic fiber-optic cables that carry Internet traffic.
In 2011/12 the NSA paid another £34.7m to GCHQ. A chart for calendar year 2012 details payments made to about a dozen other nations, with Pakistan topping the list at $2,600,000, followed by Jordan at $1,200,000 and Ethiopia at just under half a million.
On 13 May 2014, Glenn Greenwald released 107 pages: some new, some previously published, some full pages, some page fragments. Check back; our reports on the new document releases will continue.

Is Antivirus Dead?

Symantec declared anti-virus dead, and Brian Krebs writes a good response.
He's right: antivirus won't protect you from the ever-increasing percentage of malware that's specifically designed to bypass antivirus software, but it will protect you from all the random unsophisticated attacks out there: the "background radiation" of the Internet.

Wednesday, May 14, 2014

Al Qaeda Uses New Encryption Software

The Web intelligence company Recorded Future is reporting -- picked up by the Wall Street Journal -- that al Qaeda is using new encryption software in the wake of the Snowden stories. I've been fielding press queries, asking me how this will adversely affect US intelligence efforts.
I think the reverse is true. I think this will help US intelligence efforts. Cryptography is hard, and the odds that a home-brew encryption product is better than a well-studied open-source tool are slight. Last fall, Matt Blaze said to me that he thought that the Snowden documents will usher in a new dark age of cryptography, as people abandon good algorithms and software for snake oil of their own devising. My guess is that this is an example of that.

Computer Forensics in Fiction

New television show -- CSI: Cyber. I hope they have some good technical advisers.

Tuesday, May 13, 2014

Newly Released NSA Snowden Documents

Glenn Greenwald's new book, No Place to Hide, was published today. There are about 100 pages of NSA documents on the book's website. I haven't gone through them yet. At a quick glance, only a few of them have been published before.
Here are two book reviews.

How to Chat with Your Facebook Friends

Cryptocat can now pull a person's buddy list from Facebook:

Facebook's messaging application doesn't support end-to-end encryption, but an open-source chat program, Cryptocat, has made it possible to chat with friends there over an encrypted connection.
The program's founder, Nadim Kobeissi, wrote Monday that the latest 2.2 version of Cryptocat can log a user into Facebook and pull his contact list in order to set up an end-to-end encrypted conversation.
"Effectively, what Cryptocat is doing is benefitting from your Facebook Chat contact list as a readily available buddy list," he wrote.
The move could augment Cryptocat's user base since new users won't have the chore of building a new contacts list, although they would need to download Cryptocat's browser extension or iPhone application to benefit from encryption.
The security of emails and messages was brought sharply into focus by secret documents leaked by former U.S. National Security Agency contractor Edward Snowden revealing sophisticated online surveillance techniques used by the spy agency.
Facebook has said it could enable end-to-end encryption between users exchanging data, but said such technology is complicated and makes it harder for people to communicate.
Messages exchanged using Facebook are protected by SSL (Secure Sockets Layer) encryption, but that only encrypts data between an end user and Facebook. The social networking service would have access to the clear text of those conversations, which potentially could be surrendered to law enforcement under a court order.
If two people are using Cryptocat, Facebook will know an exchange occurred between the two users and the time of their chat. But the messages themselves will only say: [encrypted message].
The fact that Facebook knows two people are chatting, a type of information known as metadata, should not be a deal breaker, Kobeissi wrote. Users presumably know they're divulging that information already to Facebook by using their service.
"There's no harm in Cryptocat using this already-given metadata to allow these users to set up encrypted chats," Kobeissi wrote. "The usability benefits of being able to quickly see which friends are online and ready for an encrypted chat remain overly substantial for those users."
Facebook will know, however, that the people are using the application due to the use of a Cryptocat relay to transfer the contacts list, he wrote.
Kobeissi wrote that if a person's Facebook friend logs into the service and is using Cryptocat, the conversation is automatically upgraded to an encrypted one. If one party does not have Cryptocat installed, the two people may chat, but the text will not be encrypted.
Cryptocat opted not to integrate itself directly into the Facebook chat interface to maintain "layers of separation," Kobeissi wrote.
"Such an approach would have made encrypted chats over Facebook even more immediate, but would have immersed Cryptocat into Facebook's network and runtime environment in a way that didn't satisfy our security precautions," he wrote.
Cryptocat connects to Facebook as an XMPP client over its outbound BOSH relay. No code from Facebook is loaded or executed within Cryptocat, and the login procedure happens in a sandboxed window, Kobeissi wrote.
Cryptocat version 2.2 is available for Chrome, Safari and Opera. An update to Firefox is due to be released later this week.
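The architecture Kobeissi describes is the standard end-to-end pattern: encrypt at the endpoints and let the relay carry only ciphertext plus metadata. A minimal sketch of the idea, with Fernet (from the third-party cryptography package) standing in for Cryptocat's actual OTR protocol, and a pre-shared key standing in for OTR's Diffie-Hellman key exchange:

    # End-to-end encryption over an untrusted relay: the relay sees who
    # talked to whom and when, but the content is opaque to it.
    from cryptography.fernet import Fernet

    shared_key = Fernet.generate_key()  # OTR derives this via Diffie-Hellman
    alice, bob = Fernet(shared_key), Fernet(shared_key)

    ciphertext = alice.encrypt(b"meet at noon")

    # All Facebook's servers would see or log: sender, recipient, time,
    # and this opaque blob -- i.e., "[encrypted message]".
    print("relay sees:", ciphertext[:24], b"...")

    print("bob reads:", bob.decrypt(ciphertext))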

Steganography in Twitter

Clever, but make sure to heed the caveats in the final two paragraphs.

Monday, May 12, 2014

Internet Subversion

In addition to turning the Internet into a worldwide surveillance platform, the NSA has surreptitiously weakened the products, protocols, and standards we all use to protect ourselves. By doing so, it has destroyed the trust that underlies the Internet. We need that trust back.
Trust is inherently social. It is personal, relative, situational, and fluid. It is not uniquely human, but it is the underpinning of everything we have accomplished as a species. We trust other people, but we also trust organizations and processes. The psychology is complex, but when we trust a technology, we basically believe that it will work as intended.
This is how we technologists trusted the security of the Internet. We didn't have any illusions that the Internet was secure, or that governments, criminals, hackers, and others couldn't break into systems and networks if they were sufficiently skilled and motivated. We didn't trust that the programmers were perfect, that the code was bug-free, or even that our crypto math was unbreakable. We knew that Internet security was an arms race, and the attackers had most of the advantages.
What we trusted was that the technologies would stand or fall on their own merits.
We now know that trust was misplaced. Through cooperation, bribery, threats, and compulsion, the NSA -- and the United Kingdom's GCHQ -- forced companies to weaken the security of their products and services, then lie about it to their customers.
We know of a few examples of this weakening. The NSA convinced Microsoft to make some unknown changes to Skype in order to make eavesdropping on conversations easier. The NSA also inserted a degraded random number generator into a common standard, then worked to get that generator used more widely.
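To see why a rigged random number generator makes such an effective backdoor, consider a toy example. In the real case, the Dual_EC_DRBG standard, the backdoor was a secret relationship between two elliptic curve points; the sketch below substitutes a trivially weak linear congruential generator, but the moral is the same: if the generator's output reveals its internal state to someone (directly here, via the secret constants in Dual_EC), every subsequent "random" key is predictable.

    # Toy predictable RNG: the output equals the state, so one observed
    # value compromises every key generated afterward. The parameters
    # are classic LCG constants, not anything from a real standard.
    A, C, M = 1103515245, 12345, 2**31

    def lcg(state: int) -> int:
        return (A * state + C) % M

    state = 123456789              # victim's secret seed
    state = lcg(state); key1 = state
    state = lcg(state); key2 = state

    # Attacker observes key1 on the wire and predicts the next key.
    assert lcg(key1) == key2
    print("next 'random' session key predicted:", lcg(key1))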
I have heard engineers working for the NSA, FBI, and other government agencies delicately talk around the topic of inserting a "backdoor" into security products to allow for government access. One of them told me, "It's like going on a date. Sex is never explicitly mentioned, but you know it's on the table." The NSA's SIGINT Enabling Project has a $250 million annual budget; presumably it has more to show for itself than the fragments that have become public. Reed Hundt calls for the government to support a secure Internet, but given its history of installing backdoors, why would we trust claims that it has turned the page?
We also have to assume that other countries have been doing the same things. We have long believed that networking products from the Chinese company Huawei have been backdoored by the Chinese government. Do we trust hardware and software from Russia? France? Israel? Anywhere?
This mistrust is poison. Because we don't know, we can't trust any of them. Internet governance was largely left to the benign dictatorship of the United States because everyone more or less believed that we were working for the security of the Internet instead of against it. But now that system is in turmoil. Foreign companies are fleeing US suppliers because they don't trust American firms' security claims. Far worse, governments are using these revelations to push for a more isolationist Internet, giving them more control over what their citizens see and say.
All so we could eavesdrop better.
There is a term in the NSA: "nobus," short for "nobody but us." The NSA believes it can subvert security in such a way that only it can take advantage of that subversion. But that is hubris. There is no way to determine if or when someone else will discover a vulnerability. These subverted systems become part of our infrastructure; the harms to everyone, once the flaws are discovered, far outweigh the benefits to the NSA while they are secret.
We can't both weaken the enemy's networks and protect our own. Because we all use the same products, technologies, protocols, and standards, we either allow everyone to spy on everyone, or prevent anyone from spying on anyone. By weakening security, we are weakening it against all attackers. By inserting vulnerabilities, we are making everyone vulnerable. The same vulnerabilities used by intelligence agencies to spy on each other are used by criminals to steal your passwords. It is surveillance versus security, and we all rise and fall together.
Security needs to win. The Internet is too important to the world -- and trust is too important to the Internet -- to squander it like this. We'll never get every power in the world to agree not to subvert the parts of the Internet they control, but we can stop subverting the parts we control. Most of the high-tech companies that make the Internet work are US companies, so our influence is disproportionate. And once we stop subverting, we can credibly devote our resources to detecting and preventing subversion by others.

Sunday, May 11, 2014

The Role of the Chief Risk Officer in an Organization

The complexity of the business environment and the pace of change in the marketplace call for a stronger risk function and risk management. More and more organizations are appointing chief risk officers to help lead these efforts. A recent Protiviti white paper highlights the importance of a risk function led by a chief risk officer (CRO). It stresses that organizations must place the CRO at the right point in the organizational structure for the role to be effective in strategy setting and decision making. To that end, the white paper provides six critical success factors that organizations should understand and promote to ensure the effectiveness of the CRO role.

Current State of Positioning

Following the recent financial crisis, the view that risk management is mainly a compliance tool has been transformed. Protiviti's review of the CRO function provides persuasive evidence that CROs have raised their stature within the organization and have developed clearer lines of communication and reporting with the CEO and a committee of the board of directors, engaging in business and strategy setting. However, room for improvement still exists: one in four institutions still claims that the views of its risk function are often overridden. This is attributed mainly to poor communication between departments resulting from a silo mentality.
The bar has been raised for engaging risk management, and the current state of positioning suggests that the risk function is making progress in gaining a seat at the decision-making table. Windows of improvement opportunity remain open.
While executives must foster teamwork in an organization, the CRO must often play devil's advocate and offer alternative views to enrich discussions. Care must be taken to understand that the CRO's role is sometimes adversarial, in order to provide diversity of views and avoid tunnel vision. Missing this point, the paper states, will limit the decision-making process by allowing groupthink to enter.
The paper identifies six key success factors for positioning the CRO effectively and increasing the value of risk management.
  • 1. Viewed as a Peer with Business Line Leaders – To deliver a forward-looking risk perspective and have it taken seriously, the CRO needs a collaborative relationship with business line leaders, and must therefore be viewed as a peer. Otherwise the CRO cannot function effectively, and the role's direct reporting lines are undermined.
  • 2. Board Reporting and Interactions – While the CRO is not an owner of specific risks, he or she is charged with strategic oversight of the entire risk management effort, which requires free access to the board for reporting. Without such access, communication breaks down and resolutions to various strategic problems are lost.
  • 3. Managing Risks is Everyone's Job – The board, senior management, and line managers must dispel the misconception that the CRO is the only person responsible for risk. Risk has to be an enterprise-wide concern. Raising awareness and establishing risk ownership within operations to build a risk-aware culture is imperative to a successful implementation of the CRO's function.
  • 4. Risk is Equal to Opportunity Pursuit – Risk management serves to preserve value as well as create it, keeping the organization's best interests in view while pursuing strategies to improve it. Nevertheless, organizations must recognize the limits of value-creating activities relative to value-preserving controls. The CRO attempts to strike this balance through decision making and risk appetite formulation; an imbalance raises risk on the neglected side and can set the organization back.
  • 5. Broaden Focus Beyond Compliance – It cannot be stressed enough that the CRO's focus should be on enterprise risks, the risk profile, and aligning strategy with risks. This goes beyond compliance risks and raises the bar for the CRO. While the CRO must ensure compliance with laws and regulations, broadening the focus will make it easier to have the desired impact in managing risks.
  • 6. Clearly Defined CRO Position – A clearly defined CRO position should be in place to enhance the CRO's objectivity, in fact and in appearance. Setting the right expectations about the CRO's responsibility for promoting effective governance of significant risks is crucial to furthering the role. Without clear definition, the CRO may lose focus and expend resources on less important matters, lowering overall effectiveness.
The depth and extent of the CRO's relationship with senior executives and the board will enable the CRO to take a stronger and more effective role in managing overall risk, helping to protect the value that organizations have taken years to build.

 

Systematic Approach to Risk Management Roles and Duties

Implementing the Framework throughout the Organization

After an organization selects a risk management framework, it needs to communicate the roles and responsibilities related to risk management throughout the organization. In this position paper, The Institute of Internal Auditors ("IIA") identifies three levels of an organization and the risk management functions carried out by each:

  1. Operational Management
  2. Risk Management and Compliance Functions
  3. Internal Auditors
Notably, the IIA terms these levels the "three lines of defense" and differentiates their functions from those of the Board and senior management: neither the Board nor senior management is a direct line of defense, but each serves as the driving force behind implementing the defenses. Further, the position paper explains that the three lines of defense serve the Board and senior management in their strategy functions.

First Line - Owners and Managers of Risk


The first line of defense consists of the individuals who are deemed the owners and managers of risks. They are the people who use the controls on a day-to-day basis to manage specific risks and who can therefore identify deficiencies in the organization's risk management processes and controls in a timely manner. Given their familiarity with the specific risks and the organization's related plan for managing those risks via controls, the IIA provides guidance for upward communication from the first line of defense about such deficiencies, so that policies and procedures can be further developed to ensure that controls and processes are effective and meet organizational goals.

Second Line - Overseers of Risk


The second line of defense includes those individuals who oversee the functioning of the first line of defense.  Without the second line of defense, there would be no first line of defense, or at least not an effective one. The position paper describes how these overseers of the owners of risks serve multiple functions, one of which is ensuring the controls and processes used by the risk owners and managers are:

  1. Properly designed,
  2. In place, and
  3. Operating as intended.

Moreover, the risk management and compliance functions of the second line of defense serve as the link bridging the gap between the organization and senior management and the Board. They design, implement, and monitor the controls and processes used by the risk owners and managers, which then become the basis on which internal auditors provide assurance.

The position paper expands on the functions of the risk overseers and provides specific examples of functions this line of defense could provide in the overall risk management environment.

Third Line - Independent Assurance


Independent assurance is the attribute that separates the second and third lines of defense. The position paper notes the second line of defense's lack of independence, which necessitates an objective evaluation function; an internal audit function within the organization can provide this, and the paper describes how internal audit serves as the third line of defense in risk management.
Additionally, the IIA provides several examples of how the internal audit function can serve an organization to promote a strong risk management environment.

Recommended Practices


Within the position paper, the IIA underscores that not all organizations can fully implement three distinct lines of defense with separate roles due to size of the organization or other similar reasons. In fact, the position paper emphasizes the importance of the functions at each level, rather than the assigned narrow roles, and provides a list of recommended practices for any organization.

Summary


Regardless of the organization size, complexity, or industry, risk management needs to be approached in a manner that the members of the organization know their role in the greater scope of the overall framework. It is important to develop this communication throughout the organization to help guide improvements to risk management processes and controls. Without the three lines of defense operating as intended, senior management and the Board have a reduced effectiveness and efficiency in developing strategies.

 

Internal Audit’s Role in Risk Management

Michael Somich, Executive Director of Internal Audit at Duke University, discusses with Dr. Mark Beasley his views about the role internal audit should play within an organization’s ERM process. He shares insights from his experiences of leading the launch of the ERM initiative at Duke University while also serving as the general audit executive.

Create Communication between Risk Management and Internal Audit

Leveraging All Your Resources to Get Ahead

Over the last decade or more, a number of companies have succumbed to what now seem to have been avoidable catastrophes, such as the BP Deepwater Horizon disaster, the 2008 financial crisis, and the Enron collapse. Each event has drawn growing attention to risk management from creditors, regulators, and the public as a whole. Stakeholders expect companies not only to avoid these types of events, but to overcome them and succeed even when the economy is down.
As risk oversight continues to evolve, organizations are seeking ways to strengthen those efforts to minimize the impact of emerging risk events that might be on the horizon.  The Risk Management Society (“RIMS”) and The Institute of Internal Auditors (“IIA”) believe that the next phase is for the risk management and internal audit functions to collaborate. To understand why they believe collaboration is necessary, the roles of each need to be understood.
  • Risk Management – The area of risk management has evolved in three stages that include traditional, integrated, and ERM. Each stage builds on the next. Many companies implement some form of ERM using one of several frameworks to include COSO’s ERM Framework or ISO 31000. Regardless of the framework, the goal is to identify risks for the purpose of either identifying areas they need to protect or areas where they can pursue an opportunity. 
  • Internal Audit – Whether external or internal, both types of auditors serve the purpose of providing independent, objective assurances designed to give more credibility to a report or activity done by others. The idea behind the assurance is to give the data more value and make it more reliable for use by management. However, the role of internal audit does not simply end with providing assurances. The white paper provides a clear distinction in the roles of internal audit to include roles they should not perform. 

Why Bring Them Together?

When it comes to ERM, risk managers tend to be responsible for leading ERM within the organization, whereas internal audit's role is to assess the ERM process led by the risk managers. The decision makers who use information to drive strategy within the company need reliable data. Assurance provided by internal audit regarding ERM, in terms of evaluating risk reporting and management processes, leads to greater confidence in that information. With such overlapping roles, it is only logical that the two functions can make each other better through collaboration.

Where is the communication?

Unfortunately, in many organizations the overlap between the roles and responsibilities of risk management and internal audit leads to duplication of much of the work. However, each function has a particular set of skills and responsibilities that may make it better suited to particular areas. The RIMS/IIA white paper presents several areas that can be exploited to develop synergy between the two functions, including the following, where the path to success has been paved with proven results at real companies.
  • Enterprise risk assessment and the audit plan – Using the COSO ERM framework as a point of reference, after the first step of setting a company’s objectives, a risk management function identifies, assesses, and develops responses to enterprise risks. The risk assessment should identify the most critical risks to the company and what function is best served to understand how well that risk is controlled. Not only can internal audit’s input help assess the level of control over the risk, but the risks raised during the assessments by risk management can be used to drive the audit plan for the next operating cycle, leading to a better understanding of the risk. 
  • Share resources when feasible – Many risk management departments have limited staff. As needed, risk management should draw on the resources of other departments, particularly internal audit, because no other department has exposure to every facet of the enterprise. That means internal audit is often in the best position to contribute to risk management. 
Most companies possess a risk management and an internal audit function in some form or another. The mere description of their roles within the company makes it easy to understand why it is only logical that they collaborate to create synergies that are not just theoretical.

Risk Appetite

Every day, each of us intuitively operates within the framework of our own personal appetite for various things (e.g. food) without a hitch. Perhaps that is what makes the enterprise risk management (ERM) concept of “risk appetite” appear so simple on its surface. However, as the ERM community has discovered over time, articulating and utilizing risk appetite is actually a uniquely challenging component of the enterprise risk management process. In this article published in the Spring 2012 issue of Corporate Risk Canada, author Rob Quail acknowledges the confusion that risk appetite has caused amongst ERM practitioners.
Quail points out that much of the professional guidance surrounding risk appetite cryptically advises organizations to develop a singular, broad statement of the desire, or “appetite,” for risk-taking across all the organization’s strategic objectives. Oftentimes, this pursuit leads to results that simply aren’t meaningful to a company in its risk management process. In an effort to correct this issue, Quail explains his idea of what risk appetite should do for an organization, and also lays out a method for exploring risk appetite that drives discussion and creates measurable results.

Back to Basics

In this article, Quail reminds readers that risk appetite should:
  • be oriented around improving actual business decision-making throughout the organization,
  • be concretely centered around an enterprise’s strategic objectives, and
  • address each strategy individually; it is not optimal to simply create a blanket risk appetite statement for the organization as a whole

Risk Appetite Methodology

The risk appetite method that Quail explains in the article primarily asks risk managers to collectively compare their organization’s so-called “target” risk appetite for each strategy with the risk appetite that their organization currently demonstrates in its actual decision-making in relation to each strategy. The risk appetites for each strategy are scored using a rating scale similar to those used in the assessment of individual risks.
Quail suggests that organizations should then plot the assessment data on a "spider chart." This circular chart shows the risk appetite for each strategy on the same graphic, and allows comparison of the target and actual risk appetites for each strategy. The full article contains figures showing the risk appetite assessment scale and samples of spider charts.
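For readers who want to experiment, here is a rough sketch of such a chart using Python with matplotlib and numpy, and made-up scores on a 1-to-5 scale; "target" is the appetite the organization wants, "actual" is what its decisions reveal.

    # Spider (radar) chart of target vs. actual risk appetite per strategy.
    import matplotlib.pyplot as plt
    import numpy as np

    strategies = ["Growth", "Cost", "Talent", "Safety", "Brand"]
    target = [4, 3, 3, 1, 2]   # illustrative scores only
    actual = [5, 2, 4, 2, 3]

    angles = np.linspace(0, 2 * np.pi, len(strategies), endpoint=False).tolist()
    angles += angles[:1]  # repeat the first angle to close the polygon

    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    for scores, label in [(target, "Target"), (actual, "Actual")]:
        closed = scores + scores[:1]
        ax.plot(angles, closed, label=label)
        ax.fill(angles, closed, alpha=0.1)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(strategies)
    ax.set_title("Risk appetite by strategy")
    ax.legend()
    plt.show()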
Overall, the author demonstrates that organizations can better incorporate risk appetite into their current ERM practices by adopting a methodology that creates measurable, comparable data about risk appetite – much like the data created by other components of the ERM process. By departing from the practice of trying to land on a blanket risk appetite statement, ERM practitioners can address each individual enterprise strategy more effectively. They can also begin to disseminate a consistent, understandable expectation regarding the degree and type of risk that decision-makers throughout the organization should be taking or avoiding on a day-to-day basis.

GRC Solutions in Today’s Market

A recent Protiviti thought paper highlights a number of challenges associated with governance, risk and compliance (GRC) and shares lessons learned from the firm's experience working with a number of organizations on their GRC implementations. The paper also provides suggestions for preventing, preparing for, or responding to those issues.

What are GRC Solutions?

In order to understand GRC solutions, it is important to first understand the meaning of the terms governance, risk and compliance. Protiviti defines these three terms as follows:
  • Governance – an executive approach to oversight and management
  • Risk – tracking probability of specific harms  
  • Compliance – tracking compliance with regulatory and other obligations (both internal and external)
GRC solutions assist organizations in overseeing and automating processes. These tools can reduce redundant activities, produce more reliable data, and increase the automation of processes. Overall, studies show that GRC solutions can help organizations achieve considerable cost savings. In simple terms, the GRC tool life cycle consists of four phases: Tool Selection, Deployment, Scope Change, and Maintenance. Even though the thought paper touches on all four phases, its main purpose is to highlight potential issues that arise during the Scope Change and Maintenance phases and to suggest options for preventing, preparing for, or responding to those issues.

Tool Selection & Deployment

Whenever an organization decides to implement a GRC solution, it needs a clear vision of what it wants to achieve with the implementation, overall goals, and well-defined functionality requirements. A top-down approach can help an organization determine the risks and the way they should be handled. The implementation of GRC tools is a rigorous process, which starts with initial planning and ends when the organization achieves optimized and efficient functioning of the GRC tool.

Scope Change

This phase of the GRC tool life cycle occurs when other departments within an organization "become aware of the features and capabilities of any given GRC solution already being used for another function." This can occur for several reasons, such as role shifts or employee transfers. Regardless of the cause of the scope change, there are several potential issues that can arise. Protiviti describes a few important ones and suggests methods for dealing with them.
  1. Infectious Spread - Implementing “too much, too fast” can create a problem of not being able to keep up with the ever-changing nature of the project. Some of Protiviti’s suggestions are creating GRC workflows for more than just core business functions, defining goals, prioritizing, not limiting requirements to one tool, and using consistent processes during the development. 
  2. Difficulty Obtaining Management Buy-In – In some cases it can be difficult to gain full support of management because of concerns such as cost, maintenance, training or security. For this scenario, it is important to perform demonstrations and test runs and ensure sufficient and flawless communication within the organization. 
  3. Increased Resources – Any scope change is prone to additional resource requirements. The means to deal with this issue are testing for possible IT problems and preventing human conflicts. 
  4. Cloud – Considering the sensitive nature of most GRC solutions, storing GRC data in the cloud can often cause many issues mostly related to security and data storage. It is important to define clear contracts with the vendors and put back-up procedures in place. 
  5. Unexpected Costs – During an extensive project such as a GRC tool implementation, unexpected costs should be expected and accounted for already during the initial planning and budgeting.  
  6. Integrating Tools – In cases where tools do not work together properly, additional tools and software need to be created to achieve correct software interaction. 
  7. Internet-Facing Components – Issues can occur when determining third-party access to the organization's GRC solution. Access procedures and policies need to be established to achieve adequate protection. 

Maintenance

This last phase of the GRC tool life cycle is often misjudged and does not receive appropriate attention; however, there are issues to consider at this stage as well. The following are some of the common ones:
  1. Tool Ownership – Who should be responsible for the GRC tool? This responsibility should belong to the department best able to handle this comprehensive task. In most organizations, where the GRC tool is predominantly supported by IT, the IT team will take on this responsibility.   
  2. Too Many Administrators – The organization needs to be careful when distributing rights and access for the GRC tool; there is a risk of accidental or malicious modification of the GRC data. Data access should be monitored, recorded, and retained for a certain time period.   
  3. Security Concerns – Security costs should be expected with a tool like this. Periodic maintenance is an efficient way of preventing security flaws and the additional costs they bring.   
  4. Business Continuity/Disaster Recovery – When the GRC tool becomes an essential part of an organization, requirements for business continuity are more complex and allowable downtimes are very limited.  
  5. Reporting Requirements – No matter how the reporting requirements change, effective communication throughout the organization is essential in this case.  
  6. Usability/Training – Sufficient training is necessary for both the administrators and the end users to prevent any major problems and their possible consequences. In addition, it is very important to review organizational changes and determine modifications or improvements to the GRC tools.   
  7. Documentation – Appropriate documentation needs to be developed to ensure continual improvement and reliable performance throughout the organization. Some good examples of GRC documentation are: run book, administrators’ cheat sheet, data maps, user guides, test plans and process flow diagrams. 

Conclusion

GRC solutions can be very powerful tools for today’s businesses, but they require appropriate execution and maintenance. The issues mentioned in this thought paper are just a few of the many that can occur during the GRC life cycle; it is therefore important to continuously monitor GRC tools and ensure their proper performance. The suggestions and guidance presented in this paper are designed to help organizations be ready for GRC implementation, avoid common mistakes, resolve issues, and ensure correct performance of their GRC tools.
Click below to download the thought paper.
Link: Protiviti

Zachman framework EAM:

The Zachman framework is a logical structure intended to provide a comprehensive representation of an information technology enterprise. It allows for multiple perspectives and categorization of business artifacts. The brainchild of John Zachman, who conceived the idea in 1987, it is formally known as the Zachman Framework for Enterprise Architecture and Information Systems Architecture.
The Zachman framework gathers and refines principles from older methods. It has a structure (or framework) independent of the tools and methods used in any particular IT business. The framework defines how perspectives are related according to certain rules or abstractions. It takes the form of a 36-cell table with six rows (scope, business model, system model, technology model, components, and working system) and six columns (who, what, when, where, why, and how).
The Zachman framework is seen by some business managers as an ideal set of rules for the management of complex and evolving IT enterprises.
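Because the grid is just a fixed 6x6 structure, it is easy to model. The following minimal sketch (in Python, with one invented artifact) shows the framework as a lookup from (row, column) pairs to lists of artifacts; this is an illustration of the idea, not an official representation.

  from itertools import product

  # The six perspectives (rows) and six interrogatives (columns).
  ROWS = ["scope", "business model", "system model",
          "technology model", "components", "working system"]
  COLUMNS = ["who", "what", "when", "where", "why", "how"]

  # One cell per (row, column) pair: 36 cells in total.
  grid = {cell: [] for cell in product(ROWS, COLUMNS)}

  # File an artifact into exactly one cell.
  grid[("business model", "what")].append("customer/product entity model")

  print(len(grid))                         # 36
  print(grid[("business model", "what")])  # ['customer/product entity model']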

A Comparison of the Top Four EAM:

Enterprise Architecture
Summary: Twenty years ago, a new field was born that soon came to be known as enterprise architecture. This paper provides a broad introduction to the field of enterprise architecture. Although the history of the field goes back 20 years, the field is still evolving—and rapidly so.

Executive Summary

Twenty years ago, a new field was born that soon came to be known as enterprise architecture. The field initially began to address two problems:
  • System complexity—Organizations were spending more and more money building IT systems; and
  • Poor business alignment—Organizations were finding it more and more difficult to keep those increasingly expensive IT systems aligned with business need.
The bottom line: more cost, less value. These problems, first recognized 20 years ago, have today reached a crisis point. The cost and complexity of IT systems have exponentially increased, while the chances of deriving real value from those systems have dramatically decreased.
Today's bottom line: even more cost, even less value. Large organizations can no longer afford to ignore these problems. The field of enterprise architecture that 20 years ago seemed quaintly quixotic today seems powerfully prophetic.
Many enterprise-architectural methodologies have come and gone in the last 20 years. At this point, perhaps 90 percent of the field uses one of these four methodologies:
  • The Zachman Framework for Enterprise Architectures—Although self-described as a framework, it is more accurately defined as a taxonomy
  • The Open Group Architectural Framework (TOGAF)—Although called a framework, it is more accurately defined as a process
  • The Federal Enterprise Architecture—Can be viewed as either an implemented enterprise architecture or a prescriptive methodology for creating an enterprise architecture
  • The Gartner Methodology—Can be best described as an enterprise architectural practice
This white paper discusses these four approaches to enterprise architecture. It does so within the context of a fictional company that is facing some very nonfictional operations problems. These problems include:
  • IT systems that have become unmanageably complex and increasingly costly to maintain.
  • IT systems that are hindering the organization's ability to respond to current, and future, market conditions in a timely and cost-effective manner.
  • Mission-critical information that is consistently out-of-date and/or just plain wrong.
  • A culture of distrust between the business and technology sides of the organization.
How should this company choose from among these four very different approaches to enterprise architecture? This white paper traces the journey the company is likely to face in using any one of these methodologies.
When examining each of these methodologies in depth, one is struck by the fact that none of these approaches is really complete. Each has strengths in some areas and weaknesses in others.
For many enterprises, none of these methodologies will therefore be a complete solution. For such organizations, this white paper proposes another approach, one that might be called a blended methodology. Choose bits and pieces from each of these methodologies, and modify and merge them according to the specific needs of your organization. This white paper gives an approach to creating such a blended methodology that is a best fit for your organization's needs.
But even a blended methodology will only be as good as an organization's commitment to making changes. This commitment must be driven by the highest level of the organization. The good news is that, with a real commitment to change and a tailored methodology for guiding that change, the 20-year-old promise of enterprise architecture is within reach.
That promise hasn't changed: reducing IT cost and complexity, while increasing business value and effectiveness—or, to put it even more simply, improving your competitiveness in an increasingly competitive world.

Introduction

The year 2007 marks the 20-year anniversary of enterprise architecture. In that time, a number of enterprise-architectural methodologies have come and gone. Today, four dominate the field: the Zachman Framework for Enterprise Architectures, The Open Group Architecture Framework (TOGAF), the Federal Enterprise Architecture (FEA), and Gartner (formerly, the Meta Framework).
Should you care about a field that is 20 years old? It depends. This field was inaugurated to address two major problems in information technology (IT) that were then already becoming apparent. The first problem was managing the increasing complexity of information-technology systems. The second problem was the increasing difficulty in delivering real business value with those systems.
As you can imagine, these problems are related. The more complex a system, the less likely it is that it will deliver maximum business value. As you better manage complexity, you improve your chances of delivering real business value.
So, should you care about this field? It depends on how you feel about positively affecting your organization's bottom line. If managing system complexity and delivering business value are key priorities for you, you should care about enterprise-architecture methodologies. If you are focused on maintaining, or rebuilding, IT's credibility in your organization, or if you strive to promote the use of IT to maintain a competitive position in your industry, you should continue reading this white paper. If these issues don't concern you, these methodologies have little to offer.
As systems become more complex, they generally require more planning. It is easy to see this in buildings. When Henry David Thoreau built his little cabin on Walden Pond (shown in Figure 1), he embraced simplicity and needed no architects. If you are building New York City (shown in Figure 2), simplicity is out of the question, and you will need many architects.
Figure 1. Thoreau's cabin at Walden Pond, as drawn by Thoreau's sister, Sophia Thoreau
Figure 2. New York City
The relationship between complexity and planning for buildings and cities is similar for information systems. If you are building a simple, single-user, nondistributed system, you might need no architects at all. If you are building an enterprise-wide, mission-critical, highly distributed system, you might need a database architect, a solutions architect, an infrastructure architect, a business architect, and an enterprise architect.
This paper is about the methodologies needed to develop the overall architectural vision for an organization. This is the responsibility of the enterprise architect. This is the architect who specializes in the broadest possible view of architecture within the enterprise. This is the architect's architect, the architect who is responsible for coordinating the work of all of the other architects. Do you need such an architect? It all depends on what you are building: Thoreau's cabin or New York City.
Building a large, complex, enterprise-wide information system without an enterprise architect is like trying to build a city without a city planner. Can you build a city without a city planner? Probably. Would you want to live in such a city? Probably not.
Of course, hiring a city planner does not guarantee a livable city; it merely improves your chances. Similarly, having an enterprise architect does not guarantee a successful enterprise architecture. There are many examples of failed enterprise architectures in the world today, and all of them had enterprise architects (probably dozens!). Architectural methodologies can help, but they go only so far. I'll also discuss some of the reasons for these failures, and how to avoid them, later in this paper.
Before I get too far into comparing the methodologies that make up the enterprise architect's toolkit, I need to define some terms. This is especially important in an article that is comparing methodologies, because the different methodologies sometimes use similar terms to mean different things.
For example, we have two methodologies that describe themselves as enterprise-architectural frameworks: the Zachman Framework for Enterprise Architectures and The Open Group Architectural Framework (TOGAF). Yet these two methodologies share little in common other than the words enterprise, architecture, and framework.
So, I will start by defining the terms as I will use them in this white paper. Those definitions marked with an asterisk (*) are taken mostly from IEEE-1471-2000 [01], whose definitions I use where they exist and make sense.
architect—One whose responsibility is the design of an architecture and the creation of an architectural description
architectural artifact—A specific document, report, analysis, model, or other tangible that contributes to an architectural description
architectural description*—A collection of products (artifacts) to document an architecture
architectural framework—A skeletal structure that defines suggested architectural artifacts, describes how those artifacts are related to each other, and provides generic definitions for what those artifacts might look like
architectural methodology—A generic term that can describe any structured approach to solving some or all of the problems related to architecture
architectural process—A defined series of actions directed to the goal of producing either an architecture or an architectural description
architectural taxonomy—A methodology for organizing and categorizing architectural artifacts
architecture*—The fundamental organization of a system embodied in its components, their relationships to each other, and to the environment, and the principles guiding its design and evolution
enterprise architecture—An architecture in which the system in question is the whole enterprise, especially the business processes, technologies, and information systems of the enterprise
Now that we have a common understanding of these key terms, I can take you through the history of enterprise-architecture methodologies, discuss the problems these methodologies are trying to solve, and compare the top four methodologies in terms of their approach and their relationship to each other.

A Brief History of Enterprise Architecture

The field of enterprise architecture essentially started in 1987, with the publication in the IBM Systems Journal of an article titled "A Framework for Information Systems Architecture," by J.A. Zachman. In that paper, Zachman laid out both the challenge and the vision of enterprise architectures that would guide the field for the next 20 years. The challenge was to manage the complexity of increasingly distributed systems. As Zachman said:
The cost involved and the success of the business depending increasingly on its information systems require a disciplined approach to the management of those systems. [02]
Zachman's vision was that business value and agility could best be realized by a holistic approach to systems architecture that explicitly looked at every important issue from every important perspective. His multiperspective approach to architecting systems is what Zachman originally described as an information systems architectural framework and soon renamed to be an enterprise-architecture framework.
Zachman was a major influence on one of the earliest attempts by a branch of the U.S. Government, the Department of Defense, to create an enterprise architecture. This attempt was known as the Technical Architecture Framework for Information Management (TAFIM) [03] and was introduced in 1994.
The promise of enterprise architectures, such as TAFIM, to better align technical projects with business need was noticed by no less a body than the U.S. Congress. Most likely influenced by the promised benefits of TAFIM, Congress in 1996 passed a bill known as the Clinger-Cohen Act of 1996 [04], also known as the Information Technology Management Reform Act, which mandated that all federal agencies take steps to improve the effectiveness of their IT investments. A CIO Council, consisting of CIOs from all major governmental bodies, was created to oversee this effort.
In April 1998, the CIO Council began work on its first major project, the Federal Enterprise Architecture Framework (FEAF). Version 1.1 [05] of this framework was released in September of 1999. This document contained some innovative ideas, such as "segmented architectures"—that is, architectural focus on segmented subsets of the larger enterprise.
Over time, responsibility for federal enterprise architecture moved from the CIO Council to the Office of Management and Budget (OMB). In 2002, the OMB evolved and renamed the FEAF methodology as the Federal Enterprise Architecture (FEA). I will describe FEA in greater detail, in the section dedicated to it.
Despite the very significant enterprise-architectural activity in the Federal Government (one could argue that no organization has spent more money attempting to develop an enterprise architecture than the U.S. Government), progress has been slow and success stories are overshadowed by higher-profile failures. In 2004, a full eight years after the Clinger-Cohen Act mandated the use of effective IT planning processes, the General Accounting Office (GAO) reported the following:
Only 20 of 96 agencies examined had established at least the foundation for effective architecture management. Further, while 22 agencies increased in maturity since 2001, 24 agencies decreased in maturity and 47 agencies remained the same. [06]
Since January of 2005, the General Accounting Office (GAO, not to be confused with the OMB) has severely chastised a number of U.S. agencies for failures in their adoption or use of enterprise architecture. A few examples include the FBI [07], the Department of Defense [08], the Department of Homeland Security [09], and NASA [10].
In 1998, four years after TAFIM (remember TAFIM?) was introduced and two years after it became codified as Clinger-Cohen, TAFIM was officially retired by the Department of Defense.
The work done on TAFIM was turned over to The Open Group. They morphed it into a new standard that is today known as The Open Group Architectural Framework—better known by its acronym, TOGAF. I will discuss the TOGAF work in the section dedicated to that topic.
In 2005, about the same time that OMB was becoming the dominant EA force in the public sector, another organization was taking steps to become a dominant force in the private sector. This group was Gartner.
By 2005, Gartner was already one of the most influential organizations specializing in CIO-level consulting. However, in the specific area of enterprise architecture, the best known IT research and advisory group was not Gartner, but Meta Group.
Gartner had struggled to build an enterprise-architecture practice, but never achieved the status of the Meta Group. In 2005, Gartner decided that if they couldn't compete with Meta Group, they would do the next best thing: They would buy it.
Following the purchase of Meta Group, Gartner/Meta spent a year looking at what each company brought to the table as far as enterprise-architecture experience and methodologies. The two companies discussed how best to reconcile their often quite different approaches.
In the end, a fairly simple algorithm was applied: If Meta Group liked it, it was in; if Meta Group didn't like it, it was out. Gartner liked architectural frameworks. The Meta Group liked architectural process. So, frameworks were out; processes were in. I'll discuss this Gartner/Meta process in detail, in the section devoted to Gartner.
Figure 3 summarizes this history with an enterprise-architecture timeline. This brings us up to date in the history of enterprise architecture. Now, let's look more closely at today's main methodologies and introduce a case study that will be used in this white paper.
Figure 3. Enterprise-architecture timeline

Case Study

So that we can compare and contrast the four major approaches to enterprise architectures, I am going to illustrate how each would approach a similar scenario. This fictitious scenario is a composite of several enterprises with which I have worked over the past several years. So, while it is fictitious, it is very realistic. I'll first describe the scenario.
MedAMore is a chain of drug stores. It started as a regional chain in 1960. In 1995, it developed an innovative software system that enabled it to run drug stores very efficiently. It called this system MedAManage, or MAM. MAM incorporated some innovative business ideas, such as patient-relationship management, inventory management, automated insurance billing, and even utility optimization.
MAM consisted of three programs: MAM/Store, which ran on a small computer at a drug store; MAM/Warehouse, which ran on a server in a regional warehouse; and MAM/Home, which ran on a large server at the home office.
These three programs communicated through files that were transferred from one location (for example, a store) to another (for example, a regional warehouse). When reliable communications lines existed, file transfers could occur through FTP. The system was also flexible enough to accommodate transfers through courier, where necessary.
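As a rough illustration of that file-transfer design, here is a minimal sketch of how a MAM/Store batch upload over FTP might look. MedAMore is fictional, so the host, credentials, and file name below are all invented; only the ftplib calls are standard Python.

  import ftplib
  from pathlib import Path

  # Hypothetical sketch: MAM/Store pushes a daily batch file to a regional
  # warehouse over FTP; the fictional system fell back to courier when no
  # reliable line existed.
  def push_daily_batch(batch: Path, host: str, user: str, password: str) -> None:
      with ftplib.FTP(host) as ftp:
          ftp.login(user=user, passwd=password)
          with batch.open("rb") as fh:
              # STOR uploads the file under its own name on the server.
              ftp.storbinary(f"STOR {batch.name}", fh)

  # push_daily_batch(Path("store042_20020131.dat"),
  #                  "warehouse-se.example.com", "mamstore", "secret")

The delivery problems MedAMore later runs into (late, missing, and duplicated files) are exactly what you would expect from point-to-point batch transfers with no delivery tracking.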
By 2000, MedAMore was doing quite well—in part, because of the cost-cutting moves enabled by the MAM system. MedAMore decided to begin expansion. To do this, it purchased three regional chains. With these purchases, MedAMore extended its reach through the southeast quadrant of the U.S.
By 2002, it was clear that the same software systems that had initially fueled MedAMore's success were now hampering its future. Some of the problems MedAMore was running into were the following:
  • MAM/Store required regional specializations. For example, different insurance plans needed to be supported in different regions, and these all required changes to the MAM/Store module.
  • The regional warehouses that had been acquired through acquisition each had different ways of receiving orders from the retail stores and different procedures for ordering supplies from the wholesalers. Each of these differences required changes to the MAM/Warehouse module.
  • The file-transfer approach to information sharing that had worked so well when MedAMore consisted of 30 drugstores, one regional warehouse, and one home office was turning out to be difficult to coordinate among 200 drugstores, four regional warehouses, two geographic offices, and one home office. Files were often delivered late, sometimes not at all, and occasionally multiple times. This made it difficult for the home office to access reliable, up-to-date financial information, especially in the areas of sales and inventory.
It was clear to MedAMore management that the MAM system needed many enhancements. However, upgrading this system was difficult. Each of the three modules (store, warehouse, and home office) was huge, inefficient, and cumbersome, and each included functionality for everything that each entity might need.
The modules had grown to over 1 million lines of code each. It was difficult to change one function without affecting others. All of the functions accessed a single database, and changes to one record definition could ripple through the system in an unpredictable fashion. Changing even a single line of code required a rebuild of the entire multimillion-line module.
MedAManage had become MedANightmare. Debugging was difficult. Software builds were torturous. Installing new systems was hugely disruptive.
These technical problems soon created internal conflicts within the home office of MedAMore. The business side of MedAMore wanted to acquire two more regional chains, but IT was still struggling to bring the existing acquisitions online.
This resulted in a rapidly growing divide between the business and the technical sides of MedAMore. The business side saw IT as reducing business agility. The technical side saw the business side as making impossible demands and blamed it for refusing to consult IT before entering into acquisition discussions.
The distrust had reached such a point that, by 2005, the CIO was no longer considered part of the executive team of MedAMore. The business side distrusted IT and tried to circumvent it at every opportunity. The technical side built its IT systems with little input from the business folks. Several large and expensive IT initiatives were ignored by the business side and were eventually abandoned.
By 2006, MedAMore was in crisis. It clearly needed to revamp its technical systems to make them easier to specialize for regional requirements. This was going to be an expensive proposition, and MedAMore couldn't afford for the effort to fail.
Just as importantly, MedAMore also had to rebuild its internal relationships. The constant bickering and distrust between business and IT was affecting morale, efficiency, and profitability. A company that only five years earlier was an industry leader in profitability—in large part, because of its innovative use of IT—was now struggling to stay out of the red—in large part, because of the inflexibility of those same IT systems.
Cath, the CEO of MedAMore, desperately needed a solution. At a CEO conference, she heard how many of her peers were using enterprise architectures to build stronger partnerships between their technical and business groups and deliver more cost-effective IT systems that enabled business agility.
Cath decided that this approach merited further investigation. She asked Irma, her CIO, to prepare a recommendation on the use of an enterprise architecture within MedAMore. Irma was impressed with the approach, but recognized that any such initiative needed to be driven from the top and needed to involve the business side from the start.
On Irma's recommendation, Cath called a meeting with Bret, the Vice-President of Business, and Irma. Cath announced that she had decided to create a common enterprise architecture for MedAMore that would unite its technical and business people. This common enterprise architecture would be named MedAMore-Enterprise Architecture, or MAM-EA. After it was completed, MAM-EA would drive all new IT investment and ensure that every dollar invested in IT was delivering the maximum value to the business.
Cath knew that MAM-EA was a bet-the-company decision for MedAMore. The MAM-EA vision had to work. Cath was depending on Bret (the business side) and Irma (the IT side) to make it work.
So, that is the problem. Now, let's see how each of the EA approaches might provide a solution for MedAMore.

The Zachman Framework for Enterprise Architectures

The first thing we need to understand about the Zachman Framework is that it isn't a framework—at least, by my definition of a framework. According to the American Heritage Dictionary, a framework is defined as:
A structure for supporting or enclosing something else, especially a skeletal support used as the basis for something being constructed; An external work platform; a scaffold; A fundamental structure, as for a written work; A set of assumptions, concepts, values, and practices that constitutes a way of viewing reality. [11]
A taxonomy, on the other hand, is defined as:
The classification of organisms in an ordered system that indicates natural relationships; The science, laws, or principles of classification; systematics; Division into ordered groups or categories [12]
The Zachman "Framework" is actually a taxonomy for organizing architectural artifacts (in other words, design documents, specifications, and models) that takes into account both who the artifact targets (for example, business owner and builder) and what particular issue (for example, data and functionality) is being addressed.
As John Zachman retrospectively described his work:
The [Enterprise Architecture] Framework as it applies to Enterprises is simply a logical structure for classifying and organizing the descriptive representations of an Enterprise that are significant to the management of the Enterprise, as well as to the development of the Enterprise's systems. [13]
Many proponents of the Zachman Framework see it as cross-disciplinary, with influence extending far beyond IT. One popular book on Zachman, for example, says:
...in due course, you will discover that the Framework exists in everything you do, not only IT projects. When you thoroughly understand the Framework, you can become more effective in everything you do. This means everything. This statement is not made lightly. [14]
John Zachman himself told me, in an interview that I recently conducted with him:
...the Framework schema has been around for thousands of years and I am sure it will be around for a few more thousands of years. What changes is our understanding of it and how to use it for Enterprise engineering and manufacturing. [15]
Zachman originally explained his IT taxonomy using the building industry as an analogy. In that industry, architectural artifacts are implicitly organized using a two-dimensional organization. One dimension is the various "players in the game." For a physical building, some of these players are the owner (who is paying for the project), the builder (who is coordinating the overall construction), and a zoning board (who is ensuring that construction follows local building regulations).
A building architect prepares different artifacts for each of these players. Every player demands complete information, but what constitutes completeness differs for the different players. The owner is interested in a complete description of the functionality and aesthetics of the building. The builder is interested in a complete description of the materials and construction process. The owner doesn't care about the placement of studs in the walls. The builder doesn't care how the bedroom windows line up with the morning sun.
As Zachman said in his original article:
...each of the architectural representations differs from the others in essence, not merely in level of detail. [16]
The second dimension for architectural artifact organization is the descriptive focus of the artifact: the what, how, where, who, when, and why of the project. This dimension is independent of the first. Both the builder and the owner need to know what, but the owner's need to know what is different from the builder's need to know what. What what is what depends on who is asking the question.
In his first paper and in his subsequent 1992 elaboration [17], Zachman proposed that there are six descriptive foci (data, function, network, people, time, and motivation) and six player perspectives (planner, owner, designer, builder, subcontractor, and enterprise). These two dimensions can be arranged in a grid, as shown in Figure 4.
From the business owner's perspective, "data" means business entities. This can include information about the entities themselves, such as customers and products, or information about relationships between those entities, such as demographic groups and inventories. If you are talking to a business owner about data, this is the language you should use.
From the perspective of the person implementing the database, "data" does not mean business entities, but rows and columns organized into tables and linked together by mathematical joins and projections. If you are talking to a database designer about data, don't talk about customer demographic groups, but talk about third-normal relational tables.
It's not that one of these perspectives is better than the other or more detailed than the other or of a higher priority than the other. Both of these perspectives on data are critical to a holistic understanding of the system's architecture. As Zachman said:
We are having difficulties communicating with one another about information systems architecture, because a set of architectural representations exists, instead of a single architecture. One is not right and another wrong. The architectures are different. They are additive and complementary. There are reasons for electing to expend the resources for developing each architectural representation. And there are risks associated with not developing any one of the architectural representations. [18]
I discussed the historical importance of the Zachman Framework in the history section. Here, I will discuss the actual framework itself and how it could be used to help build MAM-EA, the problem proposed in the case-study section.
As I mentioned earlier, the Zachman Framework consists of six functional foci, each considered from the perspective of a major player. The Zachman Framework as it is portrayed today is shown in Figure 4.
Figure 4. Zachman grid
As you can see from Figure 4, there are 36 intersecting cells in a Zachman grid—one for each meeting point between a player's perspective (for example, business owner) and a descriptive focus (for example, data). As we move horizontally (for example, left to right) in the grid, we see different descriptions of the system—all from the same player's perspective. As we move vertically in the grid (for example, top to bottom), we see a single focus, but change the player from whose perspective we are viewing that focus.
There are three suggestions of the Zachman grid that can help MedAMore in the development of MAM-EA.
The first suggestion of the Zachman taxonomy is that every architectural artifact should live in one and only one cell. There should be no ambiguity about where a particular artifact lives. If it is not clear in which cell a particular artifact lives, there is most likely a problem with the artifact itself.
As MedAMore begins accumulating artifacts in the development of MAM-EA, it can use the Zachman grid to clarify the focus of each of these artifacts. For example, artifacts relating to a service-oriented architecture live mostly in the third row (designer's perspective). They generally will not be of interest to the business owner (Bret, in the MedAMore case study).
The second suggestion of the Zachman taxonomy is that an architecture can be considered a complete architecture only when every cell in that architecture is complete. A cell is complete when it contains sufficient artifacts to fully define the system for one specific player looking at one specific descriptive focus.
When every cell is populated with appropriate artifacts, there is a sufficient amount of detail to fully describe the system from the perspective of every player (what we might today call a stakeholder) looking at the system from every possible angle (descriptive focus). So, MedAMore can use the Zachman grid to ensure that appropriate discussions are occurring between all of the important stakeholders of MAM-EA.
The third suggestion of the Zachman grid is that cells in columns should be related to each other. Consider, for example, the data column (the first column) of the Zachman grid. From the business owner's (Bret's) perspective, data is information about the business. From the database administrator's perspective, data is rows and columns in the database.
While the business owner thinks about data quite differently from the database administrator, there should be some relationship between these perspectives. Somebody should be able to follow Bret's business requirements and show that the database design is, in fact, being driven by those requirements. If Bret has requirements that are not traceable down to the database design, we must ask if the business needs will be met by this architecture. On the other hand, if there are database-design elements that do not trace back to business requirements, we might ask if we have included unnecessary design at the database level.
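These three suggestions are mechanical enough to check. Here is a minimal sketch, with invented artifacts and trace links, of how MedAMore might test cell completeness and column traceability; the data and names are illustrative, not part of the framework itself.

  # Invented artifacts, each filed in exactly one (row, column) cell.
  artifacts = {
      ("business model", "what"): ["business entity model"],
      ("system model", "what"):   ["third-normal-form schema"],
  }
  rows = ["business model", "system model"]
  cols = ["what"]

  def incomplete_cells(grid, rows, cols):
      """Cells with no artifacts; the architecture is complete only if none."""
      return [(r, c) for r in rows for c in cols if not grid.get((r, c))]

  # Traceability down the 'what' column: each owner-level artifact should
  # link to at least one designer-level artifact.
  trace = {"business entity model": ["third-normal-form schema"]}
  untraced = [a for a in artifacts[("business model", "what")]
              if not trace.get(a)]

  print(incomplete_cells(artifacts, rows, cols))  # [] -> every cell populated
  print(untraced)  # [] -> Bret's requirements reach the database design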
So, we can see five ways in which the Zachman grid can help in the development of MAM-EA. It can help:
  1. Ensure that every stakeholder's perspective has been considered for every descriptive focal point.
  2. Improve the MAM-EA artifacts themselves by sharpening each of their focus points to one particular concern for one particular audience.
  3. Ensure that all of Bret's business requirements can be traced down to some technical implementation.
  4. Convince Bret that Irma's technical team isn't planning on building a bunch of useless functionality.
  5. Convince Irma that the business folks are including her IT folks in their planning.
But Zachman by itself is not a complete solution for MedAMore. There are far too many issues that will be critical to MAM-EA's success that Zachman does not address. Zachman does not give us a step-by-step process for creating a new architecture. Zachman doesn't even give us much help in deciding if the future architecture we are creating is the best architecture possible. For that matter, Zachman doesn't even give us an approach to show a need for a future architecture. For these and other issues, we are going to need to look at other methodologies.

The Open Group Architecture Framework (TOGAF)

The Open Group Architecture Framework is best known by its acronym, TOGAF. TOGAF is owned by The Open Group [19]. TOGAF's view of an enterprise architecture is shown in Figure 5.
Figure 5. TOGAF's enterprise architecture
As shown in the figure, TOGAF divides an enterprise architecture into four categories, as follows:
  1. Business architecture—Describes the processes the business uses to meet its goals
  2. Application architecture—Describes how specific applications are designed and how they interact with each other
  3. Data architecture—Describes how the enterprise datastores are organized and accessed
  4. Technical architecture—Describes the hardware and software infrastructure that supports applications and their interactions
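One way to picture these four categories is as named slots that both the baseline and the target architecture must fill. The sketch below is an illustration (the MedAMore entries are invented), not anything prescribed by TOGAF itself.

  from dataclasses import dataclass, field
  from enum import Enum

  # TOGAF's four sub-architectures as a typed set of slots.
  class Category(Enum):
      BUSINESS = "business"
      APPLICATION = "application"
      DATA = "data"
      TECHNICAL = "technical"

  @dataclass
  class EnterpriseArchitecture:
      descriptions: dict = field(default_factory=dict)

      def describe(self, category: Category, text: str) -> None:
          self.descriptions[category] = text

  # Invented MedAMore example: compare baseline and target per category.
  baseline = EnterpriseArchitecture()
  baseline.describe(Category.DATA, "single shared MAM database")
  target = EnterpriseArchitecture()
  target.describe(Category.DATA, "per-module datastores with defined interfaces")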
TOGAF describes itself as a "framework," but the most important part of TOGAF is the Architecture Development Method, better known as ADM. ADM is a recipe for creating architecture. A recipe can be categorized as a process. Given that ADM is the most visible part of TOGAF, I categorize TOGAF overall as an architectural process, instead of either an architectural framework (as The Open Group describes TOGAF) or a methodology (as it describes ADM).
Viewed as an architectural process, TOGAF complements Zachman—which, recall, I categorized as an architectural taxonomy. Zachman tells you how to categorize your artifacts. TOGAF gives you a process for creating them.
TOGAF views the world of enterprise architecture as a continuum of architectures, ranging from highly generic to highly specific. It calls this continuum the Enterprise Continuum. It views the process of creating a specific enterprise architecture, such as MAM-EA, as moving from the generic to the specific. TOGAF's ADM provides a process for driving this movement from the generic to the specific.
TOGAF calls most generic architectures Foundation Architectures. These are architectural principles that can, theoretically, be used by any IT organization in the universe.
TOGAF calls the next level of specificity Common Systems Architectures. These are principles that one would expect to see in many—but, perhaps, not all—types of enterprises.
TOGAF calls the next level of specificity Industry Architectures. These are principles that are specific across many enterprises that are part of the same domain—such as, in our MedAMore case study, all pharmaceutical enterprises.
TOGAF calls the most specific level the Organizational Architectures. These are the architectures that are specific to a given enterprise, such as MedAMore.
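Since the continuum is an ordered scale, it can be sketched as one. The fragment below (an illustration, not TOGAF's own notation) treats the four levels as an ordered enumeration and pictures the ADM as moving an architecture toward the organization-specific end.

  from enum import IntEnum

  # TOGAF's Enterprise Continuum, from most generic to most specific.
  class Continuum(IntEnum):
      FOUNDATION = 1        # usable by any IT organization
      COMMON_SYSTEMS = 2    # common to many kinds of enterprise
      INDUSTRY = 3          # e.g., all pharmaceutical enterprises
      ORGANIZATIONAL = 4    # specific to one enterprise, such as MedAMore

  def more_specific(level: Continuum) -> Continuum:
      """Step one level toward the organization-specific end."""
      return Continuum(min(level + 1, Continuum.ORGANIZATIONAL))

  print(more_specific(Continuum.FOUNDATION).name)  # COMMON_SYSTEMS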
Figure 6 shows the relationship between the Enterprise Continuum and the Architecture Development Method (ADM).
Figure 6. The TOGAF Enterprise Continuum
TOGAF defines the various knowledge bases that live in the Foundation Architecture. Two that you might run into are the Technical Reference Model (TRM) and the Standards Information Base (SIB). The TRM is a suggested description of a generic IT architecture. The SIB is a collection of standards and pseudo-standards that The Open Group recommends that you consider in building an IT architecture.
TOGAF presents both the TRM and the SIB as suggestions; neither is required. In my view, both the TRM and the SIB are flawed for the same reason: They are biased toward application portability, at the expense of application interoperability and application autonomy. I consider this an outdated view of technical architectures.
For an organization such as MedAMore, TOGAF largely boils down to the Architecture Development Method (ADM). Individuals within MedAMore will be exposed to the Enterprise Continuum, the SIB, and the TRM (as well as a few other TOGAF features), which is why I discussed them. But the day-to-day experience of creating an enterprise architecture will be driven by the ADM, a high-level view of which is shown in Figure 7.
Figure 7. The TOGAF Architecture Development Method (ADM)
As shown in Figure 7, the TOGAF ADM consists of eight phases that are cycled through after an initial "priming of the pump." I'll take you through these phases as they could be applied to the MedAMore case study. But, before MedAMore can start the ADM, it needs to gain some experience with TOGAF. MedAMore will have two choices on how it can get this experience.
First, MedAMore can train itself in TOGAF. MedAMore can download the TOGAF documentation [20]—which describes all of TOGAF, including the ADM, in considerable detail. It can purchase books on TOGAF [21]. There is probably more free and inexpensive information available about TOGAF than about all other architectural methodologies combined.
Second, MedAMore can buy expertise in TOGAF. There are consultants who specialize in TOGAF and have earned Open Group certification [22]. Because MedAMore wants to minimize any chances of failure, it has chosen to call in a TOGAF consultant. MedAMore has brought in Teri, an Open Group–certified TOGAF architect. Remember that the other players at MedAMore are Cath, the CEO of MedAMore; Bret, the Business VP; and Irma, the CIO.
In the Preliminary Phase, Teri meets with the major players at MedAMore to introduce the TOGAF process. Her three goals in the preliminary phase are to:
  1. Make sure everybody is comfortable with the process.
  2. Modify the TOGAF process, as necessary, to fit within the MedAMore culture.
  3. Set up the governance system that will oversee future architectural work at MedAMore.
Teri will work closely with Bret to understand the business philosophy, business models, and strategic drivers of MedAMore. She will work closely with Irma to define the architectural principles that drive technological architectures at MedAMore and document those principles using the TOGAF-recommended format.
In some organizations, achieving buy-in on the need for an enterprise architecture could be very difficult. This is especially true when the effort is driven from the IT organization, and even more so when there is a history of discord between the business and the technical sides of the organization. MedAMore does have this history of animosity. However, it has one fact in its favor from which Teri should take heart: The effort is not driven by IT, but by Cath, the CEO. This gives the project high visibility and creates a positive incentive for cooperation from all sides.
As soon as Teri and MedAMore have completed the Preliminary Phase, they are ready to start Phase A. Phase A begins, at least in theory, with a Request for Architecture Work from some organization within MedAMore. This document includes the business reasons for the request, budget and personnel information, and any constraints that need to be considered. Because MedAMore has never done a Request for Architecture Work, Teri will probably need to work with the sponsoring organization in creating such a request.
As soon as the Request for Architecture Work (or some equivalent) has been received, Teri (the TOGAF consultant) starts MedAMore on Phase A. In Phase A, Teri will ensure that the project has the necessary support within MedAMore, define the scope of the project, identify constraints, document the business requirements, and establish high-level definitions for both the baseline (starting) architecture and target (desired) architecture.
These baseline and target definitions will include high-level definitions on all four of the EA sub-architectures shown back in Figure 5—namely, business, technology, data, and application architectures.
The culmination of Phase A will be a Statement of Architecture Work, which must be approved by the various stakeholders before the next phase of the ADM begins. The output of this phase is an architectural vision for the first pass through the ADM cycle. Teri will guide MedAMore in choosing the project, validating the project against the architectural principles established in the Preliminary Phase, and ensuring that the appropriate stakeholders have been identified and their issues have been addressed.
The Architectural Vision created in Phase A will be the main input into Phase B. Teri's goal in Phase B is to create a detailed baseline and target business architecture and perform a full analysis of the gaps between them. She will work primarily with Bret (or Bret's team) to achieve this.
Phase B is quite involved, encompassing business modeling, highly detailed business analysis, and technical-requirements documentation. A successful Phase B requires input from many stakeholders. The major outputs will be a detailed description of the baseline and target business objectives, and gap descriptions of the business architecture.
Phase C does for the information-systems architecture what Phase B does for the business architecture. In this phase, Teri works primarily with Irma (or her team). TOGAF defines nine specific steps, each with multiple sub-steps:
  1. Develop baseline data-architecture description
  2. Review and validate principles, reference models, viewpoints, and tools
  3. Create architecture models, including logical data models, data-management process models, and relationship models that map business functions to CRUD (Create, Read, Update, Delete) data operations
  4. Select data-architecture building blocks
  5. Conduct formal checkpoint reviews of the architecture model and building blocks with stakeholders
  6. Review qualitative criteria (for example, performance, reliability, security, integrity)
  7. Complete data architecture
  8. Conduct checkpoint/impact analysis
  9. Perform gap analysis
The most important deliverable from this phase will be the Target Information and Applications Architecture.
Phase D completes the technical architecture—the infrastructure necessary to support the proposed new architecture. This phase is completed mostly by engaging with Irma's technical team.
Phase E evaluates the various implementation possibilities, identifies the major implementation projects that might be undertaken, and evaluates the business opportunity associated with each. The TOGAF standard recommends that Teri's first pass at Phase E "focus on projects that will deliver short-term payoffs and so create an impetus for proceeding with longer-term projects."
This is good advice in any architectural methodology. Therefore, Teri should be looking for projects that can be completed as cheaply as possible, while delivering the highest perceived value. A good starting place to look for such projects is the organizational pain-points that initially convinced Cath (the MedAMore CEO) to adopt an enterprise architectural-based strategy in the first place. These pain-points, described earlier, included difficulties in completing regional/warehouse specialization and unreliability in data sharing.
Phase F is closely related to Phase E. In this phase, Teri works with MedAMore's governance body to sort the projects identified in Phase E into a priority order that accounts not only for the costs and benefits (identified in Phase E), but also for the risk factors.
In Phase G, Teri takes the prioritized list of projects and creates architectural specifications for the implementation projects. These specifications will include acceptance criteria and lists of risks and issues.
The final phase is H. In this phase, Teri modifies the architectural change-management process with any new artifacts created in this last iteration and with new information that becomes available.
Teri is then ready to start the cycle again. One of the goals from the first cycle should be information transfer, so that Teri's services are required less and less as more and more iterations of the cycle are completed.
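The shape of the process (a one-time Preliminary Phase followed by a repeating A-through-H loop) can be summarized in a few lines. The phase names below follow the TOGAF ADM; the loop itself is just shorthand for the cycle described above.

  from itertools import cycle

  PRELIMINARY = "Preliminary: tailor TOGAF, set up governance"
  PHASES = [
      "A: Architecture Vision",
      "B: Business Architecture",
      "C: Information Systems Architectures",
      "D: Technology Architecture",
      "E: Opportunities and Solutions",
      "F: Migration Planning",
      "G: Implementation Governance",
      "H: Architecture Change Management",
  ]

  print(PRELIMINARY)                # done once
  adm = cycle(PHASES)               # then the phases repeat indefinitely
  for _ in range(len(PHASES) * 2):  # e.g., two full iterations
      print(next(adm))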
The results of the TOGAF process will be determined as much by the Teri/MedAMore relationship as by the TOGAF specification itself. TOGAF is meant to be highly adaptable, and detail for the various architectural artifacts is sparse. As one book on TOGAF says:
TOGAF is not wholly specific with respect to generated documents; in fact, it provides very little in the way of prescriptive document templates—merely guidelines for inputs and outputs. [23]
The TOGAF specification is also flexible with respect to the phases. As the specification itself says:
One of the tasks before applying the ADM is to review its components for applicability, and then tailor them as appropriate to the circumstances of the individual enterprise. This activity might well produce an "enterprise-specific" ADM. [24]
TOGAF allows phases to be done incompletely, skipped, combined, reordered, or reshaped to fit the needs of the situation. So, it should be no surprise if two different TOGAF-certified consultants end up using two very different processes—even when working with the same organization.
TOGAF is even more flexible about the actual generated architecture. In fact, TOGAF is, to a surprising degree, "architecture-agnostic". The final architecture might be good, bad, or indifferent. TOGAF merely describes how to generate an enterprise architecture, not necessarily how to generate a good enterprise architecture. For this, you are dependent on the experience of your staff and/or TOGAF consultant. People adopting TOGAF in the hopes of acquiring a magic bullet will be sorely disappointed (as they will be with any of the methodologies).

Federal Enterprise Architecture (FEA)

The Federal Enterprise Architecture (FEA) is the latest attempt by the federal government to unite its myriad agencies and functions under a single common and ubiquitous enterprise architecture. FEA is still in its infancy, as most of the major pieces have been available only since 2006. However, as I discussed in the history section, it has a long tradition behind it and, if nothing else, has many failures from which it has hopefully learned some valuable lessons.
FEA is the most complete of all the methodologies discussed so far. It has both a comprehensive taxonomy, like Zachman, and an architectural process, like TOGAF. FEA can be viewed as either a methodology for creating an enterprise architecture or the result of applying that process to a particular enterprise—namely, the U.S. Government. I will be looking at FEA from the methodology perspective. My particular interest here is how we can apply the FEA methodology to private enterprises.
Most writers describe FEA as simply consisting of five reference models, one each for performance, business, components, technology, and data. It is true that FEA has these five reference models, but there is much more to FEA than just the reference models. A full treatment of FEA needs to include all of the following:
  • A perspective on how enterprise architectures should be viewed (the segment model, which I will describe shortly)
  • A set of reference models for describing different perspectives of the enterprise architecture (the five models, mentioned earlier)
  • A process for creating an enterprise architecture
  • A transitional process for migrating from a pre-EA to a post-EA paradigm
  • A taxonomy for cataloging assets that fall within the purview of the enterprise architecture
  • An approach to measuring the success of using the enterprise architecture to drive business value
You can see that the FEA is about much more than models. It includes everything necessary to build an enterprise architecture for probably the most complex organization on earth: the U.S. Government. As the FEA-Program Management Office (FEAPMO) says, FEA, taken in toto, provides:
...a common language and framework to describe and analyze IT investments, enhance collaboration and ultimately transform the Federal government into a citizen-centered, results-oriented, and market-based organization as set forth in the President's Management Agenda. [25]
While it might be a stretch to imagine that anything short of divine intervention could "transform the Federal government into a citizen-centered, results-oriented, and market-based organization," there is at least hope that some of the FEA methodology could help our beleaguered MedAMore corporation deal with its much more mundane problems. So, let's take a look at what FEA has to offer.

The FEA Perspective on EA

The FEA Perspective on EA is that an enterprise is built of segments, an idea first introduced by FEAF [26]. A segment is a major line-of-business functionality, such as human resources. There are two types of segments: core mission-area segments and business-services segments.
A core mission-area segment is one that is central to the mission or purpose of a particular political boundary within the enterprise. For example, in the Health and Human Services (HHS) agency of the federal government, health is a core mission-area segment.
A business-services segment is one that is foundational to most, if not all, political organizations. For example, financial management is a business-services segment that is required by all federal agencies.
Another type of enterprise-architecture asset is an enterprise service. An enterprise service is a well-defined function that spans political boundaries. An example of an enterprise service is security management. Security management is a service that works in a unified manner across the whole swath of the enterprise.
The difference between enterprise services and segments, especially business-service segments, is confusing. Both are shared across the entire enterprise. The difference is that business-service segments have a scope that encompasses only a single political organization. Enterprise services have a scope that encompasses the entire enterprise.
In the federal government, for example, both HHS and the Environmental Protection Agency (EPA) use the human resources business-service segment. However, the people who are managed by human resources are a different group for HHS from what they are for the EPA.
Both HHS and the EPA also use the security management enterprise service. But the security credentials that are managed by the security-management service are not specific to either of those agencies. Security credentials are managed effectively only when they are managed at the scope of the enterprise.
Resist the temptation to equate either segments or enterprise services with the services found in service-oriented architectures. There are two reasons such a comparison would be flawed. Firstly, enterprise services, business-service segments, and core mission-area segments are all much broader in focus than services found in service-oriented architectures. Secondly, segments are an organizational unit for an enterprise architecture, whereas services are an organizational unit for technical implementations. As organizational units for an enterprise architecture, their depth includes not just the technical architecture, but also the business and data architectures.
One final note about segments: Although segments function at the political (that is, agency) level, they are defined at the enterprise (that is, government) level. Enterprise services, of course, both function and are defined at the enterprise level.
The fact that segments are defined globally facilitates their reuse across political boundaries. One can map out the usage of segments across the political boundaries of the enterprise, then use that map to seek opportunities for architectural reuse. Figure 8, for example, shows a segment map of the federal government from the FEA Practice Guide [27]. As you can see, there are many segments (the vertical columns) that are used in multiple agencies, and any or all of these are good candidates for sharing.
Figure 8. Segment map of the federal government
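Mining such a map for reuse candidates is straightforward: any segment used by two or more agencies is worth examining for a shared implementation. The sketch below uses a small invented usage list, not the real federal data.

  from collections import defaultdict

  # (agency, segment) pairs; illustrative only.
  usage = [
      ("HHS", "human resources"),
      ("EPA", "human resources"),
      ("HHS", "health"),
      ("IRS", "financial management"),
      ("EPA", "financial management"),
  ]

  by_segment = defaultdict(set)
  for agency, segment in usage:
      by_segment[segment].add(agency)

  # Segments used by at least two agencies are sharing candidates.
  shared = {s: sorted(a) for s, a in by_segment.items() if len(a) >= 2}
  print(shared)
  # {'human resources': ['EPA', 'HHS'], 'financial management': ['EPA', 'IRS']}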

FEA Reference Models

The five FEA reference models are all about establishing common languages. The goal here is to facilitate communication, cooperation, and collaboration across political boundaries. According to the FEAPMO:
The FEA consists of a set of interrelated "reference models" designed to facilitate cross-agency analysis and the identification of duplicative investments, gaps, and opportunities for collaboration within and across agencies. Collectively, the reference models [compose] a framework for describing important elements of the FEA in a common and consistent way. [28]
Why do we need a common language? Consider this exchange:
James: Do you have a torch I can borrow?
Roger: No, I'm afraid not.
James: Do you know where I can get one?
Roger: The hardware store in town should have one.
So, James goes out to the hardware store and buys himself a torch. He returns.
Roger: Did you get your torch?
James: Yes, here it is.
Roger: That's not a torch! That's a flashlight. Why didn't you say so? I have one you could have borrowed.
James: Well, why didn't you say so?
The problem, of course, is that James comes from England, where what Roger calls a flashlight is called a torch; when Roger hears torch, he thinks of a blowtorch. Although both speak English, they don't necessarily speak the same English. The result is that James goes out and unnecessarily spends money on something that Roger could have lent him.
This is exactly the problem that the FEA Reference Models are trying to solve on a much larger scale. Suppose the Internal Revenue Service (IRS) decides it needs a demographics system to track taxpayer data. They ask around to see if anybody has one they can modify for their purposes. Nobody does.
Little do they know that, right next door, the Government Printing Office (GPO) has a perfectly good demographics system that is almost exactly what the IRS needs. They just happen to call it a customer-analytics system.
So, the IRS goes out and builds its system from scratch, instead of just modifying the one already built (and paid for) by the GPO. And, in doing so, the IRS will waste considerably more money than James spent on his unnecessary flashlight.
This, in a nutshell, is the goal of the five FEA reference models: to give standard terms and definitions for the domains of enterprise architecture and, thereby, facilitate collaboration and sharing across the federal government. The five reference models are as follows:
  1. The Business Reference Model (BRM) gives a business view of the various functions of the federal government. For example, the BRM defines a standard business capability called water resource management that is a subfunction of natural resources, which in turn is a line-of-business of the broader services for citizens business area (one way to represent this hierarchy is sketched after this list). [29]
  2. The Component Reference Model (CRM) gives a more IT-centric view of the systems that can support business functionality. For example, the CRM defines the customer-analytics system that I described earlier in the hypothetical exchange between the IRS and the GPO. [30]
  3. The Technical Reference Model (TRM) defines the various technologies and standards that can be used in building IT systems. For example, the TRM defines HTTP as a protocol that is a subset of a service transport that is a subset of service access and delivery. [31]
  4. The Data Reference Model (DRM) defines standard ways of describing data. For example, the DRM defines an entity as something that contains attributes and participates in relationships. [32]
  5. The Performance Reference Model (PRM) defines standard ways of describing the value delivered by enterprise architectures. For example, the PRM describes quality as a technology measurement area that is defined as "the extent to which technology satisfies functionality or capability requirements." [33]
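To make the BRM example concrete, here is a minimal sketch in Python of that fragment of the hierarchy. The nested-dictionary representation is my own device, not an official FEA format, and the second subfunction shown is an assumed example.

```python
# A minimal sketch of one BRM fragment as nested data: a business area
# contains lines of business, which contain subfunctions. The representation
# is illustrative, not an official FEA format.
brm_fragment = {
    "services for citizens": {                # business area
        "natural resources": [                # line of business
            "water resource management",      # subfunction from the text
            "conservation, marine, and land management",  # assumed example
        ],
    },
}

def locate_subfunction(brm, name):
    """Return the (business area, line of business) that owns a subfunction."""
    for area, lines in brm.items():
        for line, subfunctions in lines.items():
            if name in subfunctions:
                return area, line
    return None

print(locate_subfunction(brm_fragment, "water resource management"))
# -> ('services for citizens', 'natural resources')
```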

FEA Process

The FEA Process is primarily focused on creating a segment architecture for a subset of the overall enterprise (in FEA's case, the enterprise is the federal government and the subset is a governmental agency); it is described in the FEA Practice Guidance [34]. I discussed the FEA vision for enterprise segments earlier. At a very high level, the segment-architecture development process is as follows (a brief sketch after this list shows one way to track the steps):
  • Step 1: Architectural Analysis—Define a simple and concise vision for the segment, and relate it back to the organizational plan.
  • Step 2: Architectural Definition—Define the desired architectural state of the segment, document the performance goals, consider design alternatives, and develop an enterprise architecture for the segment, including business, data, services, and technology architectures.
  • Step 3: Investment and Funding Strategy—Consider how the project will be funded.
  • Step 4: Program-Management Plan and Execute Projects—Create a plan for managing and executing the project, including milestones and performance measures that will assess project success.
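For teams that want to make this process tangible, here is a minimal sketch in Python that records the four steps and some example outputs as simple data, so that progress on a given segment could be tracked programmatically. The artifact names are my own paraphrases of the step descriptions above, not official FEA deliverables.

```python
from dataclasses import dataclass

# A minimal sketch of the four FEA segment-architecture steps as data, so
# that a team could track progress per segment. Artifact names are my own
# paraphrases of the step descriptions, not official FEA deliverables.
@dataclass
class Step:
    name: str
    artifacts: list        # the expected outputs of the step
    done: bool = False

def segment_process():
    return [
        Step("Architectural Analysis",
             ["segment vision", "link to the organizational plan"]),
        Step("Architectural Definition",
             ["target architecture", "performance goals", "design alternatives"]),
        Step("Investment and Funding Strategy",
             ["funding plan"]),
        Step("Program-Management Plan and Execute Projects",
             ["management plan", "milestones", "performance measures"]),
    ]

steps = segment_process()
steps[0].done = True
print("Next steps:", [s.name for s in steps if not s.done])
```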

FEA Success Measurement

The FEA framework for measuring organizational success in using enterprise architecture is defined in the Federal Enterprise Architecture Program EA Assessment Framework 2.1 [35]. Federal agencies are rated on their overall maturity levels in three main categories:
  1. Architectural completion—Maturity level of the architecture itself
  2. Architectural use—How effectively the agency uses its architecture to drive decision-making
  3. Architectural results—The benefits being realized by the use of the architecture
OMB assigns each agency a success rating, based on its scores in each category and a cumulative score, as follows (the sketch after this list restates the rating logic in code):
  • Green—The agency rates well in the completion area (it has a mature enterprise architecture). It also rates well in both the use area (it is effectively using that enterprise architecture to drive ongoing strategy) and the results area (the use of that architecture is driving business value).
  • Yellow—The agency rates well in the completion area. It also rates well in either the use area or the results area, but not both.
  • Red—The agency either does not have a completed architecture or is not effectively using the architecture it has.
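The color logic can be restated compactly. Here is a minimal sketch of it in Python, assuming pass/fail judgments in the three categories; the real OMB assessment uses numeric category scores and a cumulative score, so this illustrates the decision logic only, not the actual scoring formula.

```python
# A minimal sketch of the OMB color rating, derived from the three category
# definitions above. The real assessment uses numeric scores; the booleans
# here (completion, use, results) are a simplification for illustration.
def omb_rating(completion: bool, use: bool, results: bool) -> str:
    if completion and use and results:
        return "Green"   # mature architecture, used effectively, driving value
    if completion and (use or results):
        return "Yellow"  # mature architecture, strong in use or results
    return "Red"         # no completed architecture, or not effectively used

print(omb_rating(True, True, True))    # -> Green
print(omb_rating(True, False, True))   # -> Yellow
print(omb_rating(False, True, True))   # -> Red
```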
The framework is interesting beyond the confines of the public sector. The category ratings can be fruitfully adapted by many enterprises to assess the maturity level of their own architectural efforts. Figure 9, for example, shows my own interpretation of the OMB maturity rankings for architectural completion, as I adapt them for the private sector. Similar adaptations can be created for architectural usage and architectural results.
Figure 9. OMB ranking of architectural completion, adapted for private sector by author (Roger Sessions)

FEA Applied to MedAMore

Now that I have taken you through the FEA approach, let's see what this might mean to MedAMore. Let's assume that Cath (MedAMore's CEO) has heard about FEA and how it is promising to streamline the federal government. If it can do this for the federal government, she reasons, surely it can do this for her company.
Cath hires a consultant, Fred, who is an expert on FEA (if such a thing can be said to exist for a methodology that is, at the time of this writing, less than a year old!). Fred's job is to show MedAMore how to do FEA—of course, not the real FEA, but FEA as it might be applied to the private sector. Cath introduces Fred to Bret (the business VP) and Irma (the CIO), and tells them to build her an MEA—FEA adapted for MedAMore.
Keep in mind that Cath has taken quite a risk. No other company to date has attempted to apply FEA to the private sector; and even the experience of using FEA within the public sector is nominal, at best.
The first thing that Fred will want to do is build enthusiasm for MEA. Keep in mind that he is coming into an organization in which the business folks barely speak to IT folks. If MEA is going to succeed, it needs to transform not only processes, but people. He will want to create a series of seminars explaining the value of the soon-to-be-defined MEA and how MEA will benefit not only MedAMore as a whole, but the individual units specifically.
Fred will next build a governance structure—MedAMore's equivalent to FEAPMO. I'll call this group MEAPMO. MEAPMO will own MEA, including the processes, the models, and the architecture itself.
The next thing that Fred will likely do is create reference models that can be used by all of the organizations across MedAMore. The five reference models from FEA can serve as a starting point. Some, such as the Technical Reference Model, might be usable with few modifications. Others, such as the Business Reference Model, will require extensive renovation. Fred shouldn't develop these in excruciating detail; instead, he should create starting points and build them up as MEA evolves.
Next, Fred will probably want to create a description of the segment architecture as it applies to MedAMore. Note that he will not be doing a complete segment architecture—just a high-level description. The actual process of completing the architecture will be a constantly evolving project.
By this point, a lot of work will have been done with few results. Fred will probably want to take a first pass at a segment-architecture process. FEA's process will be a good starting point, but will require specialization to MedAMore at the detail level (such as who the team members are and what form the generated artifacts should take).
Now, Fred will test-drive the process with the first segment delivery. He will need to build a team, and then lead this team in analyzing and prioritizing the segments—mapping them to business value, determining their architectural options, delivering the work, and, perhaps most importantly, measuring the results. These measurements will be critical in building momentum for future work.
Soon after completing the first segment, Fred might decide that it is time to measure the progress of the different groups in MedAMore in using MEA effectively. To do so, Fred needs a yardstick to measure the success of the different groups within MedAMore in driving business value with MEA. Fred thus leads MEAPMO in building a MedAMore equivalent to the Federal Enterprise Architecture Program EA Assessment Framework [35]. This yardstick will be Cath's main tool for ensuring both that the different groups are taking MEA seriously and that her investment is paying off.
And, finally, after Fred has completed this process, he will start the process again. Each iteration will result in new segments being delivered, more business value being generated, and more substance being added to the MEA methodology. At least, this is the theory. As I said earlier, with MEA, we are working at the bleeding edge.

Gartner

So far, I have written about three different methodologies that come together under the banner of enterprise architectures. This last methodology is a little different. It isn't a taxonomy (like Zachman), a process (like TOGAF), or a complete methodology (like FEA). Instead, it is what I define as a practice. It is the enterprise-architecture practice of one of the best known IT research and consulting organizations in the world: Gartner.
Let me spend a moment exploring my use of the word practice. I use the word "practice" much like I would to describe a physician's practice. A physician—say, Dr. Pérez—does not diagnose a disease by studying taxonomies, although taxonomies do help him communicate to other healthcare providers. He does not diagnose a disease by following a process, although he might go through an informal process in his head. He diagnoses a disease by applying his practice skills. These practice skills include his experience, training, and ongoing relationships with his colleagues.
How do you choose a physician? Do you grill candidates on how well they know the taxonomy of medicine? Do you sit candidates down and ask for a detailed description of the methodology each follows to diagnose illness? Probably not. You might ask your friends, but they probably only know a limited pool of candidates.
One approach to choosing a physician is to go to a well-known institution (a hospital or medical school) and choose from among their staff. In this approach, you are counting on the institution to choose highly qualified physicians and to have developed a community that encourages collaboration and best practices.
Does that institution insist on a rigid methodology for its physicians to follow? Probably not. Even if it does, it is not your primary concern. You are not even concerned with who the physicians in the institution are—although, in time, that will be of more interest to you. Your initial concern is only the reputation of the institution.
This is very similar to the Gartner approach to enterprise architecture. You don't bring in Gartner because they do or don't use TOGAF. You don't go to Gartner because they do or don't follow Zachman's taxonomy. You go to Gartner because they are well-known in their field. You assume both that they hire well-qualified specialists and that they have developed a community that encourages collaboration and best practice.
If you are a Gartner customer and you check the Gartner library for research notes describing their enterprise-architecture practice, you can find many such documents. For example, there is "Gartner Enterprise Architecture Process: Evolution 2005" [36] and "Gartner Enterprise Architecture Framework: Evolution 2005" [37]. However, these documents contain little descriptive information and, in any case, are dated in the late-2005 timeframe. Gartner contends that these best practices are timeless, and they continue to augment them as appropriate. The current Gartner methodology was not solidified until probably April of 2006, after the Gartner/Meta merger that I described in the history section.
The best summation of the Gartner practice that I have heard is the following:
Architecture is a verb, not a noun.
What does it mean to say that architecture is a verb, not a noun? It means that it is the ongoing process of creating, maintaining, and, especially, leveraging an enterprise architecture that gives an enterprise architecture its vitality. An architecture that is just a bunch of stiff artifacts that sit in a corner gathering dust is useless, regardless of how sophisticated your taxonomy is for categorizing those artifacts or how brilliant your process is that guided their development.
Gartner believes that enterprise architecture is about bringing together three constituencies: business owners, information specialists, and technology implementers. If you can bring these three groups together and unify them behind a common vision that drives business value, you have succeeded; if not, you have failed. Success is measured in pragmatic terms, such as driving profitability, not by checking off items on a process matrix.
Gartner believes that enterprise architecture must start with where an organization is going, not with where it is. If we are going to clean house, we don't need to exhaustively document everything we are throwing out. Let's focus our energy on what we want to end up with. As soon as we know our goal, we can see how what we have relates to it.
Gartner recommends that an organization begin by telling the story of where its strategic direction is heading and what the business drivers are to which it is responding. Gartner will want this story in plain language, without worrying about prescribed documentation standards, acronyms, or techno-babble. The only goal is making sure that everybody understands and shares a single vision.
Most organizations are facing major changes in their business processes. The process of creating an enterprise-architecture vision is the organization's opportunity to sit down, take a collective breath, and ensure that everybody understands the nature, the scope, and the impact of those changes.
As soon as an organization has this single shared vision of the future, it can consider the implications of this vision on the business, technical, information, and solutions architectures of the enterprise. The shared vision of the future will dictate changes in all of these architectures, assign priorities to those changes, and keep those changes grounded in business value.
Enterprise architecture, in the Gartner view, is about strategy, not about engineering. It is focused on the destination. The two things that are most important to Gartner are where an organization is going and how it will get there. Any architectural activity that is extraneous to these questions is irrelevant. "Just enough enterprise architecture, just in time," is another saying you will hear from the Gartner analyst.
Let's say Cath (MedAMore's CEO) likes what she hears. How is a Gartner engagement likely to proceed? With FEA, TOGAF, or Zachman, Cath needs to start by finding a qualified consultant who understands the methodology. With Gartner, this step is much easier: She merely calls Gartner.
Let's say Gartner sends Greg, the Gartner EA consultant. The first thing Greg will want to do is make sure the architecture is driven from the highest levels of the corporation. The fact that he is being called by MedAMore's CEO will be very reassuring.
Exactly how Greg will proceed is difficult to predict, because Gartner does not have a firm, step-by-step process. However, it is likely that he will start by focusing on Cath's strategic vision for MedAMore. He will want her to specify her vision in business terms and resist any temptation to discuss technology. Here are some possible business-vision statements Greg might elicit:
  • MedAMore will have stores in at least 30 states, spread out over 8 geographic regions, by the year 2010. It will accomplish this mainly through acquisition of regional pharmacies.
  • MedAMore will be able to assimilate new regional systems within 120 days of finalization of purchase.
  • MedAMore will reduce its purchasing costs by 10 percent, by consolidating all regional purchasing into a central system.
  • MedAMore's central office will be able to view consolidated sales and inventory reports from all stores that include data up to and including the previous day.
  • MedAMore will be able to reduce its inventory on-hand to no more than a five-day supply.
  • MedAMore will be able to invoice insurance companies by the end of the day on which the prescription was delivered to the patient.
  • Patients will be able to transfer prescriptions from any MedAMore pharmacy to any other.
  • Patients will be able to request prescription refills through a Web interface and receive e-mail notification of their availability for pick-up.
Notice that none of these visionary statements mentions technology (except as a delivery mechanism, in the last statement). Greg is purposely keeping these early discussions focused on business strategy.
Any one of Cath's vision "bullets" will have major ramifications across the business, information, and technical architectures. Part of Greg's job will be to prioritize the bulleted items. Let's say Cath decides that her highest priority is consolidating purchasing, because this will improve profitability in the near term.
Greg will soon work to turn Cath's idea about consolidated purchasing into a common-requirements vision (CRV). The CRV is where we will see some of the changes that will be required to drive Cath's vision for MedAMore. Greg will be going over the business changes with Bret and the technical and information changes with Irma, but he will also be working to bring everybody together as a unified team.
Greg will work with Bret (the business VP) to develop a target business architecture that supports consolidated purchasing. As soon as they have spec'd out the future system, they will also look at their current architecture to see what can be recycled.
Greg will work with Irma (the CIO) to develop a target information architecture that allows the home office to track regional inventories and consolidate procurement. They will also work on the technical architecture for the IT systems that will support the new business architecture. After they understand the future, they will look at current architectures for opportunities to reuse existing technology assets.
After Greg has completed the broad-brush architecture for their strategic vision, he will probably step back from the picture until the consolidated purchasing system has been implemented. If Cath needs help with the implementation of the architecture, she will likely look outside of Gartner, because Gartner does not do implementations.
As soon as the implementation of consolidated purchasing has been completed, Greg will step back in to help with the next iteration. His approach will be to keep the architecture at a high level and business-focused, and to home in on details only when and where necessary. He will continue to see his role not as creating an enterprise architecture for MedAMore, but as helping MedAMore institute a process that allows an enterprise architecture to emerge and evolve from the business strategy.

Comparison

As you can see, the leading enterprise-architecture methodologies are very different in their approaches. Which one is best for your organization? There is no one answer to this question. I'll take you through the 12 criteria that I most often use for comparing and evaluating enterprise-architectural methodologies. Not all of these criteria might be relevant to your organization, and some might be more important than others. But, at least, this section can serve as a starting point for your own evaluation.
I'll rate each methodology on each criterion. The ratings are assigned as follows:
  • 1: Does a very poor job in this area
  • 2: Does an inadequate job in this area
  • 3: Does an acceptable job in this area
  • 4: Does a very good job in this area
Keep in mind that these ratings are subjective. I'm sure most people would disagree with at least one of my ratings.
Taxonomy completeness refers to how well you can use the methodology to classify the various architectural artifacts. This is almost the entire focus of Zachman. None of the other methodologies focuses as much on this area. Ratings:
  • Zachman: 4
  • TOGAF: 2
  • FEA: 2
  • Gartner: 1
Process completeness refers to how fully the methodology guides you through a step-by-step process for creating an enterprise architecture. This is almost the entire focus of TOGAF, with its Architecture Development Method (ADM). Ratings:
  • Zachman: 1
  • TOGAF: 4
  • FEA: 2
  • Gartner: 3
Reference-model guidance refers to how useful the methodology is in helping you build a relevant set of reference models. This is almost the entire focus of FEA. TOGAF also provides support; however, I am less impressed with the TOGAF reference models. Ratings:
  • Zachman: 1
  • TOGAF: 3
  • FEA: 4
  • Gartner: 1
Practice guidance refers to how much the methodology helps you assimilate the mindset of enterprise architecture into your organization and develop a culture in which it is valued and used. This is a primary focus of Gartner's architectural practice. Ratings:
  • Zachman: 1
  • TOGAF: 2
  • FEA: 2
  • Gartner: 4
Maturity model refers to how much guidance the methodology gives you in assessing the effectiveness and maturity of different organizations within your enterprise in using enterprise architecture. Ratings:
  • Zachman: 1
  • TOGAF: 1
  • FEA: 3
  • Gartner: 2
Business focus refers to whether the methodology focuses on using technology to drive business value, in which business value is specifically defined as reduced expenses, increased income, or both. Ratings:
  • Zachman: 1
  • TOGAF: 2
  • FEA: 1
  • Gartner: 4
Governance guidance refers to how much help the methodology will be in understanding and creating an effective governance model for enterprise architecture. Ratings:
  • Zachman: 1
  • TOGAF: 2
  • FEA: 3
  • Gartner: 3
Partitioning guidance refers to how well the methodology will guide you into effective autonomous partitions of the enterprise, which is an important approach to managing complexity. Ratings:
  • Zachman: 1
  • TOGAF: 2
  • FEA: 4
  • Gartner: 3
Prescriptive catalog refers to how well the methodology guides you in setting up a catalog of architectural assets that can be reused in future activities. Ratings:
  • Zachman: 1
  • TOGAF: 2
  • FEA: 4
  • Gartner: 2
Vendor neutrality refers to how likely you are to get locked in to a specific consulting organization by adopting this methodology. A high rating here indicates low vendor lock-in. Ratings:
  • Zachman: 2
  • TOGAF: 4
  • FEA: 3
  • Gartner: 1
Information availability refers to the amount and quality of free or inexpensive information about this methodology. Ratings:
  • Zachman: 2
  • TOGAF: 4
  • FEA: 2
  • Gartner: 1
Time to value refers to the length of time you will likely be using this methodology before you start using it to build solutions that deliver high business value. Ratings:
  • Zachman: 1
  • TOGAF: 3
  • FEA: 1
  • Gartner: 4
The criteria and ratings are summarized in Figure 10.
Figure 10. Criteria and ratings for each methodology
One of the important points of Figure 10 is that none of the enterprise-architecture methodologies is really complete. Each has its strengths and weaknesses.
How do you choose which methodology is best for you? Here is my recommended approach (sketched in code at the end of this section):
  1. Go through the rows (criteria) in Figure 10, eliminating any that you feel are not important to your organization.
  2. Add any additional rows (criteria) that you feel are important, and rate each of the methodologies in that area.
  3. Change any of my ratings with which you disagree.
At the end of this exercise, you should have a good idea about the strengths and weaknesses of each methodology with respect to your enterprise's needs. If a clear winner emerges, count yourself lucky. Find a consultant who specializes in helping enterprises implement that methodology, and go for it.
For many organizations, there will be no clear winner. For such organizations, I recommend you use a blended approach, in which you create your own enterprise-architectural methodology consisting of bits and pieces of each of the methodologies that provide the highest value in your specific areas of concern.
You will want a different kind of consultant—one who has a broad perspective of all of these methodologies and specializes in helping enterprises create a methodology that works best, given the specific needs and political realities of that enterprise.
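To make the evaluation exercise concrete, here is a minimal sketch in Python of the blended approach. The ratings are mine, taken from Figure 10 (only four criteria are shown); the weights are hypothetical placeholders that you would replace with your organization's own priorities.

```python
# A minimal sketch of the blended evaluation: weight each criterion by its
# importance to your organization (weight 0 drops the row), then total per
# methodology. Ratings are from Figure 10; weights are hypothetical examples.
ratings = {
    "taxonomy completeness": {"Zachman": 4, "TOGAF": 2, "FEA": 2, "Gartner": 1},
    "process completeness":  {"Zachman": 1, "TOGAF": 4, "FEA": 2, "Gartner": 3},
    "business focus":        {"Zachman": 1, "TOGAF": 2, "FEA": 1, "Gartner": 4},
    "time to value":         {"Zachman": 1, "TOGAF": 3, "FEA": 1, "Gartner": 4},
    # ...add the remaining criteria from Figure 10, plus any of your own
}

weights = {
    "taxonomy completeness": 1,  # nice to have
    "process completeness":  3,  # we want step-by-step guidance
    "business focus":        3,  # business buy-in is our weak spot
    "time to value":         2,
}

scores = {}
for criterion, per_method in ratings.items():
    for method, rating in per_method.items():
        scores[method] = scores.get(method, 0) + weights.get(criterion, 0) * rating

for method, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{method}: {score}")
```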

Conclusion

This paper has covered a broad introduction to the field of enterprise architecture. The history of the field goes back 20 years, but the field is still evolving—and rapidly so. Two of the four major methodologies (Gartner and FEA) have undergone major changes in the last two years alone.
As I have shown, these methodologies are quite different from each other, both in goals and in approach. This is good news and bad. It is bad news, in that it increases the difficulty for many organizations in choosing one single enterprise-architectural methodology. How do you choose between methodologies that have so little in common? Choosing between Zachman and TOGAF, for example, is like choosing between spinach and hammers.
But the good news is that these methodologies can be seen as complementing each other. For many organizations, the best choice is all of these methodologies, blended together in a way that works well within that organization's constraints. This white paper should provide a good starting place for understanding the value of each of these methodologies and how they can complement each other.
Whichever route you choose, remember that enterprise architecture is a path, not a destination. An enterprise architecture has no value unless it delivers real business value as quickly as possible. One of the most important goals of any enterprise architecture is to bring the business side and the technology sides together, so that both are working effectively toward the same goals.
In many organizations, there is a culture of distrust between the technology and business folks. No enterprise-architecture methodology can bridge this divide unless there is a genuine commitment to change. That commitment must come from the highest level of the organization. Methodologies cannot solve people problems; they can only provide a framework in which those problems can be solved.
But, as soon as you have that commitment to change, an enterprise-architecture methodology can be a valuable tool for guiding that change. This change can manifest itself in many ways. Some of the predicted benefits from a successfully implemented enterprise architecture include:
  • Improvements in using IT to drive business adaptability.
  • Closer partnership between business and IT groups.
  • Improved focus on organizational goals.
  • Improved morale, as more individuals see a direct correlation between their work and the organization's success.
  • Reduced numbers of failed IT systems.
  • Reduced complexity of existing IT systems.
  • Improved agility of new IT systems.
  • Closer alignment between IT deliverables and business requirements.
It is obvious that an organization that does well in these key areas will be more successful than one that doesn't. This is true regardless of whether success is measured with tangibles, such as profitability and return on investment, or intangibles, such as customer satisfaction and employee turnover.
The starting point for any enterprise architecture is some critical self-analysis. Does your organization spend too much money building IT systems that deliver inadequate business value? Is IT seen as improving or hampering business agility? Is there a growing divide between your business and IT folks? And, finally, perhaps the most important question of all: Is your organization truly committed to solving these problems, and does that commitment come from the highest levels of the organization? If the answer to all of these questions is "yes," enterprise architecture is your path. It's up to you to take that next step.

Glossary

  • ADM (Architecture Development Method)—A process for creating an enterprise architecture that is part of the TOGAF standard.
  • application architecture—The architecture of a specific application.
  • architect—One whose responsibility is the design of an architecture and the creation of an architectural description.
  • architectural artifact—A specific document, report, analysis, model, or other tangible that contributes to an architectural description.
  • architectural description—A collection of products (artifacts) to document an architecture.
  • architectural framework—A skeletal structure that defines suggested architectural artifacts, describes how those artifacts are related to each other, and provides generic definitions for what those artifacts might look like.
  • architectural methodology—A generic term that can describe any structured approach to solving some or all of the problems related to architecture.
  • architectural process—A defined series of actions directed to the goal of producing either an architecture or an architectural description.
  • architectural taxonomy—A methodology for organizing and categorizing architectural artifacts.
  • architecture—The fundamental organization of a system, embodied in its components, their relationships to each other and to the environment, and the principles guiding its design and evolution (from IEEE 1471-2000).
  • business architecture—An architecture that deals specifically with business processes and business flow.
  • business reference model (BRM)—An FEA term that gives a business view of the various functions of the federal government.
  • business services segment—An FEA term that refers to a segment that is foundational to most, if not all, political organizations, such as financial management.
  • CIO—Chief Information Officer, the executive in charge of information technology in a corporation.
  • CIO Council—A council consisting of CIOs from each of the federal governmental agencies that coordinates work related to common interests.
  • Clinger-Cohen Act of 1996—See Information Technology Management Reform Act.
  • common-systems architectures—A TOGAF term referring to an architecture that is common to many (but not all) types of enterprises, in contrast to foundation architectures and industry architectures.
  • component reference model (CRM)—An FEA term that gives an IT view of systems that support business functionality.
  • data architecture—The architecture of the data (typically stored in databases) owned by the enterprise.
  • enterprise architect—An architect who specializes in enterprise architectures.
  • enterprise architecture—An architecture in which the system in question is the whole enterprise, especially the business processes, technologies, and information systems of the enterprise.
  • enterprise service—An FEA term referring to a well-defined function that spans political boundaries, such as security management.
  • FEA—See Federal Enterprise Architecture (FEA).
  • FEAF—See Federal Enterprise Architectural Framework (FEAF).
  • FEAPMO—The organization within the OMB that owns and administers the Federal Enterprise Architecture.
  • Federal Enterprise Architecture Program EA Assessment Framework—A benchmark used by the OMB to measure the effectiveness of governmental bodies in using enterprise architecture.
  • Federal Enterprise Architectural Framework (FEAF)—An enterprise-architectural framework used by the U.S. federal government to describe how the various governmental agencies and their IT systems are related to each other.
  • Federal Enterprise Architecture (FEA)—An architectural description of the enterprise architecture of the U.S. federal government that includes various reference models, processes for creating organizational architectures that fit in with the federal enterprise architecture, and a methodology for measuring the success of an organization in using enterprise architectures.
  • foundation architecture—A term used by TOGAF to refer to the most generic of architectures that can be used by any IT organization, in contrast to common systems architectures.
  • GAO—See Government Accountability Office (GAO).
  • Gartner—An IT research and advisory organization.
  • gateway—A transfer point of an autonomous system from which messages from the outside world are received or through which messages to the outside world are sent.
  • Government Accountability Office (GAO)—A branch of the U.S. Government that is responsible for monitoring the effectiveness of different organizations within the U.S. Government.
  • industry architecture—A TOGAF term that refers to an architecture that is common to most enterprises within an industry, in contrast to a common-systems architecture and an organizational architecture.
  • Information Technology Management Reform Act—An act passed by the U.S. Congress in 1996 that requires all governmental organizations to use effective strategies and frameworks for developing and maintaining IT resources.
  • OMB (Office of Management and Budget)—Part of the Executive Office of the President of the U.S. that serves the function of presidential oversight on federal agencies.
  • The Open Group Architectural Framework—See TOGAF (The Open Group Architectural Framework) 8.1.
  • organizational architecture—A TOGAF term that applies to an architecture that is specific to a particular organization, in contrast to an industry architecture.
  • performance reference model (PRM)—An FEA term that gives standard ways of describing terms related to measuring value.
  • Return on Investment (ROI)—A measure (in percent) of the business value of a project, based on the increase in profit (either because of increased income or decreased expenses) divided by the cost of the project. For example, a project with a cost of $100,000 that returned $200,000 in increased profit has an ROI of 200 percent.
  • ROI—See Return on Investment (ROI).
  • segment—An FEA term that refers to a major line-of-business functionality, such as human resources, that might be shared across organizations.
  • standards information base (SIB)—A TOGAF term that refers to a collection of information about standards, particularly in the area of open source.
  • TAFIM (Technical Architecture Framework for Information Management)—An architectural framework developed by the Department of Defense and officially discontinued in 2000.
  • technical architecture—Usually refers to the architecture of the technical infrastructure within which applications run and interact.
  • technical reference model (TRM)—Part of TOGAF, a reference model that gives a common language for various pieces of IT architecture. This term is also used for a similar meaning within FEA.
  • TOGAF (The Open Group Architectural Framework) 8.1—An architectural methodology that is controlled by The Open Group.
  • Zachman Framework for Enterprise Architectures—An architectural framework in which an enterprise is modeled as 30 or 36 cells, each of which represents an intersection between a stakeholder perspective and an abstraction.