Monday, March 31, 2014

Getting Started with Risk Management

Learn how to evolve a sustainable and repeatable RIMS program.

Source: http://is.gd/ZZtBCg

Saturday, March 29, 2014

Geolocating Twitter Users

Interesting research into figuring out where Twitter users are located, based on similar tweets from other users:
While geotags are the most definitive location information a tweet can have, tweets can also have plenty more salient information: hashtags, FourSquare check-ins, or text references to certain cities or states, to name a few. The authors of the paper created their algorithm by analyzing the content of tweets that did have geotags and then searching for similarities in content in tweets without geotags to assess where they might have originated from. Of a body of 1.5 million tweets, 90 percent were used to train the algorithm, and 10 percent were used to test it.
The paper.
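A minimal sketch of the general approach described above, not the authors' actual algorithm: train a text classifier on the geotagged 90 percent and use it to predict a location label for the rest. The names and features here are illustrative, and scikit-learn is an assumed dependency:

    # Content-based tweet geolocation, sketched with a simple bag-of-words model.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    def train_geolocator(tweet_texts, geotag_cities):
        # 90/10 split, mirroring the train/test split described in the article.
        X_train, X_test, y_train, y_test = train_test_split(
            tweet_texts, geotag_cities, test_size=0.1, random_state=0)
        model = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # picks up hashtags, place names
            MultinomialNB())
        model.fit(X_train, y_train)
        print("held-out accuracy:", model.score(X_test, y_test))
        return model

    # model = train_geolocator(texts, cities)
    # model.predict(["Stuck on the L train again #brooklyn"])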

Smarter People are More Trusting

Interesting research.
Both vocabulary and question comprehension were positively correlated with generalized trust. Those with the highest vocab scores were 34 percent more likely to trust others than those with the lowest scores, and someone who had a good perceived understanding of the survey questions was 11 percent more likely to trust others than someone with a perceived poor understanding. The correlation stayed strong even when researchers controlled for socio-economic class.
This study, too, found a correlation between trust and self-reported health and happiness. The trusting were 6 percent more likely to say they were "very happy," and 7 percent more likely to report good or excellent health.
Full study results.

Creating Forensic Sketches from DNA

This seems really science fictional:
It's already possible to make some inferences about the appearance of crime suspects from their DNA alone, including their racial ancestry and some shades of hair colour. And in 2012, a team led by Manfred Kayser of Erasmus University Medical Center in Rotterdam, the Netherlands, identified five genetic variants with detectable effects on facial shape. It was a start, but still a long way from reliable genetic photofits.
To take the idea a step further, a team led by population geneticist Mark Shriver of Pennsylvania State University and imaging specialist Peter Claes of the Catholic University of Leuven (KUL) in Belgium used a stereoscopic camera to capture 3D images of almost 600 volunteers from populations with mixed European and West African ancestry. Because people from Europe and Africa tend to have differently shaped faces, studying people with mixed ancestry increased the chances of finding genetic variants affecting facial structure.
Kayser's study had looked for genes that affected the relative positions of nine facial "landmarks", including the middle of each eyeball and the tip of the nose. By contrast, Claes and Shriver superimposed a mesh of more than 7000 points onto the scanned 3D images and recorded the precise location of each point. They also developed a statistical model to consider how genes, sex and racial ancestry affect the position of these points and therefore the overall shape of the face.
Next the researchers tested each of the volunteers for 76 genetic variants in genes that were already known to cause facial abnormalities when mutated. They reasoned that normal variation in genes that can cause such problems might have a subtle effect on the shape of the face. After using their model to control for the effects of sex and ancestry, they found 24 variants in 20 different genes that seemed to be useful predictors of facial shape (PLoS Genetics, DOI: 10.1371/journal.pgen.1004224).
Reconstructions based on these variants alone aren't yet ready for routine use by crime labs, the researchers admit. Still, Shriver is already working with police to see if the method can help find the perpetrator in two cases of serial rape in Pennsylvania, for which police are desperate for new clues.
If I had to guess, I'd imagine this kind of thing is a couple of decades away. But with a large enough database of genetic data, it's certainly possible.
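For illustration only, here is a sketch of the kind of statistical model the researchers describe: regress the coordinates of the 3D mesh points on genotype, sex and ancestry, then use the fitted coefficients to predict a face from DNA. The actual model in the PLoS Genetics paper is far more sophisticated; this only shows the general shape of the idea, with assumed array layouts:

    # Toy version of predicting facial-mesh coordinates from genetic variants.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    def fit_face_model(genotypes, sex, ancestry, mesh_points):
        """genotypes:     (n_people, n_variants) minor-allele counts (0/1/2)
           sex, ancestry: (n_people,) covariates, numerically coded
           mesh_points:   (n_people, n_points * 3) flattened x, y, z coordinates"""
        X = np.column_stack([genotypes, sex, ancestry])
        return LinearRegression().fit(X, mesh_points)

    def predict_face(model, genotype, sex, ancestry):
        x = np.concatenate([genotype, [sex, ancestry]]).reshape(1, -1)
        return model.predict(x).reshape(-1, 3)   # one (x, y, z) row per mesh point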

Tuesday, March 25, 2014

Password Hashing Competition

There's a private competition to identify new password hashing schemes. Submissions are due at the end of the month.

NSA Hacks Huawei

Both Der Spiegel and the New York Times are reporting that the NSA has hacked Huawei pretty extensively, getting copies of the company's products' source code and most of the e-mail from the company. Aside from being a pretty interesting story about the operational capabilities of the NSA, it exposes some pretty blatant US government hypocrisy on this issue. As former Bush administration official (and a friend of mine) Jack Goldsmith writes:
The Huawei revelations are devastating rebuttals to hypocritical U.S. complaints about Chinese penetration of U.S. networks, and also make USG protestations about not stealing intellectual property to help U.S. firms' competitiveness seem like the self-serving hairsplitting that it is. (I have elaborated on these points many times and will not repeat them here.) "The irony is that exactly what they are doing to us is what they have always charged that the Chinese are doing through us," says a Huawei Executive.
This isn't to say that the Chinese are not targeting foreign networks through Huawei equipment; they almost certainly are.

Monday, March 24, 2014

An Open Letter to IBM's Open Letter

Last week, IBM published an "open letter" about "government access to data," where it tried to assure its customers that it's not handing everything over to the NSA. Unfortunately, the letter (quoted in part below) leaves open more questions than it answers.
At the outset, we think it is important for IBM to clearly state some simple facts:
  • IBM has not provided client data to the National Security Agency (NSA) or any other government agency under the program known as PRISM.
  • IBM has not provided client data to the NSA or any other government agency under any surveillance program involving the bulk collection of content or metadata.
  • IBM has not provided client data stored outside the United States to the U.S. government under a national security order, such as a FISA order or a National Security Letter.
  • IBM does not put "backdoors" in its products for the NSA or any other government agency, nor does IBM provide software source code or encryption keys to the NSA or any other government agency for the purpose of accessing client data.
  • IBM has and will continue to comply with the local laws, including data privacy laws, in all countries in which it operates.
To which we ask:
  • We know you haven't provided data to the NSA under PRISM. It didn't use that name with you. Even the NSA General Counsel said: "PRISM was an internal government term that as the result of leaks became the public term." What program did you provide data to the NSA under?
  • It seems rather obvious that you haven't provided the NSA with any data under a bulk collection surveillance program. You're not Google; you don't have bulk data to that extent. So why the caveat? And again, under what program did you provide data to the NSA?
  • Okay, so you say that you haven't provided any data stored outside the US to the NSA under a national security order. Since those national security orders prohibit you from disclosing their existence, would you say anything different if you did receive them? And even if we believe this statement, it implies two questions. Why did you specifically not talk about data stored inside the US? And why did you specifically not talk about providing data under another sort of order?
  • Of course you don't provide your source code to the NSA for the purpose of accessing client data. The NSA isn't going to tell you that's why it wants your source code. So, for what purposes did you provide your source code to the government? To get a contract? For audit purposes? For what?
  • Yes, we know you need to comply with all local laws, including US laws. That's why we don't trust you -- the current secret interpretations of US law require you to screw your customers. I'd really rather you simply said that, and worked to change those laws, than pretend that you can convince us otherwise.

Sunday, March 23, 2014

Targeting Huawei: NSA Spied on Chinese Government and Networking Firm

Former Chinese President Hu Jintao (left) and his successor, Xi Jinping (right): Politicians in Beijing were targets of NSA espionage. (Photo: Reuters)
According to documents viewed by SPIEGEL, America's NSA intelligence agency put considerable efforts into spying on Chinese politicians and firms. One major target was Huawei, a company that is fast becoming a major Internet player.
The American government conducted a major intelligence offensive against China, with targets including the Chinese government and networking company Huawei, according to documents from former NSA worker Edward Snowden that have been viewed by SPIEGEL. Among the American intelligence service's targets were former Chinese President Hu Jintao, the Chinese Trade Ministry, banks, as well as telecommunications companies.
But the NSA made a special effort to target Huawei. With 150,000 employees and €28 billion ($38.6 billion) in annual revenues, the company is the world's second largest network equipment supplier. At the beginning of 2009, the NSA began an extensive operation, referred to internally as "Shotgiant," against the company, which is considered a major competitor to US-based Cisco. The company produces smartphones and tablets, but also mobile phone infrastructure, WLAN routers and fiber optic cable -- the kind of technology that is decisive in the NSA's battle for data supremacy. A special unit within the US intelligence agency succeeded in infiltrating Huawei's network and copied a list of 1,400 customers as well as internal documents providing training to engineers on the use of Huawei products, among other things.
Source Code Breached
According to a top secret NSA presentation, NSA workers not only succeeded in accessing the email archive, but also the secret source code of individual Huawei products. Software source code is the holy grail of computer companies. Because Huawei directed all mail traffic from its employees through a central office in Shenzhen, where the NSA had infiltrated the network, the Americans were able to read a large share of the email sent by company workers beginning in January 2009, including messages from company CEO Ren Zhengfei and Chairwoman Sun Yafang.
"We currently have good access and so much data that we don't know what to do with it," states one internal document. As justification for targeting the company, an NSA document claims that "many of our targets communicate over Huawei produced products, we want to make sure that we know how to exploit these products." The agency also states concern that "Huawei's widespread infrastructure will provide the PRC (People's Republic of China) with SIGINT capabilities." SIGINT is agency jargon for signals intelligence. The documents do not state whether the agency found information indicating that to be the case.
The operation was conducted with the involvement of the White House intelligence coordinator and the FBI. One document states that the threat posed by Huawei is "unique".
The agency also stated in a document that "the intelligence community structures are not suited for handling issues that combine economic, counterintelligence, military influence and telecommunications infrastructure from one entity."
Fears of Chinese Influence on the Net
The agency notes that understanding how the firm operates will pay dividends in the future. In the past, the network infrastructure business has been dominated by Western firms, but the Chinese are working to make American and Western firms "less relevant". That Chinese push is beginning to open up technology standards that were long determined by US companies, and China is controlling an increasing amount of the flow of information on the net. In a statement, Huawei spokesman Bill Plummer criticized the spying measures. "If it is true, the irony is that exactly what they are doing to us is what they have always charged that the Chinese are doing through us," he said. "If such espionage has been truly conducted, then it is known that the company is independent and has no unusual ties to any government and that knowledge should be relayed publicly to put an end to an era of mis- and disinformation."
Responding to the allegations, NSA spokeswoman Caitlin Hayden said she could not comment on specific collection activities or on the intelligence operations of specific foreign countries, "but I can tell you that our intelligence activities are focused on the national security needs of our country." She also said, "We do not give intelligence we collect to US companies to enhance their international competitiveness or increase their bottom line."

Friday, March 21, 2014

Liveblogging the Financial Cryptography Conference

Ross Anderson is liveblogging Financial Cryptography 2014. Interesting stuff.

Thursday, March 20, 2014

Automatic Face-Recognition Software Getting Better

Facebook has developed a face-recognition system that works almost as well as the human brain:
Asked whether two unfamiliar photos of faces show the same person, a human being will get it right 97.53 percent of the time. New software developed by researchers at Facebook can score 97.25 percent on the same challenge, regardless of variations in lighting or whether the person in the picture is directly facing the camera.
Human brains are optimized for facial recognition, which makes this even more impressive.
This kind of technology will change video surveillance. Right now, cameras record indiscriminately, and identifying the people in the footage is largely a forensic activity performed after the fact. This will make cameras part of an automated process for identifying people.
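The underlying verification task is simple to state: given two photos, decide whether they show the same person. A minimal sketch of that decision, assuming some face-embedding function (Facebook's actual model is not public, so embed() here is a hypothetical stand-in), looks like this:

    # Sketch of a face-verification decision. embed() is a hypothetical function
    # that maps a face image to a fixed-length feature vector; the threshold is
    # tuned on labeled same/different pairs.
    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def same_person(photo_a, photo_b, embed, threshold=0.7):
        return cosine_similarity(embed(photo_a), embed(photo_b)) >= threshold

The reported accuracy numbers come from running exactly this kind of same-or-different decision over a benchmark of labeled photo pairs.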

Wednesday, March 19, 2014

10 steps to a sustainable integrated GRC architecture


Organizations today are facing increased risk and regulatory pressures. The risk management, compliance and governance reforms that followed the corporate failures of the past decade have dramatically changed today’s business environment. Organizations worldwide are coping with a proliferation of new regulations and standards, and are challenged to do so in a way that supports performance objectives, upholds stakeholder expectations, sustains value and protects the organization’s brand.

Recent studies indicate that Fortune 1000 corporations are subject to 35–40 different regulatory mandates and the management of regulation and compliance has become a serious risk factor. Complying with each individual regulation is always complicated, lengthy and costly. Managing the burden of complying with multiple and overlapping regulations is becoming increasingly difficult and expensive.

To address these issues, organizations have invested in multiple risk and compliance initiatives, with little co-ordination between them. Working in silos causes a substantial amount of duplicated control activities, which results in high cost and inefficiency. The lack of a consistent methodology across the multiple GRC initiatives limits visibility at upper management and board levels. Executive management is unable to obtain a comprehensive view of risk and compliance.

The challenge

In many cases promoting an integrated GRC initiative in organizations requires dealing with the natural skepticism of some of the staff members. Claims that integrated GRC is nice in theory but will not work in practice are common.

In fact, there are many good reasons for this skepticism. To implement an integrated GRC platform, organizations need to find a way to manage the following complexity:
  1. Multiple regulations and standards.
  2. Various regulators.
  3. The differing scope of each regulation and standard.
  4. Various concepts.
  5. Different reporting due dates.
  6. Different workflow for each project.
  7. Different data architecture requirements.
  8. Diverse participants within the organization.
The fact that in many cases several consulting firms are involved in the different projects, bringing different methodologies to the table, combined with the natural internal politics within organizations, makes the challenge of building an integrated GRC architecture even more complicated. Besides the professional challenges, there are serious cultural barriers to be considered. Building the architecture is just the first stage of working in an integrated fashion. This architecture cannot be implemented successfully without ensuring a convenient platform for co-ordination between all the participants.

Another common barrier comes from the world of IT: information scattered across diverse platforms such as MS-Excel files, MS-Word documents, Access databases or other dedicated applications. Managers are afraid of losing the information, and the time invested in accumulating it, during the migration to a new architecture and system. Automating the import of this legacy data must be accounted for to overcome this IT barrier.

Due to the complexity mentioned above, most organizations still manage GRC projects in silos, adopting different methodologies and different software point solutions for each project. As a result of this approach, organizations face the following difficulties:
  • Inconsistency among the different projects.
  • Lack of a unified view of risk and compliance that limits management’s decision-making process.
  • Lack of scalability from an enterprise-wide perspective.
  • Duplication of activities and overlapping efforts, which increases cost, internal overhead and external consulting expenses.
As Gartner, the IT research and advisory firm, said: “Companies that select individual solutions for each regulatory challenge they face will spend 10 times more on the IT portion of compliance projects than companies that take on a proactive and more integrated approach.”

Building an integrated GRC architecture

An integrated GRC strategy must provide an environment that allows each GRC process to be fully managed independently, while providing tools for defining complex relationships, and sharing and linking information between the different regulations and standards.

The goal is not only to generate a maintainable and reusable framework, but to ensure, over time, compliance with a list of changing legislative, regulatory, quality and internal control requirements.

Based on our experience in implementing dozens of risk and compliance projects in large and fragmented enterprise organizations, we have applied our accumulated know-how and expertise to design a step-by-step, practical process for building and implementing an integrated GRC architecture.

The process is suitable for any type of organization but was specifically developed for large enterprises where multiple GRC audit groups and processes were created separately over time, and where these disparate GRC functions continued to evolve in parallel, with little interaction between them.

One of the critical factors that can determine the success of incorporating a culture of GRC integration in an enterprise is having a high-level, internal sponsor. Besides the functionality and technical obstacles of GRC integration, it can be politically difficult to integrate and foster co-operation between disparate audit functions in the organization. An internal patron with enough influence can help to overcome the natural internal hardships, objections and frictions that already exist and can be magnified when trying to foster professional GRC collaboration.

The process we have designed for generating an integrated GRC approach is composed of four key phases.
The first phase is perhaps the most crucial one, in which we model the integrated architecture in a structured, 10-step process. We call this step GRC modeling.
The second phase is defining and implementing a pilot for the integrated GRC architecture. We believe that, in almost all cases, the right strategy for long-term success is a bottom-up approach. By this we mean the company should initially choose two, three or, at most, four parallel audit processes to integrate, and focus on a specific subset that is common to the chosen GRC projects.
We have often seen companies decide to implement a high-level, top-down strategy that encompasses mapping the entire organization and designing a comprehensive GRC architecture for most or all of the separate GRC functions. We understand the inclination to plan a comprehensive and long-term strategy. The danger is that this high-level, top-down strategy can realistically take up to two years to develop, while enterprises today are subject to internal, market and regulatory pressures forcing them to be dynamic and undergo rapid change. By the time the master GRC plan is ready, the organization and its audit processes have evolved and the strategic architecture designed no longer fits the new organization.
After selecting the audit projects that will participate in the pilot, we must define the pilot boundaries by agreeing to:
  • Define concrete pilot objectives.
  • Select a subset of the organization structure and/or sub-processes while ensuring that both business and IT processes are included.
  • Select one of several scenarios for the GRC integration.
  • Define a realistic pilot time-line.
  • Designate key participants from each audit process.
  • Solicit (if possible) the involvement of the internal, critical sponsor.
After defining and agreeing to the pilot scope, it can begin. The designated pilot players track and/or implement the first phase of GRC modeling on the chosen subset (or scenario) based on the time-line developed.

In phase III, pilot analysis and GRC architecture modification, the pilot participants individually and as a team analyze the pilot results and perform the important, but oft-overlooked, step of modifying the GRC architecture based on the conclusions of the pilot analysis. Each participant separately analyzes the pilot results from two perspectives: a) how did the pilot perform in relation to the ongoing work-flow and functionality of the specific auditing group represented by the participant; and b) how did the pilot perform in relation to the overall objective of GRC integration, reducing duplications and redundancies between the selected audit groups and improving the overall GRC efficiency? The entire pilot team reviews all the participant results and then discusses together and agrees to modifications to the GRC architecture.

Phase IV is the final deployment phase of the integrated GRC architecture that was created and improved on in the pilot process throughout the organization. Phase IV is an ongoing phase that is planned carefully by the enterprise. The company might begin by continuing to focus on the two to four selected GRC processes and expanding the deployment from the pilot subset to a wider scope. Alternatively, the organization might decide to expand the subset of sub-processes from the initial scope of two to four audit processes to include more such groups.

We have found that, as the deployment proceeds, fewer modifications are required to the overall GRC architecture. Because it was designed with a bottom-up approach, the integrated GRC solution more easily evolves with time and the scope of the implementation widens both vertically and horizontally.

10 practical techniques

1. Scoping
As stated above, in the scoping phase we define two to four GRC processes, because the basis for the integrated GRC architecture will be built around them.

2. Defining building blocks
Each regulatory process is composed of main building blocks such as: organizational units, processes, sub-processes, risks, controls, loss events, IT systems, financial accounts, audit plans and scenarios.

Not all the building blocks are relevant for each regulatory requirement. After defining the building blocks, we need to assign the right building blocks to each GRC process. This mapping will be used later on to define the relations between the building blocks with regards to each risk management or compliance process.
Table 1 below presents a sample matrix mapping building blocks and GRC projects.
(Table 1: sample matrix mapping building blocks to GRC projects; image not reproduced here.)
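Since the table itself is not reproduced, here is a minimal sketch of such a mapping in code. The project and building-block names are illustrative examples drawn from the lists above, not a prescribed taxonomy:

    # Illustrative mapping of GRC projects to the building blocks they use.
    BUILDING_BLOCKS = {
        "SOX":              {"organizational units", "processes", "risks",
                             "controls", "financial accounts"},
        "Operational risk": {"organizational units", "processes", "risks",
                             "controls", "loss events"},
        "Internal audit":   {"organizational units", "processes", "controls",
                             "audit plans"},
    }

    def projects_sharing(block):
        """Which GRC projects use a given building block (candidates for reuse)."""
        return [name for name, blocks in BUILDING_BLOCKS.items() if block in blocks]

    # projects_sharing("controls") -> ['SOX', 'Operational risk', 'Internal audit']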

3. Defining common terminology
Due to the diversity of GRC methodologies, adopting a common language across all GRC projects is a crucial step to avoid misunderstandings within the organization.

Often, different GRC project managers use the same terminology for different data. For example, when the SOX manager refers to "process", he is referring to a different place in the architecture than the operational risk manager. Co-ordination between these two managers can prove difficult unless they define a common terminology.

When defining a common terminology, it is recommended to define a common name for each and every data field and to assign each field to one or more relevant GRC projects.

4. Ensuring consistent organizational structure
One of the typical mistakes we have seen across many enterprises is maintaining a different organizational structure for each GRC process. Divergent organizational structures often lead to mistaken assessments, because risk and control values are rolled up along inconsistent organizational trees.

Obviously, a short scoping process is needed to determine which organizational units will participate in each GRC project.

5. Correlate processes and units
Because organizational processes usually cross several organizational units, it is important to define which part of the process is performed in each unit. This enables organizations to analyze the information using the organizational structure perspective.
For example, assume a human resources process carries a residual risk of 10X.
Company A documents this process with no correlation to the organizational units that actually run it. Company B divides the process into three sub-processes, as follows:
(Table: company B's division of the process into three sub-processes, each mapped to an organizational unit; image not reproduced here.)
As opposed to company A, which has only mapped residual risk to the process, company B can analyze the information both from the process point of view and from the organizational-structure viewpoint. This is useful when aggregating across all processes, because it easily highlights the riskier organizational units. It also enables more specific and accurate key risk, contextual and performance indicators, leveraging existing information for extensive analysis of the data (see point 10 below).
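A minimal sketch of what company B's approach looks like as data: residual risk is recorded at the sub-process level and keyed to an organizational unit, so it can be rolled up along either dimension. The sub-process names, units and figures are illustrative and simply sum to the 10X of the example:

    # Residual risk recorded per (process, sub-process, organizational unit).
    SUB_PROCESSES = [
        # (process, sub_process,   org_unit,        residual_risk)
        ("HR",      "Recruiting",  "HR department",  4),
        ("HR",      "Payroll",     "Finance",        5),
        ("HR",      "Offboarding", "IT operations",  1),
    ]

    def risk_by(column):
        """Aggregate residual risk by column index 0 (process) or 2 (org unit)."""
        totals = {}
        for row in SUB_PROCESSES:
            totals[row[column]] = totals.get(row[column], 0) + row[3]
        return totals

    # risk_by(0) -> {'HR': 10}                                  (company A's view)
    # risk_by(2) -> {'HR department': 4, 'Finance': 5, 'IT operations': 1}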

6. Enable high-resolution granularity
According to one of the most famous axioms in the GRC industry, a correlation of many-to-many between all the building blocks is required to enable integrated GRC. For example, it is common knowledge that there are many-to-many relationships between risks and controls.

This is indeed necessary, but not quite enough to support an integrated GRC environment. The organization must be able to define different, distinct attributes for each instance of common risks and controls shared by multiple GRC processes. A common control that occurs in two separate GRC processes might be critically important for one regulation and less important in the other. The ability to define many-to-many granularity at the level of each attribute of each building block is critical for the success of an integrated approach.

Here is a sample that illustrates the need for this granularity:
Risk: Purchase-related transactions may not be recorded in a proper period.

This risk has several attributes, such as risk impact, risk likelihood and risk response, and is found in both the SOX and the operational risk processes.
The qualitative impact of this risk when a purchase-related transaction is recorded on, say, January 1, 2008 instead of the correct date of December 31, 2007 is different for SOX and for operational risk.

Since SOX is all about financial reporting, most companies would assign the risk impact here as high, while from an operational risk perspective, the fact that this transaction's recording was delayed by one day is minor, and the impact would be classified as small.

This sample shows us that, to reduce the number of risks being managed, each attribute of the risk should be assigned a different value in relation to other dimensions such as GRC project, organizational process and organizational unit. This requires a high level of granularity in the database.

Companies that select a GRC system that cannot support many-to-many relations at the level of attributes will be forced to choose between two bad options. Either they document the same risk again and again to assign the accurate attribute value for each project, process or organizational unit, or they define the risk only once with constant attributes, in a way that neither reflects the situation in practice nor allows for accurate analysis and remediation.
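A minimal sketch of attribute-level many-to-many granularity, using the purchase-cutoff risk above: there is one risk record, but each attribute value is scoped to a GRC project, so the same risk can be high-impact for SOX and low-impact for operational risk without being documented twice. Identifiers and values are illustrative:

    # One risk record ...
    RISKS = {
        "R-017": "Purchase-related transactions may not be recorded in the proper period",
    }

    # ... with attribute values keyed by (risk_id, GRC project, attribute).
    RISK_ATTRIBUTES = {
        ("R-017", "SOX",              "impact"):     "high",
        ("R-017", "Operational risk", "impact"):     "low",
        ("R-017", "SOX",              "likelihood"): "medium",
        ("R-017", "Operational risk", "likelihood"): "medium",
    }

    def risk_attribute(risk_id, project, attribute):
        return RISK_ATTRIBUTES[(risk_id, project, attribute)]

    # The risk is documented once, yet risk_attribute("R-017", "SOX", "impact")
    # differs from risk_attribute("R-017", "Operational risk", "impact").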

7. Designing control hierarchy
To reduce the duplication of controls between separate compliance procedures, the organisation needs tools to define control dependencies intelligently.

Sometimes, a high level control in a regulation might be identical to a combination of five controls in another standard. The ability to define such smart links and multi-level hierarchies between risks, controls and GRC projects is vital to reducing the overheads of managing and testing controls across the enterprise.

For example, a detailed control objective of COBIT is ensuring network security, which is a fairly high-level objective. To ensure that this high-level objective is met in the organisation, a number of lower-level controls must be tested, most likely based on the IT security standard ISO 27001. Each of the lower-level ISO 27001 controls might be effective or ineffective, and together they indicate whether the COBIT objective is met. The system should enable smart correlations of n-level hierarchy to support this capability.
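A minimal sketch of such an n-level hierarchy, following the COBIT/ISO 27001 example: a high-level objective is judged effective only when every control (or nested objective) beneath it is effective. The mapping shown is illustrative, not an authoritative cross-walk between the two frameworks:

    # Parent node -> child controls or nested objectives (illustrative mapping).
    HIERARCHY = {
        "COBIT: Ensure network security": [
            "ISO 27001 A.13.1.1: Network controls",
            "ISO 27001 A.13.1.2: Security of network services",
            "ISO 27001 A.13.1.3: Segregation in networks",
        ],
    }

    def is_effective(node, test_results):
        """test_results maps leaf control ids to True/False; a parent is
        effective only when every one of its children is effective."""
        children = HIERARCHY.get(node)
        if children is None:                      # leaf control
            return test_results.get(node, False)
        return all(is_effective(child, test_results) for child in children)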

8. Assigning roles and responsibilities
To enable and facilitate coordination between GRC participants, there is a clear need to define the rights to view, update and modify shared information.

9. Defining alerts and notifications
To enhance corporate internal communication, it is imperative to automate alerts and notifications.
Typical triggers for sending alerts and notifications are listed below (a small illustrative sketch follows the list):
  • organisational change;
  • process change;
  • redesign of shared control;
  • new risk;
  • successful implementation of remediation task;
  • change of process owner.
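As mentioned above, here is a minimal sketch of trigger-based notifications. The trigger names follow the list; the recipients and the delivery mechanism are assumptions for illustration only:

    # Which roles are notified for which trigger (illustrative subscriptions).
    SUBSCRIPTIONS = {
        "redesign of shared control": ["SOX manager", "Operational risk manager"],
        "new risk":                   ["Risk committee"],
        "change of process owner":    ["Internal audit"],
    }

    def notify(trigger, send=print):
        """send() stands in for e-mail or workflow integration."""
        for recipient in SUBSCRIPTIONS.get(trigger, []):
            send(f"[GRC alert] {trigger} -> {recipient}")

    # notify("new risk") prints: [GRC alert] new risk -> Risk committee
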
10. Leveraging correlative information between the GRC projects
Each GRC unit has its own individual workflow that might consist of periodic control testing, conducting multi-year audit plans or accumulating loss events. To achieve an overall organisational risk view, information must be shared between the different processes.
For example, the internal audit team should receive status of control tests for determining the frequency of audit plans by topic. Loss event information collected by the operational risk group should be shared with other GRC functions.

Summary

The process of building the integrated GRC architecture as described in this article can help organisations improve the quality of risk and compliance information, prevent duplicated activities, decrease costs and create a long-term framework for existing and future regulatory and risk management needs.
 

Enterprise Architecture Lifecycle

The Enterprise Architecture team has a lifecycle of its own, but doesn’t operate in a vacuum. The Enterprise Architecture capability fails if it is seen too much as blue sky thinking in an ivory tower.
The Enterprise Architecture team will interact closely with all the other management processes in an organization, especially the IT management processes. When all these processes work together effectively, an enterprise will be able to manage strategic changes and drive business transformation effectively and efficiently. Often, little thought has been given to integrating the EA processes with the other management processes. This contributes to making the EA team into an ivory tower, seemingly unconnected with everything else. The aim of this post is to shine some light on the EA lifecycle and its interactions.
One of the goals when establishing or maturing an enterprise architecture capability is to make sure that enterprise architecture is a fundamental and normal part of the business-as-usual decision-making flow rather than an afterthought.
Too often I have seen major changes started directly at the project initiation phase, before any serious appraisal of the business fit, technical fit and feasibility of the change has been undertaken, not least by the enterprise architects.
The Enterprise Architecture capability is driven by understanding the business strategy and strategic scenarios which drive the business and IT enabled changes in an organisation. It is there to ensure that any strategic change is viable in the future, but it also identifies the dependencies, feasibility, risks, issues, costs, and informs the subsequent investment decisions that need to be made.
The current state and future state enterprise architecture models will be developed (typically using the TOGAF ADM).
In the EA road map, the strategic changes will be prioritized and arranged into a meaningful sequence to inform the decisions made by the programme and project portfolio management and before any projects are initiated.
The Enterprise Architecture capability will govern what parts of the future state enterprise architecture are developed and delivered by the projects, and thereafter ensure that the delivered solutions and services remain compliant with it. The compliance stage will also capture and approve any innovations that are identified as useful. The enterprise architecture team and/or a technical design authority will provide design assurance for the projects, to ensure that principles, standards, patterns, policies and guidelines are being followed.
It’s worth noting that the EA lifecycle is not a part of the project solution development lifecycle as many organisations seem to imagine it is, but is a separate lifecycle that operates in parallel at a strategic level. Neither is the EA lifecycle the same as the TOGAF Architecture Development Method.
After a solution has been delivered, the enterprise architecture team will harvest the results in order to update the current state enterprise architecture, to measure performance and to publish a dashboard for the senior management team.
The following diagram illustrates the major stages and processes that are undertaken by an Enterprise Architecture team, for each iteration they undertake.
(Diagram: the EA lifecycle; image not reproduced here.)

Context

These Enterprise Architecture processes are best understood in the wider scope and context of all the processes defined by the COBIT5 framework for the governance and management of enterprise IT (see http://en.wikipedia.org/wiki/COBIT and http://www.isaca.org/cobit/pages/default.aspx).
I'm surprised that COBIT is not used more in UK-based organisations; it is more popular elsewhere in Europe. I would recommend COBIT5 as a broad framework for assessing the risk and value of IT and for the governance of all IT management processes.
The following view is broadly based on the COBIT processes, and illustrates the position of the Enterprise Architecture processes relative to the other IT management processes identified by COBIT.
(Diagram: the Enterprise Architecture processes positioned relative to the other COBIT IT management processes; image not reproduced here.)
Starting from the Strategy & Vision there is an overall clockwise cycle through all the processes. The Enterprise Architecture capability is responsible or accountable for the processes shown in red, and is consulted and informed about the other processes. The responsibilities will, of course, vary in each organisation, and in many cases the enterprise architecture team will additionally be responsible for many more of the solution development processes (for example, select, acquire and maintain COTS software products).
In a more mature Enterprise Architecture environment, all these processes will be expected to consume and contribute to the knowledge, information and models held in the Enterprise Architecture repository (illustrated in the centre of the diagram). The management dashboard of performance metrics, charts and graphs will be generated from the EA repository.

COBIT

The above diagram is based on the COBIT processes. The latest version of COBIT5 is more explicit about enterprise architecture than earlier versions were.  The following table shows the COBIT5 processes that directly relate to or are supported by an Enterprise Architecture team and an Enterprise Architecture Governance Board.
COBIT5 reference   Process
APO03              Manage Enterprise Architecture
APO02              Define Strategy (in this context this usually means the IT strategy)
APO04              Manage Innovation (via the Enterprise Architecture Governance Board)
BAI08              Manage Knowledge (via the EA Repository)
BAI06              Manage Changes (i.e. strategic changes and IT-enabled business changes that drive the future state enterprise architecture)
MEA03              Monitor and assess compliance with external requirements (via the Architecture Governance Board)
APO05              Manage Portfolios (with the EA Roadmap)
APO11              Manage Quality (via EA Appraisals)
APO12              Manage Risk (via EA Appraisals)
EDM01              Set and Maintain Governance Framework
EDM02              Ensure Value Optimisation
EDM03              Ensure Risk Optimisation

RACI

The following table shows who is Responsible, Accountable, Informed or Consulted in regard to the services provided by the Enterprise Architecture team and an Enterprise Architecture Governance Board.
(Table: RACI matrix for the services provided by the Enterprise Architecture team and the Enterprise Architecture Governance Board; image not reproduced here.)
Implementing the EA lifecycle and integrating it with the IT management processes in an organisation will help the Enterprise Architecture capability to avoid the challenges and misperceptions that it is some kind of ivory tower that can be wilfully ignored and disbanded when looking for budget cuts.
Senior management teams will instead come to appreciate the valuable contribution that Enterprise Architecture makes to strategic planning, appraising investments in change, driving business transformations, finding opportunities and innovations, and to understand the value EA has to the organisation as a whole.

MYSTIC: The NSA's Telephone Call Collection Program

The Washington Post is reporting on an NSA program called MYSTIC, which collects all -- that's 100% -- of a country's telephone calls. Those calls are stored in a database codenamed NUCLEON, and can be retrieved at a later date using a tool codenamed RETRO. This is voice, not metadata.
What's interesting here is not the particular country whose data is being collected; that information was withheld from the article. It's not even that the voice data is stored for a month, and then deleted. All of that can change, either at the whim of the NSA or as storage capabilities get larger. What's interesting is that the capability exists to collect 100% of a country's telephone calls, and the analysis tools are in place to search them.

Tuesday, March 18, 2014

Details of the Target Credit Card Breach

Long and interesting article about the Target credit card breach from last year. What's especially interesting to me is that the attack had been preventable, but the problem was that Target messed up its incident response.
In testimony before Congress, Target has said that it was only after the U.S. Department of Justice notified the retailer about the breach in mid-December that company investigators went back to figure out what happened. What it hasn't publicly revealed: Poring over computer logs, Target found FireEye's alerts from Nov. 30 and more from Dec. 2, when hackers installed yet another version of the malware. Not only should those alarms have been impossible to miss, they went off early enough that the hackers hadn't begun transmitting the stolen card data out of Target's network. Had the company's security team responded when it was supposed to, the theft that has since engulfed Target, touched as many as one in three American consumers, and led to an international manhunt for the hackers never would have happened at all.

Monday, March 17, 2014

An internal audit is an organizational initiative to monitor and analyze the organization's own business operations in order to determine how well they conform to a set of specific criteria.
In the enterprise, an internal audit can focus on enterprise risk management functions, security processes or maintaining regulatory compliance, as well as many other business departments and activities. As part of the internal audit plan, auditors try to find discrepancies between operational processes and what those processes are designed to do. If internal auditors do find discrepancies, they report them to company leadership so the processes can be improved.
Internal audit plans are often designed as a preemptive maneuver to maintain operational efficiency and financial reliability, as well as to safeguard assets.

Saturday, March 15, 2014

Friday Squid Blogging: Squid Ring

It's a nice design, even if you aren't a squid person.

Nicholas Weaver Explains how QUANTUM Works

An excellent essay. For the non-technical, his conclusion is the most important:
Everything we've seen about QUANTUM and other internet activity can be replicated with a surprisingly moderate budget, using existing tools with just a little modification. The biggest limitation on QUANTUM is location: The attacker must be able to see a request which identifies the target. Since the same techniques can work on a Wi-Fi network, a $50 Raspberry Pi, located in a Foggy Bottom Starbucks, can provide any country, big and small, with a little window of QUANTUM exploitation. A foreign government can perform the QUANTUM attack NSA-style wherever your traffic passes through their country.
And that's the bottom line with the NSA's QUANTUM program. The NSA does not have a monopoly on the technology, and their widespread use acts as implicit permission to others, both nation-state and criminal.
Moreover, until we fix the underlying Internet architecture that makes QUANTUM attacks possible, we are vulnerable to all of those attackers.

Friday, March 14, 2014

Security as a Public Health Issue

Cory Doctorow argues that computer security is analogous to public health:
I think there's a good case to be made for security as an exercise in public health. It sounds weird at first, but the parallels are fascinating and deep and instructive. Last year, at a talk about all the ways that insecure computers put us all at risk, a woman in the audience put up her hand and said, "Well, you've scared the hell out of me. Now what do I do? How do I make my computers secure?"
And I had to answer: "You can't. No one of us can. I was a systems administrator 15 years ago. That means that I'm barely qualified to plug in a WiFi router today. I can't make my devices secure and neither can you. Not when our governments are buying up information about flaws in our computers and weaponising them as part of their crime-fighting and anti-terrorism strategies. Not when it is illegal to tell people if there are flaws in their computers, where such a disclosure might compromise someone's anti-copying strategy.
But: If I had just stood here and spent an hour telling you about water-borne parasites; if I had told you about how inadequate water-treatment would put you and everyone you love at risk of horrifying illness and terrible, painful death; if I had explained that our very civilisation was at risk because the intelligence services were pursuing a strategy of keeping information about pathogens secret so they can weaponise them, knowing that no one is working on a cure; you would not ask me 'How can I purify the water coming out of my tap?'"
Because when it comes to public health, individual action only gets you so far. It doesn't matter how good your water is, if your neighbour's water gives him cholera, there's a good chance you'll get cholera, too. And even if you stay healthy, you're not going to have a very good time of it when everyone else in your country is stricken and has taken to their beds.
If you discovered that your government was hoarding information about water-borne parasites instead of trying to eradicate them; if you discovered that they were more interested in weaponising typhus than they were in curing it, you would demand that your government treat your water-supply with the gravitas and seriousness that it is due.

Metadata = Surveillance

Ever since reporters began publishing stories about NSA activities, based on documents provided by Edward Snowden, we've been repeatedly assured by government officials that it's "only metadata." This might fool the average person, but it shouldn't fool those of us in the security field. Metadata equals surveillance data, and collecting metadata on people means putting them under surveillance.
An easy thought experiment demonstrates this. Imagine that you hired a private detective to eavesdrop on a subject. That detective would plant a bug in that subject's home, office, and car. He would eavesdrop on his computer. He would listen in on that subject's conversations, both face to face and remotely, and you would get a report on what was said in those conversations. (This is what President Obama repeatedly reassures us isn't happening with our phone calls. But am I the only one who finds it suspicious that he always uses very specific words? "The NSA is not listening in on your phone calls." This leaves open the possibility that the NSA is recording, transcribing, and analyzing your phone calls -- and very occasionally reading them. This is far more likely to be true, and something a pedantically minded president could claim he wasn't lying about.)
Now imagine that you asked that same private detective to put a subject under constant surveillance. You would get a different report, one that included things like where he went, what he did, who he spoke to -- and for how long -- who he wrote to, what he read, and what he purchased. This is all metadata, data we know the NSA is collecting. So when the president says that it's only metadata, what you should really hear is that we're all under constant and ubiquitous surveillance.
What's missing from much of the discussion about the NSA's activities is what they're doing with all of this surveillance data. The newspapers focus on what's being collected, not on how it's being analyzed -- with the singular exception of the Washington Post story on cell phone location collection. By their nature, cell phones are tracking devices. For a network to connect calls, it needs to know which cell the phone is located in. In an urban area, this narrows a phone's location to a few blocks. GPS data, transmitted across the network by far too many apps, locates a phone even more precisely. Collecting this data in bulk, which is what the NSA does, effectively puts everyone under physical surveillance.
This is new. Police could always tail a suspect, but now they can tail everyone - suspect or not. And once they're able to do that, they can perform analyses that weren't otherwise possible. The Washington Post reported two examples. One, you can look for pairs of phones that move toward each other, turn off for an hour or so, and then turn themselves back on while moving away from each other. In other words, you can look for secret meetings. Two, you can locate specific phones of interest and then look for other phones that move geographically in synch with those phones. In other words, you can look for someone physically tailing someone else. I'm sure there are dozens of other clever analyses you can perform with a database like this. We need more researchers thinking about the possibilities. I can assure you that the world's intelligence agencies are conducting this research.
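As a toy illustration of the second analysis described above (not an operational method, and with a deliberately simplistic data layout): given location histories sampled at common timestamps, flag phones that are repeatedly near a phone of interest.

    # Toy co-traveler analysis: find phones whose tracks stay close to a target's.
    # Tracks are lists of (timestamp, (lat, lon)) sightings; values are illustrative.
    def distance_km(a, b):
        # Crude flat-earth approximation (~111 km per degree); fine for a toy.
        return (((a[0] - b[0]) * 111.0) ** 2 + ((a[1] - b[1]) * 111.0) ** 2) ** 0.5

    def co_travelers(target_track, other_tracks, radius_km=1.0, min_hits=5):
        """other_tracks: dict of phone_id -> track. Returns phones seen within
        radius_km of the target at min_hits or more common timestamps."""
        target = dict(target_track)
        suspects = []
        for phone_id, track in other_tracks.items():
            hits = sum(1 for t, loc in track
                       if t in target and distance_km(target[t], loc) <= radius_km)
            if hits >= min_hits:
                suspects.append(phone_id)
        return suspects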
How could a secret police use other surveillance databases: everyone's calling records, everyone's purchasing habits, everyone's browsing history, everyone's Facebook and Twitter history? How could these databases be combined in interesting ways? We need more research on the emergent properties of ubiquitous electronic surveillance.
We can't protect against what we don't understand. And whatever you think of the NSA or the other 5-Eyes countries, these techniques aren't solely theirs. They're being used by many countries to intimidate and control their populations. In a few years, they'll be used by corporations for psychological manipulation -- persuasion or advertising -- and even sooner by cybercriminals for more illicit purposes.

Thursday, March 13, 2014

How the NSA Exploits VPN and VoIP Traffic

These four slides, released yesterday, describe one process the NSA has for eavesdropping on VPN and VoIP traffic. There's a lot of information on these slides, though it's a veritable sea of code names. No details as to how the NSA decrypts those ESP -- "Encapsulating Security Payload" -- packets, although there are some clues in the form of code names in the slides.

STELLARWIND Classification Guide

Also released today is the STELLARWIND classification guide, in conjunction with a New York Times article on how the FISA court expanded domestic surveillance. (Here's the previous story about STELLARWIND, from the Washington Post.) See also this NSA document. Both stories are based on Snowden documents.
Is it just me, or does anyone else wonder why a court with the word "foreign" in its name would rule on domestic intelligence collection?

Insurance Companies Pushing for More Cybersecurity

This is a good development:
For years, said Ms Khudari, Kiln and many other syndicates had offered cover for data breaches, to help companies recover if attackers penetrated networks and stole customer information. Now, she said, the same firms were seeking multi-million pound policies to help them rebuild if their computers and power-generation networks were damaged in a cyber-attack.
"They are all worried about their reliance on computer systems and how they can offset that with insurance," she said.
Any company that applies for cover has to let experts employed by Kiln and other underwriters look over their systems to see if they are doing enough to keep intruders out.
Assessors look at the steps firms take to keep attackers away, how they ensure software is kept up to date and how they oversee networks of hardware that can span regions or entire countries.
Unfortunately, said Ms Khudari, after such checks were carried out, the majority of applicants were turned away because their cyber-defences were lacking.
Insurance is an excellent pressure point to influence security.

New Information on the NSA's QUANTUM Program

There's a new (overly breathless) article on the NSA's QUANTUM program, including a bunch of new source documents. Of particular note is this page listing a variety of QUANTUM programs. Note that QUANTUMCOOKIE, "which forces users to divulge stored cookies," is not on this list.
I'm busy today, so please tell me anything interesting you see in the comments.

Wednesday, March 12, 2014

How the NSA Plans to Infect ‘Millions’ of Computers with Malware

One presentation outlines how the NSA performs “industrial-scale exploitation” of computer networks across the world.
Top-secret documents reveal that the National Security Agency is dramatically expanding its ability to covertly hack into computers on a mass scale by using automated systems that reduce the level of human oversight in the process.
The classified files – provided previously by NSA whistleblower Edward Snowden – contain new details about groundbreaking surveillance technology the agency has developed to infect potentially millions of computers worldwide with malware “implants.” The clandestine initiative enables the NSA to break into targeted computers and to siphon out data from foreign Internet and phone networks.
The covert infrastructure that supports the hacking efforts operates from the agency’s headquarters in Fort Meade, Maryland, and from eavesdropping bases in the United Kingdom and Japan. GCHQ, the British intelligence agency, appears to have played an integral role in helping to develop the implants tactic.
In some cases the NSA has masqueraded as a fake Facebook server, using the social media site as a launching pad to infect a target’s computer and exfiltrate files from a hard drive. In others, it has sent out spam emails laced with the malware, which can be tailored to covertly record audio from a computer’s microphone and take snapshots with its webcam. The hacking systems have also enabled the NSA to launch cyberattacks by corrupting and disrupting file downloads or denying access to websites.
The implants being deployed were once reserved for a few hundred hard-to-reach targets, whose communications could not be monitored through traditional wiretaps. But the documents analyzed by The Intercept show how the NSA has aggressively accelerated its hacking initiatives in the past decade by computerizing some processes previously handled by humans. The automated system – codenamed TURBINE – is designed to “allow the current implant network to scale to large size (millions of implants) by creating a system that does automated control implants by groups instead of individually.”
In a top-secret presentation, dated August 2009, the NSA describes a pre-programmed part of the covert infrastructure called the “Expert System,” which is designed to operate “like the brain.” The system manages the applications and functions of the implants and “decides” what tools they need to best extract data from infected machines.
Mikko Hypponen, an expert in malware who serves as chief research officer at the Finnish security firm F-Secure, calls the revelations “disturbing.” The NSA’s surveillance techniques, he warns, could inadvertently be undermining the security of the Internet.
“When they deploy malware on systems,” Hypponen says, “they potentially create new vulnerabilities in these systems, making them more vulnerable for attacks by third parties.”
Hypponen believes that governments could arguably justify using malware in a small number of targeted cases against adversaries. But millions of malware implants being deployed by the NSA as part of an automated process, he says, would be “out of control.”
“That would definitely not be proportionate,” Hypponen says. “It couldn’t possibly be targeted and named. It sounds like wholesale infection and wholesale surveillance.”
The NSA declined to answer questions about its deployment of implants, pointing to a new presidential policy directive announced by President Obama. “As the president made clear on 17 January,” the agency said in a statement, “signals intelligence shall be collected exclusively where there is a foreign intelligence or counterintelligence purpose to support national and departmental missions, and not for any other purposes.”

“Owning the Net”

The NSA began rapidly escalating its hacking efforts a decade ago. In 2004, according to secret internal records, the agency was managing a small network of only 100 to 150 implants. But over the next six to eight years, as an elite unit called Tailored Access Operations (TAO) recruited new hackers and developed new malware tools, the number of implants soared to tens of thousands.
To penetrate foreign computer networks and monitor communications that it did not have access to through other means, the NSA wanted to go beyond the limits of traditional signals intelligence, or SIGINT, the agency’s term for the interception of electronic communications. Instead, it sought to broaden “active” surveillance methods – tactics designed to directly infiltrate a target’s computers or network devices.
In the documents, the agency describes such techniques as “a more aggressive approach to SIGINT” and says that the TAO unit’s mission is to “aggressively scale” these operations.
But the NSA recognized that managing a massive network of implants is too big a job for humans alone.
“One of the greatest challenges for active SIGINT/attack is scale,” explains the top-secret presentation from 2009. “Human ‘drivers’ limit ability for large-scale exploitation (humans tend to operate within their own environment, not taking into account the bigger picture).”
The agency’s solution was TURBINE. Developed as part of the TAO unit, it is described in the leaked documents as an “intelligent command and control capability” that enables “industrial-scale exploitation.”
TURBINE was designed to make deploying malware much easier for the NSA’s hackers by reducing their role in overseeing its functions. The system would “relieve the user from needing to know/care about the details,” the NSA’s Technology Directorate notes in one secret document from 2009. “For example, a user should be able to ask for ‘all details about application X’ and not need to know how and where the application keeps files, registry entries, user application data, etc.”
In practice, this meant that TURBINE would automate crucial processes that previously had to be performed manually – including the configuration of the implants as well as surveillance collection, or “tasking,” of data from infected systems. But automating these processes was about much more than a simple technicality. The move represented a major tactical shift within the NSA that was expected to have a profound impact – allowing the agency to push forward into a new frontier of surveillance operations.
The ramifications are starkly illustrated in one undated top-secret NSA document, which describes how the agency planned for TURBINE to “increase the current capability to deploy and manage hundreds of Computer Network Exploitation (CNE) and Computer Network Attack (CNA) implants to potentially millions of implants.” (CNE mines intelligence from computers and networks; CNA seeks to disrupt, damage or destroy them.)
Eventually, the secret files indicate, the NSA’s plans for TURBINE came to fruition. The system has been operational in some capacity since at least July 2010, and its role has become increasingly central to NSA hacking operations.
Earlier reports based on the Snowden files indicate that the NSA has already deployed between 85,000 and 100,000 of its implants against computers and networks across the world, with plans to keep on scaling up those numbers.
The intelligence community’s top-secret “Black Budget” for 2013, obtained by Snowden, lists TURBINE as part of a broader NSA surveillance initiative named “Owning the Net.”
The agency sought $67.6 million in taxpayer funding for its Owning the Net program last year. Some of the money was earmarked for TURBINE, expanding the system to encompass “a wider variety” of networks and “enabling greater automation of computer network exploitation.”

Circumventing Encryption

The NSA has a diverse arsenal of malware tools, each highly sophisticated and customizable for different purposes.
One implant, codenamed UNITEDRAKE, can be used with a variety of “plug-ins” that enable the agency to gain total control of an infected computer.
An implant plug-in named CAPTIVATEDAUDIENCE, for example, is used to take over a targeted computer’s microphone and record conversations taking place near the device. Another, GUMFISH, can covertly take over a computer’s webcam and snap photographs. FOGGYBOTTOM records logs of Internet browsing histories and collects login details and passwords used to access websites and email accounts. GROK is used to log keystrokes. And SALVAGERABBIT exfiltrates data from removable flash drives that connect to an infected computer.
The implants can enable the NSA to circumvent privacy-enhancing encryption tools that are used to browse the Internet anonymously or scramble the contents of emails as they are being sent across networks. That’s because the NSA’s malware gives the agency unfettered access to a target’s computer before the user protects their communications with encryption.
It is unclear how many of the implants are being deployed on an annual basis or which variants of them are currently active in computer systems across the world.
Previous reports have alleged that the NSA worked with Israel to develop the Stuxnet malware, which was used to sabotage Iranian nuclear facilities. The agency also reportedly worked with Israel to deploy malware called Flame to infiltrate computers and spy on communications in countries across the Middle East.
According to the Snowden files, the technology has been used to seek out terror suspects as well as individuals regarded by the NSA as “extremist.” But the mandate of the NSA’s hackers is not limited to invading the systems of those who pose a threat to national security.
In one secret post on an internal message board, an operative from the NSA’s Signals Intelligence Directorate describes using malware attacks against systems administrators who work at foreign phone and Internet service providers. By hacking an administrator’s computer, the agency can gain covert access to communications that are processed by his company. “Sys admins are a means to an end,” the NSA operative writes.
The internal post – titled “I hunt sys admins” – makes clear that terrorists aren’t the only targets of such NSA attacks. Compromising a systems administrator, the operative notes, makes it easier to get to other targets of interest, including any “government official that happens to be using the network some admin takes care of.”
Similar tactics have been adopted by Government Communications Headquarters, the NSA’s British counterpart. As the German newspaper Der Spiegel reported in September, GCHQ hacked computers belonging to network engineers at Belgacom, the Belgian telecommunications provider.
The mission, codenamed “Operation Socialist,” was designed to enable GCHQ to monitor mobile phones connected to Belgacom’s network. The secret files deem the mission a “success,” and indicate that the agency had the ability to covertly access Belgacom’s systems since at least 2010.
Infiltrating cellphone networks, however, is not all that the malware can be used to accomplish. The NSA has specifically tailored some of its implants to infect large-scale network routers used by Internet service providers in foreign countries. By compromising routers – the devices that connect computer networks and transport data packets across the Internet – the agency can gain covert access to monitor Internet traffic, record the browsing sessions of users, and intercept communications.
Two implants the NSA injects into network routers, HAMMERCHANT and HAMMERSTEIN, help the agency to intercept and perform “exploitation attacks” against data that is sent through a Virtual Private Network, a tool that uses encrypted “tunnels” to enhance the security and privacy of an Internet session.
The implants also track phone calls sent across the network via Skype and other Voice Over IP software, revealing the username of the person making the call. If the audio of the VOIP conversation is sent over the Internet using unencrypted “Real-time Transport Protocol” packets, the implants can covertly record the audio data and then return it to the NSA for analysis.
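As an aside on why unencrypted VoIP is so exposed: the RTP header defined in RFC 3550 travels in the clear, and everything after its first 12 bytes is raw audio payload. A minimal Python sketch of parsing that header follows; the sample packet at the bottom is fabricated purely for illustration.

# Why unencrypted RTP is easy to reconstruct: the 12-byte RTP header (RFC 3550)
# is sent in the clear, and everything after it is raw audio payload.
import struct

def parse_rtp(packet: bytes):
    ver_etc, pt_marker, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": ver_etc >> 6,           # should be 2 for RTP
        "payload_type": pt_marker & 0x7F,  # e.g. 0 = PCMU (G.711 mu-law) audio
        "sequence": seq,
        "timestamp": timestamp,
        "ssrc": ssrc,                      # identifies this audio stream
        "payload": packet[12:],            # the audio itself, unencrypted
    }

# A fabricated 160-byte PCMU frame behind a minimal RTP header:
example = struct.pack("!BBHII", 0x80, 0x00, 1, 160, 0xDEADBEEF) + b"\xff" * 160
print(parse_rtp(example)["payload_type"])  # -> 0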
But not all of the NSA’s implants are used to gather intelligence, the secret files show. Sometimes, the agency’s aim is disruption rather than surveillance. QUANTUMSKY, a piece of NSA malware developed in 2004, is used to block targets from accessing certain websites. QUANTUMCOPPER, first tested in 2008, corrupts a target’s file downloads. These two “attack” techniques are revealed on a classified list that features nine NSA hacking tools, six of which are used for intelligence gathering. Just one is used for “defensive” purposes – to protect U.S. government networks against intrusions.

“Mass exploitation potential”

Before it can extract data from an implant or use it to attack a system, the NSA must first install the malware on a targeted computer or network.
According to one top-secret document from 2012, the agency can deploy malware by sending out spam emails that trick targets into clicking a malicious link. Once activated, a “back-door implant” infects their computers within eight seconds.
There’s only one problem with this tactic, codenamed WILLOWVIXEN: According to the documents, the spam method has become less successful in recent years, as Internet users have become wary of unsolicited emails and less likely to click on anything that looks suspicious.
Consequently, the NSA has turned to new and more advanced hacking techniques. These include performing so-called “man-in-the-middle” and “man-on-the-side” attacks, which covertly force a user’s Internet browser to route to NSA computer servers that try to infect them with an implant.
To perform a man-on-the-side attack, the NSA observes a target’s Internet traffic using its global network of covert “accesses” to data as it flows over fiber optic cables or satellites. When the target visits a website that the NSA is able to exploit, the agency’s surveillance sensors alert the TURBINE system, which then “shoots” data packets at the targeted computer’s IP address within a fraction of a second.
In one man-on-the-side technique, codenamed QUANTUMHAND, the agency disguises itself as a fake Facebook server. When a target attempts to log in to the social media site, the NSA transmits malicious data packets that trick the target’s computer into thinking they are being sent from the real Facebook. By concealing its malware within what looks like an ordinary Facebook page, the NSA is able to hack into the targeted computer and covertly siphon out data from its hard drive. A top-secret animation demonstrates the tactic in action.
The documents show that QUANTUMHAND became operational in October 2010, after being successfully tested by the NSA against about a dozen targets.
According to Matt Blaze, a surveillance and cryptography expert at the University of Pennsylvania, it appears that the QUANTUMHAND technique is aimed at targeting specific individuals. But he expresses concerns about how it has been covertly integrated within Internet networks as part of the NSA’s automated TURBINE system.
“As soon as you put this capability in the backbone infrastructure, the software and security engineer in me says that’s terrifying,” Blaze says.
“Forget about how the NSA is intending to use it. How do we know it is working correctly and only targeting who the NSA wants? And even if it does work correctly, which is itself a really dubious assumption, how is it controlled?”
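For readers wondering what detecting this kind of injection might even look like: because a man-on-the-side injector has to race the legitimate server, a victim's network can end up carrying two TCP segments that claim the same sequence number but hold different payloads. Below is a rough monitoring sketch of that heuristic, my illustration rather than anything from the documents; it assumes the scapy library is installed and that you have packet-capture privileges on the interface being watched.

# Rough heuristic, not from the documents: flag TCP segments that reuse a
# sequence number already seen on the same flow but carry a different payload,
# which is what a packet-injection race tends to leave behind.
from hashlib import sha256
from scapy.all import sniff, IP, TCP, Raw   # pip install scapy

seen = {}  # (src, dst, sport, dport, seq) -> digest of first payload observed

def check(pkt):
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw)):
        return
    key = (pkt[IP].src, pkt[IP].dst, pkt[TCP].sport, pkt[TCP].dport, pkt[TCP].seq)
    digest = sha256(bytes(pkt[Raw].load)).hexdigest()
    if key in seen and seen[key] != digest:
        print("possible injected segment:", key)
    seen.setdefault(key, digest)

sniff(filter="tcp port 80", prn=check, store=False)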
In an email statement to The Intercept, Facebook spokesman Jay Nancarrow said the company had “no evidence of this alleged activity.” He added that Facebook implemented HTTPS encryption for users last year, making browsing sessions less vulnerable to malware attacks.
Nancarrow also pointed out that other services besides Facebook could have been compromised by the NSA. “If government agencies indeed have privileged access to network service providers,” he said, “any site running only [unencrypted] HTTP could conceivably have its traffic misdirected.”
A man-in-the-middle attack is a similar but slightly more aggressive method that can be used by the NSA to deploy its malware. It refers to a hacking technique in which the agency covertly places itself between computers as they are communicating with each other.
This allows the NSA not only to observe and redirect browsing sessions, but to modify the content of data packets that are passing between computers.
The man-in-the-middle tactic can be used, for instance, to covertly change the content of a message as it is being sent between two people, without either knowing that any change has been made by a third party. The same technique is sometimes used by criminal hackers to defraud people.
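To make the concept concrete, here is a deliberately simple sketch of an on-path relay that forwards unencrypted TCP traffic and rewrites it in transit. It is an illustration of the idea, not the NSA's tooling; the addresses are placeholders, and it should only ever be pointed at systems you control.

# A minimal on-path relay: forwards unencrypted TCP traffic between a client and
# a real server, rewriting bytes on the way through (a same-length substitution,
# to keep the example simple). Placeholder addresses; for illustration only.
import socket
import threading

LISTEN_ADDR = ("127.0.0.1", 8080)     # where the client is pointed
UPSTREAM_ADDR = ("example.org", 80)   # the real server

def pipe(src, dst, rewrite=False):
    # Copy bytes from src to dst, optionally altering them in transit.
    while True:
        data = src.recv(4096)
        if not data:
            break
        if rewrite:
            data = data.replace(b"Hello", b"Jello")  # silent content modification
        dst.sendall(data)
    dst.close()

def handle(client):
    upstream = socket.create_connection(UPSTREAM_ADDR)
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client, rewrite=True)  # rewrite server-to-client traffic

listener = socket.socket()
listener.bind(LISTEN_ADDR)
listener.listen(5)
while True:
    conn, _ = listener.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()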
A top-secret NSA presentation from 2012 reveals that the agency developed a man-in-the-middle capability called SECONDDATE to “influence real-time communications between client and server” and to “quietly redirect web-browsers” to NSA malware servers called FOXACID. In October, details about the FOXACID system were reported by the Guardian, which revealed its links to attacks against users of the Internet anonymity service Tor.
But SECONDDATE is tailored not only for “surgical” surveillance attacks on individual suspects. It can also be used to launch bulk malware attacks against computers.
According to the 2012 presentation, the tactic has “mass exploitation potential for clients passing through network choke points.”
Blaze, the University of Pennsylvania surveillance expert, says the potential use of man-in-the-middle attacks on such a scale “seems very disturbing.” Such an approach would involve indiscriminately monitoring entire networks as opposed to targeting individual suspects.
“The thing that raises a red flag for me is the reference to ‘network choke points,’” he says. “That’s the last place that we should be allowing intelligence agencies to compromise the infrastructure – because that is by definition a mass surveillance technique.”
To deploy some of its malware implants, the NSA exploits security vulnerabilities in commonly used Internet browsers such as Mozilla Firefox and Internet Explorer.
The agency’s hackers also exploit security weaknesses in network routers and in popular software plugins such as Flash and Java to deliver malicious code onto targeted machines.
The implants can circumvent anti-virus programs, and the NSA has gone to great lengths to ensure that its clandestine technology is extremely difficult to detect. An implant named VALIDATOR, used by the NSA to upload and download data to and from an infected machine, can be set to self-destruct – deleting itself from an infected computer after a set time expires.
In many cases, firewalls and other security measures do not appear to pose much of an obstacle to the NSA. Indeed, the agency’s hackers appear confident in their ability to circumvent any security mechanism that stands between them and compromising a computer or network. “If we can get the target to visit us in some sort of web browser, we can probably own them,” an agency hacker boasts in one secret document. “The only limitation is the ‘how.’”

Covert Infrastructure

The TURBINE implant system does not operate in isolation.
It is linked to, and relies upon, a large network of clandestine surveillance “sensors” that the agency has installed at locations across the world.
The NSA’s headquarters in Maryland are part of this network, as are eavesdropping bases used by the agency in Misawa, Japan and Menwith Hill, England.
The sensors, codenamed TURMOIL, operate as a sort of high-tech surveillance dragnet, monitoring packets of data as they are sent across the Internet.
When TURBINE implants exfiltrate data from infected computer systems, the TURMOIL sensors automatically identify the data and return it to the NSA for analysis. And when targets are communicating, the TURMOIL system can be used to send alerts or “tips” to TURBINE, enabling the initiation of a malware attack.
The NSA identifies surveillance targets based on a series of data “selectors” as they flow across Internet cables. These selectors, according to internal documents, can include email addresses, IP addresses, or the unique “cookies” containing a username or other identifying information that are sent to a user’s computer by websites such as Google, Facebook, Hotmail, Yahoo, and Twitter.
Other selectors the NSA uses can be gleaned from unique Google advertising cookies that track browsing habits, unique encryption key fingerprints that can be traced to a specific user, and computer IDs that are sent across the Internet when a Windows computer crashes or updates.
What’s more, the TURBINE system operates with the knowledge and support of other governments, some of which have participated in the malware attacks.
Classification markings on the Snowden documents indicate that NSA has shared many of its files on the use of implants with its counterparts in the so-called Five Eyes surveillance alliance – the United Kingdom, Canada, New Zealand, and Australia.
GCHQ, the British agency, has taken on a particularly important role in helping to develop the malware tactics. The Menwith Hill satellite eavesdropping base that is part of the TURMOIL network, located in a rural part of Northern England, is operated by the NSA in close cooperation with GCHQ.
Top-secret documents show that the British base – referred to by the NSA as “MHS” for Menwith Hill Station – is an integral component of the TURBINE malware infrastructure and has been used to experiment with implant “exploitation” attacks against users of Yahoo and Hotmail.
In one document dated 2010, at least five variants of the QUANTUM hacking method were listed as being “operational” at Menwith Hill. The same document also reveals that GCHQ helped integrate three of the QUANTUM malware capabilities – and test two others – as part of a surveillance system it operates codenamed INSENSER.
GCHQ cooperated with the hacking attacks despite having reservations about their legality. One of the Snowden files, previously disclosed by Swedish broadcaster SVT, revealed that as recently as April 2013, GCHQ was apparently reluctant to get involved in deploying the QUANTUM malware due to “legal/policy restrictions.” A representative from a unit of the British surveillance agency, meeting with an obscure telecommunications standards committee in 2010, separately voiced concerns that performing “active” hacking attacks for surveillance “may be illegal” under British law.
In response to questions from The Intercept, GCHQ refused to comment on its involvement in the covert hacking operations. Citing its boilerplate response to inquiries, the agency said in a statement that “all of GCHQ’s work is carried out in accordance with a strict legal and policy framework which ensures that our activities are authorized, necessary and proportionate, and that there is rigorous oversight.”
Whatever the legalities of the United Kingdom and United States infiltrating computer networks, the Snowden files bring into sharp focus the broader implications. Under cover of secrecy and without public debate, there has been an unprecedented proliferation of aggressive surveillance techniques. One of the NSA’s primary concerns, in fact, appears to be that its clandestine tactics are now being adopted by foreign rivals, too.
“Hacking routers has been good business for us and our 5-eyes partners for some time,” notes one NSA analyst in a top-secret document dated December 2012. “But it is becoming more apparent that other nation states are honing their skillz [sic] and joining the scene.”

Postmortem: NSA Exploits of the Day

When I decided to post an exploit a day from the TAO implant catalog, my goal was to highlight the myriad of capabilities of the NSA's Tailored Access Operations group, basically, its black bag teams. The catalog was published by Der Spiegel along with a pair of articles on the NSA's CNE -- that's Computer Network Exploitation -- operations, and it was just too much to digest. While the various nations' counterespionage groups certainly pored over the details, they largely washed over us in the academic and commercial communities. By republishing a single exploit a day, I hoped we would all read and digest each individual TAO capability.
It's important that we know the details of these attack tools. Not because we want to evade the NSA -- although some of us do -- but because the NSA doesn't have a monopoly on either technology or cleverness. The NSA might have a larger budget than every other intelligence agency in the world combined, but these tools are the sorts of things that any well-funded nation-state adversary would use. And as technology advances, they are the sorts of tools we're going to see cybercriminals use. So think of this less as what the NSA does, and more of a head start as to what everyone will be using.
Which means we need to figure out how to defend against them.
The NSA has put a lot of effort into designing software implants that evade antivirus and other detection tools, transmit data when they know they can't be detected, and survive reinstallation of the operating system. It has software implants designed to jump air gaps without being detected. It has an impressive array of hardware implants, also designed to evade detection. And it spends a lot of effort on hacking routers and switches. These sorts of observations should become a road map for anti-malware companies.
Anyone else have observations or comments, now that we've seen the entire catalog?
The TAO catalog isn't current; it's from 2008. So the NSA has had six years to improve all of the tools in this catalog, and to add a bunch more. Figuring out how to extrapolate to current capabilities is also important.

RAGEMASTER: NSA Exploit of the Day

Today's item -- and this is the final item -- from the NSA's Tailored Access Operations (TAO) group implant catalog:
RAGEMASTER (TS//SI//REL TO USA,FVEY) RF retro-reflector that provides an enhanced radar cross-section for VAGRANT collection. It's concealed in a standard computer video graphics array (VGA) cable between the video card and the video monitor. It's typically installed in the ferrite on the video cable.
(U) Capabilities
(TS//SI//REL TO USA,FVEY) RAGEMASTER provides a target for RF flooding and allows for easier collection of the VAGRANT video signal. The current RAGEMASTER unit taps the red video line on the VGA cable. It was found that, empirically, this provides the best video return and cleanest readout of the monitor contents.
(U) Concept of Operation
(TS//SI//REL TO USA,FVEY) The RAGEMASTER taps the red video line between the video card within the desktop unit and the computer monitor, typically an LCD. When the RAGEMASTER is illuminated by a radar unit, the illuminating signal is modulated with the red video information. This information is re-radiated, where it is picked up at the radar, demodulated, and passed onto the processing unit, such as a LFS-2 and an external monitor, NIGHTWATCH, GOTHAM, or (in the future) VIEWPLATE. The processor recreates the horizontal and vertical sync of the targeted monitor, thus allowing TAO personnel to see what is displayed on the targeted monitor.
Unit Cost: $30
Status: Operational. Manufactured on an as-needed basis. Contact POC for availability information.
Page, with graphics, is here. General information about TAO and the catalog is here.

Hacking Facebook User Access Tokens with a Man-in-the-Middle Attack


Facebook has several security measures to protect users' accounts. One of them is the user "access token": when a user authorizes a Facebook application (such as Candy Crush Saga or Lexulous Word Game), the app is granted a token that provides temporary, limited access to the Facebook APIs.

To make this possible, the user has to 'allow or accept' the application's request, which grants the app access to their account information with the required permissions.
The access token stores information about which permissions have been granted, when the token will expire, and which app generated it. Approved Facebook apps can publish or delete content on your behalf using the access token rather than your Facebook password.
Access tokens are highly sensitive, because anyone who knows a user's access token can read that user's data and perform actions on their behalf for as long as the token remains valid.
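To see why that matters, consider a minimal sketch against Facebook's Graph API: whoever holds a valid token can query the /me endpoint on the user's behalf, with no password involved. The token value below is a placeholder, and the email field is returned only if that permission was granted to the app.

# A sketch of why possession of a token equals possession of the account: the
# /me endpoint of Facebook's Graph API returns profile data to anyone presenting
# a valid token. The token here is a placeholder.
import requests

ACCESS_TOKEN = "EAAB...placeholder..."

resp = requests.get(
    "https://graph.facebook.com/me",
    params={"access_token": ACCESS_TOKEN, "fields": "id,name,email"},
    timeout=10,
)
print(resp.json())  # profile data returned, no password ever supplied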

Over the past few years, many security researchers have reported OAuth vulnerabilities to the Facebook security team, but if an app's traffic is not encrypted, you are still not protected from a man-in-the-middle attack: an attacker on the network path can steal the access token and, with it, your private information.
An access token is thus enough to let an attacker do everything the app itself is authorized to do. The vulnerability is not new; it has been known for about a year, yet unencrypted app traffic still leaves Facebook users exposed to hackers and to surveillance agencies such as the NSA.

The Facebook security team has acknowledged the issue, reported by Ahmed Elsobky, a penetration tester from Egypt: "We'd actually received an earlier report from another researcher regarding this same issue. In response to that report, we've been working on limiting this behavior when it comes to our official apps, since they're pre-authorized. For other apps, unfortunately, fully preventing this would mean requiring any site integrating with Facebook to use HTTPS, which simply isn't practical for right now."

Elsobky demonstrated how to hijack a Facebook account by capturing its access token with a man-in-the-middle attack, as shown below:
[Image: demonstration of hijacking a Facebook access token]
Facebook apps must be protected against man-in-the-middle attacks, and the most effective way to do that is to use HTTPS to encrypt any traffic that contains sensitive information or authentication credentials.
If you are a Facebook app developer, never send an access token over an unencrypted channel; if you are a Facebook user, stick to apps that use encryption and install the "HTTPS Everywhere" browser extension for automatic protection.
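For app developers, the core of the fix is refusing to let a token travel over plain HTTP at all. A small sketch of that guard follows; graph_get is a hypothetical helper name, not part of any Facebook SDK.

# Developer-side mitigation sketch: refuse to attach an access token to anything
# but an HTTPS URL, so the token can never appear in cleartext on the wire.
# graph_get is a hypothetical helper, not part of any Facebook SDK.
from urllib.parse import urlparse
import requests

def graph_get(url, access_token, **params):
    if urlparse(url).scheme != "https":
        raise ValueError("refusing to send an access token over plain HTTP")
    params["access_token"] = access_token
    return requests.get(url, params=params, timeout=10)

# Usage:
# graph_get("https://graph.facebook.com/me", token, fields="id,name")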