Friday, February 28, 2014

Decoding the Voynich Manuscript

The Voynich Manuscript has been partially decoded. The decoding seems not to be a hoax. And the manuscript itself seems not to be a hoax, either.
Here's the paper.

GENESIS: NSA Exploit of the Day

Today's item from the NSA's Tailored Access Operations (TAO) group implant catalog:
GENESIS (S//SI//REL) Commercial GSM handset that has been modified to include a Software Defined Radio (SDR) and additional system memory. The internal SDR allows a witting user to covertly perform network surveys, record RF spectrum, or perform handset location in hostile environments.
(S//SI//REL) The GENESIS systems are designed to support covert operations in hostile environments. A witting user would be able to survey the local environment with the spectrum analyzer tool, select spectrum of interest to record, and download the spectrum information via the integrated Ethernet to a laptop controller. The GENESIS system could also be used, in conjunction with an active interrogator, as the finishing tool when performing Find/Fix/Finish operations in unconventional environments.
(S//SI//REL) Features:
  • Concealed SDR with Handset Menu Interface
  • Spectrum Analyzer Capability
  • Find/Fix/Finish Capability
  • Integrated Ethernet
  • External Antenna Port
  • Internal 16 GB of storage
  • Multiple Integrated Antennas
(S//SI//REL) Future Enhancements:
  • 3G Handset Host Platform
  • Additional Host Platforms
  • Increased Memory Capacity
  • Additional Find/Fix/Finish Capabilities
  • Active Interrogation Capabilities
Status: Current GENESIS platform available. Future platforms available when developments are completed.
Unit Cost: $15K
Page, with graphics, is here. General information about TAO and the catalog is here.

Thursday, February 27, 2014

Was the iOS SSL Flaw Deliberate?

I've written before about the best ways to go about designing and implementing a software backdoor, suggesting three characteristics of a good one: a low chance of discovery, high deniability if discovered, and a minimal conspiracy to implement.

The critical iOS vulnerability that Apple patched last week is an excellent example. Look at the code. What caused the vulnerability is a single line of code: a second "goto fail;" statement. Because that second statement isn't guarded by the conditional above it, it always executes, jumping straight to the function's exit with a success status and skipping the final signature-verification check.
The flaw is subtle, and hard to spot while scanning the code. It's easy to imagine how this could have happened by error. And it would have been trivially easy for one person to add the vulnerability.
Was this done on purpose? I have no idea. But if I wanted to do something like this on purpose, this is exactly how I would do it.

 If the Apple auditing system is any good, they would be able to trace this errant goto line not just to the source-code check-in details, but to the specific login that made the change. And they would quickly know whether this was just an error, or a deliberate change by a bad actor. Does anyone know what's going on inside Apple?

ENTOURAGE: NSA Exploit of the Day

Today's item from the NSA's Tailored Access Operations (TAO) group implant catalog:
ENTOURAGE (S//SI//REL) Direction Finding application operating on the HOLLOWPOINT platform. The system is capable of providing line of bearing for GSM/UMTS/CDMA2000/FRS signals. A band-specific antenna and laptop controller are needed to complement the HOLLOWPOINT system and complete the ground-based system.
(S//SI) The ENTOURAGE application leverages the 4 Software Defined Radio (SDR) units in the HOLLOWPOINT platform. This capability provides an "Artemis-like" capability for waveforms of interest (2G,3G,others). The ENTOURAGE application works in conjunction with the NEBULA active interrogator as part of the Find/Fix/Finish capabilities of the GALAXY program.
(S//SI//REL) Features:
  • Software Defined Radio System
  • Operating range 10MHz - 4GHz
  • 4 Receive paths, all synchronized
  • 1 Transmit path
  • DF capability on GSM/UMTS/CDMA2000/FRS signals
  • Gigabit Ethernet
  • Integrated GPS
  • Highly Mobile and Deployable
(S//SI//REL) Enclosure:
  • 1.8"H x 8.0"W x 8.0"D
  • Approximately 3 lbs
  • 15 Watts
  • Passively cooled
(S//SI//REL) Future Developments:
  • WiMAX
  • WiFi
  • LTE
Status: The system is in the final testing stage and will be in production Spring 09.
Unit Cost: $70K
Page, with graphics, is here. General information about TAO and the catalog is here.

Wednesday, February 26, 2014

DDoSing a Cell Phone Network

Interesting research:
Abstract: The HLR/AuC is considered to be one of the most important network elements of a 3G network. It can serve up to five million subscribers and at least one transaction with HLR/AuC is required for every single phone call or data session. This paper presents experimental results and observations that can be exploited to perform a novel distributed denial of service attack in 3G networks that targets the availability of the HLR/AuC. More specifically, first we present an experiment in which we identified and proved some zero-day vulnerabilities of the 3G network that can be exploited by malicious actors to mount various attacks. For the purpose of our experiment, we have used off-the-shelf infrastructure and software, without any specialized modification. Based on the observations of the experiment, we reveal an Advanced Persistent Threat (APT) in 3G networks that aims to flood an HLR/AuC of a mobile operator. We also prove that the discovered APT can be performed in a trivial manner using commodity hardware and software, which is widely and affordably available.
The attack involves cloning SIM cards, then making multiple calls from different handsets in different locations with the same SIM card. This confuses the network into thinking that the same phone is in multiple places at once.
Note that this has not been tested in the field, but there seems no reason why it wouldn't work.
There's a lot of insecurity in the fact that cell phones and towers largely trust each other. The NSA and FBI use that fact for eavesdropping, and here it's used for a denial-of-service attack.

EBSR: NSA Exploit of the Day

Today's item from the NSA's Tailored Access Operations (TAO) group implant catalog:
EBSR (S//SI//REL) Multi-purpose, Pico class, tri-band active GSM base station with internal 802.11/GPS/handset capability.
(S//SI//REL) Operational Restrictions exist for equipment deployment.
(S//SI//REL) Features:
  • LxT Model: 900/1800/1900MHz
  • LxU Model: 850/1800/1900MHz
  • Pico-class (1Watt) Base station
  • Optional Battery Kits
  • Highly Mobile and Deployable
  • Integrated GPS, MS, & 802.11
  • Voice & High-speed Data
  • SMS Capability
(S//SI//REL) Enclosure:
  • 1.9"H x 8.6"W x 6.3"D
  • Approximately 3 lbs
  • Actively cooled for extreme environments
(S//SI//REL) EBSR System Kit:
  • EBSR System
  • AC/DC power converter
  • Antenna to support MS, GPS, WIFI, & RF
  • LAN, RF, & USB cables
  • Pelican Case
  • (Field Kit only) Control Laptop and Accessories
(S//SI//REL) Separately Priced Options:
  • 90 WH LiIon Battery Kit
(S//SI//REL) Base Station Router Platform:
  • Multiple BSR units can be interconnected to form a macro network using 802.3 and 802.11 back-haul.
  • Supports Landshark/Candygram capabilities.
Status:
Unit Cost: $40K
Page, with graphics, is here. General information about TAO and the catalog is here.

The Beginner’s Guide to iptables, the Linux Firewall

Iptables is an extremely flexible firewall utility built for Linux operating systems. Whether you're a novice Linux geek or a system administrator, there's probably some way that iptables can be of great use to you. Read on as we show you how to configure the most versatile Linux firewall.

About iptables

iptables is a command-line firewall utility that uses policy chains to allow or block traffic. When a connection tries to establish itself on your system, iptables looks for a rule in its list to match it to. If it doesn’t find one, it resorts to the default action.
iptables almost always comes pre-installed on any Linux distribution. To update/install it, just retrieve the iptables package:
sudo apt-get install iptables
There are GUI alternatives to iptables like Firestarter, but iptables isn't really that hard once you have a few commands down. You want to be extremely careful when configuring iptables rules, particularly if you're SSH'd into a server, because one wrong command can lock you out until it's manually fixed at the physical machine.

Types of Chains

iptables uses three different chains: input, forward, and output.
Input – This chain is used to control the behavior for incoming connections. For example, if a user attempts to SSH into your PC/server, iptables will attempt to match the IP address and port to a rule in the input chain.
Forward – This chain is used for incoming connections that aren’t actually being delivered locally. Think of a router – data is always being sent to it but rarely actually destined for the router itself; the data is just forwarded to its target. Unless you’re doing some kind of routing, NATing, or something else on your system that requires forwarding, you won’t even use this chain.
There’s one sure-fire way to check whether or not your system uses/needs the forward chain.
iptables -L -v

In the output for a server that had been running for a few weeks with no restrictions on incoming or outgoing connections, the input chain had processed 11GB of packets and the output chain 17GB, while the forward chain had not needed to process a single packet. This is because the server wasn't doing any kind of forwarding or being used as a pass-through device.
Output – This chain is used for outgoing connections. For example, if you try to ping howtogeek.com, iptables will check its output chain to see what the rules are regarding ping and howtogeek.com before making a decision to allow or deny the connection attempt.
The caveat
Even though pinging an external host seems like something that would only need to traverse the output chain, keep in mind that to return the data, the input chain will be used as well. When using iptables to lock down your system, remember that a lot of protocols will require two-way communication, so both the input and output chains will need to be configured properly. SSH is a common protocol that people forget to allow on both chains.

Policy Chain Default Behavior

Before going in and configuring specific rules, you’ll want to decide what you want the default behavior of the three chains to be. In other words, what do you want iptables to do if the connection doesn’t match any existing rules?
To see what your policy chains are currently configured to do with unmatched traffic, run:
iptables -L | grep policy
Piping the output through the grep command gives cleaner results: each line shows a chain and its default policy, for example "Chain INPUT (policy ACCEPT)". In this case, all three chains are currently configured to accept traffic.
More often than not, you'll want your system to accept connections by default. Unless you've changed the policy chain rules previously, this setting should already be configured. Either way, here's the command to accept connections by default:
iptables --policy INPUT ACCEPT
iptables --policy OUTPUT ACCEPT
iptables --policy FORWARD ACCEPT
By defaulting to the accept rule, you can then use iptables to deny specific IP addresses or port numbers, while continuing to accept all other connections. We’ll get to those commands in a minute.
If you would rather deny all connections and manually specify which ones you want to allow to connect, you should change the default policy of your chains to drop. Doing this would probably only be useful for servers that contain sensitive information and only ever have the same IP addresses connect to them.
iptables --policy INPUT DROP
iptables --policy OUTPUT DROP
iptables --policy FORWARD DROP

Connection-specific Responses

With your default chain policies configured, you can start adding rules to iptables so it knows what to do when it encounters a connection from or to a particular IP address or port. In this guide, we’re going to go over the three most basic and commonly used “responses”.
Accept – Allow the connection.
Drop – Drop the connection, act like it never happened. This is best if you don’t want the source to realize your system exists.
Reject – Don’t allow the connection, but send back an error. This is best if you don’t want a particular source to connect to your system, but you want them to know that your firewall blocked them.
The difference between these three responses is easiest to see when a PC tries to ping a Linux machine configured for each one:
Allowing the connection: ping receives normal echo replies.
Dropping the connection: the ping requests simply time out, and the sender can't tell whether the host even exists.
Rejecting the connection: the sender immediately receives an error reply (such as "Destination port unreachable"), making it clear that a firewall blocked the attempt.
Allowing or Blocking Specific Connections

With your policy chains configured, you can now configure iptables to allow or block specific addresses, address ranges, and ports. In these examples, we’ll set the connections to DROP, but you can switch them to ACCEPT or REJECT, depending on your needs and how you configured your policy chains.
Note: In these examples, we're going to use iptables -A to append rules to the existing chain. iptables starts at the top of its list and goes through each rule until it finds one that matches. If you need to insert a rule above another, you can use iptables -I [chain] [number] to specify the position it should occupy in the list; for example, iptables -I INPUT 1 -s 10.10.10.10 -j DROP puts the rule at the very top of the INPUT chain.
Connections from a single IP address
This example shows how to block all connections from the IP address 10.10.10.10.
iptables -A INPUT -s 10.10.10.10 -j DROP
Connections from a range of IP addresses
This example shows how to block all of the IP addresses in the 10.10.10.0/24 network range. You can use a netmask or standard slash notation to specify the range of IP addresses.
iptables -A INPUT -s 10.10.10.0/24 -j DROP
or
iptables -A INPUT -s 10.10.10.0/255.255.255.0 -j DROP
Connections to a specific port
This example shows how to block SSH connections from 10.10.10.10.
iptables -A INPUT -p tcp --dport ssh -s 10.10.10.10 -j DROP
You can replace "ssh" with any protocol or port number. The -p tcp part of the command tells iptables which transport protocol the rule matches. If you were blocking a protocol that uses UDP rather than TCP, you would use -p udp instead.
This example shows how to block SSH connections from any IP address.
iptables -A INPUT -p tcp --dport ssh -j DROP

Connection States

As we mentioned earlier, a lot of protocols are going to require two-way communication. For example, if you want to allow SSH connections to your system, the input and output chains are going to need a rule added to them. But, what if you only want SSH coming into your system to be allowed? Won’t adding a rule to the output chain also allow outgoing SSH attempts?
That's where connection states come in. They give you the ability to allow two-way communication while permitting connections to be established in only one direction. Take a look at this example, where SSH connections FROM 10.10.10.10 are permitted, but SSH connections TO 10.10.10.10 are not. However, the system is permitted to send back information over SSH as long as the session has already been established, which makes SSH communication possible between these two hosts.
iptables -A INPUT -p tcp --dport ssh -s 10.10.10.10 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 22 -d 10.10.10.10 -m state --state ESTABLISHED -j ACCEPT

Saving Changes

The changes that you make to your iptables rules will be scrapped the next time the iptables service gets restarted unless you execute a command to save them. This command differs by distribution:
Ubuntu:
sudo /sbin/iptables-save > /etc/iptables/rules.v4
Note that iptables-save by itself only prints the current rules to standard output; you need to redirect them to a file that gets restored at boot (the path above is the one used by the iptables-persistent package).
Red Hat / CentOS:
/sbin/service iptables save
Or
/etc/init.d/iptables save

Other Commands

List the currently configured iptables rules:
iptables -L
Adding the -v option will give you packet and byte information, and adding -n will list everything numerically. In other words – hostnames, protocols, and networks are listed as numbers.
To clear all the currently configured rules, you can issue the flush command. Be careful doing this on a remote system: flushing removes your rules but leaves the default policies in place, so if a chain's policy is DROP, flushing can lock you out.
iptables -F

Tuesday, February 25, 2014

Breaking Up the NSA

The NSA has become too big and too powerful. What was supposed to be a single agency with a dual mission -- protecting the security of U.S. communications and eavesdropping on the communications of our enemies -- has become unbalanced in the post-Cold War, all-terrorism-all-the-time era.
Putting the U.S. Cyber Command, the military's cyberwar wing, in the same location and under the same commander, expanded the NSA's power. The result is an agency that prioritizes intelligence gathering over security, and that's increasingly putting us all at risk. It's time we thought about breaking up the National Security Agency.
Broadly speaking, three types of NSA surveillance programs were exposed by the documents released by Edward Snowden. And while the media tends to lump them together, understanding their differences is critical to understanding how to divide up the NSA's missions.
The first is targeted surveillance.
This is best illustrated by the work of the NSA's Tailored Access Operations (TAO) group, including its catalog of hardware and software "implants" designed to be surreptitiously installed onto the enemy's computers. This sort of thing represents the best of the NSA and is exactly what we want it to do. That the United States has these capabilities, as scary as they might be, is cause for gratification.
The second is bulk surveillance, the NSA's collection of everything it can obtain on every communications channel to which it can get access. This includes things such as the NSA's bulk collection of call records, location data, e-mail messages and text messages.
This is where the NSA overreaches: collecting data on innocent Americans either incidentally or deliberately, and data on foreign citizens indiscriminately. It doesn't make us any safer, and it is liable to be abused. Even the director of national intelligence, James Clapper, acknowledged that the collection and storage of data was kept a secret for too long.
The third is the deliberate sabotaging of security. The primary example we have of this is the NSA's BULLRUN program, which tries to "insert vulnerabilities into commercial encryption systems, IT systems, networks and endpoint communication devices." This is the worst of the NSA's excesses, because it destroys our trust in the Internet, weakens the security all of us rely on and makes us more vulnerable to attackers worldwide.
That's the three: good, bad, very bad. Reorganizing the U.S. intelligence apparatus so it concentrates on our enemies requires breaking up the NSA along those functions.
First, TAO and its targeted surveillance mission should be moved under the control of U.S. Cyber Command, and Cyber Command should be completely separated from the NSA. Actively attacking enemy networks is an offensive military operation, and should be part of an offensive military unit.
Whatever rules of engagement Cyber Command operates under should apply equally to active operations such as sabotaging the Natanz nuclear enrichment facility in Iran and hacking a Belgian telephone company. If we're going to attack the infrastructure of a foreign nation, let it be a clear military operation.
Second, all surveillance of Americans should be moved to the FBI.
The FBI is charged with counterterrorism in the United States, and it needs to play that role. Any operations focused against U.S. citizens need to be subject to U.S. law, and the FBI is the best place to apply that law. That the NSA can, in the view of many, do an end-run around congressional oversight, legal due process and domestic laws is an affront to our Constitution and a danger to our society. The NSA's mission should be focused outside the United States -- for real, not just for show.
And third, the remainder of the NSA needs to be rebalanced so COMSEC (communications security) has priority over SIGINT (signals intelligence). Instead of working to deliberately weaken security for everyone, the NSA should work to improve security for everyone.
Computer and network security is hard, and we need the NSA's expertise to secure our social networks, business systems, computers, phones and critical infrastructure. Just recall the recent incidents of hacked accounts -- from Target to Kickstarter. What once seemed occasional now seems routine. Any NSA work to secure our networks and infrastructure can be done openly -- no secrecy required.
This is a radical solution, but the NSA's many harms require radical thinking. It's not far off from what the President's Review Group on Intelligence and Communications Technologies, charged with evaluating the NSA's current programs, recommended. Its 24th recommendation was to put the NSA and U.S. Cyber Command under different generals, and the 29th recommendation was to put encryption ahead of exploitation.
I have no illusions that anything like this will happen anytime soon, but it might be the only way to tame the enormous beast that the NSA has become.

CYCLONE Hx9: NSA Exploit of the Day

Today's item from the NSA's Tailored Access Operations (TAO) group implant catalog:
CYCLONE Hx9 (S//SI//FVEY) EGSM (900MHz) macro-class Network-In-a-Box (NIB) system. Uses the existing Typhon GUI and supports the full Typhon feature base and applications.
(S//SI//REL) Operational Restrictions exist for equipment deployment.
(S//SI//REL) Features:
  • EGSM 900MHz
  • Macro-class (+43dBm)
  • 32+Km Range
  • Optional Battery Kits
  • Highly Mobile and Deployable
  • Integrated GPS, MS, & 802.11
  • Voice & High-speed Data
  • GSM Security & Encryption
(S//SI//REL) Advanced Features:
  • GPS -- Supporting Typhon applications
  • GSM Handset Module -- Supports auto-configuration and remote command and control features.
  • 802.11 -- Supports high speed wireless LAN remote command and control
(S//SI//REL) Enclosure:
  • 3.5"H x 8.5"W x 9"D
  • Approximately 8 lbs
  • Actively cooled for extreme environments
(S//SI//REL) Cyclone Hx9 System Kit:
  • Cyclone Hx9 System
  • AC/DC power converter
  • Antenna to support MS, GPS, WIFI, & RF
  • LAN, RF, & USB cables
  • Pelican Case
  • (Field Kit only) Control Laptop and Accessories
(S//SI//REL) Separately Priced Options:
  • 800 WH LiIon Battery Kit
(S//SI//REL) Base Station Router Platform:
  • Overlay GSM cellular communications supporting up to 32 Cyclone Mx9 systems providing full mobility and utilizing a VoIP back-haul.
  • GPRS data service and associated application
Unit Cost: $70K for two months
Status: Just out of development, first production runs ongoing.
Page, with graphics, is here. General information about TAO and the catalog is here.
In the comments, feel free to discuss how the exploit works, how we might detect it, how it has probably been improved since the catalog entry in 2008, and so on.

Monday, February 24, 2014

New Results in Software Obfuscation

Amit Sahai and others have some new results in software obfuscation. The papers are here. An over-the top Wired.com story on the research is here. And Matthew Green has a great blog post explaining what's real and what's hype.

HIDS/NIDS (host intrusion detection systems and network intrusion detection systems)

Host intrusion detection systems (HIDS) and network intrusion detection systems (NIDS) are methods of security management for computers and networks. In HIDS, anti-threat applications such as firewalls, antivirus software and spyware-detection programs are installed on every network computer that has two-way access to the outside environment such as the Internet. In NIDS, anti-threat software is installed only at specific points such as servers that interface between the outside environment and the network segment to be protected.
All methods of intrusion detection (ID) involve the gathering and analysis of information from various areas within a computer or network to identify possible threats posed by hackers and crackers inside or outside the organization. Host-based and network-based ID systems have their respective advantages and limitations. The most effective protection for a proprietary network is provided by a combination of both technologies.

What is a Host Intrusion Detection System?

General

Host-based intrusion detection (HIDS) refers to intrusion detection that takes place on a single host system. Currently, HIDS involves installing an agent on the local host that monitors and reports on the system configuration and application activity. Common abilities of HIDS products include log analysis, event correlation, integrity checking, policy enforcement, rootkit detection, and alerting. They often also have the ability to baseline a host system to detect variations in system configuration. In specific vendor implementations, these HIDS agents also allow connectivity to other security systems. For example, Cisco CSA can send host data upstream to Cisco network IPS devices, Checkpoint Integrity can be integrated with Checkpoint Secure Client (client VPN), and IBM Proventia Desktop is Cisco NAC certified.

HIDS Intrusion Prevention

Most HIDS packages now have the ability to actively prevent malicious or anomalous activity on the host system. Due to the potential impact this can have on the end user, HIDS is frequently deployed in "monitor only" mode initially. This enables the administrator to create a baseline of the system configuration and activity. Active blocking of applications, system changes, and network activity is limited to only the most egregious activities. Administrators can then tune the system policy based on what is considered "normal activity".

Architecture

Host Policy

To be effective in an environment with more than a few hosts, HIDS is generally deployed to be managed from a central location. On the management system a policy is configured for deployment to local agents. There can be a single policy for all computers, but more than likely there will be multiple policies for particular operating systems, machine types, physical locations, and user types. As an example, a policy may be specific for all Windows DNS servers, all Windows desktops in a remote office, or all Linux systems in an enterprise. These policies have configuration values unique to the local system requirements. On a Windows host it is common to monitor registry changes, access and changes to .dll files, and application activity. On a DNS server the policy may look to verify the integrity and report on changes to the DNS server configuration files.
Once a policy is configured, it is then applied and distributed to a group of hosts with the agent installed. Some benefits of this central management architecture are the ability to apply changes to many systems at once and create a "baseline" for known system types. Central authentication, alerting, and reporting are also benefits of the central management architecture.

How HIDS works

The HIDS agent monitors system integrity, application activity, file changes, host network traffic, and system logs. Common hashing tools, file timestamps, system logs, and monitoring of system calls and the local network interface give the agent insight into the current state of the local host. If an unauthorized change or activity is detected, the agent will alert the user via a pop-up, alert the central management server, block the activity, or perform some combination of the three, depending on the policy installed on the local system.

Software

There are many types of HIDS software available. Below is a list of some of the most common:

Free:
OSSEC - Open Source Host-based Intrusion Detection System
Tripwire
AIDE - Advanced Intrusion Detection Environment
Prelude Hybrid IDS

Not Free:
IBM Proventia Desktop
Cisco CSA
Checkpoint Integrity
Tripwire Enterprise
Symantec Endpoint Protection
McAfee Host Intrusion Prevention
          Matthew Berge
          Ernst & Young

Debating Snowden's Actions

It's the season. Here are two.

TOTEGHOSTLY 2.0: NSA Exploit of the Day

Today's item from the NSA's Tailored Access Operations (TAO) group implant catalog:
TOTEGHOSTLY 2.0 (TS//SI//REL) TOTEGHOSTLY 2.0 is a STRAITBIZARRE-based implant for the Windows Mobile embedded operating system that uses the CHIMNEYPOOL framework. TOTEGHOSTLY 2.0 is compliant with the FREEFLOW project, and is therefore supported in the TURBULENCE architecture.
(TS//SI//REL) TOTEGHOSTLY 2.0 is a software implant for the Windows Mobile operating system that utilizes modular mission applications to provide specific SIGINT functionality. This functionality includes the ability to remotely push/pull files from the device, SMS retrieval, contact list retrieval, voicemail, geolocation, hot mic, camera capture, cell tower location, etc. Command, control, and data exfiltration can occur over SMS messaging or a GPRS data connection. A FRIEZERAMP interface using the HTTPSlink2 transport module handles encrypted communications.
(TS//SI//REL) The initial release of TOTEGHOSTLY 2.0 will focus on installing the implant via close access methods. A remote installation capability will be pursued for a future release.
(TS//SI//REL) TOTEGHOSTLY 2.0 will be controlled using an interface tasked through the NCC (Network Control Center) utilizing the XML based tasking and data forward scheme under the TURBULENCE architecture following the TAO GENIE Initiative.
Unit Cost: $0
Status: (U) In development
Page, with graphics, is here. General information about TAO and the catalog is here.

Brian Krebs

Nice profile of Brian Krebs, cybersecurity journalist:
Russian criminals routinely feed Mr. Krebs information about their rivals that they obtained through hacks. After one such episode, he began receiving daily calls from a major Russian cybercriminal seeking his files back. Mr. Krebs is writing a book about the ordeal, called "Spam Nation," to be published by Sourcebooks this year. In the meantime, hackers have been competing in a dangerous game of one-upmanship to see who can pull the worst prank on Mr. Krebs. They often steal his identity. One opened a $20,000 credit line in his name. Admirers have made more than $1,000 in bogus PayPal donations to his blog using hacked accounts. Others have paid his cable bill for three years with stolen credit cards.
The antics can be dangerous. In March, as Mr. Krebs was preparing to have his mother over for dinner, he opened his front door to find a police SWAT team pointing semiautomatic guns in his direction. Only after his wife returned home from the grocery store to find him handcuffed did the police realize Mr. Krebs had been the victim of "swatting." Someone had called the police and falsely reported a murder at their home.
Four months after that, someone sent packets of heroin to Mr. Krebs’s home, then spoofed a call from his neighbor to the police. But Mr. Krebs had already been tipped off to the prank. He was tracking the fraud in a private forum -- where a criminal had posted the shipment’s tracking number -- and had alerted the local police and the F.B.I.

IT for Oppression

Whether it's Syria using Facebook to help identify and arrest dissidents or China using its "Great Firewall" to limit access to international news throughout the country, repressive regimes all over the world are using the Internet to more efficiently implement surveillance, censorship, propaganda, and control. They're getting really good at it, and the IT industry is helping. We're helping by creating business applications -- categories of applications, really -- that are being repurposed by oppressive governments for their own use:
  • What is called censorship when practiced by a government is content filtering when practiced by an organization. Many companies want to keep their employees from viewing porn or updating their Facebook pages while at work. In the other direction, data loss prevention software keeps employees from sending proprietary corporate information outside the network and also serves as a censorship tool. Governments can use these products for their own ends.
  • Propaganda is really just another name for marketing. All sorts of companies offer social media-based marketing services designed to fool consumers into believing there is "buzz" around a product or brand. The only thing different in a government propaganda campaign is the content of the messages.
  • Surveillance is necessary for personalized marketing, the primary profit stream of the Internet. Companies have built massive Internet surveillance systems designed to track users' behavior all over the Internet and closely monitor their habits. These systems track not only individuals but also relationships between individuals, to deduce their interests so as to advertise to them more effectively. It's a totalitarian's dream.
  • Control is how companies protect their business models by limiting what people can do with their computers. These same technologies can easily be co-opted by governments that want to ensure that only certain computer programs are run inside their countries or that their citizens never see particular news programs.
Technology magnifies power, and there's no technical difference between a government and a corporation wielding it. This is how commercial security equipment from companies like BlueCoat and Sophos ends up being used by the Syrian and other oppressive governments to surveil -- in order to arrest -- and censor their citizens. This is how the same face-recognition technology that Disney uses in its theme parks ends up identifying protesters in China and Occupy Wall Street protesters in New York.
There are no easy technical solutions, especially because these four applications -- censorship, propaganda, surveillance, and control -- are intertwined; it can be hard to affect one without also affecting the others. Anonymity helps prevent surveillance, but it also makes propaganda easier. Systems that block propaganda can facilitate censorship. And giving users the ability to run untrusted software on their computers makes it easier for governments -- and criminals -- to install spyware.
We need more research into how to circumvent these technologies, but it's a hard sell to both the corporations and governments that rely on them. For example, law enforcement in the US wants drones that can identify and track people, even as we decry China's use of the same technology. Indeed, the battleground is often economic and political rather than technical; sometimes circumvention research is itself illegal.
The social issues are large. Power is using the Internet to increase its power, and we haven't yet figured out how to correct the imbalances among government, corporate, and individual interests in our digital world. Cyberspace is still waiting for its Gandhi, its Martin Luther King, and a convincing path from the present to a better future.

Building an Online Lie Detector

There's an interesting project to detect false rumors on the Internet.
The EU-funded project aims to classify online rumours into four types: speculation -- such as whether interest rates might rise; controversy -- as over the MMR vaccine; misinformation, where something untrue is spread unwittingly; and disinformation, where it's done with malicious intent. The system will also automatically categorise sources to assess their authority, such as news outlets, individual journalists, experts, potential eye witnesses, members of the public or automated 'bots'. It will also look for a history and background, to help spot where Twitter accounts have been created purely to spread false information.
It will search for sources that corroborate or deny the information, and plot how the conversations on social networks evolve, using all of this information to assess whether it is true or false. The results will be displayed to the user in a visual dashboard, to enable them to easily see whether a rumour is taking hold.
I have no idea how well it will work, or even whether it will work, but I like research in this direction. Of the three primary Internet mechanisms for social control, surveillance and censorship have received a lot more attention than propaganda. Anything that can potentially detect propaganda is a good thing.
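To make the source-assessment idea in the quoted description concrete, here is a toy sketch. Everything in it -- the category names, the weights, the saturation rule -- is invented for illustration; the actual project presumably uses learned models rather than a lookup table.

```python
# Toy sketch of source-authority scoring, as described above.
# All categories and weights are invented for illustration.

AUTHORITY_WEIGHTS = {
    "news_outlet": 0.9,
    "journalist": 0.8,
    "expert": 0.8,
    "eyewitness": 0.6,
    "public": 0.3,
    "bot": 0.05,
}

def source_score(category: str, account_age_days: int) -> float:
    """Combine a category prior with a crude account-history check:
    brand-new accounts (often created purely to spread a rumor)
    are discounted heavily."""
    base = AUTHORITY_WEIGHTS.get(category, 0.3)
    history = min(account_age_days / 365.0, 1.0)  # saturates after a year
    return base * (0.5 + 0.5 * history)

def rumor_credibility(sources):
    """Average the scores of corroborating sources; a real system would
    also subtract for denials from authoritative sources."""
    if not sources:
        return 0.0
    return sum(source_score(cat, age) for cat, age in sources) / len(sources)

# A rumor corroborated by a news outlet and a long-established expert
# account scores far higher than one pushed only by day-old accounts.
strong = rumor_credibility([("news_outlet", 2000), ("expert", 800)])
weak = rumor_credibility([("public", 1), ("bot", 1)])
```

Even this crude version captures the asymmetry the project is after: accounts created purely to spread false information look young and low-authority, and their rumors score accordingly.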

CROSSBEAM: NSA Exploit of the Day

Today's item from the NSA's Tailored Access Operations (TAO) group implant catalog:
CROSSBEAM (TS//SI//REL) CROSSBEAM is a GSM module that mates a modified commercial cellular product with a WAGONBED controller board.
(TS//SI//REL) CROSSBEAM is a reusable CHIMNEYPOOL-compliant GSM communications module capable of collecting and compressing voice data. CROSSBEAM can receive GSM voice, record voice data, and transmit the received information via connected modules or 4 different GSM data modes (GPRS, Circuit Switched Data, Data Over Voice, and DTMF) back to a secure facility. The CROSSBEAM module consists of a standard ANT architecture embedded computer, a specialized phone component, a customized software controller suite and an optional DSP (ROCKYKNOB) of using Data Over Voice to transmit data.
Status: Limited Supply Available
Unit Cost: $4k
Delivery: 90 days for most configurations
Page, with graphics, is here. General information about TAO and the catalog is here.

Sunday, February 23, 2014

RCS Spyware and Citizen Lab

Remote Control System (RCS) is a piece of spyware sold exclusively to governments by a Milan company called Hacking Team. Recently, Citizen Lab found this spyware being used by the Ethiopian government against journalists, including American journalists.
More recently, Citizen Lab mapped the software and who's using it:
Hacking Team advertises that their RCS spyware is "untraceable" to a specific government operator. However, we claim to identify a number of current or former government users of the spyware by pinpointing endpoints, and studying instances of RCS that we have observed. We suspect that agencies of these twenty-one governments are current or former users of RCS: Azerbaijan, Colombia, Egypt, Ethiopia, Hungary, Italy, Kazakhstan, Korea, Malaysia, Mexico, Morocco, Nigeria, Oman, Panama, Poland, Saudi Arabia, Sudan, Thailand, Turkey, UAE, and Uzbekistan.
Both articles on the Citizen Lab website are worth reading; the details are fascinating. And more are coming.
Finally, congratulations to Citizen Lab for receiving a 2014 MacArthur Award for Creative and Effective Institutions, along with the $1M prize. This organization is one of the good guys, and I'm happy to see it get money to continue its work.

TOTEGHOSTLY 2.0: NSA Exploit of the Day

Today's item from the NSA's Tailored Access Operations (TAO) group implant catalog:
TOTEGHOSTLY 2.0 (TS//SI//REL) TOTEGHOSTLY 2.0 is STRAITBIZARRE based implant for the Windows Mobile embedded operating system and uses the CHIMNEYPOOL framework. TOTEGHOSTLY 2.0 is compliant with the FREEFLOW project, therefore it is supported in the TURBULENCE architecture.
(TS//SI//REL) TOTEGHOSTLY 2.0 is a software implant for the Windows Mobile operating system that utilizes modular mission applications to provide specific SIGINT functionality. This functionality includes the ability to remotely push/pull files from the device, SMS retrieval, contact list retrieval, voicemail, geolocation, hot mic, camera capture, cell tower location, etc. Command, control, and data exfiltration can occur over SMS messaging or a GPRS data connection. A FRIEZERAMP interface using HTTPSlink2 transport module handles encrypted communications.
(TS//SI//REL) The initial release of TOTEGHOSTLY 2.0 will focus on installing the implant via close access methods. A remote installation capability will be pursued for a future release.
(TS//SI//REL) TOTEGHOSTLY 2.0 will be controlled using an interface tasked through the NCC (Network Control Center) utilizing the XML based tasking and data forward scheme under the TURBULENCE architecture following the TAO GENIE Initiative.
Unit Cost: $0
Status: (U) In development
Page, with graphics, is here. General information about TAO and the catalog is here.

TOTECHASER: NSA Exploit of the Day

Today's item from the NSA's Tailored Access Operations (TAO) group implant catalog:
TOTECHASER (TS//SI//REL) TOTECHASER is a Windows CE implant targeting the Thuraya 2520 handset. The Thuraya is a dual mode phone that can operate either in SAT or GSM modes. The phone also supports a GPRS data connection for Web browsing, e-mail, and MMS messages. The initial software implant capabilities include providing GPS and GSM geo-location information. Call log, contact list, and other user information can also be retrieved from the phone. Additional capabilities are being investigated.
(TS//SI//REL) TOTECHASER will use SMS messaging for the command, control, and data exfiltration path. The initial capability will use covert SMS messages to communicate with the handset. These covert messages can be transmitted in either Thuraya Satellite mode or GSM mode and will not alert the user of this activity. An alternate command and control channel using the GPRS data connection based on the TOTEGHOSTLY implant is intended for a future version.
(TS//SI//REL) Prior to deployment, the TOTECHASER handsets must be modified. Details of how the phone is modified are being developed. A remotely deployable TOTECHASER implant is being investigated. The TOTECHASER system consists of the modified target handsets and a collection system.
(TS//SI//REL) TOTECHASER will accept configuration parameters to determine how the implant operates. Configuration parameters will determine what information is recorded, when to collect that information, and when the information is exfiltrated. The configuration parameters can be set upon initial deployment and updated remotely.
Unit Cost: $
Status:
Page, with graphics, is here. General information about TAO and the catalog is here.

What Information Are Stun Guns Recording?

In a story about a stolen Stradivarius violin, there's this:
Information from a stun gun company, an anonymous tip and hours of surveillance paved the way for authorities to find a stolen 300-year-old Stradivarius violin in the attic of a Milwaukee home, police said Thursday. [...]
Taser International, the maker of the stun gun used in the attack, "provided invaluable information" that the FBI tracked down in Texas and ultimately led police to Universal Allah, a Milwaukee resident, Police Chief Edward Flynn said Thursday.
The criminals stunned a musician as he was leaving a show at church, and drove off with his multimillion-dollar violin. What information could the stun gun company give the police that would be invaluable? Is it as simple as knowing who purchased the weapon, which was dropped at the scene? Or something weirder?

As the Milwaukee Police and the FBI began to conduct the investigation they reached out to us at TASER in order to identify possible suspects in the case. This was accomplished thanks to our Anti-Felon Identification tags (AFID). The AFID program enforces accountability for each use of a TASER device. This system releases dozens of confetti-sized markers upon discharge of a CEW cartridge. Each AFID contains a serial number that tracks back to the original purchaser of the cartridge. The large number of AFIDs and their small size makes it impractical to clean up. Therefore, law enforcement can pick up one AFID and contact TASER International for a complete trace on the serial number. At the time of purchase, we verify the identity and background of the prospective buyer with the understanding that we will not release the information and it will be kept confidential unless a TASER device is used in the commission of a crime. This information proved invaluable during the investigation on the Stradivarius violin. "We worked very closely with TASER International who provided us invaluable information that the FBI was able to track down for us in Texas," said Chief Flynn, "That information led us to an individual who had purchased this device."

PICASSO: NSA Exploit of the Day

Today's item from the NSA's Tailored Access Operations (TAO) group implant catalog:
PICASSO (S//SI//REL) Modified GSM (target) handset that collects user data, location information and room audio. Command and data exfil is done from a laptop and regular phone via SMS (Short Messaging Service), without alerting the target.
(S//SI) Target Data via SMS:
  • Incoming call numbers
  • Outgoing call numbers
  • Recently registered networks
  • Recent Location Area Codes (LAC)
  • Cell power and Timing Advance information (GEO)
  • Recently Assigned TMSI, IMSI
  • Recent network authentication challenge responses
  • Recent successful PINs entered into the phone during the power-on cycle
  • SW version of PICASSO implant
  • 'Hot-mic' to collect Room Audio
  • Panic Button sequence (sends location information to an LP Operator)
  • Send Targeting Information (i.e. current IMSI and phone number when it is turned on -- in case the SIM has just been switched).
  • Block call to deny target service.
(S//SI//REL) Handset Options
  • Eastcom 760c+
  • Samsung E600, X450
  • Samsung C140
  • (with Arabic keypad/language option)
(S//SI) PICASSO Operational Concept
(S//SI//REL) Uses include asset validation and tracking and target templating. Phone can be hot mic'd and has a "Panic Button" key sequence for the witting user.
Status: 2 weeks ARO (10 or less)
Unit Cost: approx $2000
Page, with graphics, is here. General information about TAO and the catalog is here.

Who Should Store NSA Surveillance Data

One of the recommendations by the president's Review Group on Intelligence and Communications Technologies on reforming the National Security Agency -- No. 5, if you're counting -- is that the government should not collect and store telephone metadata. Instead, a private company -- either the phone companies themselves or some other third party -- should store the metadata and provide it to the government only upon a court order.
This isn't a new idea. Over the past decade, several countries have enacted mandatory data retention laws, in which companies are required to save Internet or telephony data about customers for a specified period of time, in case the government needs it for an investigation. But does it make sense? In December, Harvard Law professor Jack Goldsmith asked: "I understand the Report's concerns about the storage of bulk meta-data by the government. But I do not understand the Report's implicit assumption that the storage of bulk meta-data by private entities is an improvement from the perspective of privacy, or data security, or potential abuse."
It's a good question, and in the almost two months since the report was released, it hasn't received enough attention. I think the proposal makes things worse in several respects.
First, the NSA is going to do a better job at database security than corporations are. I say this not because the NSA has any magic computer security powers, but because it has more experience at it and is better funded. (And, yes, that's true even though Edward Snowden was able to copy so many of their documents.) The difference is of degree, not of kind. Both options leave the data vulnerable to insider attacks -- more so in the case of a third-party data repository because there will be more insiders. And although neither will be perfect, I would trust the NSA to protect my data against unauthorized access more than I would trust a private corporation to do the same.
Second, there's the greater risk of authorized access. This is the risk that the Review Group is most concerned about. The thought is that if the data were in private hands, and the only legal way to get at the data was a court order, then it would be less likely for the NSA to exceed its authority by making bulk queries on the data or accessing more of it than it is allowed to. I don't believe that this is true. Any system that has the data outside of the NSA's control is going to include provisions for emergency access, because ... well, because the word terrorism will scare any lawmaker enough to give the NSA that capability. Already the NSA goes through whatever legal processes it and the secret FISA court have agreed to. Adding another party into this process doesn't slow things down, provide more oversight, or in any way make it better. I don't trust a corporate employee not to turn data over for NSA analysis any more than I trust an NSA employee.
On the corporate side, the corresponding risk is that the data will be used for all sorts of things that wouldn't be possible otherwise. If corporations are forced by governments to hold on to customer data, they're going to start thinking things like: "We're already storing this personal data on all of our customers for the government. Why don't we mine it for interesting tidbits, use it for marketing purposes, sell it to data brokers, and on and on and on?" At least the NSA isn't going to use our personal data for large-scale individual psychological manipulation designed to separate us from as much money as possible -- which is the business model of companies like Google and Facebook.
The final claimed benefit -- and this one is from the president's Review Group -- is that putting the data in private hands will make us all feel better. They write: "Knowing that the government has ready access to one's phone call records can seriously chill 'associational and expressive freedoms,' and knowing that the government is one flick of a switch away from such information can profoundly 'alter the relationship between citizen and government in a way that is inimical to society.'" Those quotes within the quote are from Justice Sonia Sotomayor's opinion in the U.S. v. Jones GPS monitoring case.
The Review Group believes that moving the data to some other organization, either the companies that generate it in the first place or some third-party data repository, fixes that problem. But is that something we really want fixed? The fact that a government has us all under constant and ubiquitous surveillance should be chilling. It should limit freedom of expression. It is inimical to society, and to the extent we hide what we're doing from the people or do things that only pretend to fix the problem, we do ourselves a disservice.
Where does this leave us? If the corporations are storing the data already -- for some business purpose -- then the answer is easy: Only they should store it. If the corporations are not already storing the data, then -- on balance -- it's safer for the NSA to store the data. And in many cases, the right answer is for no one to store the data. It should be deleted because keeping it makes us all less secure.
This question is much bigger than the NSA. There are going to be data -- medical data, movement data, transactional data -- that are both valuable to us all in aggregate and private to us individually. And in every one of those instances, we're going to be faced with the same question: How do we extract that societal value, while at the same time protecting its personal nature? This is one of the key challenges of the Information Age, and figuring out where to store the data is a major part of that challenge. There certainly isn't going to be one solution for all instances of this problem, but learning how to weigh the costs and benefits of different solutions will be a key component of harnessing the power of big data without suffering the societal harms.

MONKEYCALENDAR: NSA Exploit of the Day

Today's item from the NSA's Tailored Access Operations (TAO) group implant catalog:
MONKEYCALENDAR (TS//SI//REL) MONKEYCALENDAR is a software implant for GSM (Global System for Mobile communication) subscriber identity module (SIM) cards. This implant pulls geolocation information from a target handset and exfiltrates it to a user-defined phone number via short message service (SMS).
(TS//SI//REL) Modern SIM cards (Phase 2+) have an application program interface known as the SIM Toolkit (STK). The STK has a suite of proactive commands that allow the SIM card to issue commands and make requests to the handset. MONKEYCALENDAR uses STK commands to retrieve location information and to exfiltrate data via SMS. After the MONKEYCALENDAR file is compiled, the program is loaded onto the SIM card using either a Universal Serial Bus (USB) smartcard reader or via over-the-air provisioning. In both cases, keys to the card may be required to install the application depending on the service provider's security configuration.
Unit Cost: $0
Status: Released, not deployed.
Page, with graphics, is here. General information about TAO and the catalog is here.
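To make the STK mechanism concrete, here is an illustrative sketch of how a SIM application builds a proactive command asking the handset for its location. The byte values follow my reading of the ETSI TS 102 223 card application toolkit specification (proactive command tag 0xD0, PROVIDE LOCAL INFORMATION type 0x26); treat the exact constants as assumptions, and note that nothing here reproduces the actual implant.

```python
# Illustrative encoding of a SIM Toolkit proactive command, the mechanism
# an implant like MONKEYCALENDAR relies on.  Constants are taken from my
# reading of ETSI TS 102 223 and should be treated as assumptions.

def tlv(tag: int, value: bytes) -> bytes:
    """Simple BER-TLV encoder (short-form length only, value < 128 bytes)."""
    assert len(value) < 0x80
    return bytes([tag, len(value)]) + value

def provide_local_information(command_number: int = 1) -> bytes:
    """Build a PROVIDE LOCAL INFORMATION proactive command, which asks the
    handset to hand the SIM its current location (MCC/MNC/LAC/Cell ID)."""
    # Command details: command number, type (0x26), qualifier (0x00 = location)
    command_details = tlv(0x81, bytes([command_number, 0x26, 0x00]))
    # Device identities: source = SIM (0x81), destination = terminal (0x82)
    device_identities = tlv(0x82, bytes([0x81, 0x82]))
    body = command_details + device_identities
    return tlv(0xD0, body)  # 0xD0 = proactive UICC command tag

cmd = provide_local_information()
```

The implant would then package the returned location data into an SMS -- the same covert exfiltration path the catalog entry describes.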

Saturday, February 22, 2014

Why Look at WhatsApp’s Security?

Facebook’s acquisition announcement coincided with the starting week of Project Neptune’s beta program. Project Neptune is Praetorian’s new mobile application security testing platform that allows companies to keep pace with rapid mobile development cycles by incorporating continuous, on-demand security testing. And what better way to kick off our beta program than to test a publicly available mobile app worth $19 billion?
Within minutes, Project Neptune picked up on several SSL-related security issues affecting the confidentiality of WhatsApp user data that passes in transit to back-end servers. This is the kind of stuff the NSA would love. It basically allows them—or an attacker—to man-in-the-middle the connection and then downgrade the encryption so they can break it and sniff the traffic. These security issues put WhatsApp user information and communications at risk.
The security test cases selected in Project Neptune were nonintrusive and limited in scope. Praetorian would need authorization from Facebook and WhatsApp to conduct a more thorough security evaluation of the mobile applications and back-end infrastructure. Despite the limitations in scope, the following were among the security issues that Neptune identified:

SSL Pinning Not Enforced

WhatsApp does not perform SSL pinning when establishing a trusted connection between the mobile applications and back-end web services. Without SSL pinning enforced, an attacker could man-in-the-middle the connection between the mobile applications and back-end web services. This would allow the attacker to sniff user credentials, session identifiers, or other sensitive information.
Update 02/21/2014: WhatsApp is actively working on adding SSL Pinning now
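For readers unfamiliar with pinning, here is a minimal sketch of leaf-certificate pinning using only the Python standard library; the host and pin value are placeholders that a real client would obtain out-of-band from the operator of its own backend.

```python
# Minimal sketch of leaf-certificate pinning.  A real client would ship
# with the SHA-256 fingerprint of its backend's certificate, obtained
# out-of-band, and refuse any connection presenting a different one.

import hashlib
import socket
import ssl

def cert_fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def connect_pinned(host: str, port: int, pinned_sha256: str) -> None:
    """Open a TLS connection and abort unless the server's leaf
    certificate matches the pinned fingerprint."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
            if cert_fingerprint(der) != pinned_sha256:
                raise ssl.SSLError("certificate pin mismatch -- possible MITM")
            # ... proceed with the application protocol ...
```

Pinning the leaf certificate, as here, means the embedded hash must be updated whenever the server's certificate rotates; pinning the public key instead avoids that, at the cost of more implementation work.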

SSL Export Ciphers Support Enabled

WhatsApp’s back-end servers allow the use of weak 40-bit and 56-bit encryption schemes. Without malicious intervention this may not be an issue, because the mobile application and server will negotiate the encryption and settle on the strongest encryption they both support. However, an attacker could intercept the communication and forcefully downgrade it to 40-bit or 56-bit DES encryption, which would make brute-force attacks against the encryption feasible.
Update 02/21/2014: We no longer find evidence of export cipher support.
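To see why a forced downgrade to a 40-bit cipher makes brute force feasible, a back-of-the-envelope calculation helps; the keys-per-second figure below is an assumed rate for a single well-equipped attacker, not a measurement.

```python
# Back-of-the-envelope brute-force feasibility for a 40-bit export cipher.
# The keys-per-second rate is an assumption for illustration.

KEYSPACE_40 = 2 ** 40          # ~1.1 trillion candidate keys
KEYS_PER_SECOND = 10 ** 9      # assumed: one GPU-class attacker

worst_case_seconds = KEYSPACE_40 / KEYS_PER_SECOND
worst_case_hours = worst_case_seconds / 3600   # well under an hour

# By contrast, a modern 128-bit key is utterly out of reach at this rate:
KEYSPACE_128 = 2 ** 128
years_128 = KEYSPACE_128 / KEYS_PER_SECOND / (3600 * 24 * 365)
```

The gap is the whole point of a downgrade attack: the attacker doesn't break strong crypto, he arranges for weak crypto to be negotiated and then exhausts a keyspace that fits in minutes of computation.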

SSL Null Ciphers Support Enabled

It gets worse. WhatsApp even supports Null Ciphers -- cipher suites that perform no encryption at all, simply copying the input stream to the output stream without any changes. With Null Ciphers supported, if the client mobile application attempts to communicate with the server using SSL and the two parties share no common cipher suites -- as can be arranged by a malicious intercept -- the connection falls back to sending the data in clear, plain text. Supporting Null Ciphers is not something we come across often; it's quite rare.
Update 02/21/2014: We no longer find evidence of null cipher support.
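On the server side, the fix is to exclude null (and other weak) cipher families from the negotiable set entirely, so no downgrade can land on them. A sketch using Python's `ssl` module as a stand-in for whatever TLS stack the backend actually runs:

```python
# Sketch of excluding null/export/weak cipher families from a TLS server
# configuration, using Python's ssl module as an illustrative stand-in.

import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# Strip null, export, and other known-weak families outright; the
# OpenSSL cipher-string syntax uses "!" to forbid a family entirely.
ctx.set_ciphers("HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5")

# Verify nothing weak survived the filter.
names = [c["name"] for c in ctx.get_ciphers()]
```

The key property is that a forbidden family can never be negotiated, no matter what the client -- or a man in the middle speaking for the client -- claims to support.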

SSLv2 Protocol Support Enabled

WhatsApp was also found to support SSL version 2 (SSLv2), which contains several known weaknesses. SSLv2 is vulnerable to a number of attacks that require an attacker to sniff and man-in-the-middle the connection. In addition, SSLv2 utilizes MAC post-encryption and 40-bit MACs, both of which are considered design flaws. Depending on the time and resources of an attacker, any communication protected by SSLv2 may be vulnerable to man-in-the-middle attacks that could allow data tampering or disclosure.
Update 02/21/2014: We no longer find evidence of SSLv2 support.
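Refusing legacy protocol versions is a similar one-liner in most TLS stacks. A sketch, again using Python's `ssl` module as a stand-in (modern OpenSSL builds have removed SSLv2 entirely, and setting a TLS floor covers SSLv3 and early TLS as well):

```python
# Sketch of setting a protocol floor so that SSLv2/SSLv3 and early TLS
# versions can never be negotiated, using Python's ssl module.

import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # nothing older is negotiable
```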

How Difficult Would it be for WhatsApp to Fix These Issues?

Not very difficult. The biggest challenge most developers have with security is understanding how their design decisions impact the integrity of an application. Mobile is still a new frontier for many developers. Unfortunately, security considerations often take a backseat when there is still uncharted space to explore with new technologies.

In the case of implementing certificate pinning, for example, there are a few things to consider. Pinning the certificate itself is the simpler approach, but it requires more maintenance over time, because developers will have to update the application whenever the certificate changes. The alternative is pinning the public key, which can be more difficult to implement. Choosing between the two often depends on how frequently the certificate itself is expected to change. More details can be found in the OWASP technical guide to certificate and public key pinning.
Surprisingly, it’s extremely common to see mobile apps without certificate pinning. This security control is used to counter the ability of an attacker to view and modify all traffic passing between the mobile device and backend server. It can also help protect against certificate authority trust failures during client and server negotiation, which coupled with the support of weak and null (plain text) ciphers—as found to be the case in WhatsApp—is an even bigger red flag.
Developers need a partner, with a deep understanding of application security, who can keep pace with the rapid speed of mobile development. One that can be with them throughout the mobile development lifecycle—from start to finish. We believe technology will fill this gap and we believe Project Neptune is the answer.

Mobile Has Fundamentally Changed the Security Landscape Forever

Organizations need to keep pace with rapid mobile development cycles by incorporating continuous, on-demand security testing. With Project Neptune, we are evolving the way mobile development teams address the security challenges they encounter while building and maintaining mobile apps. Mobile moves too fast for security to remain an afterthought. It’s time for a fundamental change in the way developers build secure mobile applications. Our team is dedicated to helping the world’s leading companies deliver secure mobile apps faster and more efficiently.

Saturday, February 15, 2014

The Insecurity of Secret IT Systems

We now know a lot about the security of the Rapiscan 522 B x-ray system used to scan carry-on baggage in airports worldwide. Billy Rios, director of threat intelligence at Qualys, got himself one and analyzed it. And he presented his results at the Kaspersky Security Analyst Summit this week.
It’s worse than you might have expected:
It runs on the outdated Windows 98 operating system, stores user credentials in plain text, and includes a feature called Threat Image Projection used to train screeners by injecting .bmp images of contraband, such as a gun or knife, into a passenger carry-on in order to test the screener's reaction during training sessions. The weak logins could allow a bad guy to project phony images on the X-ray display.
While this is all surprising, it shouldn’t be. These are the same sorts of problems we saw in proprietary electronic voting machines, or computerized medical equipment, or computers in automobiles. Basically, whenever an IT system is designed and used in secret -- either actually secret or simply away from public scrutiny -- the results are pretty awful.
I used to decry secret security systems as "security by obscurity." I now say it more strongly: "obscurity means insecurity."
Security is a process. For software, that process is iterative. It involves defenders trying to build a secure system, attackers -- criminals, hackers, and researchers -- defeating the security, and defenders improving their system. This is how all mass-market software improves its security. It’s the best system we have. And for systems that are kept out of the hands of the public, that process stalls. The result looks like the Rapiscan 522 B x-ray system.
Smart security engineers open their systems to public scrutiny, because that’s how they improve. The truly awful engineers will not only hide their bad designs behind secrecy, but try to belittle any negative security results. Get ready for Rapiscan to claim that the researchers had old software, and the new software has fixed all these problems. Or that they’re only theoretical. Or that the researchers themselves are the problem. We’ve seen it all before.
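Incidentally, the plaintext-credential problem from the excerpt above has had a standard fix for decades: store only a salted, deliberately slow hash of each password. A sketch with the Python standard library (the iteration count and example password are illustrative):

```python
# Sketch of salted password hashing, the standard alternative to the
# plaintext credential storage found in the Rapiscan system.  The
# iteration count is an illustrative parameter, not a recommendation.

import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return a random salt and the PBKDF2 digest of the password.
    Only these two values are stored; the password itself never is."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("screener-pass")
```

An attacker who dumps the credential store gets salts and digests, not passwords, and the slow iterated hash makes offline guessing expensive. None of this is exotic; it was well understood long before Windows 98 shipped.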

The Process of Security


If we've learned anything from the past couple of years, it's that computer security flaws are inevitable. Systems break, vulnerabilities are reported in the press, and still many people put their faith in the next product, or the next upgrade, or the next patch. "This time it's secure." So far, it hasn't been.
Security is a process, not a product. Products provide some protection, but the only way to effectively do business in an insecure world is to put processes in place that recognize the inherent insecurity in the products. The trick is to reduce your risk of exposure regardless of the products or patches.

Will We Ever Learn?

Consider denial-of-service attacks. DoS attacks are some of the oldest and easiest attacks in the book. Even so, in February coordinated, distributed DoS attacks easily brought down several high-traffic Web sites, including Yahoo, eBay, Amazon.com and CNN.
Consider buffer overflow attacks. They were first talked about as early as the 1960s: time-sharing systems suffered from the problem, and it was known by the security literati even earlier than that. In the 1970s, they were often used as a point of attack against early networked computers. In 1988, the Morris Worm exploited a buffer overflow in the Unix fingerd daemon: a very public use of this type of attack.
Today, over a decade after Morris and 35 years after these attacks were first discovered, you'd think the security community would have solved the problem of security vulnerabilities based on buffer overflows. Think again. Over two-thirds of all CERT advisories in 1998 were for vulnerabilities caused by buffer overflows. During an average week in 1999, buffer-overflow vulnerabilities were found in the RSAREF cryptographic toolkit (oops); HP's operating system; the Solaris operating system; and Microsoft IIS 4.0, Site Server 3.0, Windows NT and Internet Explorer. A recent study called buffer overflows the most common security problem.
Consider encryption algorithms. Proprietary secret algorithms are regularly published and broken. Again and again, the marketplace learns that proprietary secret algorithms are a bad idea. But companies and industries -- such as Microsoft, the DVD consortium, cellular phone providers and so on -- continue to choose proprietary algorithms over public, free alternatives.

Is Anyone Paying Attention?

Sadly, the answer is: not really. Or at least, there are far fewer people paying attention than there should be. The enormous demand for digital security products creates an enormous demand for people to design, develop and implement them. That demand will always exceed the supply of skilled people, and so the percentage of people paying attention will get even smaller.
Most products that use security are not designed by anyone with security expertise. Even security-specific products are generally designed and implemented by people who have only limited security expertise. Security cannot be functionality-tested -- no amount of beta testing will uncover security flaws -- so the flaws end up in fielded products.
I'm constantly amazed by the kinds of things that break security products. I've seen a file encryption product with a user interface that accidentally saves the key in the clear. I've seen VPNs where the telephone configuration file accidentally allows a random person to authenticate himself to the server, or that allows one remote client to view the files of another remote client. There are a zillion ways to make a product insecure, and manufacturers manage to stumble on a lot of those ways again and again.
No one is paying attention because no one has to.

Fashionable Model

Computer security products, like software in general, have a very odd product quality model. It's completely unlike the quality-control process for an automobile or skyscraper...or even for a box of fried chicken. If you buy a product and get harmed because of a manufacturer's defect, you can sue...and you'll win. Car-makers can't get away with building cars that explode on impact; chicken shops can't get away with selling buckets of fried chicken with the odd rat mixed in. It just wouldn't do for building contractors to say things like, "Whoops. There goes another one. Sorry. But just wait for Skyscraper 1.1; it'll be 100 percent collapse-free!"
Software is different. It's sold without any claims whatsoever. Your accounts-receivable database can crash, taking your company down with it, and you have no claim against the software company. Your word processor can accidentally corrupt your files, and you have no recourse. Your firewall can turn out to be completely ineffective (hardly better than having nothing at all), and yet it's your fault. Microsoft fielded Hotmail with a bug that allowed anyone to read the accounts of 40 or so million subscribers, password or no password, and never bothered to apologize.
Software manufacturers don't have to produce a quality product because there is no liability if they don't. And the effect of this for security products is that manufacturers don't have to produce products that are actually secure, because no one can sue them if they make a bunch of false claims of security.
The upshot of this is that the marketplace doesn't reward real security. Real security is harder, slower and more expensive, both to design and to implement. Since the buying public has no way to differentiate real security from bad security, the way to win in this marketplace is to design software that is as insecure as you can possibly get away with.
Microsoft knows that reliable software is not cost-effective. According to studies, 90 to 95 percent of all bugs are harmless. They're never discovered by users, and they don't affect performance. It's much cheaper to release buggy software and fix the 5 to 10 percent of bugs people find and complain about.
Microsoft also knows that real security is not cost-effective. They get whacked with a new security vulnerability several times a week. They fix the ones they can, write misleading press releases about the ones they can't, and wait for the press fervor to die down (which it always does). And six months later, they issue the next software version with new features and all sorts of new insecurities, because users prefer cool features to security.

Fear Products; Embrace Process

There's no such thing as perfect security. Interestingly enough, that's not necessarily a problem. The credit card industry loses $10 billion to fraud per year in the U.S. alone, yet neither Visa nor MasterCard is showing any sign of going out of business. Shoplifting estimates in the U.S. are currently between $9.5 billion and $11 billion per year, but you never see "shrinkage" (as it's called) cited as the cause when a store goes out of business. Recently, I needed to notarize a document; notarization is about the stupidest security protocol I've seen in a long time. Still, it works fine for what it is.
Security does not have to be perfect, but the risks have to be manageable. The credit card industry understands this. They know how to estimate the losses due to fraud. Their problem is that losses from phone credit card transactions are about five times the losses from face-to-face transactions (when the card is presented). Losses from Internet transactions are about 10 times those of face-to-face transactions. Visa and MasterCard are pushing for Internet payment mechanisms precisely because the risks are getting worse.
My primary fear about cyberspace is that people don't understand the risks, and they're putting too much faith in technology's ability to obviate them. Products alone can't solve security problems.

Forward-Thinking

The digital security industry is in desperate need of a perceptual shift. Countermeasures are sold as ways to avoid threats. Good encryption is sold as a way to prevent eavesdropping. A good firewall is marketed as a way to prevent network attacks. PKI is sold as trust management, so you can avoid mistakenly trusting people you really don't. And so on.
This type of thinking is completely backward. Security is old, older than computers. And the old-guard security industry thinks of countermeasures not as ways to avoid threats, but as ways to avoid risk. This distinction is enormous. Avoiding threats is black and white: either you avoid the threat, or you don't. Avoiding risk is continuous: there is some amount of risk you can accept, and some amount you can't.
Security processes are how you avoid risk. Just as businesses use the process of double-entry bookkeeping to protect their finances, they need to use a series of security processes to protect their networks.
Security processes are not a replacement for products. Rather, they're a way of using security products effectively. They're a way to mitigate the risks. Network security products will have flaws; processes are necessary to catch attackers exploiting those flaws, and to fix the flaws once they become public. Insider attacks will occur; processes are necessary to detect the attacks, repair the damages and prosecute the attackers. Large systemwide flaws will compromise entire products and services (think digital cellphones, Microsoft Windows NT password protocols or DVD); processes are necessary to recover from the compromise and stay in business.
Here are two examples of how to focus on process in enterprise network security:
1. Watch for known vulnerabilities. Most successful network-security attacks target known vulnerabilities for which patches already exist. Why? Because network administrators either didn't install the patches, or because users reinstalled the vulnerable systems. It's easy to be smart about the former, but just as important to be vigilant about the latter. There are many ways to check for known vulnerabilities. Network vulnerability scanners such as Netect and SATAN test for them. Phone scanners like PhoneSweep check for rogue modems inside your corporation. Other scanners look for Web site vulnerabilities. Use these sorts of products regularly, and pay attention to the results.
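The checking itself doesn't require anything exotic; the core of any such scanner is just a comparison of what's installed against a list of known-bad versions. A minimal sketch, with invented advisory data:

```python
# Minimal sketch of the core of a vulnerability scanner: compare a host's
# software inventory against a list of known-vulnerable versions.
# The advisory data here is hypothetical, for illustration only.
ADVISORIES = {
    ("wu-ftpd", "2.4.2"): "remote buffer overflow -- patch available",
    ("bind", "8.2.1"): "NXT record overflow -- patch available",
}

def scan_host(inventory):
    """Return advisories matching a host's installed (package, version) pairs."""
    return {pkg: ADVISORIES[(pkg, ver)]
            for pkg, ver in inventory.items()
            if (pkg, ver) in ADVISORIES}

findings = scan_host({"wu-ftpd": "2.4.2", "apache": "1.3.9"})
print(findings)  # flags wu-ftpd 2.4.2; apache 1.3.9 is not in the list
```

Real scanners probe the network services themselves rather than trusting an inventory, but the principle is the same: the attack is against a vulnerability everyone already knows about, so checking for it is mechanical.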
2. Continuously monitor your network products. Almost everything on your network produces a continuous stream of audit information: firewalls, intrusion detection systems, routers, servers, printers, etc. Most of it is irrelevant, but some of it contains footprints from successful attacks. Watching it all is vital for security, because an attack that bypassed one product might be picked up by another. For example, an attacker might exploit a flaw in a firewall and bypass an IDS, but his attempts to get root access on an internal server will appear in that server's audit logs. If you have a process in place to watch those logs, you'll catch the intrusion in progress.
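As a sketch of that process, here is a trivial log watcher; the log format and the suspicious patterns are illustrative, not taken from any particular product:

```python
# Sketch of the log-watching process described above: scan a server's
# audit log for footprints of an attacker trying to get root. The
# patterns and log format are invented for illustration.
SUSPICIOUS = ("FAILED su", "authentication failure", "ROOT LOGIN")

def flag_suspicious(log_lines, threshold=3):
    """Return (alert, hits): alert is True when suspicious entries reach the threshold."""
    hits = [line for line in log_lines
            if any(pattern in line for pattern in SUSPICIOUS)]
    return len(hits) >= threshold, hits

log = [
    "Feb 28 02:11:01 srv1 su: FAILED su for root by guest",
    "Feb 28 02:11:04 srv1 su: FAILED su for root by guest",
    "Feb 28 02:11:09 srv1 su: FAILED su for root by guest",
    "Feb 28 09:00:00 srv1 sshd: accepted password for alice",
]
alert, hits = flag_suspicious(log)
print(alert)  # three failed su-to-root attempts in a row: raise the alarm
```

The point is not the pattern matching, which is trivial, but the process around it: someone has to actually collect the logs, run the watcher, and respond when it fires.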
In these pages and elsewhere I have written pessimistically about the future of computer security. The future of computers is complexity, and complexity is anathema to security. The only reasonable thing to do is to reduce your risk as much as possible. We can't avoid threats, but we can reduce risk.
Nowhere else in society do we put so much faith in technology. No one has ever said, "This door lock is so effective that we don't need police protection, or breaking-and-entering laws." Products work to a certain extent, but you need processes in place to leverage their effectiveness.

Security Processes

PREVENTION

Limit Privilege. Don't give any user more privileges than he absolutely needs to do his job. Just as you wouldn't give a random employee the keys to the CEO's office, don't give him a password to the CEO's files.
Secure the Weakest Link. Spend your security budget securing the biggest problems and the largest vulnerabilities. Too often, computer security measures are like planting an enormous stake in the ground and hoping the enemy runs right into it. Try to build a broad palisade.
Use Choke Points. By funneling users through choke points (think firewalls), you can more carefully secure those few points. Systems that bypass these choke points, like desktop modems, make security much harder.
Provide Defense in Depth. Don't rely on single solutions. Use multiple complementary security products, so that a failure in one does not mean total insecurity. This might mean a firewall, an intrusion detection system and strong authentication on important servers.
Fail Securely. Design your networks so that when products fail, they fail in a secure manner. When an ATM fails, it shuts down; it doesn't spew money out its slot.
Leverage Unpredictability. You know your network; your attacker doesn't. This is your big advantage. Make his job harder by disguising things, adding honey pots and booby traps, etc.
Enlist the Users. Security can't work if the users aren't on your side. Social engineering attacks are often the most damaging of any attack, and can only be defended against with user education.
Embrace Simplicity. Keep things as simple as absolutely possible. Security is a chain; the weakest link breaks it. Simplicity means fewer links.
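The first principle above, limiting privilege, reduces to a default-deny access check. A minimal sketch, with hypothetical roles and files:

```python
# A minimal sketch of "Limit Privilege": each role is granted only the
# files needed for the job, and every access is checked against that
# grant. Roles and filenames are hypothetical.
PRIVILEGES = {
    "accountant": {"ledger.xls"},
    "ceo": {"ledger.xls", "strategy.doc"},
}

def can_read(role, filename):
    """Deny by default: access is allowed only if explicitly granted."""
    return filename in PRIVILEGES.get(role, set())

print(can_read("accountant", "strategy.doc"))  # denied: not in the role's grant
print(can_read("ceo", "strategy.doc"))         # allowed
```

Note the default: an unknown role gets an empty grant, not an error and not full access. Failing securely and limiting privilege are two sides of the same design habit.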

DETECTION AND RESPONSE

Detect Attacks. Watch the security products. Look for signs of attack. Too often, valuable alerts from firewalls, servers and even IDSes are simply ignored.
Respond to Attackers. It's not enough to simply detect attacks. You need to close vulnerabilities when attackers find them, investigate incidents and prosecute attackers. We need to build a world where criminals are treated as such.
Be Vigilant. Security requires continuous monitoring; it's not enough to read a weekly report. Read about new attacks as soon as possible. Install all security patches and upgrades immediately.
Watch the Watchers. Audit your own processes. Regularly.

GOPHERSET: NSA Exploit of the Day

Today's item from the NSA's Tailored Access Operations (TAO) group implant catalog:
GOPHERSET (TS//SI//REL) GOPHERSET is a software implant for GSM (Global System for Mobile communication) subscriber identity module (SIM) cards. This implant pulls Phonebook, SMS, and call log information from a target handset and exfiltrates it to a user-defined phone number via short message service (SMS).
(TS//SI//REL) Modern SIM cards (Phase 2+) have an application program interface known as the SIM Toolkit (STK). The STK has a suite of proactive commands that allow the SIM card to issue commands and make requests to the handset. GOPHERSET uses STK commands to retrieve the requested information and to exfiltrate data via SMS. After the GOPHERSET file is compiled, the program is loaded onto the SIM card using either a Universal Serial Bus (USB) smartcard reader or via over-the-air provisioning. In both cases, keys to the card may be required to install the application depending on the service provider's security configuration.
Unit Cost: $0
Status: (U//FOUO) Released. Has not been deployed.
Page, with graphics, is here. General information about TAO and the catalog is here.
In the comments, feel free to discuss how the exploit works, how we might detect it, how it has probably been improved since the catalog entry in 2008, and so on.

Friday Squid Blogging: Giant Squid TED Talk

Interesting.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

MONKEYCALENDAR: NSA Exploit of the Day

Today's item from the NSA's Tailored Access Operations (TAO) group implant catalog:
MONKEYCALENDAR (TS//SI//REL) MONKEYCALENDAR is a software implant for GSM (Global System for Mobile communication) subscriber identity module (SIM) cards. This implant pulls geolocation information from a target handset and exfiltrates it to a user-defined phone number via short message service (SMS).
(TS//SI//REL) Modern SIM cards (Phase 2+) have an application program interface known as the SIM Toolkit (STK). The STK has a suite of proactive commands that allow the SIM card to issue commands and make requests to the handset. MONKEYCALENDAR uses STK commands to retrieve location information and to exfiltrate data via SMS. After the MONKEYCALENDAR file is compiled, the program is loaded onto the SIM card using either a Universal Serial Bus (USB) smartcard reader or via over-the-air provisioning. In both cases, keys to the card may be required to install the application depending on the service provider's security configuration.
Unit Cost: $0
Status: Released, not deployed.
Page, with graphics, is here. General information about TAO and the catalog is here.

Thursday, February 13, 2014

Finding People's Location Based on Their Activities in Cyberspace

Glenn Greenwald is back reporting about the NSA, now with Pierre Omidyar's news organization FirstLook and its introductory publication, The Intercept. Writing with national security reporter Jeremy Scahill, his first article covers how the NSA helps target individuals for assassination by drone.
Leaving aside the extensive political implications of the story, the article and the NSA source documents reveal additional information about how the agency's programs work. From this and other articles, we can now piece together how the NSA tracks individuals in the real world through their actions in cyberspace.
Its techniques to locate someone based on their electronic activities are straightforward, although they require an enormous capability to monitor data networks. One set of techniques involves the cell phone network, and the other the Internet.

Tracking Locations With Cell Towers

Every cell-phone network knows the approximate location of all phones capable of receiving calls. This is necessary to make the system work; if the system doesn't know what cell you're in, it isn't able to route calls to your phone. We already know that the NSA conducts physical surveillance on a massive scale using this technique.
By triangulating location information from different cell phone towers, cell phone providers can geolocate phones more accurately. This is often done to direct emergency services to a particular person, such as someone who has made a 911 call. The NSA can get this data either by network eavesdropping with the cooperation of the carrier, or by intercepting communications between the cell phones and the towers. A previously released Top Secret NSA document says this: "GSM Cell Towers can be used as a physical-geolocation point in relation to a GSM handset of interest."
This technique becomes even more powerful if you can employ a drone. Greenwald and Scahill write:
The agency also equips drones and other aircraft with devices known as "virtual base-tower transceivers"—creating, in effect, a fake cell phone tower that can force a targeted person's device to lock onto the NSA's receiver without their knowledge.
The drone can do this multiple times as it flies around the area, measuring the signal strength—and inferring distance—each time. Again from the Intercept article:
The NSA geolocation system used by JSOC is known by the code name GILGAMESH. Under the program, a specially constructed device is attached to the drone. As the drone circles, the device locates the SIM card or handset that the military believes is used by the target.
The Top Secret source document associated with the Intercept story says:
As part of the GILGAMESH (PREDATOR-based active geolocation) effort, this team used some advanced mathematics to develop a new geolocation algorithm intended for operational use on unmanned aerial vehicle (UAV) flights.
This is at least part of that advanced mathematics.
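The core of that mathematics is ordinary trilateration: given distance estimates (inferred from signal strength) to the handset from three known positions, the handset's coordinates fall out of a pair of linear equations. Here is a sketch of the noise-free case; the operational algorithms presumably do a least-squares fit over many noisy measurements.

```python
# Trilateration sketch: recover a handset's 2D position from distance
# estimates taken at three known points (e.g. three spots along a drone's
# orbit). Subtracting the circle equations pairwise gives a linear system.
import math

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Solve for (x, y) given three (position, distance) measurements."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Handset at (3, 4); three measurement points as the drone circles.
spots = [(0, 0), (10, 0), (0, 10)]
dists = [math.dist(p, (3, 4)) for p in spots]
print(trilaterate(spots[0], dists[0], spots[1], dists[1],
                  spots[2], dists[2]))  # recovers approximately (3, 4)
```

With perfect distances the answer is exact; with signal-strength-derived distances, each additional measurement tightens the fix, which is why the drone keeps circling.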
None of this works if the target turns his phone off or exchanges SIM cards often with his colleagues, which Greenwald and Scahill write is routine. It won't work in much of Yemen, which isn't on any cell phone network. Because of this, the NSA also tracks people based on their actions on the Internet.

Finding You From Your Web Connection

A surprisingly large number of Internet applications leak location data. Applications on your smart phone can transmit location data from your GPS receiver over the Internet. We already know that the NSA collects this data to determine location. Also, many applications transmit the IP address of the network the computer is connected to. If the NSA has a database of IP addresses and locations, it can use that to locate users.
According to a previously released Top Secret NSA document, that program is code named HAPPYFOOT: "The HAPPYFOOT analytic aggregated leaked location-based service / location-aware application data to infer IP geo-locations."
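We don't know HAPPYFOOT's internals, but the aggregation described might look something like this sketch: collect the (IP address, GPS coordinate) pairs leaked by chatty applications and reduce them to an IP-to-location table. All data here is invented.

```python
# Sketch of an analytic like the one described: aggregate location leaks
# (GPS coordinates transmitted alongside an IP address by smartphone
# apps) into an IP -> inferred-location table. All data is invented.
from collections import defaultdict

def infer_ip_locations(leaks):
    """leaks: iterable of (ip, lat, lon). Return ip -> centroid of sightings."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for ip, lat, lon in leaks:
        s = sums[ip]
        s[0] += lat
        s[1] += lon
        s[2] += 1
    return {ip: (s[0] / s[2], s[1] / s[2]) for ip, s in sums.items()}

leaks = [("203.0.113.7", 15.35, 44.20),
         ("203.0.113.7", 15.37, 44.22),
         ("198.51.100.9", 12.78, 45.03)]
print(infer_ip_locations(leaks)["203.0.113.7"])  # centroid of the two sightings
```

Once such a table exists, any traffic seen from a known IP address carries an implicit location, even when the traffic itself contains no location data at all.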
Another way to get this data is to collect it from the geographical area you're interested in. Greenwald and Scahill talk about exactly this:
In addition to the GILGAMESH system used by JSOC, the CIA uses a similar NSA platform known as SHENANIGANS. The operation—previously undisclosed—utilizes a pod on aircraft that vacuums up massive amounts of data from any wireless routers, computers, smart phones or other electronic devices that are within range.
And again from an NSA document associated with the FirstLook story: "Our mission (VICTORYDANCE) mapped the Wi-Fi fingerprint of nearly every major town in Yemen." In the hacker world, this is known as war-driving, and has even been demonstrated from drones.
Another story from the Snowden documents describes a research effort to locate individuals based on the location of wifi networks they log into.
This is how the NSA can find someone, even when their cell phone is turned off and their SIM card is removed. If they're at an Internet cafĂ©, and they log into an account that identifies them, the NSA can locate them—because the NSA already knows where that wifi network is.
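In other words, the hard part is the survey; the lookup afterward is trivial. A sketch, with invented BSSIDs and coordinates:

```python
# Sketch of the wifi-fingerprint idea: once a survey has mapped which
# access points exist where, seeing a login through a known network pins
# the user to that network's surveyed location. BSSIDs, place names and
# coordinates below are all made up.
WIFI_SURVEY = {
    "00:11:22:33:44:55": ("Sanaa, cafe district", 15.3547, 44.2066),
    "66:77:88:99:aa:bb": ("Aden, waterfront", 12.7794, 45.0367),
}

def locate_by_ap(bssid):
    """Return the surveyed location of an access point, or None if unmapped."""
    return WIFI_SURVEY.get(bssid)

print(locate_by_ap("00:11:22:33:44:55"))  # the cafe's surveyed coordinates
```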
This also explains the drone assassination of Hassan Ghul, also reported in the Washington Post last October. In the story, Ghul was at an Internet cafe when he read an email from his wife. The article doesn't describe how the NSA intercepted that email, but it was able to use it to determine his location.
There's almost certainly more. NSA surveillance is robust, and they almost certainly have several different ways of identifying individuals on cell phone and Internet connections.
As fascinating as the technology is, the critical policy question—and the one discussed extensively in the FirstLook article—is how reliable all this information is. While much of the NSA's capabilities to locate someone in the real world by their network activity piggy-backs on corporate surveillance capabilities, there's a critical difference: False positives are much more expensive. If Google or Facebook get a physical location wrong, they show someone an ad for a restaurant they're nowhere near. If the NSA gets a physical location wrong, they call a drone strike on innocent people.
As we move to a world where all of us are tracked 24/7, these are the sorts of trade-offs we need to keep in mind.