Sunday, December 21, 2014

Boeing and BlackBerry Partnership for Self-Destructing Phone

It sounds like a Mission: Impossible scenario, but aerospace company Boeing is teaming with Canadian phone maker BlackBerry to produce an ultra-secure mobile phone that "self-destructs." The phone uses encryption on calls and is intended to serve the high-security needs of government and industry. As BlackBerry CEO John Chen said, "We're pleased to announce that Boeing is collaborating with BlackBerry to provide a secure mobile solution for Android devices utilizing our BES 12 platform."

Staples: Breach May Have Affected 1.16 Million Customers' Cards

Staples said Friday afternoon that nearly 1.16 million customer payment cards may have been affected in a data breach that has been under investigation since October. The office-supply retailer said two months ago that it was working with law enforcement officials to look into a possible hacking of its customers' credit card data. Staples said in October that it had learned of a potential data theft at several of its U.S. stores after multiple banks noticed a pattern of payment card fraud suggesting the company's computer systems had been breached. Staples now believes that point-of-sale systems at 115 of its locations were infected with malware that thieves may have used to steal customers' names, payment card numbers, expiration dates, and card verification codes. At all but two of those stores, the malware would have had access to customer data for purchases made between August 10 and September 16 of this year. At the remaining two stores, the malware was active from July 20 through September 16, the company said.

Hackers Used Nasty "SMB Worm" Attack Toolkit Against Sony

Just hours after the FBI and President Obama called out North Korea as being responsible for the destructive cyber attack against Sony Pictures, US-CERT issued an alert describing the primary malware used by the attackers, along with indicators of compromise. While not mentioning Sony by name in its advisory, instead referring to the victim as a "major entertainment company," US-CERT said that the attackers used a Server Message Block (SMB) Worm Tool to conduct the attacks. According to the advisory, the SMB Worm Tool is equipped with five components: a listening implant, a lightweight backdoor, a proxy tool, a destructive hard drive tool, and a destructive target cleaning tool. US-CERT also provided a list of indicators of compromise (IOCs), which includes C2 IP addresses, Snort signatures for the various components, host-based indicators, potential YARA signatures for detecting malware binaries on host machines, and recommended security practices and tactical mitigations.
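The C2 addresses and signatures in an advisory like this are only useful if defenders actually sweep their own logs for them. As a rough illustration only (the file names, log columns, and addresses below are assumptions, not taken from the US-CERT advisory), a short Python script can flag outbound connections to known-bad hosts:

    # ioc_sweep.py -- minimal sketch of matching proxy/firewall logs against
    # a list of C2 IP addresses published as indicators of compromise.
    # File names, log columns, and addresses are placeholders.
    import csv
    import ipaddress

    # Hypothetical IOC list: one IP address per line.
    with open("ioc_c2_ips.txt") as f:
        c2_ips = {ipaddress.ip_address(line.strip()) for line in f if line.strip()}

    # Hypothetical proxy log in CSV form with columns: timestamp,src_ip,dst_ip
    with open("proxy_log.csv") as f:
        for row in csv.DictReader(f):
            if ipaddress.ip_address(row["dst_ip"]) in c2_ips:
                print("possible C2 traffic:", row["src_ip"], "->", row["dst_ip"], "at", row["timestamp"])

A hit is not proof of compromise, but it is exactly the kind of lead the advisory's host-based indicators and YARA rules are meant to confirm.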

North Korea Denies Responsibility for SPE Hack

A North Korean official said that the secretive regime wants to mount a joint investigation with the United States to identify who was behind the cyber attack against Sony Pictures. An unnamed spokesman of the North Korean foreign ministry was quoted by the country's state news agency, KCNA, describing U.S. claims that the North was behind the hack as "slander." "As the United States is spreading groundless allegations and slandering us, we propose a joint investigation with it into this incident," the official said, according to Agence France-Presse. Both the FBI and President Barack Obama have said evidence was uncovered linking the hack to North Korea, but some experts have questioned the evidence tying the attack to Pyongyang. Meanwhile, reader hessian notes that 2600: The Hacker Quarterly has offered to let the hacker community distribute The Interview for Sony. It's an offer Sony may actually find useful, since the company is now considering releasing the movie on a "different platform." Reader Nicola Hahn warns that we shouldn't be too quick to accept North Korea as the bad guy in this situation: Most of the media has accepted North Korea's culpability with little visible skepticism. There is one exception: Kim Zetter at Wired has decried the evidence as flimsy and vocally warns about the danger of jumping to conclusions. Surely we all remember high-ranking, ostensibly credible, officials warning about the smoking gun that comes in the form of a mushroom cloud? This underscores the ability of the agenda-setting elements of the press to frame issues and control the acceptable limits of debate. Some would even say that what's happening reveals tools of modern social control (PDF). Whether or not they're responsible for the attack, North Korea has now warned of "serious consequences" if the U.S. takes action against them for it.

FBI: Who Is Behind the SPE Hack

As a result of the investigation, and in close collaboration with other U.S. government departments and agencies, the FBI now has enough information to conclude that the North Korean government is responsible for these actions. While the need to protect sensitive sources and methods precludes us from sharing all of this information, our conclusion is based, in part, on the following:
  • Technical analysis of the data deletion malware used in this attack revealed links to other malware that the FBI knows North Korean actors previously developed. For example, there were similarities in specific lines of code, encryption algorithms, data deletion methods, and compromised networks.
  • The FBI also observed significant overlap between the infrastructure used in this attack and other malicious cyber activity the U.S. government has previously linked directly to North Korea. For example, the FBI discovered that several Internet protocol (IP) addresses associated with known North Korean infrastructure communicated with IP addresses that were hardcoded into the data deletion malware used in this attack.
  • Separately, the tools used in the SPE attack have similarities to a cyber attack in March of last year against South Korean banks and media outlets, which was carried out by North Korea.

Thursday, December 18, 2014

Unmasking Hidden Tor Users

Kevin Poulsen has a good article up on Wired about how the FBI used a Metasploit variant to identify Tor users.

Some Words About the Sony Hack

I don't have a lot to say about the Sony hack, which seems to still be ongoing. I want to highlight a few points, though.
  1. At this point, the attacks seem to be the work of a few hackers and not the North Korean government. (My guess is that it's not an insider, either.) That we live in a world where we aren't sure if any given cyberattack is the work of a foreign government or a couple of guys should be scary to us all.
  2. Sony is a company that hackers have loved to hate for years now. (Remember their rootkit from 2005?) We've learned previously that putting yourself in this position can be disastrous. (Remember HBGary.) We're learning that again.
  3. I don't see how Sony launching a DDoS attack against the attackers is going to help at all.
  4. The most sensitive information that's being leaked as a result of this attack isn't the unreleased movies, the executive emails, or the celebrity gossip. It's the minutiae from random employees:
    The most painful stuff in the Sony cache is a doctor shopping for Ritalin. It's an email about trying to get pregnant. It's shit-talking coworkers behind their backs, and people's credit card log-ins. It's literally thousands of Social Security numbers laid bare. It's even the harmless, mundane, trivial stuff that makes up any day's email load that suddenly feels ugly and raw out in the open, a digital Babadook brought to life by a scorched earth cyberattack.
    These people didn't have anything to hide. They aren't public figures. Their details aren't going to be news anywhere in the world. But their privacy has been violated, and there are literally thousands of personal tragedies unfolding right now as these people deal with their friends and relatives who have searched and read this stuff.
    These are people who did nothing wrong. They didn't click on phishing links, or use dumb passwords (or even if they did, they didn't cause this). They just showed up. They sent the same banal workplace emails you send every day, some personal, some not, some thoughtful, some dumb. Even if they didn't have the expectation of full privacy, at most they may have assumed that an IT creeper might flip through their inbox, or that it was being crunched in an NSA server somewhere. For better or worse, we've become inured to small, anonymous violations. What happened to Sony Pictures employees, though, is public. And it is total.
Gizmodo got this 100% correct. And this is why privacy is so important for everyone.
I'm sure there'll be more information as this continues to unfold.

CISOs Are in Limited Supply and Hard to Find

This article reports that the demand for Chief Information Security Officers far exceeds the supply:
Sony and every other company that realizes the need for a strong, senior-level security officer are scrambling to find talent, said Kris Lovejoy, general manager of IBM's security service and former IBM chief security officer.
CISOs are "almost impossible to find these days," she said. "It's a bit like musical chairs; there's a finite number of CISOs and they tend to go from job to job in similar industries."
I'm not surprised, really. This is a tough job: never enough budget, and you're the one blamed when the inevitable attacks occur. And it's a tough skill set: enough technical ability to understand cybersecurity, and sufficient management skill to navigate senior management. I would never want a job like that in a million years.
Here's a tip: if you want to make your CISO happy, here's her holiday wish list.
"My first wish is for companies to thoroughly test software releases before release to customers...."
Can we get that gift wrapped?

Wednesday, December 10, 2014

Heartbleed 2.0? The Poodlebleed Bug

Poodlebleed is a vulnerability in the design of SSL version 3.0. POODLE is an acronym for Padding Oracle On Downgraded Legacy Encryption. The vulnerability allows an attacker to decrypt the plaintext of supposedly secure connections. The bug was discovered by Google Security Team researcher Bodo Möller in collaboration with Thai Duong and Krzysztof Kotowicz.

The bug lies in the Secure Sockets Layer (SSL) 3.0 cryptography protocol (SSLv3) and can be exploited to intercept data that is supposed to be encrypted between clients and servers.

Although SSL 3.0 is nearly 18 years old, many servers and web browsers still use it today. When web browsers fail to connect using a newer protocol version (TLS 1.0, 1.1, or 1.2), they may fall back to an SSL 3.0 connection. This is where the trouble begins.
Because a network attacker can cause connection failures, including the failure of TLS 1.0/1.1/1.2 connections, they can force the use of SSL 3.0 and then exploit the POODLE bug in order to decrypt secure content transmitted between a server and a browser. For the nitty-gritty details of the POODLE bug, please see the PDF announcement under Resources.
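To make the downgrade concrete: many 2014-era browsers retried a failed handshake at the next-lower protocol version, so an attacker who can break a few connections can walk a client all the way down to SSL 3.0. The following Python sketch shows that insecure fallback logic in miniature; it is illustrative only, not any browser's actual code, and it only tries SSL 3.0 if the local OpenSSL build still supports it:

    # downgrade_dance.py -- illustrative sketch of the insecure protocol fallback
    # that POODLE abuses; not any vendor's real implementation.
    import socket
    import ssl

    # Ordered from newest to oldest; PROTOCOL_SSLv3 only exists on builds
    # whose OpenSSL still ships SSL 3.0 support.
    FALLBACK_VERSIONS = [
        ("TLS 1.2", ssl.PROTOCOL_TLSv1_2),
        ("TLS 1.1", ssl.PROTOCOL_TLSv1_1),
        ("TLS 1.0", ssl.PROTOCOL_TLSv1),
        ("SSL 3.0", getattr(ssl, "PROTOCOL_SSLv3", None)),
    ]

    def connect_with_fallback(host, port=443):
        for name, proto in FALLBACK_VERSIONS:
            if proto is None:
                continue
            try:
                ctx = ssl.SSLContext(proto)
                with socket.create_connection((host, port), timeout=5) as sock:
                    with ctx.wrap_socket(sock, server_hostname=host):
                        return name  # handshake succeeded at this version
            except (ssl.SSLError, OSError):
                continue  # an attacker who forces this branch pushes the client downward
        return None

TLS_FALLBACK_SCSV, discussed below, closes exactly this hole: a client performing such a retry signals that fact in its cipher suite list, and an updated server aborts the handshake because it knows both sides could have negotiated something better.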

Clients and Browsers

For the best client-end browser security, it is recommended to disable SSL 3.0 completely. Disabling SSL 3.0 support, or CBC-mode ciphers with SSL 3.0, is sufficient to mitigate this issue; however, this can cause compatibility problems with legacy systems that support nothing newer. The recommended longer-term response is therefore to support TLS_FALLBACK_SCSV. Most major browsers will support TLS_FALLBACK_SCSV in the coming months. Until then, you can protect yourself by disabling SSL 3.0 support in your browser.
In Firefox, this can be done by going to about:config and setting security.tls.version.min to 1.

This browser test by Qualys, Inc. can provide further details on the TLS and SSL versions supported by your browser. If your browser currently supports SSL 3.0 or SSL 2.0 and does not support TLS_FALLBACK_SCSV, you are vulnerable to the POODLE bug and should either upgrade to Google Chrome or disable SSL 2.0/3.0 support. At the time of writing, only Google Chrome version 33.0.1750 (February 2014 build) and newer supports TLS_FALLBACK_SCSV; all other browsers are safest disabling SSL 3.0.

Servers

You can test whether your server is still running with SSL 3.0 enabled using an online scanner or a quick probe like the one sketched below. Although disabling SSL 3.0 may cause failed connections to your SSL service for a small portion of users running older browsers, it protects the large majority of users running modern browsers from being eavesdropped on while trying to access your services securely. Here is a great resource on disabling SSL 3.0 on a server running Apache or nginx.
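If you would rather not depend on an external scanner, a few lines of Python can probe whether a server still completes an SSLv3 handshake. This is a rough sketch under one assumption: the machine you run it from has a Python/OpenSSL build that still exposes PROTOCOL_SSLv3 (many newer builds do not). The host name is just an example.

    # sslv3_probe.py -- rough check for whether a server still accepts SSL 3.0.
    import socket
    import ssl
    import sys

    def accepts_sslv3(host, port=443):
        proto = getattr(ssl, "PROTOCOL_SSLv3", None)
        if proto is None:
            raise RuntimeError("local OpenSSL build has no SSLv3 support; use another scanner")
        ctx = ssl.SSLContext(proto)
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock):
                    return True   # handshake completed over SSL 3.0
        except (ssl.SSLError, OSError):
            return False          # server refused the SSLv3 handshake

    if __name__ == "__main__":
        host = sys.argv[1] if len(sys.argv) > 1 else "example.com"
        print(host, "accepts SSLv3:", accepts_sslv3(host))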

It is important to note that this is NOT a flaw in SSL certificates, their private keys, or their design but in the old SSLv3 protocol.  SSL Certificates are not affected and customers with certificates on servers supporting SSL 3.0 do not need to replace them.
It is believed to be less serious than the Heartbleed bug in OpenSSL, since the attacker needs a privileged position in the network to exploit it. The prevalence of hotspots and public Wi-Fi, however, makes this attack a real problem. This type of attack falls into the "man-in-the-middle" category.
Background
While SSL 3.0 was introduced in 1996, it is currently supported by nearly 95% of web browsers according to Netcraft's latest report. Many Transport Layer Security (TLS) clients downgrade their cryptography protocol to SSL 3.0 when working with legacy servers. According to Google, an attacker that controls the network between the computer and the server could interfere with the handshake process used to negotiate which protocol version the server can accept, using a "protocol downgrade dance." This forces computers to use the older SSL 3.0 protocol to protect data that is being sent. Attackers can then exploit the bug by carrying out a man-in-the-middle (MITM) attack to decrypt secure HTTP cookies, which could let them steal information or take control of the victim's online accounts. Although, at the time of writing, webmasters have been disabling SSL 3.0 and moving to TLSv1 and above at a rapid pace, a lot of work remains to be done. If Heartbleed taught us anything, it's that the largest companies act fast while many small companies drag their heels in patching critical vulnerabilities.
What Businesses Need to Do
In order to mitigate the bug there are a few courses of action:
  1. Check to see if your web servers are vulnerable using a free SSL testing tool.
  2. Use tools that support TLS_FALLBACK_SCSV, a mechanism that prevents attackers from forcing Web browsers to use SSL 3.0.
  3. Disable SSL 3.0 altogether, or disable SSL 3.0 CBC-mode ciphers.
  4. A cloud-based Web Application Firewall can help protect against this kind of vulnerability.
  5. Be leery of any spam messages from scammers trying to capitalize on uncertainty and a lack of technical knowledge.
Christoffer Olausson gives a few tips on how to fix this on Apache:
> SSLProtocol All -SSLv2 -SSLv3      <- in your Apache SSL configuration: removes SSLv2 and SSLv3
> apachectl configtest               <- test your configuration
> sudo service apache2 restart       <- restart the server (the service may be named httpd on Red Hat-based systems)
Google added that it will remove SSL 3.0 support from all of its products in the next few months. Mozilla also said it would disable SSL 3.0 in Firefox 34, which will be released at the end of November.


Monday, December 8, 2014

Eavesdrop on Any Conversation?

Interesting essay on the future of speech recognition, microphone miniaturization, and the future ubiquity of auditory surveillance.

Why HTTPS Encryption Fails

Interesting paper: "Security Collapse of the HTTPS Market." From the conclusion:
Recent breaches at CAs have exposed several systemic vulnerabilities and market failures inherent in the current HTTPS authentication model: the security of the entire ecosystem suffers if any of the hundreds of CAs is compromised (weakest link); browsers are unable to revoke trust in major CAs ("too big to fail"); CAs manage to conceal security incidents (information asymmetry); and ultimately customers and end users bear the liability and damages of security incidents (negative externalities).
Understanding the market and value chain for HTTPS is essential to address these systemic vulnerabilities. The market is highly concentrated, with very large price differences among suppliers and limited price competition. Paradoxically, the current vulnerabilities benefit rather than hurt the dominant CAs, because among others, they are too big to fail.

Uber Tracks Users: Our Data, Our Rights

In the Internet age, we have no choice but to entrust our data to private companies: e-mail providers, service providers, retailers, and so on.
We realize that this data is at risk from hackers. But there's another risk as well: the employees of the companies who are holding our data for us.
In the early years of Facebook, employees had a master password that enabled them to view anything they wanted in any account. NSA employees occasionally snoop on their friends and partners. The agency even has a name for it: LOVEINT. And well before the Internet, people with access to police or medical records occasionally used that power to look up either famous people or people they knew.
The latest company accused of allowing this sort of thing is Uber, the Internet car-ride service. The company is under investigation for spying on riders without their permission. Using an internal tool called the "God View," some Uber employees are able to see who is using the service and where they're going; the tool was used at least once in 2011 as a party trick to show off the service. A senior executive also suggested the company should hire people to dig up dirt on its critics, making its database of people's rides even more "useful."
None of us wants to be stalked -- whether it's from looking at our location data, our medical data, our emails and texts, or anything else -- by friends or strangers who have access due to their jobs. Unfortunately, there are few rules protecting us.
Government employees are prohibited from looking at our data, although none of the NSA LOVEINT creeps were ever prosecuted. The HIPAA law protects the privacy of our medical records, but we have nothing to protect most of our other information.
Your Facebook and Uber data are only protected by company culture. There's nothing in their license agreements that you clicked "agree" to but didn't read that prevents those companies from violating your privacy.
This needs to change. Corporate databases containing our data should be secured from everyone who doesn't need access for their work. Voyeurs who peek at our data without a legitimate reason should be punished.
There are audit technologies that can detect this sort of thing, and they should be required. As long as we have to give our data to companies and government agencies, we need assurances that our privacy will be protected.
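In practice, "audit technologies" are mostly mundane: log every read of a customer record along with who did it and why, then look for reads with no matching business reason. A toy sketch of that idea follows; the log columns, ticket check, and threshold are illustrative assumptions, not any particular company's system.

    # access_audit.py -- toy sketch of auditing employee access to customer records.
    import csv
    from collections import Counter

    def suspicious_accesses(audit_log_path, open_tickets, heavy_use_threshold=200):
        """Flag record lookups with no matching support ticket, plus employees
        with an unusually high number of lookups."""
        flagged = []
        per_employee = Counter()
        with open(audit_log_path) as f:
            # Assumed columns: timestamp, employee, customer_id, ticket_id
            for row in csv.DictReader(f):
                per_employee[row["employee"]] += 1
                if row["ticket_id"] not in open_tickets:
                    flagged.append((row["timestamp"], row["employee"], row["customer_id"]))
        heavy_users = [e for e, n in per_employee.items() if n > heavy_use_threshold]
        return flagged, heavy_users

The specific rule matters less than the fact that an access trail exists at all and that someone is obliged to review it.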

Thoughtful Cartoon

Funny.

Misuse Of Our Data

Last week, we learned about a striking piece of malware called Regin that has been infecting computer networks worldwide since 2008. It's more sophisticated than any known criminal malware, and everyone believes a government is behind it. No country has taken credit for Regin, but there's substantial evidence that it was built and operated by the United States.
This isn't the first government malware discovered. GhostNet is believed to be Chinese. Red October and Turla are believed to be Russian. The Mask is probably Spanish. Stuxnet and Flame are probably from the U.S. All these were discovered in the past five years, and named by researchers who inferred their creators from clues such as who the malware targeted.
I dislike the "cyberwar" metaphor for espionage and hacking, but there is a war of sorts going on in cyberspace. Countries are using these weapons against each other. This affects all of us not just because we might be citizens of one of these countries, but because we are all potentially collateral damage. Most of the varieties of malware listed above have been used against nongovernment targets, such as national infrastructure, corporations, and NGOs. Sometimes these attacks are accidental, but often they are deliberate.
For their defense, civilian networks must rely on commercial security products and services. We largely rely on antivirus products from companies such as Symantec, Kaspersky, and F-Secure. These products continuously scan our computers, looking for malware, deleting it, and alerting us as they find it. We expect these companies to act in our interests, and never deliberately fail to protect us from a known threat.
This is why the recent disclosure of Regin is so disquieting. The first public announcement of Regin was from Symantec, on November 23. The company said that its researchers had been studying it for about a year, and announced its existence because they knew of another source that was going to announce it. That source was a news site, the Intercept, which described Regin and its U.S. connections the following day. Both Kaspersky and F-Secure soon published their own findings. Both stated that they had been tracking Regin for years. All three of the antivirus companies were able to find samples of it in their files since 2008 or 2009.
So why did these companies all keep Regin a secret for so long? And why did they leave us vulnerable for all this time?
To get an answer, we have to disentangle two things. Near as we can tell, all the companies had added signatures for Regin to their detection database long before last month. The VirusTotal website has a signature for Regin as of 2011. Both Microsoft security and F-Secure started detecting and removing it that year as well. Symantec has protected its users against Regin since 2013, although it certainly added the VirusTotal signature in 2011.
Entirely separately and seemingly independently, all of these companies decided not to publicly discuss Regin's existence until after Symantec and the Intercept did so. Reasons given vary. Mikko Hypponen of F-Secure said that specific customers asked him not to discuss the malware that had been found on their networks. Fox IT, which was hired to remove Regin from the Belgian phone company Belgacom's network, didn't say anything about what it discovered because it "didn't want to interfere with NSA/GCHQ operations."
My guess is that none of the companies wanted to go public with an incomplete picture. Unlike criminal malware, government-grade malware can be hard to figure out. It's much more elusive and complicated. It is constantly updated. Regin is made up of multiple modules -- Fox IT called it "a full framework of a lot of species of malware" -- making it even harder to figure out what's going on. Regin has also been used sparingly, against only a select few targets, making it hard to get samples. When you make a press splash by identifying a piece of malware, you want to have the whole story. Apparently, no one felt they had that with Regin.
That is not a good enough excuse, though. As nation-state malware becomes more common, we will often lack the whole story. And as long as countries are battling it out in cyberspace, some of us will be targets and the rest of us might be unlucky enough to be sitting in the blast radius. Military-grade malware will continue to be elusive.
Right now, antivirus companies are probably sitting on incomplete stories about a dozen more varieties of government-grade malware. But they shouldn't. We want, and need, our antivirus companies to tell us everything they can about these threats as soon as they know them, and not wait until the release of a political story makes it impossible for them to remain silent.

Saturday, December 6, 2014

Sony Kept Thousands of Passwords in a Folder Named "Password"


It's been a rough week for Sony execs (million-dollar salaries notwithstanding). And things are only going to get worse. Which would almost be enough to make you feel bad for the poor schmucks in IT—that is, until you realize that they hid their most sensitive password data under the label "Passwords." Go ahead and slam your head against something hard. We'll wait.
The second trove of data snuck out sometime yesterday, and it didn't take long for Buzzfeed to stumble upon the Facebook, MySpace (an ancient form of Facebook), YouTube, and Twitter "usernames and passwords for major motion picture social accounts." Likely due to the fact that they were saved in a huge file called "Password." Which contained even more passwords called things like "Facebook login password." So they would know that that was the password. Because who needs encryption or security or common sense or even the vaguest attempt at grade-school level online safety.
Yep, "Password" should do just fine. Maybe stick a "1" on the end. That'll throw 'em off.
Of course and unfortunately, it shouldn't really be a surprise that humans are lazy and unimaginative and just generally the worst when it comes to protecting our precious online data. And yet—goddammit, Sony IT. You had one job. [Fusion, Buzzfeed]
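For contrast, even a small amount of effort beats a plaintext "Password" folder. Here is a minimal sketch of encrypting a credentials file at rest with the third-party Python cryptography package; the file names are placeholders, and a real team should be using a proper password manager or secrets vault rather than rolling their own.

    # encrypt_secrets.py -- minimal sketch: encrypt a credentials file at rest.
    # Requires the third-party package "cryptography" (pip install cryptography).
    from cryptography.fernet import Fernet

    # Generate a key once and keep it somewhere safer than next to the data,
    # e.g. an OS keychain or a separate access-controlled host.
    key = Fernet.generate_key()
    with open("secrets.key", "wb") as f:
        f.write(key)

    fernet = Fernet(key)
    with open("social_media_logins.txt", "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    with open("social_media_logins.enc", "wb") as f:
        f.write(ciphertext)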