
Friday, August 29, 2014

FBI Investigates 'Sophisticated' Cyber Attack On JP Morgan

The attack on JP Morgan reportedly resulted in the loss of “gigabytes of sensitive data” that could have involved customer and employee information. It is said to have been of a level of sophistication beyond ordinary criminals, leading to speculation of a state link. The FBI is thought to be investigating whether there is a connection to Russia. American-Russian relations continue to be fraught amid the crisis in Ukraine, with sanctions ramped up. Bruce66423 asks: "The quality of the attack, which appears to have led to 'gigabytes' of data being lost, is raising the prospect of a state being the source. The present culprit suggested is Russia... why the assumption it's not China — just because China isn't invading the Ukraine at the moment?" News of the attack is also at the New York Times, which notes: "Earlier this year, iSight Partners, a security firm in Dallas that provides intelligence on online threats, warned companies that they should be prepared for cyberattacks from Russia in retaliation for Western economic sanctions." But Adam Meyers, the head of threat intelligence at CrowdStrike, a security firm that works with banks, said that it would be “premature” to suggest the attacks were motivated by sanctions.

Wednesday, August 27, 2014

Security Flaws in Advanced Imaging Technology

Security researchers have finally gotten their hands on a Rapiscan backscatter full-body scanner. The results aren't very good.
Website with paper and images. News articles and commentary.
Note that these machines have been replaced in US airports with millimeter wave full-body scanners.

Physical Side-Channel Key-Extraction Attacks On PCs

A team of computer security experts at Tel Aviv University (Israel) has come up with a new, potentially much simpler method for stealing data from computers: just touch them. Literally.
There are different ways of attacking encryption systems. On one side, there are security vulnerabilities and weaknesses in the encryption algorithms themselves that make it possible to recover the cryptographic keys.
On the other side, there are flaws and weaknesses in people that make it easier than it should be to coerce them into giving up the keys. Neither approach is necessarily quick or easy, as both depend heavily on circumstances.
According to Eran Tromer, Daniel Genkin, and Itamar Pipman, computer security experts at Tel Aviv University, a simple electrical trick is enough for sophisticated attackers to extract cryptographic keys solely by touching the chassis of a computer.
Access to those keys would let an attacker forge the digital signatures that people rely on every day when authenticating, signing contracts, or, perhaps most importantly, using credit and debit cards online.
To attack a computer, all you need to do is wear a special digitizer wristband and touch an exposed metal part of the system. The wristband measures the tiny changes in ground electrical potential that can reveal even strong encryption keys, such as a 4,096-bit RSA key.
In fact, in some cases you don't even have to touch the system directly with your bare hands: encryption keys can also be intercepted from attached network and video cables. The researchers call this a side-channel attack.
"Our attacks use novel side channels and are based on the observation that the 'ground' electric potential in many computers fluctuates in a computation-dependent way," the researchers write in their paper [PDF]. "An attacker can measure this signal by touching exposed metal on the computer's chassis with a plain wire, or even with a bare hand. The signal can also be measured at the remote end of Ethernet, VGA or USB cables."
The researchers also note that this attack works better in hot weather, due to the lower resistance of sweaty fingers. The team will present their research in a talk titled Get Your Hands Off My Laptop: Physical Side-Channel Key-Extraction Attacks On PCs at the Workshop on Cryptographic Hardware and Embedded Systems 2014 (CHES 2014) in Korea, on September 23.
The actual attack can be performed quickly. According to the research, "despite the GHz-scale clock rate of the laptops and numerous noise sources, the full attacks require a few seconds of measurements using medium frequency signals (around 2 MHz), or one hour using low frequency signals (up to 40 kHz)."
The team could retrieve keys from multiple test machines running a popular open source encryption software called GnuPG, which implements the OpenPGP standard. The end results are mind-blowing, as the researchers write:
Using GnuPG as our study case, we can, on some machines:
  • distinguish between the spectral signatures of different RSA secret keys (signing or decryption), and
  • fully extract decryption keys, by measuring the laptop's chassis potential during decryption of a chosen ciphertext.
Information retrieval was best with high-end lab equipment, but the researchers also successfully executed the attack using a smartphone connected via its headphone port to the shielding of an Ethernet cable, which proved sufficient in some scenarios.
The good news is that you don't need to worry about overly grabby strangers stealing your data just yet: the technique was demonstrated primarily against the GnuPG encryption software, which already has a patch that limits the attack, and attackers must monitor the electrical fluctuations during the decryption process itself, which is not easy to arrange.
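The "spectral signature" idea can be illustrated with a toy sketch. This is not the researchers' code: the signals, frequencies, and key labels below are invented for illustration. The point is simply that different computations leak energy at different frequencies, and even a naive DFT is enough to tell two such signatures apart.

```python
import math

def dft_magnitudes(samples, n_bins=None):
    """Naive discrete Fourier transform magnitudes (slow, but fine for a toy)."""
    n = len(samples)
    n_bins = n_bins or n // 2
    mags = []
    for k in range(n_bins):
        re = sum(s * math.cos(-2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(-2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

def dominant_bin(samples):
    """Index of the strongest frequency bin, ignoring the DC component."""
    mags = dft_magnitudes(samples)
    mags[0] = 0.0
    return mags.index(max(mags))

# Two hypothetical "keys" whose computation leaks at different frequencies.
n = 256
signal_key_a = [math.sin(2 * math.pi * 10 * i / n) for i in range(n)]  # leaks at bin 10
signal_key_b = [math.sin(2 * math.pi * 25 * i / n) for i in range(n)]  # leaks at bin 25

print(dominant_bin(signal_key_a))  # 10
print(dominant_bin(signal_key_b))  # 25
```

In the real attack the "samples" would be chassis-potential measurements, and distinguishing keys is only the first step before full extraction via chosen ciphertexts.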


Tuesday, August 26, 2014

850 Billion NSA Surveillance Records are Searchable

The Intercept reported today on classified documents revealing that the NSA has built its own "Google-like" search engine to provide over 850 billion collected records directly to law enforcement agencies, including the FBI and the DEA. Reporter Ryan Gallagher explains, "The documents provide the first definitive evidence that the NSA has for years made massive amounts of surveillance data directly accessible to domestic law enforcement agencies." The search engine, called ICREACH, allows analysts to search an array of databases, some of which contain metadata collected on innocent American citizens, for the purposes of "foreign intelligence." However, questions have been raised over its potential for abuse in what is known as "parallel construction," a process in which agencies use surveillance resources in domestic investigations, and then later cover it up by creating a different evidence trail to use in court.

Recognizing Speech from Smartphone Gyroscopes

The gyroscopes are sensitive enough to pick up acoustic vibrations. It's crude, but it works. Paper. Wired article.
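Part of why the result is crude comes down to sampling arithmetic. Smartphone gyroscopes are typically sampled at around 200 Hz (an assumed figure here), so they can only represent frequencies up to 100 Hz directly; higher speech frequencies fold back (alias) into that band. A minimal sketch of the folding arithmetic:

```python
def alias_frequency(tone_hz, sample_rate_hz):
    """Frequency at which an undersampled tone appears after folding (aliasing)."""
    f = tone_hz % sample_rate_hz
    return min(f, sample_rate_hz - f)

# With a ~200 Hz gyroscope sample rate, tones above the 100 Hz Nyquist limit
# still show up, just at the wrong (folded) frequency.
print(alias_frequency(130.0, 200.0))  # 70.0
print(alias_frequency(250.0, 200.0))  # 50.0
```

The aliased energy still carries speaker- and word-dependent structure, which is what makes crude recognition possible at all.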

Obscurity at Site

The White House is refusing to release details about the site's security because it might help hackers. What this really means is that the security details would embarrass the White House.

Surveillance as search engine

The National Security Agency is secretly providing data to nearly two dozen U.S. government agencies with a “Google-like” search engine built to share more than 850 billion records about phone calls, emails, cellphone locations, and internet chats, according to classified documents obtained by The Intercept.
The documents provide the first definitive evidence that the NSA has for years made massive amounts of surveillance data directly accessible to domestic law enforcement agencies. Planning documents for ICREACH, as the search engine is called, cite the Federal Bureau of Investigation and the Drug Enforcement Administration as key participants.
ICREACH contains information on the private communications of foreigners and, it appears, millions of records on American citizens who have not been accused of any wrongdoing. Details about its existence are contained in the archive of materials provided to The Intercept by NSA whistleblower Edward Snowden.
Earlier revelations sourced to the Snowden documents have exposed a multitude of NSA programs for collecting large volumes of communications. The NSA has acknowledged that it shares some of its collected data with domestic agencies like the FBI, but details about the method and scope of its sharing have remained shrouded in secrecy.
ICREACH has been accessible to more than 1,000 analysts at 23 U.S. government agencies that perform intelligence work, according to a 2010 memo. A planning document from 2007 lists the DEA, FBI, Central Intelligence Agency, and the Defense Intelligence Agency as core members. Information shared through ICREACH can be used to track people’s movements, map out their networks of associates, help predict future actions, and potentially reveal religious affiliations or political beliefs.
The creation of ICREACH represented a landmark moment in the history of classified U.S. government surveillance, according to the NSA documents.
“The ICREACH team delivered the first-ever wholesale sharing of communications metadata within the U.S. Intelligence Community,” noted a top-secret memo dated December 2007. “This team began over two years ago with a basic concept compelled by the IC’s increasing need for communications metadata and NSA’s ability to collect, process and store vast amounts of communications metadata related to worldwide intelligence targets.”
The search tool was designed to be the largest system for internally sharing secret surveillance records in the United States, capable of handling two to five billion new records every day, including more than 30 different kinds of metadata on emails, phone calls, faxes, internet chats, and text messages, as well as location information collected from cellphones. Metadata reveals information about a communication—such as the “to” and “from” parts of an email, and the time and date it was sent, or the phone numbers someone called and when they called—but not the content of the message or audio of the call.
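A toy sketch can make the metadata distinction concrete. The records and field names below are entirely hypothetical and do not reflect any actual ICREACH schema; the point is that even with no message content, who contacted whom and when is queryable data.

```python
from datetime import datetime

# Hypothetical, minimal metadata records: no content, just who/when.
records = [
    {"type": "call",  "from": "+15550001", "to": "+15550002",
     "time": datetime(2010, 3, 1, 9, 30), "duration_s": 120},
    {"type": "email", "from": "alice@example.com", "to": "bob@example.com",
     "time": datetime(2010, 3, 2, 14, 0)},
    {"type": "call",  "from": "+15550003", "to": "+15550001",
     "time": datetime(2010, 3, 5, 18, 45), "duration_s": 30},
]

def query_selector(records, selector):
    """Return every record in which the selector appears as sender or recipient."""
    return [r for r in records if selector in (r["from"], r["to"])]

hits = query_selector(records, "+15550001")
print(len(hits))  # 2: the number appears as caller once and as recipient once
```

Chaining such queries (querying each contact of each hit) is how a "social network" of associates can be assembled from metadata alone.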
ICREACH does not appear to have a direct relationship to the large NSA database, previously reported by The Guardian, that stores information on millions of ordinary Americans’ phone calls under Section 215 of the Patriot Act. Unlike the 215 database, which is accessible to a small number of NSA employees and can be searched only in terrorism-related investigations, ICREACH grants access to a vast pool of data that can be mined by analysts from across the intelligence community for “foreign intelligence”—a vague term that is far broader than counterterrorism.
Data available through ICREACH appears to be primarily derived from surveillance of foreigners’ communications, and planning documents show that it draws on a variety of different sources of data maintained by the NSA. Though one 2010 internal paper clearly calls it “the ICREACH database,” a U.S. official familiar with the system disputed that, telling The Intercept that while “it enables the sharing of certain foreign intelligence metadata,” ICREACH is “not a repository [and] does not store events or records.” Instead, it appears to provide analysts with the ability to perform a one-stop search of information from a wide variety of separate databases.
In a statement to The Intercept, the Office of the Director of National Intelligence confirmed that the system shares data that is swept up by programs authorized under Executive Order 12333, a controversial Reagan-era presidential directive that underpins several NSA bulk surveillance operations that monitor communications overseas. The 12333 surveillance takes place with no court oversight and has received minimal Congressional scrutiny because it is targeted at foreign, not domestic, communication networks. But the broad scale of 12333 surveillance means that some Americans’ communications get caught in the dragnet as they transit international cables or satellites—and documents contained in the Snowden archive indicate that ICREACH taps into some of that data.
Legal experts told The Intercept they were shocked to learn about the scale of the ICREACH system and are concerned that law enforcement authorities might use it for domestic investigations that are not related to terrorism.
“To me, this is extremely troublesome,” said Elizabeth Goitein, co-director of the Liberty and National Security Program at the New York University School of Law’s Brennan Center for Justice. “The myth that metadata is just a bunch of numbers and is not as revealing as actual communications content was exploded long ago—this is a trove of incredibly sensitive information.”
Brian Owsley, a federal magistrate judge between 2005 and 2013, said he was alarmed that traditional law enforcement agencies such as the FBI and the DEA were among those with access to the NSA’s surveillance troves.
“This is not something that I think the government should be doing,” said Owsley, an assistant professor of law at Indiana Tech Law School. “Perhaps if information is useful in a specific case, they can get judicial authority to provide it to another agency. But there shouldn’t be this buddy-buddy system back-and-forth.”
Jeffrey Anchukaitis, an ODNI spokesman, declined to comment on a series of questions from The Intercept about the size and scope of ICREACH, but said that sharing information had become “a pillar of the post-9/11 intelligence community” as part of an effort to prevent valuable intelligence from being “stove-piped in any single office or agency.”
Using ICREACH to query the surveillance data, “analysts can develop vital intelligence leads without requiring access to raw intelligence collected by other IC [Intelligence Community] agencies,” Anchukaitis said. “In the case of NSA, access to raw signals intelligence is strictly limited to those with the training and authority to handle it appropriately. The highest priority of the intelligence community is to work within the constraints of law to collect, analyze and understand information related to potential threats to our national security.”

One-Stop Shopping

The mastermind behind ICREACH was recently retired NSA director Gen. Keith Alexander, who outlined his vision for the system in a classified 2006 letter to the then-Director of National Intelligence John Negroponte. The search tool, Alexander wrote, would “allow unprecedented volumes of communications metadata to be shared and analyzed,” opening up a “vast, rich source of information” for other agencies to exploit. By late 2007 the NSA reported to its employees that the system had gone live as a pilot program.
The NSA described ICREACH as a “one-stop shopping tool” for analyzing communications. The system would enable at least a 12-fold increase in the volume of metadata being shared between intelligence community agencies, the documents stated. Using ICREACH, the NSA planned to boost the amount of communications “events” it shared with other U.S. government agencies from 50 billion to more than 850 billion, bolstering an older top-secret data sharing system named CRISSCROSS/PROTON, which was launched in the 1990s and managed by the CIA.
To allow government agents to sift through the masses of records on ICREACH, engineers designed a simple “Google-like” search interface. This enabled analysts to run searches against particular “selectors” associated with a person of interest—such as an email address or phone number—and receive a page of results displaying, for instance, a list of phone calls made and received by a suspect over a month-long period. The documents suggest these results can be used to reveal the “social network” of the person of interest—in other words, the people they communicate with, such as friends, family, and other associates.
The purpose of ICREACH, projected initially to cost between $2.5 million and $4.5 million per year, was to allow government agents to comb through the NSA’s metadata troves to identify new leads for investigations, to predict potential future threats against the U.S., and to keep tabs on what the NSA calls “worldwide intelligence targets.”
However, the documents make clear that it is not only data about foreigners’ communications that are available on the system. Alexander’s memo states that “many millions of…minimized communications metadata records” would be available through ICREACH, a reference to the process of “minimization,” whereby identifying information—such as part of a phone number or email address—is removed so it is not visible to the analyst. NSA documents define minimization as “specific procedures to minimize the acquisition and retention [of] information concerning unconsenting U.S. persons”—making it a near certainty that ICREACH gives analysts access to millions of records about Americans. The “minimized” information can still be retained under NSA rules for up to five years and “unmasked” at any point during that period if it is ever deemed necessary for an investigation.
The Brennan Center’s Goitein said it appeared that with ICREACH, the government “drove a truck” through loopholes that allowed it to circumvent restrictions on retaining data about Americans. This raises a variety of legal and constitutional issues, according to Goitein, particularly if the data can be easily searched on a large scale by agencies like the FBI and DEA for their domestic investigations.
“The idea with minimization is that the government is basically supposed to pretend this information doesn’t exist, unless it falls under certain narrow categories,” Goitein said. “But functionally speaking, what we’re seeing here is that minimization means, ‘we’ll hold on to the data as long as we want to, and if we see anything that interests us then we can use it.’”
A key question, according to several experts consulted by The Intercept, is whether the FBI, DEA or other domestic agencies have used their access to ICREACH to secretly trigger investigations of Americans through a controversial process known as “parallel construction.”
Parallel construction involves law enforcement agents using information gleaned from covert surveillance, but later covering up their use of that data by creating a new evidence trail that excludes it. This hides the true origin of the investigation from defense lawyers and, on occasion, prosecutors and judges—which means the legality of the evidence that triggered the investigation cannot be challenged in court.
In practice, this could mean that a DEA agent identifies an individual he believes is involved in drug trafficking in the United States on the basis of information stored on ICREACH. The agent begins an investigation but pretends, in his records of the investigation, that the original tip did not come from the secret trove. Last year, Reuters first reported details of parallel construction based on NSA data, linking the practice to a unit known as the Special Operations Division, which Reuters said distributes tips from NSA intercepts and a DEA database known as DICE.
Tampa attorney James Felman, chair of the American Bar Association’s criminal justice section, told The Intercept that parallel construction is a “tremendously problematic” tactic because law enforcement agencies “must be honest with courts about where they are getting their information.” The ICREACH revelations, he said, “raise the question of whether parallel construction is present in more cases than we had thought. And if that’s true, it is deeply disturbing and disappointing.”
Anchukaitis, the ODNI spokesman, declined to say whether ICREACH has been used to aid domestic investigations, and he would not name all of the agencies with access to the data. “Access to information-sharing tools is restricted to users conducting foreign intelligence analysis who have the appropriate training to handle the data,” he said.
CIA headquarters in Langley, Virginia, 2001.


The roots of ICREACH can be traced back more than two decades.
In the early 1990s, the CIA and the DEA embarked on a secret initiative called Project CRISSCROSS. The agencies built a database system to analyze phone billing records and phone directories, in order to identify links between intelligence targets and other persons of interest. At first, CRISSCROSS was used in Latin America and was “extremely successful” at identifying narcotics-related suspects. It stored only five kinds of metadata on phone calls: date, time, duration, called number, and calling number, according to an NSA memo.
The program rapidly grew in size and scope. By 1999, the NSA, the Defense Intelligence Agency, and the FBI had gained access to CRISSCROSS and were contributing information to it. As CRISSCROSS continued to expand, it was supplemented with a system called PROTON that enabled analysts to store and examine additional types of data. These included unique codes used to identify individual cellphones, location data, text messages, passport and flight records, visa application information, as well as excerpts culled from CIA intelligence reports.
An NSA memo noted that PROTON could identify people based on whether they behaved in a “similar manner to a specific target.” The memo also said the system “identifies correspondents in common with two or more targets, identifies potential new phone numbers when a target switches phones, and identifies networks of organizations based on communications within the group.” In July 2006, the NSA estimated that it was storing 149 billion phone records on PROTON.
According to the NSA documents, PROTON was used to track down “High Value Individuals” in the United States and Iraq, investigate front companies, and discover information about foreign government operatives. CRISSCROSS enabled major narcotics arrests and was integral to the CIA’s rendition program during the Bush Administration, which involved abducting terror suspects and flying them to secret “black site” prisons where they were brutally interrogated and sometimes tortured. One NSA document on the system, dated from July 2005, noted that the use of communications metadata “has been a contribution to virtually every successful rendition of suspects and often, the deciding factor.”
However, the NSA came to view CRISSCROSS/PROTON as insufficient, in part due to the aging standard of its technology. The intelligence community was sensitive to criticism that it had failed to share information that could potentially have helped prevent the 9/11 attacks, and it had been strongly criticized for intelligence failures before the invasion of Iraq in 2003. For the NSA, it was time to build a new and more advanced system to radically increase metadata sharing.

A New Standard

In 2006, NSA director Alexander drafted his secret proposal to then-Director of National Intelligence Negroponte.
Alexander laid out his vision for what he described as a “communications metadata coalition” that would be led by the NSA. His idea was to build a sophisticated new tool that would grant other federal agencies access to “more than 50 existing NSA/CSS metadata fields contained in trillions of records” and handle “many millions” of new minimized records every day—indicating that a large number of Americans’ communications would be included.
The NSA’s contributions to the ICREACH system, Alexander wrote, “would dwarf the volume of NSA’s present contributions to PROTON, as well as the input of all other [intelligence community] contributors.”
Alexander explained in the memo that NSA was already collecting “vast amounts of communications metadata” and was preparing to share some of it on a system called GLOBALREACH with its counterparts in the so-called Five Eyes surveillance alliance: the United Kingdom, Australia, Canada, and New Zealand.
ICREACH, he proposed, could be designed like GLOBALREACH and accessible only to U.S. agencies in the intelligence community, or IC.
A top-secret PowerPoint presentation from May 2007 illustrated how ICREACH would work—revealing its “Google-like” search interface and showing how the NSA planned to link it to the DEA, DIA, CIA, and the FBI. Each agency would access and input data through a secret data “broker”—a sort of digital letterbox—linked to the central NSA system. ICREACH, according to the presentation, would also receive metadata from the Five Eyes allies.
The aim was not necessarily for ICREACH to completely replace CRISSCROSS/PROTON, but rather to complement it. The NSA planned to use the new system to perform more advanced kinds of surveillance—such as “pattern of life analysis,” which involves monitoring who individuals communicate with and the places they visit over a period of several months, in order to observe their habits and predict future behavior.
The NSA agreed to train other U.S. government agencies to use ICREACH. Intelligence analysts could be “certified” for access to the massive database if they required access in support of a given mission, worked as an analyst within the U.S. intelligence community, and had top-secret security clearance. (According to the latest government figures, there are more than 1.2 million government employees and contractors with top-secret clearance.)
In November 2006, according to the documents, the Director of National Intelligence approved the proposal. ICREACH was rolled out as a test program by late 2007. It’s not clear when it became fully operational, but a September 2010 NSA memo referred to it as the primary tool for sharing data in the intelligence community. “ICREACH has been identified by the Office of the Director of National Intelligence as the U.S. Intelligence Community’s standard architecture for sharing communications metadata,” the memo states, adding that it provides “telephony metadata events” from the NSA and its Five Eyes partners “to over 1000 analysts across 23 U.S. Intelligence Community agencies.” It does not name all of the 23 agencies, however.
The limitations placed on analysts authorized to sift through the vast data troves are not outlined in the Snowden files, with only scant references to oversight mechanisms. According to the documents, searches performed by analysts are subject to auditing by the agencies for which they work. The documents also say the NSA would conduct random audits of the system to check for any government agents abusing their access to the data. The Intercept asked the NSA and the ODNI whether any analysts had been found to have conducted improper searches, but the agencies declined to comment.
While the NSA initially estimated making upwards of 850 billion records available on ICREACH, the documents indicate that target could have been surpassed, and that the number of personnel accessing the system may have increased since the 2010 reference to more than 1,000 analysts. The intelligence community’s top-secret “Black Budget” for 2013, also obtained by Snowden, shows that the NSA recently sought new funding to upgrade ICREACH to “provide IC analysts with access to a wider set of shareable data.”
In December last year, a surveillance review group appointed by President Obama recommended that as a general rule “the government should not be permitted to collect and store all mass, undigested, non-public personal information about individuals to enable future queries and data-mining for foreign intelligence purposes.” It also recommended that any information about United States persons should be “purged upon detection unless it either has foreign intelligence value or is necessary to prevent serious harm to others.”
Peter Swire, one of the five members of the review panel, told The Intercept he could not comment on whether the group was briefed on specific programs such as ICREACH, but noted that the review group raised concerns that “the need to share had gone too far among multiple agencies.”

How PGP should not be

Matthew Green has a good post on what's wrong with PGP and what should be done about it.

Vulnerability in Matching Photographs to People

We have an error rate of about 15%:
Professor Mike Burton, Sixth Century Chair in Psychology at the University of Aberdeen said: "Psychologists identified around a decade ago that in general people are not very good at matching a person to an image on a security document.
"Familiar faces trigger special processes in our brain -- we would recognise a member of our family, a friend or a famous face within a crowd, in a multitude of guises, venues, angles or lighting conditions. But when it comes to identifying a stranger it's another story.
"The question we asked was does this fundamental brain process that occurs have any real importance for situations such as controlling passport issuing - and we found that it does."
The ability of Australian passport officers, for whom accurate face matching is central to their job and vital to border security, was tested in the latest study, which involved researchers from the Universities of Aberdeen, York and New South Wales Australia.
In one test, passport officers had to decide whether or not a photograph of an individual presented on their computer screen matched the face of a person standing in front of their desk.
It was found that on 15% of trials the officers decided that the photograph on their screen matched the face of the person standing in front of them, when in fact, the photograph showed an entirely different person.
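To put the 15% figure in perspective, here is a back-of-the-envelope sketch. The traffic volumes and the false-rejection rate below are invented for illustration; only the 15% false-acceptance rate comes from the study.

```python
# Hypothetical illustration: an officer checks 1,000 travellers a day,
# 10 of whom present someone else's photograph.
false_accept_rate = 0.15   # from the study: wrong photo judged a match
false_reject_rate = 0.06   # hypothetical rate, for illustration only
travellers, impostors = 1000, 10

expected_impostors_passed = impostors * false_accept_rate
expected_legit_rejected = (travellers - impostors) * false_reject_rate
print(expected_impostors_passed)  # 1.5 impostors waved through per day
print(expected_legit_rejected)    # 59.4 legitimate travellers flagged per day
```

The asymmetry is the operational problem: even a modest false-rejection rate buries the rare genuine impostor under a pile of false alarms.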

Saturday, August 23, 2014

Eager to know source code of FinSpy

FinSpy is, in simpler terms, a governmental IT intrusion and remote monitoring solution: surveillance software marketed worldwide to law enforcement agencies as a way to monitor criminals, but widely used by repressive governments to spy on human rights groups. It is currently used in around 25 countries, including the US.

What FinSpy Does
FinSpy is a surveillance tool that infects computers to capture screenshots, log keystrokes, record Skype conversations, and activate cameras and microphones. Gamma Group, a British company, makes the software and markets it to law enforcement agencies as a lawful way to monitor criminals.

How It Infects Internet Users:
The surveillance suite is installed after the target accepts installation of a fake update to commonly used software. Code which will install the malware has also been detected in emails. The software, which is designed to evade detection by anti-virus software, has versions which work on mobile phones of all major brands.
A security flaw in Apple’s iTunes allowed unauthorized third parties to use iTunes online update procedures to install unauthorized programs. Gamma International offered presentations to government security officials at security software trade shows where they described how to covertly install the FinFisher spy software on suspect’s computers using iTunes’ update procedures.

Firefox Masquerading
FinFisher is capable of masquerading as other more legitimate programs, such as Mozilla Firefox. On April 30, 2013, Mozilla announced that they had sent Gamma a cease-and-desist letter for trademark infringement. Gamma had created an espionage program that was entitled firefox.exe and even provided a version number and trademark claims to appear to be legitimate Firefox software.
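Filename and version strings are trivially forgeable, which is why a renamed firefox.exe can pass casual inspection. A hedged defensive sketch: compare a binary's SHA-256 digest against a known-good value published by the vendor. The digest below is a placeholder, not Mozilla's real one, and the file is a stand-in created for the example.

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """SHA-256 of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# A file named firefox.exe proves nothing; only its hash can be checked
# against a digest obtained from the vendor (placeholder value here).
with tempfile.NamedTemporaryFile(suffix="firefox.exe", delete=False) as f:
    f.write(b"not actually firefox")
    path = f.name

known_good = "placeholder-digest-from-vendor"
print(sha256_of(path) == known_good)  # False: the name alone is no guarantee
os.unlink(path)
```

Code-signing (as with Mozilla's signed releases) automates the same idea, which is exactly why Gamma's trademark-infringing version numbers and branding were needed to fool a human rather than a verifier.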

Martin J. Muench, a Gamma Group managing director, told The New York Times that FinSpy was used mostly “against pedophiles, terrorists, organized crime, kidnapping and human trafficking.” He declined to disclose which countries had bought the software.
But security researchers say FinSpy is used by governments around the world for broader purposes. Last year, Citizen Lab researchers found that the government in Bahrain had used FinSpy to target activists in that country.

Still interested in exploring the code? Here it is on GitHub: debug it yourself and explore what it does.

Discovery of Bugs in Tor

We've known for a while that NSA specifically targets Tor, because they want to disrupt one of the last remaining communication methods they aren't able to tap or demand access to. However, not everybody at the NSA is on board with this strategy. Tor developer Andrew Lewman says even as flaws in Tor are rooted out by the NSA and British counterpart GCHQ, other agents from the two organizations leak those flaws directly to the developers, so they can be fixed quickly. He said, "You have to think about the type of people who would be able to do this and have the expertise and time to read Tor source code from scratch for hours, for weeks, for months, and find and elucidate these super-subtle bugs or other things that they probably don't get to see in most commercial software." Lewman estimates the Tor Project receives these reports on a monthly basis. He also spoke about how a growing amount of users will affect Tor. He suggests a massive company like Google or Facebook will eventually have to take up the task of making Tor scale up to millions of users.

Article from onion


Friday, August 22, 2014

How Hackers Disguise Exfiltrated Data

 There's an interesting article on a data exfiltration technique.
What was unique about the attackers was how they disguised traffic between the malware and command-and-control servers using Google Developers and the public Domain Name System (DNS) service of Hurricane Electric, based in Fremont, Calif.
In both cases, the services were used as a kind of switching station to redirect traffic that appeared to be headed toward legitimate domains.
The malware disguised its traffic by including forged HTTP headers of legitimate domains. FireEye identified 21 legitimate domain names used by the attackers.
In addition, the attackers signed the Kaba malware with a legitimate certificate from a group listed as the "Police Mutual Aid Association" and with an expired certificate from an organization called "MOCOMSYS INC."
In the case of Google Developers, the attackers used the service to host code that decoded the malware traffic to determine the IP address of the real destination and redirect the traffic to that location.
Google Developers, formerly called Google Code, is the search engine's website for software development tools, APIs, and documentation on working with Google developer products. Developers can also use the site to share code.
With Hurricane Electric, the attackers took advantage of the fact that its domain name servers were configured so that anyone could register for a free account with the company's hosted DNS service.
The service allowed anyone to register a DNS zone, which is a distinct, contiguous portion of the domain name space in the DNS. The registrant could then create A records for the zone and point them to any IP address.
Honestly, this looks like a government exfiltration technique, although it could be evidence that the criminals are getting even more sophisticated.
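FireEye's detection of this technique hinges on a mismatch between the domain a flow claims in its HTTP headers and the IP address it actually talks to. Here is a minimal sketch of that check in Ruby; the domain-to-IP table and addresses are invented for illustration, and a real detector would use live DNS data rather than a hard-coded list.

```ruby
# Expected IPs for domains the malware likes to impersonate (hypothetical).
LEGIT_IPS = { 'www.example-cdn.com' => ['192.0.2.10', '192.0.2.11'] }.freeze

# Pull out the domain the HTTP request claims to be for.
def claimed_host(http_bytes)
  http_bytes[/^Host:\s*(\S+)/i, 1]
end

# Flag traffic whose forged Host header names a known domain while the
# connection actually goes somewhere else.
def forged_header?(http_bytes, dest_ip)
  host = claimed_host(http_bytes)
  !host.nil? && LEGIT_IPS.key?(host) && !LEGIT_IPS[host].include?(dest_ip)
end

req = "GET / HTTP/1.1\r\nHost: www.example-cdn.com\r\n\r\n"
puts forged_header?(req, '203.0.113.7')   # => true (claims a CDN, goes elsewhere)
```

The same comparison, run against a feed of resolved DNS answers instead of a static table, is essentially what lets a defender spot this "switching station" trick.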

Personal Data Protected By Reference


Thursday, August 21, 2014

OS Still Required In a Containerized World

With the rise of Docker containers as an alternative for deploying complex server-based applications, one might wonder, does the operating system even matter anymore? Certainly the question gets asked periodically. Gordon Haff makes the argument that the operating system is still very much alive and kicking, and that a hardened, tuned, reliable operating system is just as important to the success of applications as it was in the pre-container data center.

Hacking Traffic Lights to Stay Green Forever

Hacking Internet of Things (IoT) devices has become common practice for cyber criminals, but messing with traffic lights is crazier still.

Hacking scenes in Hollywood movies have long entertained the technology industry; we've seen traffic lights hacked in Die Hard and The Italian Job. But such movies also inspire hackers to attempt similar attacks in real life.

Security researchers at the University of Michigan have not only hacked traffic light signals in real life, but also claim that doing so is shockingly easy for anyone with a laptop and the right kind of radio. Compared with the traffic light hacks in the movies, the reality is far easier.
In a paper published this month, the researchers describe how a series of major security vulnerabilities in traffic light systems allowed them to quickly seize control of a network of at least 100 traffic signals in an unnamed Michigan city from a single point of access.

The researchers obtained permission from a local road agency before performing the hack, but they did not disclose exactly where in Michigan the research took place.
“Our attacks show that an adversary can control traffic infrastructure to cause disruption, degrade safety, or gain an unfair advantage,” the paper explains.
The team, led by University of Michigan computer scientist J. Alex Halderman, said that the networked traffic systems are left vulnerable to three major weaknesses:
  • unencrypted radio signals,
  • the use of factory-default usernames and passwords, and
  • a debugging port that is easy to attack
This left the network accessible to everyone from cyber criminals to young hackers.
“The vulnerabilities we discover in the infrastructure are not a fault of any one device or design choice, but rather show a systemic lack of security consciousness,” the researchers wrote.
In an effort to save on installation costs and increase flexibility, the traffic light system makes use of wireless radio signals rather than dedicated physical networking links for its communication infrastructure - this hole was exploited by the researchers. Surprisingly, more than 40 states currently use such systems to keep traffic flowing as efficiently as possible.
“The safety critical nature of traffic infrastructure requires that it be secure against computer-based attacks, but this is not always the case,” the team said. “We investigate a networked traffic signal system currently deployed in the United States and discover a number of security flaws that exist due to systemic failures by the designers. We leveraged these flaws to create attacks which gain control of the system, and we successfully demonstrate them on the deployment.”
The traffic light systems use a combination of 5.8GHz and 900MHz radio signals, depending on the conditions at each intersection, for wireless communication in point-to-point or point-to-multipoint configurations. The 900MHz links use "a proprietary protocol with frequency hopping spread-spectrum (FHSS)," but the 5.8GHz version of the proprietary protocol isn't terribly different from 802.11n.

The researchers say that anyone with a laptop and a wireless card operating on the same frequency as the wirelessly networked traffic lights — in this case, 5.8 gigahertz — could access the entire unencrypted network.

After gaining access, the next step was to communicate with one of the controllers on the target network. This proved easy, because the system's control boxes run VxWorks 5.5, a version that by default is built from source with a debug port left accessible for testing.
“By sniffing packets sent between the controller and this program, we discovered that communication to the controller is not encrypted, requires no authentication, and is replayable. Using this information, we were then able to reverse engineer parts of the communication structure,” the paper reads.
“Various command packets only differ in the last byte, allowing an attacker to easily determine remaining commands once one has been discovered. We created a program that allows a user to activate any button on the controller and then displays the results to the user. We also created a library of commands which enable scriptable attacks. We tested this code in the field and were able to access the controller remotely.”
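The observation above, that valid command packets differ only in their final byte, is exactly what makes the command space trivial to map once a single packet has been captured. A toy illustration in Ruby follows; the packet layout and header bytes are invented for the example and are not the real controller protocol.

```ruby
# Hypothetical fixed command prefix (header + opcode family).
PREFIX = [0xAA, 0x01, 0x42].freeze

def command_packet(code)
  (PREFIX + [code]).pack('C*')   # only the final byte varies per command
end

# Having captured one valid packet, the rest of the command space can be
# enumerated by sweeping that single byte:
candidates = (0x00..0xFF).map { |code| command_packet(code) }
puts candidates.size   # => 256
```

With no encryption and no authentication on the link, each candidate can simply be replayed at the controller to see what it does, which is effectively what the researchers' "library of commands" automated.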
This debug port allowed the researchers to turn all lights red or alter the timing of neighboring intersections — for example, to make sure someone hit all green lights on a given route.

More worrying is an attacker's ability to perform a denial-of-service (DoS) attack on controlled intersections: by hitting each intersection's malfunction management unit with invalid configurations, an attacker can put the lights into a failure mode.

Finally, the team called on manufacturers and operators to improve the security of traffic infrastructure. It recommended that traffic-system administrators stop using default usernames and passwords, and stop broadcasting communications unencrypted for “casual observers and curious teenagers” to see.
"While traffic control systems may be built to fail into a safe state, we have shown that they are not safe from attacks by a determined adversary," the paper concluded.
They also warned that devices like voting machines and even connected cars could suffer similar attacks.

Wednesday, August 20, 2014

What Would Chinese Hackers Do With US Hospital Patient Data?

In a follow-up to yesterday's story about the Chinese hackers who stole hospital data of 4.5 million patients, IDG News Service's Martyn Williams set out to learn why the data, which didn't include credit card information, was so valuable. The answer is depressingly simple: people without health insurance can potentially get treatment by using medical data of one of the hacking victims. John Halamka, chief information officer of the Beth Israel Deaconess Medical Center and chairman of the New England Healthcare Exchange Network, said a medical record can be worth between $50 and $250 to the right customer — many times more than the amount typically paid for a credit card number, or the cents paid for a user name and password. "If I am one of the 50 million Americans who are uninsured ... and I need a million-dollar heart transplant, for $250 I can get a complete medical record including insurance company details," he said.

Does Xiaomi Serve as the Apple of the East?

Looks like Chinese device maker, Xiaomi, is taking its "Apple of the East" tag too literally. First, their CEO brazenly copies Steve Jobs' signature look, sitting cross-legged on the floor. And now, Xiaomi's latest version of Android shamelessly rips off iOS 7. MIUI 6, which is Xiaomi's upcoming edition of Android for its latest phones and tablets, looks almost exactly like Apple's operating system for iPhones, iPads and the iPod Touch. It features the same bright color palette and a flat design. Heck, it even does away with Google's "app drawer" and puts all apps on your home screen. It's like the CEO handed iPhones to the design team and barked.

Tuesday, August 19, 2014

Behind al Qaeda Encryption Software

The web intelligence firm Recorded Future has posted two stories about how al Qaeda is using new encryption software in response to the Snowden disclosures. NPR picked up the story a week later.
Former NSA General Counsel Stewart Baker uses this as evidence that Snowden has harmed America. Glenn Greenwald calls this "CIA talking points" and shows that al Qaeda was using encryption well before Snowden. Both quote me heavily, Baker casting me as somehow disingenuous on this topic.
Baker is conflating my stating of two cryptography truisms. The first is that cryptography is hard, and you're much better off using well-tested public algorithms than trying to roll your own. The second is that cryptographic implementation is hard, and you're much better off using well-tested open-source encryption software than you are trying to roll your own. Admittedly, they're very similar, and sometimes I'm not as precise as I should be when talking to reporters.
This is what I wrote in May:
I think this will help US intelligence efforts. Cryptography is hard, and the odds that a home-brew encryption product is better than a well-studied open-source tool is slight. Last fall, Matt Blaze said to me that he thought that the Snowden documents will usher in a new dark age of cryptography, as people abandon good algorithms and software for snake oil of their own devising. My guess is that this is an example of that.
Note the phrase "good algorithms and software." My intention was to invoke both truisms in the same sentence. That paragraph is true if al Qaeda is rolling their own encryption algorithms, as Recorded Future reported in May. And it remains true if al Qaeda is using algorithms like my own Twofish and rolling their own software, as Recorded Future reported earlier this month. Everything we know about how the NSA breaks cryptography is that they attack the implementations far more successfully than the algorithms.
My guess is that in this case they don't even bother with the encryption software; they just attack the users' computers. There's nothing that screams "hack me" more than using specially designed al Qaeda encryption software. There's probably a QUANTUMINSERT attack and FOXACID exploit already set on automatic fire.
I don't want to get into an argument about whether al Qaeda is altering its security in response to the Snowden documents. Its members would be idiots if they did not, but it's also clear that they were designing their own cryptographic software long before Snowden. My guess is that the smart ones are using public tools like OTR and PGP and the paranoid dumb ones are using their own stuff, and that the split was the same both pre- and post-Snowden.
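To make the first truism concrete: using a vetted implementation usually means a few library calls rather than thousands of lines of home-brew cipher code. Below is a minimal sketch using Ruby's OpenSSL bindings with AES-256-GCM, an authenticated mode; the helper names are my own.

```ruby
require 'openssl'
require 'securerandom'

# Encrypt with a well-studied implementation (OpenSSL's AES-256-GCM)
# instead of rolling our own cipher.
def encrypt(key, plaintext)
  cipher = OpenSSL::Cipher.new('aes-256-gcm').encrypt
  cipher.key = key
  iv = cipher.random_iv                      # fresh nonce for every message
  ciphertext = cipher.update(plaintext) + cipher.final
  [iv, cipher.auth_tag, ciphertext]          # tag authenticates the ciphertext
end

def decrypt(key, iv, tag, ciphertext)
  cipher = OpenSSL::Cipher.new('aes-256-gcm').decrypt
  cipher.key = key
  cipher.iv = iv
  cipher.auth_tag = tag                      # tampering raises CipherError
  cipher.update(ciphertext) + cipher.final
end

key = SecureRandom.random_bytes(32)
iv, tag, ct = encrypt(key, 'attack at dawn')
puts decrypt(key, iv, tag, ct)               # => attack at dawn
```

Everything security-critical here (the mode, the nonce handling, the authentication tag) is delegated to code that has been studied for years, which is the whole point of both truisms.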

Monday, August 18, 2014

Oops! NSA Behind 2012 Internet Blackout in Syria

Before getting into the story, let's look back two years at what exactly went wrong when Syria suddenly disconnected from the rest of the world, resulting in a three-day Internet blackout in 2012. The question at the time: was the Syrian government behind the blackout?

The blackout took place during a period of intense civil war, so it was widely assumed to be the Syrian government's doing. But the government stood its ground and insisted it had not disabled the Internet; the outage, it said, was caused by terrorists cutting a cable to emphasize their existence. When investigated, though, a terrorist attack seemed unlikely.

So the question came up again: who exactly was responsible for the Syrian Internet blackout? The answer, it turns out, was the NSA.

The National Security Agency (NSA) has enormous reach over the confidentiality and privacy of countries and individuals alike. The agency can read almost anyone's data, tap into major fiber networks, and single out nearly anyone, at any place and at any time.

The latest revelation from whistle-blower Edward Snowden unmasks yet another agency secret: the 2012 incident that took Syria's Internet status from “online” to “offline” was an NSA blunder.

In an interview with journalist James Bamford, published in Wired, Snowden said that the NSA's elite hacking unit, known as Tailored Access Operations (TAO), inadvertently wiped most of Syria off the Internet map while attempting to infiltrate the country's traffic.

The TAO unit had reportedly been trying to install malware by exploiting a vulnerability in a router at Syria's main ISP, which would have let the agency redirect traffic from the central router to systems tapped by the NSA's TURMOIL packet capture system and XKEYSCORE packet processing system.

Instead, the plan backfired: the unit accidentally bricked the router, the very equipment it was trying to tap, and took the Syrian Internet down with it. Snowden describes the instant as an “oh shit” moment at the TAO center, as the NSA feared the Syrian government would work out what had happened.

The incident was laced with irony: while other parts of the American government were trying to keep Syria connected, TAO was scrambling to repair the damage it had caused and cover its tracks, to no avail.

“Fortunately for the NSA, the Syrians were apparently more focused on restoring the nation’s Internet than on tracking down the cause of the outage,” Bamford wrote. Snowden told him that someone joked, “If we get caught, we can always point the finger at Israel.”

It had been widely assumed that the blackout was caused by one of the opposing parties within Syria, whether the government or the rebels, since similar outages had happened before. The Syrian government, for its part, kept blaming “terrorists” for the outage.

Snowden's admission is an embarrassment for US intelligence, but it also adds the NSA to the list of suspects people will consider the next time such an outage occurs.


Sunday, August 17, 2014

Leaked Documents: GCHQ Made Port-Scanning Entire Countries a Standard Spy Tool

 Since the early days of TCP, port scanning has been used by computer saboteurs to locate vulnerable systems. In a new set of top secret documents seen by Heise, it is revealed that in 2009, the British spy agency GCHQ made port scans a "standard tool" to be applied against entire nations. Twenty-seven countries are listed as targets of the HACIENDA program in the presentation, which comes with a promotional offer: readers desiring to do reconnaissance against another country need simply send an e-mail. Also from the article: The list of targeted services includes ubiquitous public services such as HTTP and FTP, as well as common administrative protocols such as SSH (Secure SHell protocol – used for remote access to systems) and SNMP (Simple Network Management Protocol – used for network administration). Given that in the meantime, port scanning tools like Zmap have been developed which allow anyone to do comprehensive scans, it is not the technology used that is shocking, but rather the gargantuan scale and pervasiveness of the operation.
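Tools like Zmap industrialize what is, at its core, a very small loop. Here is a minimal TCP connect scan in Ruby; the host and port list are placeholders, and you should only ever scan machines you are authorized to probe.

```ruby
require 'socket'
require 'timeout'

# Return the subset of ports where a TCP handshake completes, i.e. where
# something is listening.
def open_ports(host, ports, timeout_s = 0.5)
  ports.select do |port|
    begin
      Timeout.timeout(timeout_s) { TCPSocket.new(host, port).close; true }
    rescue StandardError
      false   # refused, unreachable, or timed out: treat as closed
    end
  end
end

puts open_ports('127.0.0.1', [22, 80, 443]).inspect
```

HACIENDA's novelty was not this loop but running it, at state scale, against every host in entire countries and cataloguing the results for later exploitation.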

Break Down the Great Firewall of China

The FOSS project Lantern is having great success in unblocking the internet for many users in oppressive regimes, like China and Iran. Much like Tor and BitTorrent, Lantern is using peer-to-peer networking to overcome firewalls, but with the additional security of a trusted network of friends. "If you download Lantern in an uncensored region, you can connect with someone in a censored region, who can then access whatever content they want through you. What makes the system so unique is that it operates on the basis of trust. ... Through a process called consistent routing, the amount of information any single Lantern user can learn about other users is limited to a small subset, making infiltration significantly more difficult." The network of peers is growing, but we need more friends in uncensored countries to join us.
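The "consistent routing" idea can be sketched in a few lines: hash each user together with each peer and give the user only the k peers that rank lowest, so every account sees a small, stable subset of the network. Everything below (the hash choice, k, the peer names) is an illustrative guess at the mechanism, not Lantern's actual code.

```ruby
require 'digest'

# Deterministic per-user peer selection: the same user always gets the same
# few peers, so a compromised account can only ever learn that small subset.
def peer_subset(user_id, peers, k = 3)
  peers.sort_by { |peer| Digest::SHA256.hexdigest("#{user_id}:#{peer}") }.first(k)
end

peers = (1..100).map { |i| "peer-#{i}" }
puts peer_subset('alice@example.org', peers).inspect
```

Because the ranking is a pure function of (user, peer), an infiltrator who creates an account learns only its own k peers, and creating more accounts reveals the network only very slowly.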

Saturday, August 16, 2014

The Billion-Dollar Website

The Government Accountability Office has investigated the cost blowouts in how the Centers for Medicare & Medicaid Services (CMS) handled the HealthCare.gov project. It has released a 60-page report entitled Ineffective Planning and Oversight Practices Underscore the Need for Improved Contract Management, along with a five-page summary. The key takeaway messages are:
  • CMS undertook the development of HealthCare.gov and its related systems without effective planning or oversight practices...
  • [The task] was a complex effort with compressed time frames. To be expedient, CMS issued task orders ... when key technical requirements were unknown...
  • CMS identified major performance issues ... but took only limited steps to hold the contractor accountable.
  • CMS awarded a new contract to another firm [and the new contract's cost has doubled] due to changes such as new requirements and other enhancements...

Facebook Scammers Exploit Robin Williams' Death

Scammers spare no incident in targeting as many victims as possible, and this time they are exploiting the tragic death of comic actor Robin Williams, offering fake Facebook videos that claim to show a goodbye message Williams recorded before his death.
According to Symantec, this fake Facebook post, which you may see on your wall shared by your Facebook friends, was created by scammers looking to profit from the actor’s death. The bogus post claims to be a goodbye video of Robin Williams making his last phone call before committing suicide earlier this week.

Scammers and cyber criminals often use major headline news stories to lure in victims. This scam is especially convincing because the post claims to come from the well-known and reputable BBC News website.
“There is no video. Users that click on the link to the supposed video are taken to a fake BBC News website. As with many social scams, users are required to perform actions before they can view the content. In this case, users are instructed to share the video on Facebook before watching,” Symantec security response manager Satnam Narang said in a blog post.
Once they click the link, users are asked to share the fake video link and either fill out a survey or install a fake Facebook media plugin before they can watch the video.
Every completed survey and file download generates revenue for the scammers. According to Symantec, the scam appeared within 48 hours of the news of Williams’ death, and the video has been shared over 24 million times on Facebook.
Scammers have used a simple social engineering trick – the technical term for manipulating people into clicking malicious links – to reach a large number of Facebook users.
“Over the years, scammers have used both real and fake celebrity deaths as a way to convince users to click on links and perform actions. From Amy Winehouse and Paul Walker to the fake deaths of Miley Cyrus and Will Smith, scammers are opportunistic and always looking for ways to capitalize,” Narang said.
One example is the incident in which former “Saturday Night Live” and “30 Rock” star Tracy Morgan, critically injured in a six-vehicle accident on the New Jersey Turnpike, was falsely declared dead by scammers.
Symantec advised Internet users to be “vigilant and skeptical” when reading sensational stories on social media websites. It has also alerted Facebook to the scam campaign, and the company is taking steps to block it.
The Better Business Bureau (BBB) has also issued a warning about a similar scam. “Basically someone clicks on a link to view information or video about Williams, but instead it tells you to download a player, which is really a virus,” it said.
BBB also suggested some useful steps in order to protect yourself from scams shared through email and social media:
  • Don’t take the bait. Stay away from promotions of ‘exclusive,’ ‘shocking’ or ‘sensational’ footage. If it sounds too outlandish to be true, it is probably a scam.
  • Hover over a link to see its true destination. Before you click, mouse over the link to see where it will take you. Don’t click on links leading to unfamiliar websites.
  • Don’t trust your friends online. It might not actually be your friends who are liking or sharing scam links to photos. Their account may have been hacked and scammers could be using another tactic called clickjacking. Clickjacking is a technique that scammers use to trick you into clicking on social media links that you would not usually click on.

Why Should We Use the Ruby on Rails Framework?

Let's talk about why we should use Ruby on Rails when dozens of languages and frameworks are available.

1. Ruby on Rails provides a consistent approach to building web applications. If you have worked with other web development languages or frameworks, you know how tough the process can be: you typically have to survey options and consult experts to choose the software components that solve the common architectural problems of persistence, build scripts, testing, logging, database setup, application configuration, web-tier components, and the rest of the software life cycle. With Ruby on Rails those decisions are already made for you, so you can spend more time understanding the business problem and quickly build a working prototype. Rails makes you productive in minutes, not weeks or months.

2. Ruby on Rails follows a convention-over-configuration philosophy. This applies at every layer of the architecture, with the biggest productivity gains in the relationship between the model and the database (object-relational mapping). Once a developer understands the rules and constraints, Rails' MVC (Model-View-Controller) wiring connects the view to the controller and model, and the model to the database, with no extra scripts or tools to manage.
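The convention-over-configuration idea in point 2 comes down to deriving names by rule instead of by config file. A toy sketch follows; the naive pluralizer stands in for Rails' Inflector, which also handles irregular plurals.

```ruby
# Name-by-convention: model "Invoice" implies table "invoices" and
# controller "InvoicesController", with no mapping file anywhere.
def table_name(model)
  "#{model.downcase}s"        # deliberately naive pluralization
end

def controller_name(model)
  "#{model}sController"
end

puts table_name('Invoice')        # => "invoices"
puts controller_name('Invoice')   # => "InvoicesController"
```

Because every model follows the same rule, the framework can find the right table and controller without being told, which is where much of the "productive in minutes" effect comes from.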

3. Ruby is a fully featured object-oriented language, and it also offers mix-in modules, which inject independent code into your classes. Ruby is extremely dynamic: code can be generated and executed on the fly, new classes and methods can be defined at runtime, and operators can be overloaded. You can even reopen, redefine, or remove existing classes and methods at runtime.
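The mix-in and open-class features described in point 3 can be shown in a few lines of plain Ruby; the classes here are invented for illustration.

```ruby
# A mix-in: the module's methods are injected into any including class.
module Auditable
  def audit_tag
    "#{self.class.name.downcase}-audit"
  end
end

class Invoice
  include Auditable
end

# Reopening the same class later adds behavior at runtime.
class Invoice
  def total
    42
  end
end

inv = Invoice.new
puts inv.audit_tag   # => "invoice-audit"
puts inv.total       # => 42
```

Rails leans on exactly these mechanisms: ActiveRecord mixes persistence behavior into your models, and framework extensions routinely reopen core classes.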

4. Unlike many other web languages and frameworks, Rails has built-in support for templating, JSON/XML documents, and email delivery.

5. Rails comes with ActiveRecord, associations, and ActiveModel (object-relational mapping), which together provide a seamless persistence interface to the database, including transactions, inheritance, validation, scopes, and caching.

6. Rails supports a variety of web technologies, including JavaScript libraries such as jQuery, REST, embedded Ruby, HAML, and CoffeeScript, with separate configuration for development, test, and production environments. If you are planning an application in the Web 2.0 space, Rails provides a rich abstracted interface for implementing AJAX operations.

7. Most importantly, Rails gives you the flexibility to run database migrations and rollbacks between environments and throughout development until the project matures. You write your update and rollback steps in Ruby; the framework tracks the current schema version and can move forward or backward to any database version.
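The forward-and-backward migration mechanics in point 7 boil down to a pair of up/down steps per schema version. This toy runner is my own sketch of the idea, not ActiveRecord's API; the schema is just a hash.

```ruby
# Each version knows how to apply itself (up) and undo itself (down).
MIGRATIONS = {
  1 => { up:   ->(s) { s[:users] = [:id, :name] },
         down: ->(s) { s.delete(:users) } },
  2 => { up:   ->(s) { s[:users] << :email },
         down: ->(s) { s[:users].delete(:email) } }
}.freeze

# Walk the schema from one version to another, in either direction.
def migrate(schema, from, to)
  if to > from
    (from + 1).upto(to) { |v| MIGRATIONS[v][:up].call(schema) }    # forward
  else
    from.downto(to + 1) { |v| MIGRATIONS[v][:down].call(schema) }  # rollback
  end
  schema
end

p migrate({}, 0, 2)   # apply versions 1 and 2
```

Rails does the same walk against a real database, recording the current version in a schema_migrations table so it always knows where it stands.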

8. Getting started with Rails is easy, as generators and migrations propel you along. Rails' DRY (Don't Repeat Yourself) principle encourages developers to write less code by using Ruby's metaprogramming features. Less code means higher productivity, fewer bugs, and lower complexity.

Thursday, August 14, 2014

Automatically Scanning for Highly Stressed People

This borders on ridiculous:
Chinese scientists are developing a mini-camera to scan crowds for highly stressed individuals, offering law-enforcement officers a potential tool to spot would-be suicide bombers.
"They all looked and behaved as ordinary people but their level of mental stress must have been extremely high before they launched their attacks. Our technology can detect such people, so law enforcement officers can take precautions and prevent these tragedies," Chen said.
Officers looking through the device at a crowd would see a mental "stress bar" above each person's head, and the suspects highlighted with a red face.
The researchers said they were able to use the technology to distinguish high blood-oxygen levels produced by stress from those produced by mere physical exertion.
I'm not optimistic about this technology.

Security As a Class of Interface Guarantee

This is a smart and interesting blog post:
I prefer to think of security as a class of interface guarantee. In particular, security guarantees are a kind of correctness guarantee. At every interface of every kind -- user interface, programming language syntax and semantics, in-process APIs, kernel APIs, RPC and network protocols, ceremonies -- explicit and implicit design guarantees (promises, contracts) are in place, and determine the degree of “security” (however defined) the system can possibly achieve.
Design guarantees might or might not actually hold in the implementation -- software tends to have bugs, after all. Callers and callees can sometimes (but not always) defend themselves against untrustworthy callees and callers (respectively) in various ways that depend on the circumstances and on the nature of caller and callee. In this sense an interface is an attack surface -- but properly constructed, it can also be a defense surface.
But also it’s an attempt to re-frame security engineering in a way that allows us to imagine more and better solutions to security problems. For example, when you frame your interface as an attack surface, you find yourself ever-so-slightly in a panic mode, and focus on how to make the surface as small as possible. Inevitably, this tends to lead to cat-and-mouseism and poor usability, seeming to reinforce the false dichotomy. If the panic is acute, it can even lead to nonsensical and undefendable interfaces, and a proliferation of false boundaries (as we saw with Windows UAC).
If instead we frame an interface as a defense surface, we are in a mindset that allows us to treat the interface as a shield: built for defense, testable, tested, covering the body; but also light-weight enough to carry and use effectively. It might seem like a semantic game; but in my experience, thinking of a boundary as a place to build a point of strength rather than thinking of it as something that must inevitably fall to attack leads to solutions that in fact withstand attack better while also functioning better for friendly callers.
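One way to read the "defense surface" framing in code: the interface states its guarantee explicitly and enforces it at the boundary, so everything behind it can rely on the invariant. The registry and username rule below are invented examples.

```ruby
class UserRegistry
  # The contract is stated right at the boundary, not buried in callers.
  USERNAME = /\A[a-z][a-z0-9_]{2,15}\z/

  def initialize
    @users = {}
  end

  # The interface enforces its guarantee instead of trusting callers:
  # nothing malformed ever gets past this point.
  def add(name)
    unless name.is_a?(String) && name.match?(USERNAME)
      raise ArgumentError, "invalid username: #{name.inspect}"
    end
    @users[name] = true
    name
  end

  def registered?(name)
    @users.key?(name)
  end
end

reg = UserRegistry.new
puts reg.add('alice')              # satisfies the stated contract
# reg.add('../../etc/passwd')      # would raise ArgumentError at the boundary
```

Because the guarantee is explicit, it is also testable: you can probe the boundary directly rather than hoping every caller behaves.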

Monday, August 11, 2014

NSA Partially Funded Code That Rolls All Programming Languages Into One

Hey, web developer dudes and dudettes: What's your favorite programming language? Is it CSS? Is it JavaScript? Is it PHP, HTML5, or something else? Why choose? A new programming language developed by researchers at Carnegie Mellon University is all of those and more—one of the world's first "polyglot" programming languages.
Sound cool? It is, except its development is partially funded by the National Security Agency, so let's look at it with a skeptical eye.
It's called Wyvern—named after a mythical dragon-like thing that only has two legs instead of four—and it's supposed to help programmers design apps and websites without having to rely on a whole bunch of different stylesheets and different amalgamations spread across different files:
"Web applications today are written as a poorly-coordinated mishmash of artifacts written in different languages, file formats, and technologies. For example, a web application may consist of JavaScript code on the client, HTML for structure, CSS for presentation, XML for AJAX-style communication, and a mixture of Java, plain text configuration files, and database software on the server," Jonathan Aldrich, the researcher developing the language, wrote. "This diversity increases the cost of developers learning these technologies. It also means that ensuring system-wide safety and security properties in this setting is difficult."
That system-wide safety and security properties bit is important, and perhaps might explain why the project is backed by the NSA.
See, beyond all its standard spying and data collection, the NSA also has to protect its own systems from hackers, and, ostensibly has some sort of obligation to help American companies do the same. By confining everything you need for a web app or a mobile app or a webpage in one place, it'd theoretically be easier to lock it down, as Aldrich explained.
He also notes that it's "designed to help developers be highly productive when writing high-assurance applications."
A "high-assurance application" is code for "one you really don't want to screw up." It's usually used to refer to military code and applications that the armed forces use for communications, missile systems, radar, medical devices, and that sort of thing.
I could be totally wrong—the NSA does fund all sorts of things in all sorts of fields, and it may have other plans for it. But security seems the most obvious, at this point.
In any case, Aldrich says that Wyvern can automatically tell what language a person is programming in, based solely on the type of data that's being manipulated. That means that if the language detects you're editing a database, for instance, it'll automatically assume you're using SQL.
The language is working in a prototype mode at the moment, and, as with most new programmery stuff these days, it's all open source, so you can see how it works exactly over at GitHub. Please let us know if you can think of other reasons why the NSA might be interested in its development.

Saturday, August 9, 2014

Billion Passwords Breached?

I've been seeing too many media interviews over this weird New York Times story that a Russian criminal gang has stolen over 1.2 billion passwords.
As expected, the hype is pretty high over this. But from the beginning, the story didn't make sense to me. There are obvious details missing: are the passwords in plaintext or encrypted, what sites are they for, how did they end up with a single criminal gang? The Milwaukee company that pushed this story, Hold Security, isn't a company that I had ever heard of before. The New York Times writes that "a security expert not affiliated with Hold Security analyzed the database of stolen credentials and confirmed it was authentic," but we're not given any details. This felt more like a PR story from the company than anything real.
Yesterday, Forbes wrote that Hold Security is charging people $120 to tell them if they're in the stolen-password database:
"In addition to continuous monitoring, we will also check to see if your company has been a victim of the latest CyberVor breach," says the site's description of the service using its pet name for the most recent breach. "The service starts from as low as 120$/month and comes with a 2-week money back guarantee, unless we provide any data right away."
Shortly after Wall Street Journal reporter Danny Yadron linked to the page on Twitter and asked questions about it, the firm replaced the description of the service with a "coming soon" message.
Holden says by email that the service will actually be $10/month and $120/year. "We are charging this symbolical fee to recover our expense to verify the domain or website ownership," he says by email. "While we do not anticipate any fraud, we need to be cognizant of its potential. The other thing to consider, the cost that our company must undertake to proactively reach out to a company to identify the right individual(s) to inform of a breach, prove to them that we are the 'good guys'. Believe it or not, it is a hard and often thankless task."
This story is getting squirrelier and squirrelier. Yes, security companies love to hype the threat to sell their products and services. But this goes further: single-handedly trying to create a panic, and then profiting off that panic.
I don't know how much of this story is true, but what I was saying to reporters over the past two days is that it's evidence of how secure the Internet actually is. We're not seeing massive fraud or theft. We're not seeing massive account hijacking. A gang of Russian hackers has 1.2 billion passwords -- they've probably had most of them for a year or more -- and everything is still working normally. This sort of thing is pretty much universally true. You probably have a credit card in your wallet right now whose number has been stolen. There are zero-day vulnerabilities being discovered right now that can be used to hack your computer. Security is terrible everywhere, and yet it's all okay. This is a weird paradox that we're used to by now.

Brian Krebs vouches for Hold Security. On the other hand, they had no web presence until this story hit. Despite Krebs, I'm skeptical.
Here's an article about Hold Security from February with suspiciously similar numbers.


Social Engineering Telemarketer

Okay, this is funny.

Extracting Audio from Video

Researchers are able to recover sound through soundproof glass by filming the vibrations of a plastic bag.
Researchers at MIT, Microsoft, and Adobe have developed an algorithm that can reconstruct an audio signal by analyzing minute vibrations of objects depicted in video. In one set of experiments, they were able to recover intelligible speech from the vibrations of a potato-chip bag photographed from 15 feet away through soundproof glass.
In other experiments, they extracted useful audio signals from videos of aluminum foil, the surface of a glass of water, and even the leaves of a potted plant.
This isn't a new idea. I remember military security policies requiring people to close the window blinds to prevent someone from shining a laser on the window and recovering the sound from the vibrations. But both the camera and processing technologies are getting better.
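The core signal-processing intuition can be shown with a toy sketch: treat each video frame as one audio sample, assuming the object's vibration modulates what the camera sees. The real MIT/Microsoft/Adobe algorithm tracks sub-pixel motion across the frame; the synthetic example below, which recovers a 440 Hz tone from per-frame mean brightness, only illustrates the sampling idea, with made-up numbers throughout.

```python
# Toy sketch of the "visual microphone" idea: each high-speed video frame
# becomes one audio sample, assuming vibration modulates the pixels.
import numpy as np

fps = 2200                           # high-speed camera frame rate
t = np.arange(fps) / fps             # one second of frames
tone = np.sin(2 * np.pi * 440 * t)   # a 440 Hz sound hitting the object

# Synthesize frames: an 8x8 patch whose brightness wobbles with the tone.
frames = 128 + 5 * tone[:, None, None] + np.zeros((fps, 8, 8))

# "Recover" the audio: mean intensity per frame, minus the DC offset.
signal = frames.mean(axis=(1, 2))
signal -= signal.mean()

# The dominant frequency of the recovered signal is the original tone.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fps)
print(int(freqs[spectrum.argmax()]))   # 440
```

The frame rate matters: by Nyquist, a camera sampling at 2,200 frames per second can only recover frequencies below 1,100 Hz, which is why the researchers used high-speed cameras (and, cleverly, rolling-shutter artifacts on ordinary ones) to reach into the speech range.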
News story.

Thursday, August 7, 2014

Indian Telcos Want WhatsApp & Viber To Pay Them

The Telecom Regulatory Authority of India (TRAI) has floated a proposal which could see developers of apps like WhatsApp and Viber paying telecom companies connectivity charges, with a share of revenues going to the government as well. The move is aimed at curbing losses incurred by telcos as more users make calls and send messages over the internet instead of over traditional mobile networks. While a Business Today report states that TRAI is unlikely to impose any major restrictions on free messaging and calling services, it does want in on the action.

Telecom industry estimates suggest that network providers lose Rs 5,000 crore annually to customers using free apps for calling and messaging. That number is expected to grow to Rs 16,400 crore over the next two years, as the number of users accessing the internet via mobile grows rapidly.

The other major concern the Indian government faces is security, which might lead to the services concerned having to set up servers in the country. "We may be asking them to put their servers in India as they get connected to any telecom network in India without getting themselves registered, that is something which is also a security concern for us," said a senior Department of Telecommunications official.

What's surprising is that neither the telecom companies nor TRAI has mentioned the revenue generated from the mobile data subscriptions they sell to customers. While WhatsApp's and Viber's services may appear to be free, users are paying for access to the internet on their devices. Moreover, use of such services isn't exempt from data charges in most cases, which means telcos and the government already earn huge revenues from customers using 'free' messaging and calling clients. What the move ultimately highlights is telcos' reliance on voice calls and text messages for their profits.
It does seem odd that rather than promoting use of services like WhatsApp and Viber, which will in turn bump up the mobile data usage in the country, TRAI and Indian telcos are going after a share of the revenues generated by these companies.

Wednesday, August 6, 2014

SynoLocker Ransomware Demanding 0.6 Bitcoin to decrypt Synology NAS devices

Network-attached storage devices made by Synology are being attacked, and their data encrypted, by ransomware that demands $350 in bitcoins (payable anonymously via Tor) for the decryption key. As of this moment, there's no patch.

Former NSA Director Patenting Computer Security Techniques

Former NSA Director Keith Alexander is patenting a variety of techniques to protect computer networks. We're supposed to believe that he developed these on his own time and they have nothing to do with the work he did at the NSA, except for the parts where they obviously did and therefore are worth $1 million per month for companies to license.
No, nothing fishy here.

Surveillance in Singapore

Good essay.

Saturday, August 2, 2014

Obama Signs Bill Legalizing Cell Phone Unlocking

President Barack Obama signed a bill into law Friday that aims to make it legal for consumers to “unlock” their cell phones in order to change their cell phone service providers without paying for a new phone.
The bill is known as the Unlocking Consumer Choice and Wireless Competition Act. It directs the U.S. Library of Congress (LoC) to allow cell phone owners to "unlock" their devices – typically "locked" to a specific service provider like AT&T or Verizon – for use on other networks without their service provider's permission.
"As long as their phone is compatible and they have complied with their contracts, consumers will now be able to enjoy the freedom of taking their mobile service - and a phone they already own - to the carrier that best fits their needs," the White House said in a statement.
Locking means the device can only access the network of a particular telecom, like AT&T or Verizon; unlocking removes that restriction.

When consumers buy a cell phone or tablet, the cost is often subsidized in exchange for a two-year contract with a network carrier. Software built into the phone locks the device so that it cannot be used with any other network.
Jailbreaking, on the other hand, is the process of removing limitations on iOS devices. Jailbreaking permits root access to the iOS file system and manager, allowing a cell phone owner to install tweaks, hacks, and mods that aren’t allowed in the App Store. Rooting Android is almost the same thing as Jailbreaking iOS.
After Congress approved the legislation to make the practice legal again, President Obama praised the move and said he looked forward to signing it.
"The bill Congress passed today is another step toward giving ordinary Americans more flexibility and choice, so that they can find a cell phone carrier that meets their needs and their budget," the president said last week, when the bill cleared Congress. "I commend Chairmen Leahy and Goodlatte, and Ranking Members Grassley and Conyers for their leadership on this important consumer issue and look forward to signing this bill into law."
Under a Digital Millennium Copyright Act (DMCA) rulemaking, the Librarian of Congress made unlocking illegal last year.
The new law reverses last year's decision by the LoC, which held that it was legal for customers to jailbreak a phone but illegal to unlock any device, whether smartphone or tablet. That decision gave service providers the legal authority to keep a consumer's phone "locked," leaving the owner facing large costs when switching carriers or trying to use other carriers overseas while traveling.
Many politicians and consumer groups have said the new law will help promote consumer choice in the wireless market, since it now allows consumers to freely take their mobile phones to other carriers.
"This bill ensures that consumers will be able to do what they rightfully expect to be able to do with phones they have purchased: use them on whatever network they like," Laura Moy, an attorney at the advocacy group Public Knowledge, said last week. Moy added that the bill will also "keep millions of devices out of landfills."
Though the law makes unlocking legal, it does not require wireless carriers to give customers unlock codes on demand. Consumers still under a previously signed service contract must first satisfy the terms and period of that contract before being allowed to unlock their devices.
"[This] effort began with a digital petition on the White House's We the People site, an online platform where citizens can offer ideas for the Administration to take action on important issues facing our country," Senator Patrick Leahy and President Obama's economic advisor Jeff Zients wrote on the White House blog.
"The petition drew an extraordinary response, with more than 114,000 citizens making their voices heard. And within two weeks, the White House answered with a call to action, laying out steps the Federal Communications Commission (FCC), industry, and Congress could take to ensure copyright law would not undermine wireless competition," they added.